[RERELEASE] What is the perception of information security - part 1

In the second episode of the refreshed edition of the Exploring Information Security (EIS) podcast (wow, that's a mouthful), I talk with Chris Maddalena about the perception of information security.

Chris recently gave a talk on FUD at BSides Detroit and CircleCityCon this past summer, which prompted me to explore the topic of information security perception with him. I think perception is very important to the infosec community, especially now that it is becoming more relevant in the public eye.

In part one of this two-part series, we discuss:

  • What is the perception of infosec in business?

  • How do we change the perception of security?

  • We start getting into where security fits in an organization

What is the perception of information security - part 1
With Chris Maddalena

Exploring the Quantum Horizon: Why We Need CBOMs Today

Summary:

In this episode, host Timothy De Block sits down with John Morello to dive into the world of Cryptography Bill of Materials (CBOM) and the looming transition to Post-Quantum Cryptography (PQC). They discuss why tracking cryptographic assets is becoming a critical security requirement, how CBOMs are being integrated into existing SBOM standards, and why organizations need to start future-proofing their encrypted data against quantum computing threats today.

Key Topics Discussed

  • What is a CBOM? A Cryptography Bill of Materials provides a trustworthy, structured, and machine-readable way to represent what cryptographic components exist in your software and how they are configured.

  • Beyond the Basic SBOM: While a standard SBOM might tell you that a component like OpenSSL is present, a CBOM details the specific algorithms, key lengths, and operational modes in use.

  • The Consolidation of Standards: CBOMs are actively being merged into broader SBOM frameworks like CycloneDX and SPDX. Over the coming months, CBOM data will simply become a subset of the tags and artifacts within standard SBOM files, reducing complexity for developers and security teams.

  • The Post-Quantum Threat: The mathematical foundations of common cryptographic algorithms like RSA, DES, and SHA will eventually be defeatable by quantum computers.

  • "Harvest Now, Decrypt Later": Adversaries may already be recording encrypted traffic today with the intention of decrypting it years down the line once quantum computing becomes viable.

  • NIST and Regulatory Standards: NIST has been running a Post-Quantum Cryptography (PQC) project for several years and is expected to finalize approved algorithms soon. This guidance will likely be codified into future standards, such as a FIPS 140-4 update.

  • Who Owns the CBOM? DevOps and developer teams should be responsible for creating and maintaining the CBOM data alongside their existing SBOM processes. Security teams will then consume this data to understand exposure, measure adoption of quantum-resistant algorithms, and prioritize risk mitigation.
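To make the discovery idea concrete, here is a minimal Python sketch of the kind of inventory work a CBOM enables. The component entries and field names (`algorithm`, `key_bits`, `mode`) are illustrative stand-ins loosely inspired by CycloneDX-style cryptographic properties, not the actual schema:

```python
# Illustrative CBOM-style data: a CBOM records not just that a library is
# present, but which algorithms, key lengths, and modes are configured.
WEAK_ALGORITHMS = {"DES", "3DES", "RC4", "MD5", "SHA-1"}
MIN_RSA_BITS = 2048

cbom_components = [
    {"name": "openssl", "algorithm": "RSA", "key_bits": 1024, "mode": None},
    {"name": "openssl", "algorithm": "AES", "key_bits": 256, "mode": "GCM"},
    {"name": "legacy-auth", "algorithm": "DES", "key_bits": 56, "mode": "CBC"},
]

def find_weak_crypto(components):
    """Flag weak algorithms or undersized keys -- the consistent,
    machine-readable discovery a CBOM is meant to make possible."""
    findings = []
    for c in components:
        if c["algorithm"] in WEAK_ALGORITHMS:
            findings.append((c["name"], f"weak algorithm {c['algorithm']}"))
        elif c["algorithm"] == "RSA" and c["key_bits"] < MIN_RSA_BITS:
            findings.append((c["name"], f"RSA key too short: {c['key_bits']} bits"))
    return findings

for name, issue in find_weak_crypto(cbom_components):
    print(f"{name}: {issue}")
```

A security team consuming this data could trend the count of weak findings over time to measure adoption of quantum-resistant algorithms, as discussed in the episode.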

Memorable Quotes

  • On the need for CBOMs: "It's less about dealing with cryptographic-based vulnerabilities. It's more to help you inventory what you've got to find whether you have weak algorithms or weak key lengths in place and to be able to do that discovery in a consistent way."

  • On preparing for the future: "If you wait to move to post-quantum or quantum-resistant algorithms only after those quantum computers are widely available or at least available to your adversaries... basically everything that you've encrypted before with these non-resistant algorithms is subject for decryption in the future."

Resources & Links Mentioned

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn] [YouTube]

What are CBOMs?
John Morello


Exploring the Risks of Model Context Protocol (MCP) with Casey Bleeker

Summary:

Timothy De Block sits down with Casey Bleeker from SurePath AI to demystify the Model Context Protocol (MCP). They discuss how this emerging standard allows Large Language Models (LLMs) to interact with external tools and why it represents a significant, often invisible, exposure risk for enterprises. Casey explains why MCP should be viewed like the HTTP protocol—ubiquitous and fundamental—and outlines the critical security controls needed to prevent data exfiltration and malicious code execution without blocking AI adoption.

Key Topics Discussed

  • What is MCP?

    • MCP is a standard for creating a "natural language definition" of an API, allowing an LLM to intelligently determine when to call a specific tool rather than just generating text.

    • It acts as a translation layer between a REST interface and the AI model, enabling the model to execute tasks like updating a CloudFormation stack or querying a database.

  • The "HTTP" Analogy & Exposure Risk:

    • Casey argues that MCP should be thought of as a protocol (like HTTP) rather than a specific tool. It is being implemented broadly across many open-source tools and providers, often hidden behind the scenes when users add "connectors" or extensions.

    • Because it functions as a protocol, it creates a broad exposure risk where users grant AI agents permissions to create, update, or delete resources on their behalf.

  • Vulnerabilities to Watch for in the MCP:

    • Malicious Payloads: Downloading an external MCP resource (e.g., via npm) can lead to unvalidated code execution on a local machine before the model even calls the tool.

    • Data Exfiltration: Users effectively grant their identity permissions to untrusted code controlled by external third parties (the LLM), allowing the AI to act as a proxy for the user on internal systems.

  • Defense Strategies:

    • Central Management: Organizations need a central MCP management gateway authenticated via Single Sign-On (SSO) with role-based permissions to control which tools are authorized.

    • Deep Payload Inspection: The only true control point is the interaction between the user/agent and the AI model. Security teams must inspect the payloads in real-time to steer usage away from unapproved resources or prevent destructive actions.

  • Authentication Specs: DCR vs. CIMD:

    • Casey warns against the Dynamic Client Registration (DCR) flow, citing complexity and vulnerabilities in many implementations.

    • He highly recommends demanding vendors support the CIMD (Client ID Metadata Documents) specification, which allows for proper validation of destinations and enforces valid redirect URIs.
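A rough sketch of the tool-definition idea described above: each tool pairs a natural-language description with a machine-readable input schema so the model can decide when to call it, and a central gateway enforces role-based permissions on which tools an agent may invoke. All names and structures here are illustrative, not the actual MCP wire format:

```python
# Tools advertised MCP-style: natural-language description plus input schema.
TOOLS = {
    "query_database": {
        "description": "Run a read-only SQL query against the reporting database.",
        "input_schema": {"type": "object",
                         "properties": {"sql": {"type": "string"}},
                         "required": ["sql"]},
    },
    "delete_stack": {
        "description": "Delete a CloudFormation stack by name.",
        "input_schema": {"type": "object",
                         "properties": {"stack": {"type": "string"}},
                         "required": ["stack"]},
    },
}

# The central-management control point the episode recommends: a role-based
# allowlist at a gateway, rather than letting every agent call every tool.
ALLOWED_TOOLS_BY_ROLE = {"analyst": {"query_database"}}

def authorize_tool_call(role, tool_name):
    """Return True only if this role is permitted to invoke this tool."""
    return tool_name in ALLOWED_TOOLS_BY_ROLE.get(role, set())

print(authorize_tool_call("analyst", "query_database"))  # True
print(authorize_tool_call("analyst", "delete_stack"))    # False
```

In practice this check would sit in a gateway authenticated via SSO, combined with the payload inspection described above.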

Resources Mentioned


Exploring the Risks of Model Context Protocol (MCP)
Casey Bleeker


From Combat Zones to Corporate Lobbies: A Guide to Physical Security with Josh Winter

Summary:

In this episode, host Timothy De Block dives into the often overlooked but critically important world of physical security with Josh Winter. Josh shares his unique journey from serving in combat infantry with the 82nd Airborne Division to running executive protection for high-net-worth individuals and conducting physical penetration testing for major corporations. They discuss the glaring differences between corporate security and residential security, how to spot the illusion of safety (like unplugged cameras and empty lobby desks), and why human behavior is always the most unpredictable variable in any security plan.

Key Topics Discussed

  • Josh's Background: How Josh transitioned from military service (82nd Airborne, PSD work in Afghanistan) to state security, executive protection for a wealthy family in San Diego, and eventually physical pen testing for a major firm.

  • Corporate vs. Residential Security: The stark contrast between the static, often complacent environment of a corporate office and the highly dynamic, unpredictable nature of securing a private residence.

  • The "Illusion of Security": Why lobby attendants without actual access control or security training are merely "decorations" and how unmonitored or broken cameras create a false sense of safety.

  • Physical Pen Testing Tactics: Josh explains how simple confidence, observation, and exploiting human nature (like tailgating or holding the door) are often more effective than sophisticated hacking tools.

  • The "Catch Me If You Can" Approach: How acting like you belong—much like Frank Abagnale Jr.—is the most powerful tool for bypassing physical security measures.

  • Practical Security Upgrades on a Budget: Why $500 spent on motion-activated lighting, a simple ring camera, and upgraded door hardware is far more effective than a multi-million dollar system that isn't properly maintained.

  • The Insider Threat: The reality that disgruntled employees, not shadowy hackers, often pose the greatest physical threat to an organization, and how to assess that risk.

  • Security Culture: How to shift an organization's mindset so that challenging an unknown person in the hallway is seen as a sign of respect and vigilance, rather than rudeness.

Memorable Quotes

  • "A lobby desk attendant with no actual access control... is probably just decoration."

  • "You have to train yourself to get away from that 'I'm supposed to be here' confidence... if you're an attacker, you're going to use that against them."

  • "You're dealing with the anesthetic of familiarity." (On why employees become complacent in their daily routines.)

  • "The antithesis of security is convenience. I don't want to wear a seatbelt, but I do because it could save my life."


What is Physical Security
Josh Winter


[RERELEASE] What is a SIEM?

In this most excellent edition of the Exploring Information Security podcast, I talk with Derek Thomas, a senior information security analyst specializing in log management and SIEM, on the topic: "What is a SIEM?"

Derek (@dth0m) has a lot of experience with SIEM and can be found on LinkedIn participating in discussions on the technology. I had the opportunity to hang out with Derek at DerbyCon in 2015 and came away impressed with his knowledge of SIEM. He seemed very passionate about the subject, and that showed in this interview.

In this episode, we discuss:

  • How to pronounce SIEM

  • What is a SIEM

  • How to use a SIEM

  • The biggest challenge using a SIEM

  • How to tune the SIEM

  • Use cases, use cases, use cases.

More Resources:

What is a SIEM?
With Derek Thomas

[RERELEASE] What is threat modeling?

Originally posted August 13, 2014.

In the fifth edition of the Exploring Information Security (EIS) podcast, I talk with J Wolfgang Goerlich, Vice President of VioPoint, about threat modeling.

Wolfgang has presented at many conferences on the topic of threat modeling. He suggests using a simpler method of threat modeling that involves threat paths, instead of other methods such as a threat tree or kill chain. You can find him taking long walks and naps on Twitter (@jwgoerlich) and participating in several MiSec (@MiSec) projects and events.

In this interview Wolfgang covers:

  • What is threat modeling?

  • What needs to be done to threat model

  • Who should perform the threat modeling

  • Resources that can be used to build an effective threat model

  • The life cycle of a threat model

What is threat modeling?
With Wolfgang Goerlich

[RERELEASE] What is cryptography?

Originally posted July 30, 2014.

In the fourth edition of the Exploring Information Security (EIS) podcast, I talk to the smooth-sounding Justin Troutman, a cryptographer from North Carolina, about what cryptography is.

Justin is a security and privacy researcher currently working on a project titled "Mackerel: A Progressive School of Cryptographic Thought." You can find him on Twitter (@JustinTroutman) discussing ways in which crypto can be made easier for the masses. Be sure to check out his website for more information.

In the interview, Justin talks about:

  • What cryptography is

  • Why everyone should care about cryptography

  • What some of its applications are

  • How someone would get started in cryptography and what are some of the skills needed

What is cryptography?
With Justin Troutman

[RERELEASE] What is a Chief Information Security Officer (CISO)

Originally posted July 9, 2015.

In the third edition of the Exploring Information Security (EIS) podcast, my infosec cohort Adam Twitty and I talk to the Wh1t3 Rabbit, Rafal Los, about what exactly a Chief Information Security Officer, otherwise known as CISO, is.

Rafal Los (@Wh1t3Rabbit) is the Director of Solutions Research at Accuvant. He produces the Down The Security Rabbithole podcast and writes the Following the Wh1t3 Rabbit security blog. On several occasions he's tackled the CISO role within an organization on both his podcast and blog. I would highly recommend both if you're in the infosec field or looking to get into it.

In the interview Rafal talks about:

  • What a CISO is

  • What role does a CISO fill in an organization

  • What skills are needed to be an effective CISO

  • The different types of CISOs

What is a Chief Information Security Officer
With Rafal Los

Exploring The Bad Advice Cybersecurity Professionals Provide to the Public

Summary:

In this episode, Timothy De Block sits down with cybersecurity expert Bob Lord to discuss the dangerous impact of "Hacklore"—obsolete, excessive, and fear-based cybersecurity advice. They explore how bombarding everyday users with spy-thriller scenarios (like juice jacking and evil baristas) leads to security fatigue and inaction. Instead, they advocate for shifting the burden of security away from the user and onto tech companies, while narrowing consumer advice down to the absolute basics: Multi-Factor Authentication (MFA), password managers, and credit freezes.

Key Topics Discussed

  • The Origins of Hacklore: Bob Lord started the Hacklore website after a CISO friend emailed him a "trifecta" of problematic security advice concerning public Wi-Fi, juice jacking, and restaurant QR codes. The initiative serves as an expert-backed resource to debunk common myths and promote better, actionable security guidance.

  • Rethinking Security Advice: Providing users with excessive or overly complex advice often results in them ignoring it entirely. Security advice needs to be constantly reevaluated to ensure it addresses actual, common crimes rather than unlikely scenarios like an "evil barista" intercepting data.

  • Shifting the Security Burden: The responsibility for digital safety should move away from the end-user and toward internet service providers and tech companies. Companies must adopt "secure by design" practices, such as requiring password changes upon installation or shipping routers with unique default passwords.

  • The Power of MFA: Multi-Factor Authentication (MFA) is essential for protecting vulnerable populations, such as seniors who are frequently targeted by organized fraud. Even SMS-based MFA is far better than having no MFA at all, as it degrades most common attacks according to a Microsoft study.

  • The Hidden Benefit of Password Managers: A major, underappreciated benefit of password managers is their built-in phishing resistance. If a user is tricked into visiting an imposter website, the password manager will not fill in the credentials, effectively stopping the attack in its tracks.

  • Freezing Credit: Implementing a credit freeze is another highly recommended, fundamental security measure. This action builds directly on the basic security practices promoted by the Hacklore initiative.

  • Learning from Near Misses: At the upcoming RSA Conference, Bob Lord will discuss the concept of cybersecurity "near misses". He advocates that the cybersecurity field should learn from incidents that almost went wrong, similar to the safety approach used in the aviation sector.

Memorable Insights

  • Sharing obsolete security advice can be considered an "act of harm" because it distracts people from effective measures and can create a fatalistic mindset that no security action will help.

  • Since most people will only dedicate a few minutes a year to security, recommendations must be strictly limited to what is truly feasible for them to implement.

  • Getting a friend or family member to make just one security change, like enabling MFA on their primary email account, is considered a significant victory.

Resources Mentioned

  • Hacklore Initiative: A non-commercial website aimed at replacing obsolete cybersecurity advice with expert-backed guidance (hacklore.org).

  • Hacklore on Bluesky: Follow the movement and join the conversation at @hacklore.bsky.social.

  • "How effective is multifactor authentication at deterring cyberattacks?": The Microsoft research paper (arXiv:2305.00945) referenced by Bob Lord detailing the real-world efficacy of MFA: https://arxiv.org/abs/2305.00945.

  • Bob Lord's Updated Cyber Guidance for Small Businesses: Originally written during his time at CISA, Bob has updated this practical security guide on his personal blog: Read on Medium.

  • Methods of Delivery vs. Intrusion (The Hacklore Edition): A blog post explaining why the security industry shouldn't over-index on flashy threats like parking meter QR codes: Read on Medium.

  • PSA: Elevator (un)safety: In addition to his popular seatbelt analogy, Bob explores the concept of built-in safety in this blog post about elevators: Read on Medium.


Exploring the Bad Advice Cybersecurity Professionals Provide the Public
Bob Lord


Inside Cambodia's Scam Compounds: Pig Butchering, Organized Crime, and Protecting Your Life Savings

Summary:

Timothy De Block sits down with former FBI agent Scott Augenbaum to discuss his eye-opening trip to Cambodia, which has become the "online scam capital of the world". They dive into the terrifying evolution of "pig butchering" scams, how Chinese organized crime and geopolitical investments have fueled a massive criminal ecosystem, and why the ultimate vulnerability is still human psychology. Scott explains the massive scale of these operations and shares the single most important step you can take to avoid losing your money to these syndicates.

Key Topics Discussed

  • The Ground Zero of Scams: Scott discusses his trip to Sihanoukville, Cambodia, a city filled with scam compounds hiding in plain sight behind casino facades and fortress-like buildings with their backs facing the street.

  • The Pivot to "Pig Butchering": How China's 2018 ban on online gambling and the 2020 COVID-19 casino shutdowns forced organized crime to pivot to massive, highly organized cryptocurrency, romance, and advance-fee scams.

  • A Geopolitical Nightmare: The complexities of combating these compounds when they are backed by Chinese investment and infrastructure (such as a highway built using Huawei routers). This dynamic leaves local law enforcement hesitant to intervene and limits the FBI's power.

  • The Anatomy of a $5.2 Million Scam: Scott breaks down a devastating case of "pig butchering," detailing how scammers use fake simulated trading apps, "spot gold trading," and artificial intelligence to fatten victims up before stealing millions.

  • The Double Crisis: The conversation acknowledges the horrifying human trafficking of compound workers—often lured from underdeveloped nations by fake jobs—while also focusing on the victims in the US and globally who are losing billions.

  • The "Cancer Drug" Problem: Why organizations and individuals often only invest in security after they've been breached to meet compliance requirements.

  • One Essential Tip: The absolute necessity of understanding social engineering and enabling Two-Factor Authentication (2FA) on all mission-critical accounts, such as home routers, cellular providers, iCloud, and Gmail.

Memorable Quotes

"If you're not going to make money through gambling, you're going to make money through the old-fashioned way, scamming." — Scott Augenbaum

"We don't need to make information security people smarter... We need to get the end users up to taking it seriously." — Scott Augenbaum

"I deal with people who want to buy the cancer drug after they had cancer. They don't want to buy it before because well, that's too much work." — Scott Augenbaum

Resources Mentioned


Inside Cambodia's Scam Compounds: Pig Butchering, Organized Crime, and Protecting Your Life Savings
Scott Augenbaum


What are the AI Vulnerabilities We Need to Worry About

Episode Summary

Timothy De Block sits down with Keith Hoodlet, Security Researcher and founder of Securing.dev, to navigate the chaotic and rapidly evolving landscape of AI security.

They discuss why "learning" is the only vital skill left in security, how Large Language Models (LLMs) actually work (and how to break them), and the terrifying rise of AI Agents that can access your email and bank accounts. Keith explains the difference between inherent AI vulnerabilities—like model inversion—and the reckless implementation of AI agents that leads to "free DoorDash" exploits. They also dive into the existential risks of disinformation, where bots manipulate human outrage and poison the very data future models will train on.

Key Topics

  • Learning in the AI Era:

    • The "Zero to Hero" approach: How Keith uses tools like Claude to generate comprehensive learning plans and documentation for his team.

    • Why accessible tools like YouTube and AI make learning technical concepts easier than ever.

  • Understanding the "Black Box":

    • How LLMs Work: Keith breaks down LLMs as a "four-dimensional array of numbers" (weights) where words are converted into tokens and calculated against training data.

    • Open Weights: The ability for users to manipulate these weights to reinforce specific data (e.g., European history vs. Asian Pacific history).

  • AI Vulnerabilities vs. Attacks:

    • Prompt Injection: "Social engineering" the chatbot to perform unintended actions.

    • Membership Inference: Determining if specific data (like yours) is in a training set, which has massive implications for GDPR and the "right to be forgotten".

    • Model Inversion: Stealing weights and training data. Keith cites speculation that Chinese espionage used this technique to "shortcut" their own model training using US labs' data.

    • Evasion Attacks: A technique rather than a vulnerability. Example: Jason Haddix bypassing filters to generate an image of Donald Duck smoking a cigar by describing the attributes rather than naming the character.

  • The "Agent" Threat:

    • Running with Katanas: Giving AI agents access to browsers, file systems (~/.ssh), and payment methods is a massive security risk.

    • The DoorDash Exploit: A real-world example where a user tricked a friend's email-connected AI bot into ordering them free lunch for a week.

  • Supply Chain & Disinformation:

    • Hallucination Squatting: AI generating code that pulls from non-existent packages, which attackers can then register to inject malware.

    • The Cracker Barrel Outrage: How a bot-driven disinformation campaign manufactured fake outrage over a logo change, fooling a major company and the news media.

    • Data Poisoning: The "Russian Pravda network" seeding false information to shape the training data of future US models.
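The hallucination-squatting risk above suggests a simple defense: vet AI-suggested dependencies against a known-good list (or a registry snapshot) before installing anything. A minimal sketch, with invented package names (`reqeusts-pro` is a deliberate typo-style example, not a real package):

```python
# A tiny allowlist check against hallucination squatting: an LLM emits an
# import for a package that doesn't exist, an attacker registers that name
# with malware, and an unvetted `pip install` pulls it in.
KNOWN_GOOD_PACKAGES = {"requests", "numpy", "cryptography"}

def vet_dependencies(suggested):
    """Split AI-suggested package names into vetted and unvetted sets."""
    names = set(suggested)
    return names & KNOWN_GOOD_PACKAGES, names - KNOWN_GOOD_PACKAGES

vetted, unvetted = vet_dependencies(["requests", "reqeusts-pro", "numpy"])
print("install:", sorted(vetted))
print("review first:", sorted(unvetted))
```

A real pipeline would consult an internal artifact mirror rather than a hardcoded set, but the principle is the same: never install a name just because a model suggested it.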

Memorable Quotes

  • "It’s like we’re running with... not just scissors, we’re running with katanas. And the ground that we're on is constantly changing underneath our feet." — Keith Hoodlet

  • "We never should have taught runes to sand and allowed it to think." — Keith Hoodlet

  • "The biggest bombshell here is that we are the vulnerability. Because we're going to get manipulated by AI in some form or fashion." — Timothy De Block

Resources Mentioned

Books:

Videos & Articles:

About the Guest

Keith Hoodlet is a Security Researcher at Trail of Bits and the creator of Securing.dev. A self-described "technologist who wants to move to the woods," Keith specializes in application security, threat modeling, and deciphering the complex intersection of code and human behavior.


What are the AI Vulnerabilities We Need to Worry About?
Keith Hoodlet

[RERELEASE] How to make time for a home lab

In this timely episode of the Exploring Information Security podcast, Chris Maddalena and I continue our home lab series by answering a listener's question on how to find time for a home lab.

Chris (@cmaddalena) and I were asked on Twitter, "How do you make time for a home lab?" We answered the question on Twitter but also decided it was a good topic for an EIS episode. Home labs are great for advancing a career or breaking into information security. Finding the time for them requires making them a priority. It's also good to have a purpose. The time I spend with a home lab is often sporadic and coincides with research on a given area.

In this episode we discuss:

  • Making a home lab a priority

  • Use cases for a home lab

  • Ideas for fitting a home lab into a busy schedule

More resources:

How to make time for a home lab
With Chris Maddalena

[RERELEASE] How to build a home lab

In this getting started episode of the Exploring Information Security podcast, I discuss how to build a home lab with Chris Maddalena.

Chris (@cmaddalena) and I have submitted to a couple of calls for training at CircleCityCon, Converge, and BSides Detroit this summer on the topic of building a home lab. I will also be speaking on this subject at ShowMeCon. Home labs are great for advancing a career or breaking into information security. The bar is really low on getting started with one. A gaming laptop with decent specifications works great. For those with a lack of hardware or funds, there are plenty of online resources to take advantage of.

In this episode we discuss:

  • What is a home lab?

  • Why would someone want to build a home lab?

  • What are the different kinds of home labs?

  • What are the requirements?

  • How to get started building a home lab

More resources:

How to build a home lab
With Chris Maddalena

How to Build an AI Governance Program with Walter Haydock

Summary:

Timothy De Block sits down with Walter Haydock, founder of StackAware, to break down the complex world of AI Governance. Walter moves beyond the buzzwords to define AI governance as the management of risk related to non-deterministic systems—systems where the same input doesn't guarantee the same output.

They explore why the biggest AI risk facing organizations today isn't necessarily a rogue chatbot or a sophisticated cyber attack, but rather HR systems (like video interviews and performance reviews) that are heavily regulated and often overlooked. Walter provides a practical, three-step roadmap for organizations to move from chaos to calculated risk-taking, emphasizing the need for quantitative risk measurement over vague "high/medium/low" assessments.

Key Topics & Insights

  • What is AI Governance?

    • Walter defines it as measuring and managing the risks (security, reputation, contractual, regulatory) of non-deterministic systems.

    • The 3 Buckets of AI Security:

      1. AI for Security: AI-powered SOCs, fraud detection.

      2. AI for Hacking: Automated pentesting, generating phishing emails.

      3. Security for AI: The governance piece—securing the models and data themselves.

  • The "Hidden" HR Vulnerability:

    • While security teams focus on hackers, the most urgent vulnerability is often in Human Resources. Tools for firing, hiring, and performance evaluation are highly regulated (e.g., NYC Local Law 144, Illinois AI Video Interview Act) yet frequently lack proper oversight.

  • How to Build an AI Governance Program (The First 3 Steps):

    1. Establish a Policy: Define your risk appetite (what is okay vs. not okay).

    2. Inventory Systems (with Amnesty): Ask employees what they are using without fear of punishment to get an accurate picture.

    3. Risk Assessment: Assess the inventory against your policy. Use a tiered approach: prioritize regulated/cyber-physical systems first, then confidential data, then public data.

  • Quantitative Risk Management:

    • Move away from "High/Medium/Low" charts. Walter advocates for measuring risk in dollars of loss expectancy using methodologies like FAIR (Factor Analysis of Information Risk) or the Hubbard-Seiersen method.

  • Emerging Threats:

    • Agentic AI: The next 3-5 years will be defined by "non-deterministic systems interacting with other non-deterministic systems," creating complex governance challenges.

  • Regulation Roundup:

    • Companies are largely unprepared for the wave of state-level AI laws coming online in places like Colorado (SB 205), California, Utah, and Texas.
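The quantitative approach above can be sketched with a toy Monte Carlo simulation in the spirit of FAIR and Hubbard-Seiersen: express risk as an annualized loss expectancy in dollars rather than a high/medium/low label. The frequency and loss-magnitude parameters below are invented for illustration, not figures from the episode:

```python
import random

def simulate_annual_loss(freq_per_year, loss_low, loss_high,
                         trials=10_000, seed=42):
    """Mean simulated annual loss: random event count (expected value
    freq_per_year) times a uniform loss magnitude per event."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Approximate event frequency with 10 small Bernoulli draws per year.
        events = sum(1 for _ in range(10) if rng.random() < freq_per_year / 10)
        total += sum(rng.uniform(loss_low, loss_high) for _ in range(events))
    return total / trials

# A hypothetical scenario: an incident every other year, costing $50k-$500k.
ale = simulate_annual_loss(freq_per_year=0.5, loss_low=50_000, loss_high=500_000)
print(f"Annualized loss expectancy: ${ale:,.0f}")
```

An ALE figure like this lets a security team compare the cost of a control directly against the expected loss it reduces, which a "high/medium/low" chart cannot do.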

Resources Mentioned

  • ISO 42001: The global standard for building AI management systems (similar to ISO 27001 for info sec).

  • Cloud Security Alliance (CSA): Recommended for their AI Controls Matrix.

  • Book: How to Measure Anything in Cybersecurity Risk by Douglas Hubbard and Richard Seiersen.

  • StackAware Risk Register: A free template combining Hubbard-Seiersen and FAIR methodologies.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn] [YouTube]

How to Build an AI Governance Program
Walter Haydock


Exploring Cribl: Sifting Gold from Data Noise for Cost and Security

Summary:

Timothy De Block and Ed Bailey, a former customer and current Field CISO at Cribl, discuss how the company is tackling the twin problems of data complexity and AI integration. Ed explains that Cribl's core mission—derived from the French word "crible" (a sieve, for screening or sifting)—is to provide data flexibility and cost management by routing the most valuable data to expensive tools like SIEMs and everything else to cheap object storage. The conversation covers the 40x productivity gains from Cribl Co-Pilot, their "human in the loop" AI, and their expansion into "agentic AI" to fight back against sophisticated threats.

Cribl's Core Value Proposition

  • Data Flexibility & Cost Management: Cribl's primary value is giving customers the flexibility to route data from "anywhere to anywhere". This allows organizations to manage costs by classifying data:

    • Valuable Data: Sent to high-value, high-cost platforms like SIEMs (Splunk, Elastic).

    • Retention Data: Sent to inexpensive object storage (3 to 5 cents per gig).

    • Matching Cost and Value: This approach ensures the most valuable data gets the premium analysis while retaining all data necessary for compliance, addressing the CISO's fear of missing a critical event.

  • SIEM Migration and Onboarding: Cribl mitigates the risk of disruption during SIEM migration—a major concern for CISOs—by acting as an abstraction layer. This can dramatically accelerate migration time; one large insurance company was able to migrate to a next-gen SIEM in five months, a process their CISO projected would have taken two years otherwise.

  • Customer Success Story (UBA): Ed shared a story where his team used Cribl Stream to quickly integrate an expensive User and Entity Behavior Analytics (UBA) tool with their SIEM in two hours for a proof-of-concept. This saved 9-10 months and the deployment of 100,000 agents, providing 100% value from the UBA tool in just two weeks.
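The cost-matching approach described above comes down to simple arithmetic. The sketch below is illustrative: the `monthly_cost` helper and the $5/GB SIEM ingest price are assumptions, while the $0.04/GB storage price follows the 3-5 cents per gig figure mentioned above:

```python
def monthly_cost(daily_gb, siem_fraction, siem_cost_per_gb=5.00, storage_cost_per_gb=0.04):
    """Rough monthly ingest cost when a fraction of data goes to the SIEM
    and the remainder goes to object storage. Prices are illustrative
    placeholders, not actual vendor list prices."""
    siem_gb = daily_gb * siem_fraction * 30
    storage_gb = daily_gb * (1 - siem_fraction) * 30
    return siem_gb * siem_cost_per_gb + storage_gb * storage_cost_per_gb

# 1 TB/day: everything to the SIEM vs. routing only the valuable 20%
all_siem = monthly_cost(1000, 1.0)  # $150,000/month
routed = monthly_cost(1000, 0.2)    # about $31,000/month
```

Even with made-up prices, the shape of the result holds: routing the low-value majority to cheap storage cuts the bill by roughly the SIEM fraction while retaining every event for compliance.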

AI Strategy and Productivity Gains

  • "Human in the Loop AI": Cribl's initial AI focus is on Co-Pilot, which helps people use the tools better. This approach prioritizes accuracy and addresses the fact that enterprise tooling is often difficult to use.

  • 40x Productivity Boost: Co-Pilot Editor automates the process of mapping data into complex, esoteric data schemas (for tools like Splunk and Elastic). This reduced the time to create a schema for a custom data type from approximately a week to about one hour, representing a massive gain in workflow productivity.

  • Roadmap Shift to Agentic AI: Following CriblCon, the roadmap is shifting toward "agentic AI" that operates in the background, focused on building trust through carefully controlled and validated value.

  • AI in Search: The Cribl Search product has built-in AI that suggests better ways for users to write searches and utilize features, addressing the fact that many organizations fail to get full value from their searching tools because users don't know how to use them efficiently.

Challenges and Business Model

  • Data Classification Pain Point: The biggest challenge during deployment is that many users "have never really looked at their data". This leads to time spent classifying data and defining the "why" (what is the end goal) before working on the "how".

  • Pricing Models: Cribl offers two main models:

    • Self-Managed (Stream & Edge): Uses a topline license (based on capacity/terabytes purchased).

    • Cloud (Lake & Search): Uses a consumption model (based on credits/what is actually used).

  • Empowering the Customer: Cribl's mission is to empower customers by opening choices and enabling their goals, contrasting with other vendors where it's "easy to get in, the data never gets out".


Exploring Cribl: Sifting Gold from Data Noise
Ed Bailey


What is BSides ICS?

Summary:

Timothy De Block sits down with Mike Holcomb, founder of UtilSec, to discuss the critical and often misunderstood world of Operational Technology (OT) and Industrial Control Systems (ICS) security. Mike shares the origin story of BSides ICS, a global community-driven event designed to bridge the gap between IT security, engineering, and plant operations. The conversation dives into the "myth" of the air gap, the physical security risks in manufacturing, and why small utilities are the next major front in the cyber arms race.

The Reality of OT Security

  • The Vanishing Air Gap: While many believe OT systems are isolated, true air gaps are rare. Connectivity is driven by contractors dropping 5G hotspots for remote troubleshooting or employees charging phones on engineering workstations, inadvertently bridging OT networks to the internet.

  • Physical Security is Cyber Security: If an attacker can physically touch a device, they can own it. Mike shares a story of a VPN concentrator being stolen from a data center because there were no cameras and physical access was loosely controlled.

  • IT/OT Convergence: OT security is now "cyber security" because it involves TCP/IP packets, Windows machines in production environments, and networked PLC (Programmable Logic Controllers) and HMIs (Human Machine Interfaces).

BSides ICS: A Practical Community

  • Origin Story: BSides ICS was born out of a desire for a practical, down-to-earth alternative to highly academic or expensive "bleeding edge" conferences.

  • Global Expansion: Following a successful flagship event in Miami, BSides ICS is expanding globally in 2026 with events planned for Australia, Singapore, Argentina, Mexico City, and Bristol (UK).

  • Miami Flagship Details:

    • Date: February 23, 2026 (Monday before the S4 conference).

    • Location: Miami Dade College, Wolfson Campus.

    • Keynotes: Bryson Bort and Dr. Emma Stewart.

    • Features: Lockpick Village, ICS Village CTF (Capture the Flag), and a focus on diversity (achieving 50% women speakers last year).

The Threat Landscape: State Actors vs. Activists

  • The Hybrid Threat: Mike discusses his research on the alignment of state adversaries (low frequency, high impact) and activists (high frequency, low impact). The concern is a move toward a high-frequency, high-impact threat environment.

  • The "Long Tail" of Utilities: There are 50,000 water utilities in the U.S., and 35,000 of them serve fewer than 500 clients. These "mom and pop" utilities lack the budget for basic IT security, let alone advanced OT monitoring, making them highly vulnerable targets.

  • Lessons from Colonial Pipeline & Jaguar Land Rover: Major incidents have shifted executive mindsets. Jaguar Land Rover's plants were down for five weeks due to fundamental failures in backup and recovery, highlighting that even large companies struggle with security basics.

How to Get Started in OT/ICS

  • Empathy is a Tool: The biggest problem in the field is a lack of empathy between IT and OT teams. Successful security requires understanding the engineer's goal (keeping the plant running) before enforcing security controls.

  • Free Resources: Mike provides over 40 hours of free course content on YouTube, covering OT essentials, OSINT, and pen testing for OT.

Resources Mentioned

  • Mike Holcomb’s Website: mikeholcomb.com (Training, consulting, and course links).

  • BSides ICS Website: bsidesics.org.

  • Standards: IEC 62443 (The global framework for securing OT/ICS).


What is BSides ICS
Mike Holcomb


Cybersecurity Career Panel: Transitioning from Technical to Leadership

Summary:

In this episode, Timothy De Block sits down with a panel of cybersecurity leaders—Chris Anderson, Roger Brotz, and Mike Vetri—to discuss the realities of moving from "boots on the ground" technical roles to senior leadership. The conversation explores the challenges of letting go of the keyboard, the critical importance of emotional intelligence, and why "empathy" is a high-performance tool in a high-stress industry.

Meet the Panel

  • Chris Anderson: Security Consultant and Architect known for his "pot-stirring" approach to solving complex organizational security problems.

  • Roger Brotz: CISO at Arcadia Healthcare with over four decades of experience, starting his journey in 1977.

  • Mike Vetri: Senior Director of Security Operations at Veeva and former Air Force cyber operations officer.

Main Topics & Key Takeaways

The "Passion" to Lead

The panel dives into the true meaning of leadership, noting that the word "passion" stems from the Latin word for "suffering". Leading a cyber team means being willing to suffer through mistakes and high-pressure incidents alongside your team.

Empathy as a Business Metric

Mike shares a pivotal study indicating that leaders who embrace emotional intelligence and empathy often exceed their annual revenue goals by 20%. Conversely, a lack of empathy directly correlates to high burnout and employee turnover.

Learning to Fail Fast

The leaders recount personal failures, from failing to recognize team burnout during 16-hour-a-day incident responses to the "pride" of holding onto technical tasks for too long. They emphasize that failure is not a roadblock but a necessary inflection point for growth.

Bridging the Gap: Technical vs. Business

A major challenge for new leaders is translating "this is bad" into actionable business risk. Leaders must learn to speak the language of the boardroom, focusing on profit protection and risk management rather than just technical vulnerabilities.

Actionable Advice for Aspiring Leaders

  • Set Boundaries Early: Don't let your job intrude on your personal life until it's too late; once you establish a habit of always being available, it’s hard to pull back.

  • Find Your Barometer: Use a spouse or a trusted peer as a "barometer" to tell you when your stress levels are negatively impacting your leadership style.

  • Work-Life Harmony: Move away from the idea of a perfect "50/50 balance" and strive for harmony where your professional and personal lives can coexist.


Cybersecurity Career Panel: Transitioning from Technical to Leadership
Chris Anderson - Roger Brotz - Mike Vetri


What is React2Shell (CVE-2025-55182)?

Summary:

Frank M. Catucci and Timothy De Block dive into a critical, high-impact remote code execution (RCE) vulnerability affecting React Server Components and popular frameworks like Next.js, a flaw widely referred to as React2Shell.

They discuss the severity, the rapid weaponization by botnets and state actors, and the long-term struggle organizations face in patching this class of vulnerability.

The Next Log4j? React2Shell (CVE-2025-55182)

  • Critical Severity: The vulnerability, tracked as CVE-2025-55182 (the related Next.js advisory, CVE-2025-66478, was merged into it), carries a maximum CVSS score of 10.0.

  • The Flaw: The issue is an unauthenticated remote code execution (RCE) vulnerability stemming from insecure deserialization in the React Server Components (RSC) "Flight" protocol. This allows an attacker to execute arbitrary, privileged JavaScript code on the server simply by sending a specially crafted HTTP request.

  • Widespread Impact: The vulnerability affects React 19.x and other popular frameworks that bundle the react-server implementation, most notably Next.js (versions 15.x and 16.x using the App Router). It is exploitable in default configurations.

  • Rapid Weaponization: The speed of weaponization is "off the chain". Within a day of public disclosure, malicious payloads were observed, with activities including:

    • Deployment of Mirai botnets.

    • Installation of cryptomining malware (XMRig).

    • Deployment of various backdoors and reverse shells (e.g., SNOWLIGHT, COMPOOD, PeerBlight).

    • Attacks by China-nexus threat groups (Earth Lamia and Jackpot Panda).

The Long-Term Problem and Defense

  • Vulnerability Management Challenge: The core problem is identifying where these vulnerable components are running in a "ridiculous ecosystem". This is not just a problem for proprietary web apps, but for any IoT devices or camera systems that may be running React.

  • The Shadow of Log4j: Frank notes that the fallout from this vulnerability is expected to be similar to Log4j, requiring multiple iterative patches over time (Log4j required around five versions).

    • Many organizations have not learned their lesson from Log4j.

    • Because the issue can be three or four layers deep in open-source packages, getting a full fix requires a cascade of patches from dependent projects.

  • Mitigation is Complex: Patches should be applied immediately, but organizations must also consider third-party vendors and internal systems.

    • Post-Exploitation: Assume breach. If the vulnerability was exposed, it is a best practice to rotate all secrets, API keys, and credentials that the affected server had access to.

    • WAF as a Band-Aid: A Web Application Firewall (WAF) can be a mitigating control, but blindly installing one over a critical application is ill-advised as it can break essential functionality.

  • The Business Battle: Security teams often face the "age-old kind of battle" of whether to fix a critical vulnerability with a potential break/fix risk or stay open for business. Highly regulated industries, even with a CISA KEV listing, may still slow patching due to mandatory change control and liability for monetary loss if systems go down.

The Supply Chain and DDoS Threat

  • Nation-State & Persistence: State actors like those from China will sit on compromised access for long periods, establishing multiple layers of backdoors and obfuscated persistence mechanisms before an active strike.

  • Botnet Proliferation: The vulnerability is being used to rapidly create new botnets for massive distributed denial-of-service (DDoS) attacks.

    • DDoS attack sizes are reaching terabits per second.

    • DDoS attacks are so large that some security vendors have had to drop clients to protect their remaining customers.

  • Supply Chain Security: The vulnerability highlights the urgent need for investment in Software Bills of Materials (SBOMs) and Application Security Posture Management (ASPM)/Application Security Risk Management (ASRM) solutions.

    • This includes looking beyond web servers to embedded systems, medical devices, and auto software.

    • Legislation is in progress to mandate that vendors cannot ship vulnerable software and to track these components.

Actionable Recommendations

  • Immediate Patching: This is the only definitive mitigation. Upgrade to the patched versions immediately, prioritizing internet-facing services.

  • Visibility Tools: Use tools for SBOMs, ASPM, or ASRM to accurately query your entire ecosystem for affected versions of React and related frameworks.

  • Testing: Run benign proof-of-concept code to test for the vulnerability on your network. Examples include simple commands like whoami. (Note: Always use trusted, non-malicious payloads for internal testing.)

  • Monitor CISA KEV: The vulnerability has been added to the CISA Known Exploited Vulnerabilities (KEV) catalog.

  • Research: Look for IoCs (Indicators of Compromise) and TTPs (Tactics, Techniques, and Procedures) associated with post-exploitation to hunt for pervasive access and backdoors.
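As a rough illustration of querying an ecosystem for affected versions, the sketch below scans an npm `package-lock.json` for React and Next.js packages. The `find_affected` helper and the version prefixes are assumptions based on the versions named above (React 19.x, Next.js 15.x/16.x) — always confirm affected ranges against the official advisory:

```python
import json
from pathlib import Path

# Prefixes taken from the episode's description (React 19.x, Next.js
# 15.x/16.x) -- verify against the official advisory before relying on them.
AFFECTED_PREFIXES = {"react": ("19.",), "next": ("15.", "16.")}

def find_affected(lockfile_path):
    """Scan an npm v7+ package-lock.json for installed packages whose
    version starts with a potentially affected major-version prefix."""
    lock = json.loads(Path(lockfile_path).read_text())
    hits = []
    # npm v7+ lockfiles list every installed package under "packages",
    # keyed by its node_modules path (the root package uses the key "").
    for pkg_path, meta in lock.get("packages", {}).items():
        name = pkg_path.rsplit("node_modules/", 1)[-1]
        version = meta.get("version", "")
        for prefix in AFFECTED_PREFIXES.get(name, ()):
            if version.startswith(prefix):
                hits.append((name, version))
    return hits
```

Running this against each repository's lockfile gives a quick first-pass inventory; an SBOM/ASPM tool does the same job across the whole estate, including transitive and vendored copies that a per-repo scan will miss.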

Resources

China-nexus cyber threat groups rapidly exploit React2Shell ... - AWS, accessed December 12, 2025, https://aws.amazon.com/blogs/security/china-nexus-cyber-threat-groups-rapidly-exploit-react2shell-vulnerability-cve-2025-55182/


What is React2Shell (CVE-2025-55182)
Frank Catucci


[RERELEASE] What is application security?

In this tenacious edition of the Exploring Information Security podcast, I talk with Frank Catucci of Qualys as we answer the question: "What is application security?"

Frank (@en0fmc) has a lot of experience with application security. His current role is the director for web application security and product management at Qualys. He's also the chapter leader for OWASP Columbia, SC. He lives and breathes application security.

In this episode we discuss:

  • What is application security?

  • Why is application security important?

  • Where application security should be integrated

  • Resources for getting into application security

What is application security?
With Frank Catucci

The Final Frontier of Security: The State of Space Security with Tim Fowler

Summary:

Timothy De Block and Tim Fowler, CEO and founder of Ethos Labs LLC, strap in to discuss the critical, rapidly escalating threats in space security. Tim explains that space is now an extension of the internet, where security has historically been ignored due to "organizational inertia" and a perceived "veil of obscurity". The discussion covers the real-world impact of GPS timing disruption on terrestrial infrastructure (like power grids and financial systems), the danger of unencrypted space communications, and the urgent need for a holistic security approach that integrates security testers directly with development teams. They conclude with a debate on the role of AI in anomaly detection versus critical human decision-making in space.

The State of Space Security and Major Threats

  • Security is a Low Priority: Historically, security was not a priority for systems in space, often operating under a "veil of obscurity". This is slowly changing, with an uptick in security engineering roles this year, moving beyond just GRC/cyber assurance.

  • Unencrypted Communications: A core challenge is the widespread use of unencrypted signals between bases and satellites, which can be easily intercepted and read. Tim estimates that less than 50% of signals are encrypted due to operational challenges.

  • Encryption is Not Enough: Encryption only addresses confidentiality. An encrypted signal can still be captured and replayed, and the satellite may process it if integrity is not addressed.

  • The Ground Segment Threat: Even encrypted space communications can be nullified if the ground network is compromised (e.g., stealing a FIPS-compliant encryption module), necessitating a holistic security approach.

  • Repeating History: Space security is currently experiencing a situation analogous to the internet's early days (ARPANET) or the ICS/OT SCADA world 12-15 years ago, focusing on getting things operational before securing them.

Real-World Impact on Terrestrial Life

  • GPS Timing is Critical: Critical infrastructure—including pipelines, power grids, and financial systems—all rely on GPS timing for synchronization.

  • Disruption Affects Everyone: Disrupting GPS timing can cause widespread outages. Examples include:

    • The London Stock Exchange going down in 2012 due to a localized GPS jamming attack that wasn't even targeting them.

    • A US Navy testing incident that caused widespread outages in San Diego, affecting ATMs and pharmacies for days.

  • Space is the New Internet: Partnerships like T-Mobile's direct-to-cell with Starlink demonstrate that space is becoming an extension of the internet, increasing connectivity but also the attack surface.

Strategy and Getting Involved

  • Integrating Security: On the operations-to-security spectrum, the best way to move decisions closer to security is to physically place security testers (like penetration testers) directly within development teams (DevSecOps).

  • Train Developers to Attack: A highly effective proactive security measure is to teach developers how to attack their own software; they magically stop writing vulnerable code.

  • Space is a Culmination of Niches: Space security is the culmination of all security specializations (cloud, network, web application, ICS/OT, physical security). There is a place and a need for experts from every niche.

  • Resources for Getting Started:

    • Check local security conferences for the Aerospace Village (a non-profit that hosts hands-on labs).

    • Read books like Space Cyber Security by Dr. Jacob Oakley.

    • Attend specialized conferences like Hackspace Con.

    • "Just Google it": Use your existing security expertise (e.g., "cloud security") and research how it applies to the space industry.

AI in Space: Augmentation vs. Autonomy

  • Anomaly Detection is Ideal: AI (machine learning) is tailor-made for high-speed computation and sensor analysis, making it excellent for anomaly detection in early warning systems.

  • The Human Decision-Maker: Tim Fowler insists that human involvement is essential for critical decision-making and validating AI output (to determine if an alert is a false positive). He argues that an autonomous AI decision in space could quickly escalate into a hostile international incident.

  • Scalability Debate: Timothy De Block questioned the scalability of relying on humans for every decision, using traffic light management as an example of where AI could safely and efficiently augment processes. Both agreed AI should handle "busy work" and augment human capabilities, not perform autonomous functions in sensitive situations.

Ethos Labs Links and Resources:

Ethos Labs Website

Connect with Tim Fowler on LinkedIn


The Final Frontier of Security: The State of Space Security 2025
Tim Fowler