How AI Will Transform Society and Affect the Cybersecurity Field

Summary:

Timothy De Block sits down with Ed Gaudet, CEO of Censinet and a fellow podcaster, for a wide-ranging conversation on the rapid, transformative impact of Artificial Intelligence (AI). Ed Gaudet characterizes AI as a fast-moving "hammer" that will drastically increase productivity and reshape the job market, potentially eliminating junior software development roles. The discussion also covers the societal risks of AI, the dangerous draw of "digital cocaine" (social media), and Censinet's essential role in managing complex cyber and supply chain risks for healthcare organizations.

Key Takeaways

AI's Transformative & Disruptive Force

  • A Rapid Wave: Ed Gaudet describes the adoption of AI, particularly chat functionalities, as a rapid, transformative wave, surpassing the speed of the internet and cloud adoption due to its instant accessibility.

  • Productivity Gains: AI promises immense productivity, with the potential for tasks requiring 100 people and a year to be completed by just three people in a month.

  • The Job Market Shift: AI is expected to eliminate junior software development roles by abstracting complexity. This raises concerns about a future developer shortage as senior architects retire without an adequate pipeline of talent.

  • Adaptation, Not Doom: While acknowledging significant risks, Ed Gaudet maintains that humanity will adapt to AI as a tool—a "hammer"—that will enhance cognitive capacity and productivity, rather than making people "dumber".

  • The Double-Edged Sword: Concerns exist over the nefarious uses of AI, such as deepfakes being used for fraudulent job applications, underscoring the ongoing struggle between good and evil in technology.

Cyber Risk in Healthcare and Patient Safety

  • Cyber Safety is Patient Safety: Due to technology's deep integration into healthcare processes, cyber safety is now directly linked to patient safety.

  • Real-World Consequences: Examples of cyber attacks resulting in canceled procedures and diverted ambulances illustrate the tangible threat to human life.

  • Censinet's Role: Censinet helps healthcare systems manage third-party, enterprise cyber, and supply chain risks at scale, focusing on proactively addressing future threats rather than past ones.

  • Patient Advocacy: AI concierge services have the potential to boost patient engagement, enabling individuals to become stronger advocates for their own health through accessible second opinions.

Technology's Impact on Mental Health & Life

  • "Digital Cocaine": Ed Gaudet likened excessive phone and social media use, particularly among younger generations, to "digital cocaine"—offering short-term highs but lacking nutritional value and promoting technological dependence.

  • Life-Changing Tools: Ed Gaudet shared a powerful personal story of overcoming alcoholism with the help of the Reframe app, emphasizing that the right technology, used responsibly, can have a profound, life-changing impact on solving mental health issues.

Resources & Links Mentioned

  • Censinet: Ed Gaudet's company, specializing in third-party and enterprise risk management for healthcare.

  • Reframe App: An application Ed Gaudet used for his personal journey of recovery from alcoholism, highlighting the power of technology for mental health.

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


Exploring AI, APIs, and the Social Engineering of LLMs

Summary:

Timothy De Block is joined by Keith Hoodlet, Engineering Director at Trail of Bits, for a fascinating, in-depth look at AI red teaming and the security challenges posed by Large Language Models (LLMs). They discuss how prompt injection is effectively a new form of social engineering against machines, exploiting the training data's inherent human biases and logical flaws. Keith breaks down the mechanics of LLM inference, the rise of middleware for AI security, and cutting-edge attacks using everything from emojis and bad grammar to weaponized image scaling. The episode stresses that the fundamental solutions—logging, monitoring, and robust security design—are simply timeless principles being applied to a terrifyingly fast-moving frontier.

Key Takeaways

The Prompt Injection Threat

  • Social Engineering the AI: Prompt injection works by exploiting the LLM's vast training data, which includes all of human history in digital format, including movies and fiction. Attackers use techniques that mirror social engineering to trick the model into doing something it's not supposed to, such as a customer service chatbot issuing an unauthorized refund.

  • Business Logic Flaws: Successful prompt injections are often tied to business logic flaws or a lack of proper checks and guardrails, similar to vulnerabilities seen in traditional applications and APIs.

  • Novel Attack Vectors: Attackers are finding creative ways to bypass guardrails:

    • Image Scaling: Trail of Bits discovered how to weaponize image scaling to hide prompt injections within images that appear benign to the user, but which pop out as visible text to the model when downscaled for inference.

    • Invisible Text: Attacks can use white text, zero-width characters (which don't show up when displayed or highlighted), or Unicode character smuggling in emails or prompts to covertly inject instructions.

    • Syntax & Emojis: Research has shown that bad grammar, run-on sentences, or even a simple sequence of emojis can successfully trigger prompt injections or jailbreaks.

Defense and Design

  • LLM Security is API Security: Since LLMs rely on APIs for their "tool access" and to perform actions (like sending an email or issuing a refund), security comes down to the same principles used for APIs: proper authorization, access control, and eliminating misconfiguration.

  • The Middleware Layer: Some companies are using middleware that sits between their application and the Frontier LLMs (like GPT or Claude) to handle system prompting, guard-railing, and filtering prompts, effectively acting as a Web Application Firewall (WAF) for LLM API calls.

  • Security Design Patterns: To defend against prompt injection, security design patterns are key:

    • Action-Selector Pattern: Instead of a text field, users click on pre-defined buttons that limit the model to a very specific set of safe actions.

    • Code-Then-Execute Pattern (CaMeL): The first LLM is used to write code (e.g., Pythonic code) based on the natural language prompt, and a second, quarantined LLM executes that safer code.

    • Map-Reduce Pattern: The prompt is broken into smaller chunks, processed, and then passed to another model, making it harder for a prompt injection to be maintained across the process.

  • Timeless Hygiene: The most critical defenses are logging, monitoring, and alerting. You must log prompts and outputs and monitor for abnormal behavior, such as a user suddenly querying a database thousands of times a minute or asking a chatbot to write Python code.

Resources & Links Mentioned

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


Exploring the Rogue AI Agent Threat with Sam Chehab

Summary:

In a unique live recording, Timothy De Block is joined by Sam Chehab from Postman to tackle the intersection of AI and API security. The conversation goes beyond the hype of AI-created malware to focus on a more subtle, yet pervasive threat: "rogue AI agents." The speakers define these as sanctioned AI tools that, when misconfigured or given improper permissions, can cause significant havoc by misbehaving and exposing sensitive data. The episode emphasizes that this risk is not new, but an exacerbation of classic hygiene problems.

Key Takeaways

  • Defining "Rogue AI Agents": Sam Chehab defines a "rogue AI agent" as a sanctioned AI tool that misbehaves due to misconfiguration, often exposing data it shouldn't have access to. He likens it to an enterprise search tool in the early 2000s that crawled an intranet and surfaced things it wasn't supposed to.

  • The AI-API Connection: An AI agent is comprised of six components, and the "tool" component is where it interacts with APIs. The speakers note that the AI's APIs are its "arms and legs" and are often where it gets into trouble.

  • The Importance of Security Hygiene: The core of the solution is to "go back to basics" with good hygiene. This includes building APIs with an open API spec, enforcing schemas, and ensuring single-purpose logins for integrations to improve traceability.

  • The Rise of the "Citizen Developer": The conversation highlights a new security vector: non-developers, or "citizen developers," in departments like HR and finance building their own agents using enterprise tools. These individuals often lack security fundamentals, and their workflows are a "ripe area for risk".

  • AI's Role in Development: Sam and Timothy discuss how AI can augment a developer's capabilities, but a human is still needed in the process. The report from Veracode notes that AI-generated code is only secure about 45% of the time, which is about on par with human-written code. The best approach is to use AI to fix specific lines of code in pre-commit, rather than having it write entire applications.

Resources & Links Mentioned

Support the Podcast:

Enjoyed this episode? Leave us a review and share it with your network! Subscribe for more insightful discussions on information security and privacy.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn][YouTube]


How Artificial Intelligence is impacting Cybersecurity with Steve Orrin

Summary:

In this engaging episode, Timothy De Block speaks with Steve Orrin Federal CTO at Intel about the intersection of artificial intelligence and cybersecurity. The conversation delves into the challenges and opportunities that AI presents in the cybersecurity landscape, exploring topics such as deep fakes, disinformation, and the implementation of AI in security practices.

Key Discussion Points:

  1. AI in Cybersecurity:

    • The rise of AI in both defensive and offensive cybersecurity strategies.

    • How AI is being used to enhance security measures and identify threats.

  2. Deep Fakes and Disinformation:

    • The challenges posed by deep fakes in the current digital landscape.

    • Techniques to detect and counteract deep fakes.

    • The implications of deep fake technology on public opinion and security.

  3. Practical AI Applications:

    • Real-world examples of AI in action within cybersecurity frameworks.

    • The role of AI in threat detection and response.

    • Implementing AI to automate routine security tasks, freeing up human resources for more complex issues.

  4. Policy and Ethical Considerations:

    • The importance of developing policies for the responsible use of AI.

    • Ethical considerations in deploying AI for cybersecurity purposes.

    • Balancing innovation with security in AI development.

  5. Future of AI and Cybersecurity:

    • Upcoming trends in AI and their potential impact on cybersecurity.

    • The evolving nature of cyber threats and how AI can adapt to these changes.

    • The need for continuous learning and adaptation in the face of rapidly advancing technology.

Resources Mentioned:

Contact Information:

Leave a comment below or reach out via the contact form on the site, email [timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn]


What are Deepfakes with Dr. Donnie Wendt

Summary:

In this enlightening episode of the Exploring Information Security podcast, we dive deep into the world of deepfakes with Dr. Donnie Wendt. With a background in cybersecurity at MasterCard, Dr. Wendt shares his journey into the exploration of deepfake technology, from setting up a home lab using open-source tools to presenting the potential business impacts of deepfakes to leadership teams.

Key Discussions:

  • What are Deepfakes? Dr. Wendt explains the basics of deepfakes, a technology that uses machine learning to superimpose someone's likeness onto another person, creating realistic fake videos or audio recordings. Initially used for nefarious purposes, the technology has found applications in politics, social engineering, and entertainment.

  • Creating Deepfakes: Discover how Dr. Wendt utilized open-source tools and a good Nvidia video card to experiment with deepfake creation, including making Nicholas Cage a regular "guest" in security briefings at MasterCard.

  • The Threat Landscape: Dr. Wendt discusses the use of deepfakes in political manipulation and fraud, highlighting recent instances where deepfakes have influenced elections and scammed individuals and businesses out of large sums of money.

  • Detection and Prevention: The conversation touches on the challenges of distinguishing deepfakes from real footage, emphasizing the importance of skepticism, critical thinking, and verification processes to combat misinformation.

  • Positive Applications: Despite their potential for misuse, deepfakes also have beneficial uses, such as giving voice back to ALS patients, recreating historical speeches, and aiding medical diagnosis. Dr. Wendt stresses the importance of recognizing the technology's positive impact alongside its threats.

Episode Highlights:

  • Dr. Wendt's firsthand experience with creating deepfakes and the technical requirements for doing so.

  • Insight into the evolving capabilities of deepfake technology and the cat-and-mouse game between creators and detectors.

  • The significance of robust verification processes within organizations to safeguard against deepfake-related fraud.

Resources Mentioned:

  • Faceswap.dev: An open-source tool for experimenting with different deepfake creation algorithms.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email [timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn]


ShowMeCon: How AI will impact Cybersecurity Enhancements and Threats with Jayson E. Street

Summary:

Jayson E. Street

In this engaging episode Jayson E. Street, a renowned cybersecurity expert, joins me to discuss the return of ShowMeCon, the impact of AI in cybersecurity, and innovative strategies for enhancing security and combating threats. Jayson shares his excitement for ShowMeCon, insights on utilizing AI for security enhancements rather than traditional attacks, and offers practical advice for users, executives, and information security professionals.

This podcast sponsored by ShowMeCon.

Episode Highlights:

  • ShowMeCons return

  • Utilizing AI in Cybersecurity

  • Creative Use of AI for Security

  • Practical Security Tips Across the Board

  • The Future of AI in Security

Guest Information:

Jayson E. Street referred to in the past as: A "notorious hacker" by FOX25 Boston, "World Class Hacker" by National Geographic Breakthrough Series and described as a "paunchy hacker" by Rolling Stone Magazine.

He however prefers if people refer to him simply as a Hacker, Helper & Human.

Contact Information:

Leave a comment below or reach out via the contact form on the site, email [timothy.deblock[@]exploresec[.]com, or reach out on LinkedIn.

Check out our services page and reach out if you see any services that fit your needs.

Social Media Links:

[RSS Feed] [iTunes] [LinkedIn]