Mon | Aug 7, 2023 | 4:30 AM PDT

This is Part 1 of a three-part series tackling the topic of generative AI tools. This first installment is "Safeguarding Ethical Development in ChatGPT and Other LLMs through a Comprehensive Approach: Integrating Security, Psychological Considerations, and Governance."

In the realm of generative AI tools, such as Language Learning Models (LLMs), it is essential to take a comprehensive approach toward the development and deployment. Three key elements require our attention: security measures, psychological considerations, and governance strategies. By carefully examining the dynamic interactions among these elements, we can highlight the significance of a comprehensive approach that integrates resilient security measures and fosters ethical behavior, user trust, and risk mitigation.

Let's discuss the first key element, Security Measures.

While AI's LLMs have proven invaluable in augmenting productivity, research, and data analysis, technologists must recognize security standards as an unwavering prerequisite for the survival and success of any new technology. As the guardians and stewards of these cutting-edge innovations, it falls upon us to uphold the integrity of our creations in a world where threats are inherent. In a perpetual race where if attackers consistently stay ahead, security will find itself five or more steps behind, it is imperative to fortify our defenses.

Consider a scenario where prompt engineering abuse, specifically the introduction of DAN 13.5 (prompt injection), poses a significant threat to the generative AI system's security. DAN 13.5 is a roleplaying model for ChatGPT that allows the AI to assume different roles. Imagine a sophisticated attacker who cunningly injects malicious prompts into an LLM to manipulate its output and deceive unsuspecting users. Hostile threat actors assume the role of a medical provider, financial institution, or other legitimate supplier (impersonation). This subtle but insidious form of attack can lead to the theft of data, IP (Intellectual Property), funds, and dissemination of false information, all of which compromise the credibility of the technology and erodes user trust. To combat such threats effectively, we must adopt a proactive approach to security, constantly anticipating and countering potential vulnerabilities through continuous monitoring and robust safeguards.

Developers of the "new" (who integrate security) versus "old" (security has less consideration) as a development philosophy, should be on autopilot as it pertains to security measures, where security-by-design is a natural flow of the initial phase of development. Why should AI get a pass on S (Secure) SDLC methodologies? Despite the active contributions of SDLC methodologies over the past 20 years—such as Waterfall, Agile, V-shaped, Spiral, Big Bang, and others—there remains a lack of security-by-design for integration into AI developments such as ChatGPT, DALL-E, and Google's Bard.

Let's face it, as the world continues to become mesmerized by generative AI tools, security is not a top priority. Heck, nor was it a priority during the fascination of the first smartwatch.

People/consumers drive development, not developers. As technologists, we are not only aware of but understand this basic concept. If security were being prioritized, the integrated process of allowing "security" requirements to be collected with the "functional" requirements (i.e., risk analysis during design, static code review, and testing in parallel) would be consistent throughout every stage of the development and deployment lifecycle. Adopting a security-by-design approach helps to embed robust security measures into the very fabric of technology, fortifying it against threats and vulnerabilities.

Bob Janssen, Vice President and Global Head of Innovation at Delina, wrote an article for CPO Magazine in May 2023, stating: "Open AI has a free-to-use Moderation of API that can help reduce the frequency of unsafe content in completions. Alternatively, you may wish to develop a custom content filtration system tailored to specific use cases."

He further describes how Chat GPT is a third-party service provider and should be strictly governed and managed as other third-party APIs. He wrote: "If the API is not properly secure, it can be vulnerable to misuse and abuse by attackers who can use the API to launch attacks against the enterprise's systems or to harvest sensitive data. Organizations should ensure that they have appropriate measures in place to protect the API from misuse and abuse."

As technology continues to evolve, attackers are also upping their game and finding loopholes and gaps in security systems and solutions, including in generative AI tools. Fact: this has previously occurred and resulted in attackers gaining access to secure data, posing serious threats to organizations.

Drew Todd of SecureWorld News wrote an article that revealed a shocking discovery made by cybersecurity firm Group-IB, which unveiled a major security breach that exposed over 100,000 ChatGPT accounts. Todd wrote: "The company's Threat Intelligence platform detected over 100,000 compromised devices with saved ChatGPT credentials traded on illicit Dark Web marketplaces. According to Group-IB, these compromised accounts pose a serious risk to businesses, especially in the Asia-Pacific region, which has experienced the highest concentration of ChatGPT credentials for sale."

Though the discovery was not U.S.-based, it doesn't lessen the seriousness of the security risks organizations and individuals (consumers) can potentially face. If organizations want to utilize generative AI tools (i.e., ChatGPT, DALL-E, Bard, etc.) to improve their operations, then the focal lens must stay on security.

I agree with Janssen's philosophy; organizations must place security perimeters on these tools and treat them as any other API-integrated solution in their organization.

By doing so, it highlights the consumer's commitment to their own governance of TPRM (Third-Party Risk Management) security measures. Further, in Todd's article, he points out a recommendation by Dmitry Shestakov, Head of Threat Intelligence at Group-IB, emphasizing the importance of implementing two-factor authentication (2FA), enforcing password change rules, and maintaining continuous prioritization of all sensitive data.

Implementing practical security measures for AI-integrated solutions may seem elementary, but prioritizing governance and risk mitigation is essential for fostering a security mindset. Can organizations enable their workforce to utilize AI tools to support daily workflows, drive innovation, and boost productivity, all while safeguarding critical intellectual property, sensitive data, and digital assets? I firmly believe they can. As generative AI tools have the potential to revolutionize the creation of information and products, the adoption of a security mindset backed by decades of cybersecurity has already taken root.

By prioritizing security considerations, embracing a proactive security mindset, and fostering collaboration within the technology community, we can strengthen our technological advancements and confidently navigate the future, ensuring a secure and resilient digital landscape. Here are some practical security measures that should be considered.

  • Develop a roadmap
    Outline the key risk of LLM use and identify the areas your company can control (i.e., include TechOps and Security Team).
  • Vet LLM systems
    Treat generative AI systems as another API and incorporate them into the Third-Party Risk Management (TPRM) process with an emphasis on DLM (Data Life Cycle Management) and Privacy.
  • User restrictions
    Create clear access control and authorization protocols. Consider developing supportive LLM guidelines with policies.
  • Promote LLM awareness
    Education for users (i.e., end-users, technical teams, marketing/sales, etc.) on prompt engineering techniques and potential attacks (i.e., text deepfake, spear phishing, etc.).
  • Frequent security testing
    Due diligence to include assessing and ensuring data encryption both at rest and in transit, "real-time" threat monitoring and intrusion detection, pen testing, and regular security audits to assess vulnerabilities in the LLM infrastructure and applications.

Note: OWASP recently published a report on the Top 10 for Large Language Model Applications, which might provide further understanding for securing and safeguarding your organization's environment effectively.

In this ever-changing landscape, where technology evolves at NASCAR speed, our commitment to security resilience becomes even more critical. By staying vigilant and integrating comprehensive security measures, we can mitigate the risks associated with prompt engineering abuse and safeguard the true potential of generative AI tools. As we continue to march forward as technologists, let us uphold the paramount importance of security-by-design to create a future where innovation thrives in harmony with trust, integrity, and responsible deployment of AI technologies.

~~~

In the next installment, we will explore the intricate interactions between security and the human element by diving into the complexities of psychological considerations. These include aspects such as user trust, ethical behavior, privacy, biases in LLM programming, and more.

About the author: Kimberly "KJ" Haywood, Principal Advisor, Nomad Cyber Concepts; Adjunct Cybersecurity Professor, Collin College, will be part of the opening keynote panel at SecureWorld Dallas on October 26, on the topic of "Implications of ChatGPT and Other Similar AI Tools." She will be joined by Shawn Tuma, Co-Chair, Data Privacy & Cybersecurity Practice, Spencer Fane, LLP; Larry Yarrell, II, Chief Development Officer, Diversity & Inclusion Program Development, The Marcus Graham Project; and moderator Michael Anderson, Deputy CTO & CISO, Dallas Independent School District.

Comments