Almost Half of Companies Leave Cybersecurity Teams Out of AI Development, Onboarding, and Implementation

The Role of Cybersecurity Professionals in AI Policy Development: A Call to Action

In an era where artificial intelligence (AI) is rapidly transforming industries, the intersection of AI and cybersecurity has become a focal point of concern and opportunity. However, a recent survey conducted by ISACA, a global professional association dedicated to advancing trust in technology, reveals a troubling trend: only 35 percent of cybersecurity professionals or teams are involved in the development of policies governing the use of AI technology within their enterprises. Alarmingly, nearly half (45 percent) report having no involvement in the development, onboarding, or implementation of AI solutions. This gap highlights a critical need for cybersecurity professionals to engage more actively in AI policy discussions and implementations.

The Current Landscape of AI in Cybersecurity

The 2024 State of Cybersecurity survey, sponsored by Adobe, gathered insights from over 1,800 cybersecurity professionals regarding their experiences and perspectives on the evolving threat landscape. The findings indicate that while security teams are increasingly utilizing AI for various applications—such as automating threat detection and response (28 percent), enhancing endpoint security (27 percent), automating routine security tasks (24 percent), and fraud detection (13 percent)—the lack of involvement in policy development raises significant concerns.

Jon Brandt, ISACA’s Director of Professional Practices and Innovation, emphasizes the importance of integrating cybersecurity expertise into AI solutions. He states, “In light of cybersecurity staffing issues and increased stress among professionals in the face of a complex threat landscape, AI’s potential to automate and streamline certain tasks and lighten workloads is certainly worth exploring. But cybersecurity leaders cannot singularly focus on AI’s role in security operations.” This statement underscores the necessity for cybersecurity teams to be included in the broader conversation surrounding AI implementation.

The Need for Cybersecurity Involvement in AI Policy Development

The absence of cybersecurity professionals in AI policy development can lead to significant vulnerabilities. As organizations adopt AI technologies, they must ensure that these systems are secure, ethical, and compliant with regulations. Cybersecurity teams possess the expertise to identify potential risks associated with AI, such as data privacy concerns, algorithmic bias, and adversarial attacks. By involving these professionals in the policy-making process, organizations can create robust frameworks that mitigate risks while maximizing the benefits of AI.

Moreover, as AI technologies evolve, so too do the threats they face. Cybersecurity teams must stay ahead of these threats by actively participating in the development and implementation of AI solutions. This proactive approach not only enhances security but also fosters a culture of collaboration between AI developers and cybersecurity experts.

ISACA’s Initiatives to Bridge the Gap

Recognizing the urgent need for cybersecurity professionals to engage with AI, ISACA has developed a range of resources aimed at helping organizations navigate this complex landscape. Among these initiatives is the EU AI Act white paper, which outlines the requirements and timelines for AI systems used within the European Union. As organizations prepare for compliance with the EU AI Act, ISACA recommends key steps, including instituting audits, adapting existing cybersecurity policies, and designating an AI lead to oversee AI tools and strategies.

Additionally, ISACA has released resources focusing on the implications of AI in authentication systems, particularly in the context of deepfakes. Their white paper, Examining Authentication in the Deepfake Era, highlights both the advantages and risks of AI-driven adaptive authentication, urging cybersecurity professionals to remain vigilant against potential vulnerabilities.

Developing a Comprehensive AI Policy

For organizations looking to implement a generative AI policy, ISACA provides a set of guiding questions to ensure comprehensive coverage. These questions address areas such as who is affected by the policy's scope, what constitutes acceptable use, and how the policy aligns with legal and regulatory requirements. By addressing these critical areas, organizations can create a balanced approach to AI that prioritizes security and ethical considerations.
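As a loose illustration of how such guiding questions might be operationalized, the sketch below checks a draft policy for coverage of a few areas like those ISACA's questions raise. The area names and keywords are illustrative assumptions, not ISACA's official checklist, and a real review would of course involve human judgment rather than keyword matching.

```python
# Hypothetical sketch: flag policy areas a draft generative-AI policy
# does not yet address. Area names and keywords are assumptions for
# illustration only, not ISACA's published guidance.

REQUIRED_AREAS = {
    "scope": ["scope", "applies to", "covered"],
    "acceptable_use": ["acceptable use", "permitted", "prohibited"],
    "legal_compliance": ["compliance", "regulation", "legal"],
}

def coverage_gaps(policy_text: str) -> list[str]:
    """Return the policy areas with no matching keyword in the draft."""
    text = policy_text.lower()
    return [
        area
        for area, keywords in REQUIRED_AREAS.items()
        if not any(kw in text for kw in keywords)
    ]

draft = """
This policy applies to all employees using generative AI tools.
Permitted uses include drafting and summarization; sharing customer
data with external AI services is prohibited.
"""

print(coverage_gaps(draft))  # → ['legal_compliance']
```

Here the draft covers scope and acceptable use but says nothing about legal or regulatory compliance, so that area is flagged for the policy author to address.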

Education and Credentialing for Cybersecurity Professionals

To further support cybersecurity professionals in adapting to the evolving landscape, ISACA has expanded its education and credentialing options. Their latest on-demand AI course, Machine Learning: Neural Networks, Deep Learning, and Large Language Models, offers insights into the technical aspects of AI and its applications in cybersecurity. Additionally, the upcoming Certified Cybersecurity Operations Analyst certification, set to launch in Q1 2025, will equip professionals with the skills necessary to evaluate threats and recommend countermeasures in an increasingly automated environment.

Conclusion: A Call to Action

As AI continues to reshape the cybersecurity landscape, the need for cybersecurity professionals to engage in policy development and implementation has never been more critical. By actively participating in these discussions, cybersecurity teams can help ensure that AI technologies are deployed securely and ethically, safeguarding organizations against emerging threats.

For those interested in exploring ISACA’s resources and gaining insights into the intersection of AI and cybersecurity, a complimentary copy of the 2024 State of Cybersecurity survey report is available at ISACA’s website. As the digital landscape evolves, it is imperative that cybersecurity professionals step up and take an active role in shaping the future of AI within their organizations.
