Generative AI, a boon for organizations despite the risk


Generative AI is too beneficial to abandon despite the threats it poses to organizations, according to experts speaking at the ISC2 Security Conference 2023.

During a session at the event, LBMC Director Kyle Hinterburg and LBMC Senior Director Brian Willis emphasized that while criminals will use generative AI tools, and while those tools carry data security and privacy risks, the same is true of technologies we use every day, such as email and ATMs.

Hinterburg emphasized that these tools are not sentient beings, but tools built and used by humans.

This is a message shared by Jon France, CISO at ISC2, speaking to Information Security Review. “Is AI good or bad? Actually, it’s yes to both, and it’s not AI’s fault, it’s the way we as humans use it,” he noted.

How Generative AI Can Improve Security

Hinterburg and Willis outlined the different ways generative AI can be used by cybersecurity teams:

1. Documentation. Willis noted that documentation is “the fundamental part of building a good security program,” but it is a task that security professionals typically dread. Generative AI can help draft policies and procedures in areas such as incident response faster and more accurately, ensuring that no compliance requirements or best practices are missed.

2. System configuration tips. Organizations often fail to configure their systems correctly, and the resulting misconfigurations pose a major threat. Generative AI can alleviate this problem by providing the prompts and commands needed to configure areas such as logging, password settings, and encryption correctly. Willis emphasized: “By leveraging AI, you can ensure that you are using good configuration standards that suit your organization.”
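Willis's point about configuration standards pairs naturally with a mechanical check. The sketch below is a minimal illustration of that idea: once a baseline of standards has been drafted (with or without an AI assistant's help), a script can flag any system settings that deviate from it. The setting names and baseline values here are hypothetical examples, not recommendations.

```python
# Illustrative sketch: compare a system's settings against a baseline of
# configuration standards. All names and values are hypothetical.
BASELINE = {
    "password_min_length": 14,   # assumed standard, not a recommendation
    "log_retention_days": 90,
    "tls_min_version": "1.2",
}

def find_deviations(config: dict) -> dict:
    """Return settings that are missing or differ from the baseline."""
    return {
        key: {"expected": expected, "actual": config.get(key)}
        for key, expected in BASELINE.items()
        if config.get(key) != expected
    }

system_config = {"password_min_length": 8, "log_retention_days": 90}
for key, diff in find_deviations(system_config).items():
    print(key, diff)
```

In practice the baseline would come from the organization's own hardening standards; the value of the pattern is that a human-readable standard becomes something that can be audited repeatedly.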

3. Scripting and coding. There are many different coding languages, such as PowerShell, Python, and HTML. For security professionals who aren’t proficient in a particular one, tools like ChatGPT can quickly suggest the code or script they need, Hinterburg said, rather than leaving them to do laborious research online on their own.

4. Facilitating processes. Another area where generative AI can improve the performance of security teams is helping them manage tasks across a conversation flow, beyond a single prompt. Hinterburg gave the example of an incident response simulation exercise, which generative AI tools can facilitate by providing scenarios and options to choose from, then continuing from there.

5. Developing private generative AI tools. Willis said many organizations are now creating their own private generative AI tools, based on publicly available technologies but trained specifically on internal data. These can be used to quickly access and summarize documents such as meeting notes, contracts, and internal policies. These tools are also more secure than public tools because they are hosted in the organization’s own environment.

How to mitigate AI risks

Hinterburg and Willis also outlined three major threats related to generative AI tools, along with ways to mitigate them:

1. Unreliable results. Tools like ChatGPT are trained on data from the Internet and are therefore prone to errors, such as “hallucinations.” To overcome these issues, Willis advised taking steps such as running the same query through multiple AI tools and comparing the results. Additionally, humans should avoid relying too heavily on these tools, recognizing their weaknesses in areas such as bias and error. “We should always want to use our own minds to do things,” he stressed.
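Willis's cross-checking advice can be sketched in a few lines: collect answers to the same query from several tools, then flag any disagreement for human review. This is a minimal illustration under assumed inputs; the tool names and answers below are placeholders, not real API responses.

```python
# Minimal sketch of cross-checking answers from several AI tools.
# The tool names and answer strings are hypothetical placeholders.
from collections import Counter

def cross_check(answers: dict) -> tuple:
    """Return the most common answer and whether all tools agreed."""
    normalized = {tool: ans.strip().lower() for tool, ans in answers.items()}
    counts = Counter(normalized.values())
    majority, _ = counts.most_common(1)[0]
    unanimous = len(counts) == 1
    return majority, unanimous

answers = {
    "tool_a": "TLS 1.2",
    "tool_b": "TLS 1.2",
    "tool_c": "SSL 3.0",   # an outlier worth a human look
}
majority, unanimous = cross_check(answers)
print(majority, unanimous)  # -> tls 1.2 False
```

Disagreement between tools does not prove the majority is right; as Willis notes, the final judgment should still be a human one.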

2. Disclosure of sensitive material. There have been cases where organizations’ sensitive data has been accidentally exposed through generative AI tools. OpenAI also revealed a data breach in ChatGPT itself in March 2023, which may have exposed some customers’ payment information. Because of these breach risks, Hinterburg advised organizations not to enter sensitive data into these tools, including email conversations. He noted that there are tools available that can pre-process data, helping organizations determine what is safe to enter into generative AI tools.
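The pre-processing idea Hinterburg mentions can be as simple as redacting obvious sensitive patterns before text leaves the organization. The sketch below strips email addresses and card-like numbers from a prompt; the regexes are illustrative assumptions, covering far less than a real data-loss-prevention product would.

```python
# Hedged sketch of pre-processing a prompt before sending it to a
# generative AI tool. The patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),  # 13-16 digit runs
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: contact alice@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
# -> Summarize: contact [EMAIL REDACTED], card [CARD REDACTED].
```

A redaction pass like this is a safety net, not a policy: the simpler rule, as Hinterburg advises, is to keep sensitive data out of these tools in the first place.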

3. Copyright issues. Willis warned that using AI-generated content for commercial purposes can lead to copyright issues and plagiarism. He said it is essential that organizations understand the legal aspects of generating content in this way, such as the rights to AI-generated content, and that they keep records of the AI-generated material used for these purposes.

In conclusion, Hinterburg said the risks of generative AI are “things we need to be aware of,” but the benefits are too great to simply stop using these tools.
