Artificial intelligence (AI) is on everyone’s mind: Autopilot for Teslas, deepfakes for fun or fake news, DALL·E 2 for the untalented artist, big data and machine learning for data analysis. And CERN is no exception. While all those AI applications come with pros and cons, with benefits and risks, AI systems deserve their own consideration when it comes to computer security. So we took the opportunity to ask a renowned expert in AI, ChatGPT, how best to secure AI systems. Here’s the answer we got:
“Artificial intelligence (AI) is a rapidly advancing technology that has the potential to greatly benefit society in a variety of ways. However, it also poses new security risks that must be addressed. Here are some best practices for ensuring the security of AI systems:
- Secure data inputs and outputs. AI systems rely on large amounts of data to learn and make decisions. This data must be protected from unauthorised access and tampering to ensure the integrity and accuracy of the system.
- Use secure algorithms and protocols. AI systems use complex algorithms and protocols to process data and make decisions. It is important to use secure algorithms that have been thoroughly vetted and tested, and to ensure that all communication between the AI system and other systems is encrypted.
- Monitor and audit AI systems. Regular monitoring and auditing of AI systems can help detect and prevent security breaches. This includes monitoring for unusual or suspicious behaviour, as well as auditing the system’s log files for any signs of unauthorised access or tampering.
- Implement access controls. Access controls can help prevent unauthorised access to AI systems and the data they process. This includes setting up user accounts and assigning roles and permissions, as well as implementing authentication mechanisms such as biometrics or multi-factor authentication.
- Stay informed and educated. As AI technology continues to evolve, it is important to stay informed about the latest security risks and best practices. This includes attending relevant conferences and workshops, reading industry publications and staying up to date with software updates and patches.
- Be prepared for the worst. AI systems can be vulnerable to cyberattacks, and it is important to have a plan in place to respond to a security incident. This includes having a trained incident response team, and regularly testing incident response plans to ensure they are effective.
By following these best practices, organisations can help ensure the security of their AI systems and protect themselves from potential cyber threats. As AI is permeating all industries, it is crucial for everyone to be aware of the security risks and best practices to mitigate them.”
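To make the bot’s first recommendation a bit more tangible, here is a minimal sketch of one way to protect a data input from tampering: verify its checksum before feeding it into a training pipeline. The file name and the expected checksum below are hypothetical placeholders; in practice, the known-good value would be obtained over a separate, trusted channel.

```python
import hashlib
from pathlib import Path

# Known-good SHA-256 checksum of the dataset, as published by its provider.
# (Hypothetical value: obtain the real one over a separate, trusted channel.)
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def dataset_is_intact(path: Path) -> bool:
    """Return True only if the file's SHA-256 digest matches the expected one."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == EXPECTED_SHA256

dataset = Path("training_data.csv")  # hypothetical input file
if dataset_is_intact(dataset):
    print("Checksum OK: safe to feed into the training pipeline.")
else:
    raise SystemExit("Checksum mismatch: possible tampering, refusing to proceed.")
```

The same idea scales up: sign or checksum your datasets and models, and refuse to load anything that does not verify.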
Thanks a lot, ChatGPT bot. So far, nothing special here for AI – it’s a matter of following the same general security paradigms that every computer user and every service manager should apply to any well-managed computing service (like those run by CERN IT) and individual computing device (like your smartphone or laptop):
- Make sure that you only input valid and trusted data, URLs, etc.
- Use encryption to protect your privacy
- Have intrusion detection and/or an anti-malware program running in the background
- Deploy access control and opt into 2FA security (see the sketch after this list)
- Train yourself and have auto-update enabled
- Have an incident response plan.
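And to give a flavour of what opting into 2FA involves under the hood: the six-digit codes produced by a typical authenticator app follow the TOTP standard (RFC 6238), which can be sketched with the Python standard library alone. The base32 secret below is a made-up example, not a real credential.

```python
import base64, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period              # number of 30-second steps since the epoch
    msg = struct.pack(">Q", counter)                  # counter as an 8-byte big-endian integer
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up base32 secret, like the one encoded in an authenticator app's QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```

Your authenticator app and the service you log in to derive the same code from that shared secret and the current time, which is why the code rolls over every 30 seconds and why the secret itself must never leak.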
Six simple steps to make your computing service more secure, and to protect your personal assets stored on your laptop and smartphone. Six simple steps, common sense for security-aware humans. Six simple steps, apparently also rather logical for an AI bot. So, tell us: how human is AI; how AI are you?
______
Do you want to learn more about computer security incidents and issues at CERN? Follow our Monthly Report. For further information, questions or help, check our website or contact us at Computer.Security@cern.ch.