
How Artificial Intelligence attacked my family and other AI security lessons

4 min read

Written by: Rob van der Veer


In my home, we have a voice assistant right next to the shower, so we can play music without getting the device wet. And since we don’t hold private conversations there, we are not worried about any recordings. Our only worry is that someday a recording of us singing in the shower may leak (pun intended).

One day I discovered that the voice assistant can get into a deadlock. When asked for the news, it starts playing radio news streams, but it turns out it can’t be stopped: the commands ‘stop’, ‘shutdown’, and ‘stop reading’ all fail. After many tries, I came upon the magic word: ‘pause’.

That brought me to a diabolical idea.

Later, when my wife was in the shower, I shouted through the bathroom door, “VOLUME TEN, READ THE NEWS”, and then heard her shout in agony at the device: “STOP”, “NO”, “QUIT”, “PLEASE”, “SHUT DOWN”, and “AAARG!”, much to the delight of the rest of the family.

After a minute, I did the honorable thing and put her out of her misery by suggesting the ‘pause’ command.

This undesirable behavior demonstrates two of the many AI aspects that I wrote down in the ISO/IEC 5338 standard on AI engineering. The first is that the voice assistant is “potentially autonomous”: AI systems often interact with the real world directly, by themselves. The second is that it displays “emergent behavior”: instead of following explicit programming, it acts on complex interactions of rules and guesses, which can make it seem as if it has a mind of its own.

With the introduction of unpredictable and potentially harmful behavior, one countermeasure could be a killswitch, which needs to be easily accessible to users. However, a killswitch is not always the correct protocol: shutting down the cooling system of a nuclear reactor, for example, may not be a good idea. In the case of the voice assistant, though, it should be very easy to stop it from reading the news at an unbearable volume.
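
To make the idea concrete, here is a minimal sketch in Python of what a forgiving stop mechanism could look like. All names are hypothetical and no real assistant SDK works exactly like this; the point is simply that a whole set of natural stop phrases maps to one interrupt, so users never have to guess the magic word:

    # Toy sketch of a forgiving 'stop' handler; all names are hypothetical.
    STOP_SYNONYMS = {"stop", "pause", "quit", "cancel", "no",
                     "shut down", "shutdown", "be quiet", "enough"}

    class Player:
        """Stand-in for a media player."""
        def __init__(self):
            self.playing = None

        def play_stream(self, name):
            self.playing = name
            print(f"Playing {name} stream...")

        def stop(self):
            self.playing = None
            print("Stopped.")

    def handle_utterance(utterance, player):
        text = utterance.strip().lower()
        if text in STOP_SYNONYMS:      # any synonym acts as the killswitch
            player.stop()
        elif text == "read the news":
            player.play_stream("news")

    player = Player()
    handle_utterance("READ THE NEWS", player)
    handle_utterance("STOP", player)   # no deadlock: 'stop' works like 'pause'

Had our assistant been built this way, my wife would have been spared.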

This example shows that AI systems have characteristics that are important to take into account when creating them. On November 29, 2022, I was part of a discussion panel during the AI assurance conference in Brussels. The audience asked how organizations can build secure AI systems given these characteristics. My response was that it helps to treat AI just like any other IT, while understanding a few caveats.

These were my recommendations:

  1. Keep on doing everything you are already doing for cybersecurity
  2. Incorporate AI developers, data scientists, and AI-related applications and infrastructure into your security programs: training, requirements, static analysis, code review, pentesting, etc.
  3. Go beyond security by applying good software engineering practices to your AI work, such as versioning, documentation, unit testing, integration testing, performance testing, and code quality. See the ISO/IEC 5338 standard for guidelines. This way, AI systems become easier to maintain and transfer, more reliable, and future-proof.
  4. Make sure that everybody involved is aware of ‘special’ AI security risks, including:
    • Data and data processing need protection
    • AI model attacks: data poisoning, input manipulation (our shower example; a sketch follows after this list), data reverse engineering, and model theft, all of which require deep machine learning knowledge rather than security expertise per se. Read more at BIML, ENISA, and Microsoft.
    • More aspects can be found in ISO/IEC 5338 and the upcoming ISO/IEC 27090 standard on AI security, which I am involved in as a member of the AI working group ISO/IEC JTC1/SC42, where we would welcome your input at r.vanderveer@softwareimprovementgroup.com.
  5. Avoid dragging every ‘popular’ AI risk into the security activity, such as transparency, fairness, and correctness. These are important, but it’s better to divide and conquer AI issues in an organization than to let everybody worry about everything.
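
To make the ‘input manipulation’ item in point 4 concrete, here is a minimal sketch in plain numpy, with made-up weights and data. It flips the decision of a pretend trained linear classifier by nudging the input in the direction that moves the score, the same principle that gradient-based attacks such as FGSM apply to deep networks:

    import numpy as np

    # Made-up weights of a 'trained' linear classifier; real attacks use
    # the gradient of the victim model's loss in exactly the same way.
    w = np.array([1.5, -2.0, 0.5])
    b = 0.1

    def predict(x):
        """Return 1 if the model's score is positive, else 0."""
        return int(w @ x + b > 0)

    x = np.array([0.9, 0.2, 0.4])      # a benign input, classified as 1
    eps = 0.4                          # small, hard-to-notice perturbation
    x_adv = x - eps * np.sign(w)       # step against the score gradient

    print(predict(x), predict(x_adv))  # prints: 1 0 -> decision flipped

Note that crafting such an input requires knowledge of the model, not of classic security tooling, which is exactly why point 4 calls for awareness among data scientists as well as security staff.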

In other words, my main recommendation to security officers and development teams is to treat AI pragmatically. No need to be philosophical or overwhelmed. AI is software with a few extra aspects that we are becoming increasingly familiar with. 

So, there’s hope for AI, and for the safety of my family from future AI harassment.


Update March 11, 2023:
Rob has taken the initiative to start an open source project through OWASP to share his thoughts. For more information, please check the OWASP AI security & privacy guide.

Author:

Rob van der Veer

Senior Director, Security & Privacy and AI

