The Risk of AI
Originally written in March 2023.
Our data identifies us, defines us, and allows us to be tracked. As it stands today, we can’t have any expectation of privacy, nor any real ability to control where all that data goes or who has access to it.
Now imagine you feed all that data, willingly, to the current incarnations of artificial intelligence (AI). Yes, modern AIs, like ChatGPT and others, are getting smarter. Well, they are learning to get smart.
The researchers spending time and money on making these systems faster, bigger, all-reaching, and all-“knowing” are, in my opinion, not taking the time to threat-model where this is leading us. Understanding the risks of feeding more and more real-world data into AIs, and of trusting these new incarnations of software to solve actual problems, has to be a priority.
Do we really know what would or could happen if more data gets fed into an AI? What if some of that data is malicious? What can the AI do with malicious data? What about intellectual property stolen in a company breach? What if a bad actor decides to use AIs to comb the virtually unlimited amount of online data about people and businesses, profiling them and selecting better targets based on ease of attack? Or worse, what if they ask the AIs where the most damage could be inflicted within those targets...
Like all technology, AIs can serve both sides. There is no way to stop that, but the more we let it slide, the more it will tend to lean towards helping the dark side. That’s the nature of technology because that’s human nature.
I think it’s time we pause, all of us, researchers, companies, and users alike, and truly assess what can go wrong. We need to seriously weigh the threats and whether we are ready to accept what would probably come next and live with it.
I am personally terrified of where this is going. As smart as we “think” we are, we are often not smart enough to think ahead and see things for what they are.
These are a few of the threats presented by the use of AIs that come to mind:
- Privacy violations. AIs will look into people’s or companies’ data and make it part of their knowledge base, resulting in that data being used in chats and in making decisions. Whether people or companies are OK with this doesn’t matter; there are no regulations on this (see below).
- Algorithmic bias caused by bad data. Private data can be mixed with other data that may or may not be accurate, or even real, tainting the whole dataset and skewing answers toward bias. Furthermore, if the source of the private data is leaked, even after it has been mangled beyond the original, it may cause legal or reputational problems for the people or companies whose identity is exposed.
- Deepfakes and disruption. Everything becomes unreliable and untrustworthy. You can no longer rely on data you didn’t generate and collect yourself. You can’t trust videos or images. Bad actors can use this to cause chaos and disruption: automating comments, social media posts, and other content for destabilization operations becomes a simple game.
- Economic and market volatility. Injecting biased data, playing with deepfakes, and reading data with an end goal in mind can create turmoil in the markets and other economic repercussions. Again, bad actors or other AIs can use this to influence outcomes.
- Weapons automation. AIs are currently being trusted with some of the automation for long-range weapons, on the assumption that faster reaction times enable better actions against an enemy. But what would happen if the AIs get faulty data, or get influenced in any way? Imagine if certain countries in the East used their already vast (illegally obtained) access to Western infrastructure and companies to inject biased data or cripple the algorithms.
- Lack of regulation. Currently there are no regulations, legal or industry-wide. Anyone can use an AI for whatever they want, in any way they want. From medical research to planning the next security strategy for a large multinational, these unregulated programs and algorithms are being trusted with too much. What if we trust them with everything, and then one big corporation, or worse, a populist and tyrannical government, takes control of all AI?
- Security nightmares in the making. The use of AI without filters or regulation increases everyone’s exposure to malicious activity, raising the chances of data breaches, phishing, and scams. Imagine bad actors using ChatGPT and other AI writing tools to make phishing scams more effective.
So, what can we do? A good question without an easy answer.
It might already be too late to do anything. However, maybe the scientists and corporations creating and training these AIs, making them smarter with each iteration, can themselves get smarter: understand the risks their creations already present to all of us, then look into the future and see the risk that awaits us if we don’t do something now.
I think it’s time for the security community to get together and generate a collection of rules and regulations for a world that will be controlled by the use of AI.
Or, as Daniel Miessler put it:
“We can’t curmudgeon our way to protecting users.
We need to get out front, and do our best to clear the way.”
We will see.