Chris Anley, Chief Scientist at NCC Group, gives an overview of the risks that come with the increasing use of Artificial Intelligence (AI) and Machine Learning (ML) systems, and how businesses can ensure the security of this technology.
AI generally refers to algorithms that can simulate aspects of human intelligence, such as learning and problem-solving. Machine Learning (ML) is a practical subset of AI that allows machines to learn from data without being explicitly programmed. People often use the two terms interchangeably, but they are distinct concepts.
How are AI and ML being used today?
The use of AI and ML technology has skyrocketed in recent years. Most people use it every day, perhaps without knowing. Common examples include modern web search engines and social media platforms. Both gather user input data to power their preference engines to recommend content to users.
Today, the use of Artificial Intelligence and Machine Learning is becoming more widespread. The world has seen the recent explosion of AI content generators, for example, offered by companies such as Google and OpenAI, the maker of ChatGPT. Businesses are utilising these technologies to better understand customer behaviour on their digital platforms, such as their websites, and adapt accordingly.
Therefore, Artificial Intelligence and Machine Learning play an active role in growing digital platforms and informing key business decisions. The technology is prevalent in a range of sectors, from healthcare to government. Its use has helped organisations to understand consumer behaviour, increase the speed and accuracy of diagnoses, and process business data more efficiently.
However, as prophesied by countless sci-fi movies in recent decades, the rapid advancement of AI and ML brings with it an increased level of risk.
How does the current regulatory landscape and legislation look?
In the UK, there is currently no law specifically governing AI and related technologies. However, there have recently been consultations on how legislation can keep pace with this evolving landscape and on the standards businesses can follow to stay safe.
For example, the UK’s 2017 Industrial Strategy consultation set out the government’s vision to make the UK a global centre for AI innovation. It recognised the power of AI to increase resilience, productivity, growth and innovation across a number of sectors.
The UK government launched a call for views on Artificial Intelligence (and intellectual property) in late 2020, which led to the launch of a route map for the development of technical standards on 23 March 2021. This set out where cross-sector and sector-specific standards are required. It also detailed how the Government will utilise horizon-scanning, and early detection of emerging technologies, to ensure the regime remains flexible and forward-looking so it is able to cope with the ever-advancing technology.
This puts in place the early foundations of what future AI regulations and legislation will look like.
What do regulations need to consider to keep these technologies secure?
There are two main security risks when it comes to Artificial Intelligence and Machine Learning technologies.
For one, these algorithms are designed to change their behaviour based on the inputs they receive over their lifetime. This presents an inherent security risk: in theory, an AI responsible for important decision-making can be led astray. It could be accidentally or intentionally taught to make incorrect or dangerous decisions.
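To make this concrete, the risk described above is often called data poisoning. The sketch below is a deliberately minimal, hypothetical illustration (not any real system or product): a toy nearest-centroid classifier is trained twice, once on clean data and once on data where an attacker has flipped a few training labels, and the poisoned model misclassifies the same input.

```python
# Minimal sketch of a label-flipping ("data poisoning") attack, assuming a
# toy 1-D nearest-centroid classifier. Illustrative only; the feature values
# and labels are invented for the example.

def train_centroids(samples):
    """Compute the mean feature value (centroid) for each label."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, x):
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Clean training data: "benign" events near 1.0, "malicious" events near 9.0.
clean = [(0.8, "benign"), (1.1, "benign"), (1.3, "benign"),
         (8.7, "malicious"), (9.0, "malicious"), (9.4, "malicious")]

# An attacker who can influence the training set flips two labels.
poisoned = clean[:3] + [(8.7, "benign"), (9.0, "benign"), (9.4, "malicious")]

print(classify(train_centroids(clean), 6.0))     # classed as malicious
print(classify(train_centroids(poisoned), 6.0))  # now classed as benign
```

The point of the sketch is that nothing in the model's code changed; corrupting a small fraction of the training data was enough to move the decision boundary and reverse the verdict on the same input.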
The second is that threat actors are starting to use AI and ML to improve the efficiency of their attacks. These risks are heightened because AI and ML technology is becoming more accessible, datasets are growing larger, and the relevant skills are more widespread.
When discussing the next generation of regulation and legislation, the Government must continually monitor the development of Artificial Intelligence and Machine Learning technology, as well as research its use and capabilities. Only then do we stand the best chance of protecting people and organisations from the risks of these technologies.
What does the future look like for the tech space?
As we see this technology advance, it will inevitably begin to fill more roles across organisations. There is a lot to be excited about, as AI and ML systems could allow for improved business services, assistance with digital tasks, around-the-clock operation and faster processing of larger datasets. This is a rapidly emerging space with plenty of opportunities for organisations, but it is essential that new advancements do not create new risks.
Artificial Intelligence and Machine Learning will continue to advance as they are utilised more often across various sectors. It is, therefore, important that we have a regulatory framework that enables legislation to keep up.
Chris Anley
Chief Scientist
NCC Group