We live in an era where new, disruptive technologies emerge faster than ever before, and while this has an extremely positive impact on our lives, it’s still necessary to recognize potential drawbacks. Whenever something new and progressive appears in the tech world, the full array of risks it can bring about is rarely apparent right away.
Artificial intelligence is undoubtedly one of the most important and challenging projects that humanity has ever tackled. Its aim is to create intelligent machines capable of performing tasks that require functions traditionally associated with the human mind, such as problem-solving and learning from examples.
The AI revolution has already begun, and we’re witnessing the benefits of this technology as it takes over repetitive, tedious tasks and, most importantly, data analytics. It’s this last use case that draws the most attention, as it remains ethically unregulated and poses serious privacy risks.
The Data Problem
Brands and marketing agencies have always relied on having detailed data about their audiences in order to be able to create relatable and engaging campaigns. This allowed them to target their prospects with the right marketing message as well as predict their purchasing behaviour. Back in the day, it wasn’t that easy to collect, process, and interpret huge volumes of customer data. But, as the world started to go digital, various data collecting tools and tactics appeared, which facilitated the process and offered marketers an opportunity to gain valuable insights into their audience’s preferences, needs, and problems.
However, as all these bits and pieces of information were scooped up from different sources, they were highly unstructured, which made it nearly impossible to make sense of them. That changed when artificial intelligence and big data analytics appeared: since then, one of the core roles of AI has been to analyze and interpret both structured and unstructured data.
And, as is usually the case with data, the issue of privacy appears.
What Is the Concept of Digital Privacy?
The idea of privacy has evolved immensely over the past 50 years. In the pre-internet age, it mainly referred to physical privacy, which was much easier to comprehend and control; people could protect sensitive information such as their Social Security or credit card numbers. However, once the internet and numerous online services were launched, it became far easier for personal information to fall into the hands of cybercriminals.
But hackers aren’t always the ones responsible for stealing or misusing data. People use a number of different online tools, which means wading through a lot of fine print, and they often simply accept terms of service without reading them, thus allowing companies to collect, store, and use their personal data. With AI and advanced analytics, this problem only becomes more complex, as the technology opens the door to a number of serious and even dangerous consequences concerning privacy violations, manipulation, and accidents. Some of the privacy issues related to the use of AI include:
De-anonymisation and re-identification. Together with facial recognition, AI can be used to identify and monitor people.
Discrimination. As a result of de-anonymisation and re-identification, AI profiling, and automated decision-making, people can be discriminated against and judged negatively based on sensitive information available about them.
The opacity of profiling. AI profiling has become a reality, and the fact that even those who designed such systems can’t always fathom the processes they use makes it difficult to interpret the outcomes and judge whether they are correct. This opacity can thus have a significant effect on people’s lives.
As the world is still reeling from the Facebook-Cambridge Analytica scandal and the stream of subsequent data breaches for which reputable companies were responsible, it’s only logical to ask how artificial intelligence and its subsets, machine learning and natural language processing, will overcome these challenges.
Is There a Way to Overcome These Challenges?
We can’t deny that artificial intelligence is something that improves our lives and makes things easier. However, this tremendous potential has to be properly harnessed in order not to wreak havoc in the arena of privacy. But it’s worth the effort, as AI powers numerous other tech advancements that can lead to seismic shifts in the way we do things. Take the Internet of Things, for example, a huge interconnected web of devices and services that allows people to control, among many other things, their cars, appliances, and homes remotely. It’s obvious that such a colossal network generates vast volumes of data that can be compromised.
Similarly, AI-powered chatbots have become indispensable in numerous industries thanks to their ability to improve customer engagement, handle multiple customer queries at the same time, and collect, analyze, and store customer information for personalizing future interactions. With them, companies can reduce operational costs, deliver a great customer experience, create the right marketing messages, and streamline call centre tasks, thus increasing customer retention rates.
So, how can we use all these advantages of AI without sacrificing privacy?
Scientists are trying to find ways of combining cryptography and machine learning so that data can be used without ever being seen in the clear. Such a system would protect end users on the one hand and, on the other, allow companies to leverage data without breaking their code of ethics.
The EU’s General Data Protection Regulation (GDPR) is a first step towards improving privacy in the world of AI, but there’s a catch: companies aren’t allowed to collect data unless they can give assurance that they understand its value, and many find this difficult because they can’t always be sure of it in advance. This means the GDPR can hinder their capacity to innovate.
Federated learning, a decentralised AI framework distributed across millions of devices, enables scientists to create, train, improve, and assess a global model built from a large number of local models.
The key factor is that, in this approach, companies don’t actually have access to users’ raw data or an option to label it. In other words, this technology, a synergy of AI, blockchain, and IoT, protects users’ privacy while still offering the benefits of aggregated model improvement.
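The aggregation step described above can be sketched in a few lines. This is a minimal, hypothetical simulation of federated averaging, not any production framework: each simulated device fits a simple linear model on its own private data, and the server only ever averages the resulting model weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_local_model(X, y):
    """Fit a least-squares linear model on one device's private data."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Simulate three devices, each holding private samples of y = 2x + noise.
local_weights = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)
    local_weights.append(train_local_model(X, y))

# The server aggregates only the weights (federated averaging);
# the raw (X, y) data never leaves the devices.
global_w = np.mean(local_weights, axis=0)
print(global_w[0])  # close to the true slope of 2.0
```

The crucial design choice is visible in the last two lines: the server’s input is the list of trained weights, never the training data itself.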
Many applications, such as maps, collect individual users’ data in order to make traffic predictions and offer the fastest route recommendations. At the moment, it’s theoretically possible to identify individual contributions within these aggregated data sets, which means it’s also possible to expose the identity of every single person who contributes.
Differential privacy addresses this by randomizing the process: carefully calibrated noise is injected so that the aggregated information can’t be traced back to any individual contributor.
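One standard way to do this randomization is the Laplace mechanism. The sketch below is illustrative only (the data set and the epsilon value are made up for the example): a count query changes by at most 1 when any one person is added or removed, so adding Laplace noise with scale 1/epsilon masks each individual’s contribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values, threshold, epsilon):
    """Differentially private count of values above a threshold.

    One user changes the true count by at most 1 (sensitivity 1),
    so Laplace noise with scale 1/epsilon yields epsilon-DP.
    """
    true_count = sum(v > threshold for v in values)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

commute_minutes = [12, 45, 30, 60, 25, 90, 15]  # hypothetical user data
noisy = dp_count(commute_minutes, threshold=40, epsilon=1.0)
# The noisy answer hovers around the true count of 3, but no exact
# per-user information can be inferred from it.
```

A smaller epsilon means more noise and stronger privacy; a larger epsilon means a more accurate answer but weaker protection.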
Homomorphic encryption takes yet another route: machine learning algorithms operate directly on encrypted data, keeping sensitive information inaccessible throughout the analysis.
In a nutshell, the data can be encrypted and analyzed by a remote system, and the results are sent back in encrypted form too; a unique key is used to decipher and unlock them. The entire process can be conducted while protecting the privacy of the users whose sensitive information is being analyzed. It’s clear that researchers are working on finding the best ways to make the most of artificial intelligence without exposing or compromising sensitive information, and in the future we can expect this process to become much more sophisticated.
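To make the idea concrete, here is a deliberately insecure toy with an additively homomorphic property: an untrusted server sums ciphertexts without ever seeing the underlying values, and only the key holder can decrypt the total. Real systems use schemes such as Paillier or CKKS; the pad construction below is purely illustrative.

```python
import secrets

P = 2**61 - 1  # public modulus (toy choice)

def keygen():
    return secrets.randbelow(P)

def encrypt(key, m, i):
    # Mask each message with a per-slot pad derived from the key (toy, insecure).
    pad = (key * (i + 1)) % P
    return (m + pad) % P

def decrypt_sum(key, ciphertext_sum, n):
    # The server summed n ciphertexts; strip the n accumulated pads.
    total_pad = sum((key * (i + 1)) % P for i in range(n)) % P
    return (ciphertext_sum - total_pad) % P

key = keygen()
salaries = [52000, 61000, 48000]  # sensitive values, never sent in the clear
ciphertexts = [encrypt(key, m, i) for i, m in enumerate(salaries)]

# The untrusted server adds ciphertexts without seeing any salary.
server_sum = sum(ciphertexts) % P

print(decrypt_sum(key, server_sum, len(ciphertexts)))  # → 161000
```

The server computes a correct aggregate while every individual salary stays encrypted end to end, which is precisely the property the approaches above aim for.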
Michael has been working in marketing for almost a decade and has worked with a huge range of clients, which has made him knowledgeable on many different subjects. He has recently rediscovered a passion for writing and hopes to make it a daily habit. You can read more of Michael’s work at Qeedle.