5 Key AI problems related to Data Privacy
How addressing issues with your machine learning models will improve data privacy compliance.
Privacy is a concern not only in Artificial Intelligence (AI) but in any data-related field. It is about people having control over their personal data and over the decisions made based on it.
In Europe, the General Data Protection Regulation (GDPR) that came into force in 2018 regulates the collection and use of personal data. Data protection law does not refer explicitly to AI or Machine Learning but there is a significant focus on large-scale automated processing of personal data and automated decision-making.
This means that where AI uses personal data it falls within the scope of the regulation and GDPR principles apply.
This can be through the use of personal data to train, test or deploy an AI system. Failure to comply with the GDPR may result in enormous penalties for the companies involved.
Some examples of personal data would be the date of birth, postcode, gender or even a user’s IP address.
Specifically, GDPR gives individuals the right not to be subject to a solely automated decision.
The key question for AI experts then is: How can you show that you treated an individual fairly and in a transparent manner when making an AI-assisted decision about them, or give them the opportunity to contest such a decision?
Even though GDPR is most relevant to Europe and the UK, the main principles and ideas should be relevant worldwide.
What to consider from an AI perspective?
Although fairness and explainability of the models are active research topics in AI, there are at least 5 considerations that you could already be thinking about in relation to data privacy.
Class imbalance
Class imbalance occurs when your training labels are disproportionately in favour of a specific class. In other words, in a binary classification problem you have lots of examples where the output is 0 but only a few where the output is 1, or vice versa. This might be due to a bias in the data collection process, e.g. data collected only from a local branch, or be inherent in the properties of the domain, e.g. identifying anomalous data points in a manufacturing process.
Class imbalance is one of the most common reasons for model bias, but it is often ignored by data scientists. This is because, typically, minor imbalance does not pose a huge risk as the models can learn the features of all classes equally well. However, when a severe class imbalance occurs, things can be tricky.
Specifically, the minority class becomes harder to predict, and your model ends up biased towards the majority class.
For example, when you train an AI system to recognise images you could face a number of potential issues; class imbalance may be one of them.
Think of a group of 100,000 images out of which only 100 are images of cats and 99,900 are images of dogs.
The AI system you trained is more likely to predict a dog, as it was trained to do so far more frequently; it simply hasn't seen enough examples of the minority class to accurately distinguish between the two types of images.
This potential issue is not as innocent as wrongly classifying cats and dogs. Imagine you are training a model to accept or reject personal loans and most of the historic loans got rejected (for some reason).
Guess what. Your model is likely to reject most or all of the future loan applications as it was more exposed to this sort of information and potentially didn’t learn to differentiate between the two cases.
This is an issue from a data privacy perspective as the model does not produce fair results.
First of all, it is important to identify early on whether this can be an issue. You can check that by looking at the number of data points belonging to each of your classes.
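As a quick check, assuming your labels live in a pandas column called `target` (the column name and the numbers are illustrative):

```python
import pandas as pd

# Hypothetical dataset with a heavy class imbalance
df = pd.DataFrame({'target': [0] * 95 + [1] * 5})

# Count how many data points belong to each class
counts = df['target'].value_counts()
print(counts)            # 95 zeros vs 5 ones -> severe imbalance
print(counts / len(df))  # the same counts as proportions
```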
If you do notice a class imbalance but still run a simple model on your data anyway, you are likely to get good results in terms of accuracy.
Your test data are likely to follow the same distribution as your train data, so if most instances come from one class, always predicting that class yields a good accuracy score.
But don’t get fooled by that.
Typically, a confusion matrix will give you a better picture of what your model is actually doing. One of the simplest strategies to mitigate class imbalance is random resampling: you can either reduce your majority class to match the minority (under-sampling) or over-sample the minority class.
count_0 = (df['target'] == 0).sum()  # majority class count
count_1 = (df['target'] == 1).sum()  # minority class count
df_class_0_under = df[df['target'] == 0].sample(count_1)               # under-sample the majority
df_class_1_over = df[df['target'] == 1].sample(count_0, replace=True)  # over-sample the minority
You could equally use NumPy or other libraries, such as imbalanced-learn (imblearn), to sample your data. There are also more sophisticated techniques to address this, such as SMOTE and Tomek links.
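SMOTE creates synthetic minority examples by interpolating between a minority point and one of its nearest minority-class neighbours. A minimal NumPy sketch of that core idea (not the full library implementation; the function and data are illustrative):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating between
    a randomly chosen minority point and a random one of its k nearest
    minority-class neighbours (the core idea behind SMOTE)."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from point i to every minority point
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # k nearest, skipping the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                   # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Five minority points in 2-D; create ten synthetic ones between them
X_min = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
X_new = smote_like_oversample(X_min, n_new=10)
```

In practice you would use `imblearn.over_sampling.SMOTE`, which also handles multi-class data and categorical features.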
Deep Learning models vulnerable to adversarial attacks
An adversarial attack on an AI system can completely confuse it. Image recognition systems, for example, have been shown to be vulnerable to adversarial attacks.
Researchers have shown that even if an AI system is trained on thousands of images, a carefully placed pixel in an image can fundamentally alter the perception of it by the AI system, leading to a false prediction.
 This might have a serious effect on real applications involving the identification of individuals. Imagine a security-camera footage scenario where the AI system misidentifies the offender because of this type of attack.
We need to make our deep learning models more robust. Unfortunately, this is a tough problem, currently being investigated at research level in top universities across the world. In theory, though, you should be able to test your model not just on an unseen test dataset but also to emulate this sort of adversarial attack to assess its robustness.
Neurons in the deep learning model that are activated erroneously could potentially be dropped to improve robustness.
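One well-known attack from the research literature is the Fast Gradient Sign Method (FGSM), which nudges every input feature in the direction that increases the model's loss. A minimal sketch on a toy logistic-regression model (the model and numbers are illustrative, not from the article):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps=0.1):
    """FGSM on a logistic-regression model: perturb x in the direction
    that increases the log-loss, bounded by eps per feature."""
    p = sigmoid(np.dot(w, x) + b)     # model's predicted probability
    grad_x = (p - y) * w              # gradient of log-loss w.r.t. x
    return x + eps * np.sign(grad_x)  # adversarial example

# A toy model that predicts class 1 when x[0] + x[1] > 0
w, b = np.array([1.0, 1.0]), 0.0
x, y = np.array([0.05, 0.05]), 1      # correctly classified as 1

x_adv = fgsm_attack(x, y, w, b, eps=0.1)
print(sigmoid(np.dot(w, x) + b) > 0.5)      # True: original prediction is correct
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # False: a tiny perturbation flips it
```

A model's robustness can then be estimated by how many test points have their predictions flipped for a given eps.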
Reproducibility and consistency
A common question in AI is how easy it is, if possible at all, to replicate the results we obtained or the models we generated. Many algorithms have stochastic elements in their training.
So, different training runs result in different models (assuming different random seeds), and different models may produce different predictions. How do we make sure that a prediction concerning an individual won't be reversed by the next model trained on the same data?
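Where determinism matters, a common first step is to pin every source of randomness at the top of the training script (a sketch; deep-learning frameworks such as TensorFlow and PyTorch have their own additional seeds to set):

```python
import random

import numpy as np

# Pin every source of randomness so that repeated training runs on the
# same data produce the same model, and hence the same decisions.
SEED = 42
random.seed(SEED)                  # Python's built-in RNG
np.random.seed(SEED)               # legacy global NumPy state
rng = np.random.default_rng(SEED)  # preferred modern NumPy generator
```

Two generators built from the same seed then draw identical numbers, which is what makes a run repeatable.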
Also, a system that is shown to perform well on our local machine with our data, may perform poorly when tested in the field. How do we make sure that the performance we initially had is propagated to the deployed application? How do we make sure that the system’s performance does not deteriorate over time, which will impact decisions taken about individuals?
These are multiple related issues that require a number of approaches.
To ensure consistency in your results, you should typically employ a cross-validation technique, to make sure your results are not based on a lucky split of your data into train and test sets.
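A minimal k-fold sketch in NumPy (the `train_and_score` callback is a hypothetical stand-in for your own training routine; in practice scikit-learn's `cross_val_score` does this for you):

```python
import numpy as np

def cross_val_scores(X, y, train_and_score, k=5, seed=0):
    """Score a model on k different train/test splits rather than one
    potentially lucky split. train_and_score is a callback:
    train_and_score(X_train, y_train, X_test, y_test) -> score."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)  # k disjoint test folds
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        scores.append(train_and_score(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx]))
    return np.array(scores)

# Usage with a trivial majority-class "model" on imbalanced labels
X = np.zeros((100, 2))
y = np.array([0] * 90 + [1] * 10)

def majority_baseline(X_tr, y_tr, X_te, y_te):
    majority = np.bincount(y_tr).argmax()   # always predict the majority class
    return np.mean(y_te == majority)        # accuracy on the held-out fold

scores = cross_val_scores(X, y, majority_baseline)
print(scores.mean(), scores.std())  # mean 0.9: high accuracy despite no real model
```

The spread of the fold scores, not just their mean, tells you how sensitive the model is to the particular split.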
Also, for forecasting models, you can run a backward test to assess what the performance would have been had the model been deployed at some point in the past, given only training data up to that point.
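A minimal walk-forward sketch of such a backtest (the `fit_predict` callback and the noisy series are hypothetical):

```python
import numpy as np

def rolling_backtest(series, fit_predict, initial=100, horizon=1):
    """Walk-forward backtest: at each step t, train only on data up to t
    and score the forecast for t + horizon, as if deployed in the past."""
    errors = []
    for t in range(initial, len(series) - horizon + 1):
        pred = fit_predict(series[:t])              # model sees only the past
        errors.append(abs(pred - series[t + horizon - 1]))
    return np.mean(errors)                          # mean absolute error

# Usage with a naive "repeat the last value" forecaster on a noisy trend
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.1, 1, 300))
mae = rolling_backtest(y, fit_predict=lambda past: past[-1])
print(mae)
```

Comparing this error against a naive baseline like the one above is a quick sanity check that the model adds value at all.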
In addition, it is a good idea to assess your model on a totally different dataset with similar input to check how well it generalises outside of the dataset it was trained on. Importantly, though, when you deploy a model in the real world, the data should be expected to follow the same distribution as your training data; otherwise, performance will degrade unpredictably.
Finally, it is always a good practice to monitor a deployed model and assess its performance on new data. In case of sudden drops or drifts in performance, it might be a sign that the model needs to be retrained. Of course, this will also depend on the specific application.
Depending on the application you might have a re-training strategy in place to have a new model daily, weekly, quarterly, yearly and so on.
Choosing the right evaluation metrics
A key question when building AI systems should be: "How do we evaluate the system?" One of the most common metrics is accuracy: the number of samples your model predicted correctly over the total number it was tested on. But is accuracy a good metric? Think about a problem where you have 100 women, of which 10 are pregnant.
Imagine you have some information about these women and, you try to build a model to predict who is pregnant and who is not. You do that and your model has an accuracy of 85%.
Does this mean that you have a good model? Now suppose you have no model at all and instead predict that every woman is non-pregnant.
Surprisingly, this has an accuracy of 90%, as you will be correct 90 out of 100 times. Is that better than the actual model you created above? So, what metrics do we use, and how can we assess the performance of our models? Would you rely on accuracy alone for decisions that affect individuals?
The answer is obviously no. In fact, usually, the best approach is to compare multiple metrics and ideally examine the confusion matrix closely to understand the strengths and weaknesses of your model.
So for the naive approach above that has 90% accuracy, the F1-score would actually be 0, as there are no True Positives (only True Negatives). By contrast, your model with 85% accuracy could, in fact, have an F1-score of around 57% (e.g. if it identifies all 10 pregnant women but also raises 15 false alarms), which might or might not be acceptable in specific applications.
Other metrics to consider include the area under the curve (AUC) of the Receiver Operating Characteristic (ROC), Precision, Recall and Specificity, to name a few.
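To make the comparison concrete, here are accuracy and F1 computed by hand for the naive predictor and for one hypothetical confusion matrix consistent with 85% accuracy on the pregnancy example:

```python
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def f1(tp, fp, fn):
    # F1 = 2TP / (2TP + FP + FN); taken as 0 when the denominator is 0
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Naive "no one is pregnant" predictor: 90 true negatives, 10 false negatives
naive_acc, naive_f1 = accuracy(0, 90, 0, 10), f1(0, 0, 10)

# A hypothetical model at 85% accuracy: it finds all 10 pregnant women
# but also raises 15 false alarms (10 TP, 75 TN, 15 FP, 0 FN)
model_acc, model_f1 = accuracy(10, 75, 15, 0), f1(10, 15, 0)

print(naive_acc, naive_f1)  # 0.9 0.0   -> high accuracy, useless model
print(model_acc, model_f1)  # 0.85 ~0.57 -> lower accuracy, far more useful
```

The naive predictor "wins" on accuracy yet never detects a single pregnancy, which is exactly why a single metric is not enough.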
Relying on historic data to predict the future
Relying on historic data to make predictions about the future does not always work. A great example is trying to predict the stock market, which is intrinsically difficult for a number of reasons. Using data that has long shown a certain outcome creates models that only work within the boundaries of their own history.
This means that if you train a model over a period with no market crashes, the model will never be able to forecast a crash.
Even if you trained it over periods that include a crash, it is still very unlikely that the model would learn when one will happen, due to the rarity of the event and the lack of a clear signal pointing in that direction.
Now, think about models making decisions that impact individuals during the times of a global pandemic.
Since these models have seen no similar data in the past, they are unlikely to make decisions about individuals as accurately as they did before the pandemic.
In such situations, the models are likely to require re-training with data taken from the new situation, in order to operate within the new reality.
This might work temporarily until perhaps the behaviour shifts again to the old standard. If re-training is not possible then decisions shouldn’t be taken automatically as they are likely to be wrong. This needs to be tested and validated though.
All in all, predicting black swan events is not possible under the assumptions our models operate within. Making predictions and taking decisions about individuals when you know that the data you are predicting on does not follow the same distribution as the data you trained on would be irresponsible.
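One simple way to detect that live data has drifted away from the training distribution is the Population Stability Index (PSI), sketched here with NumPy. The 0.1/0.25 thresholds are common rules of thumb, not hard limits, and the data is synthetic:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Rough drift check: compare a feature's distribution at training
    time (expected) with its distribution in production (actual)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid division by / log of zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)        # distribution seen during training
same = rng.normal(0, 1, 10_000)         # production data, no change
shifted = rng.normal(1.5, 1, 10_000)    # e.g. behaviour change in a pandemic

psi_same = population_stability_index(train, same)
psi_shifted = population_stability_index(train, shifted)
print(psi_same)     # small (< 0.1): distribution is stable
print(psi_shifted)  # large (> 0.25): the model should not be trusted as-is
```

Monitoring a statistic like this on incoming data gives an early, automated signal that predictions about individuals may no longer be reliable and the model needs re-training.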
That is not to say the models cannot be useful as a consulting tool. After all, as George Box put it, "all models are wrong, but some are useful".