Privacy Issues Raised by Advanced Technologies

Privacy risks from artificial intelligence (AI), robotics, Big Data, and IoT.


Stephen Wu


Privacy risks from artificial intelligence (AI), robotics, Big Data, and the Internet of Things (IoT) stem from eight main causes:

  1. the unwanted, surprising, intrusive, and/or opaque collection of more varieties of personal data than ever before;
  2. the volume of personal data collected giving data controllers more capabilities;
  3. the velocity of collecting, using, and sharing personal data giving data controllers more abilities to act on that personal data;
  4. issues with the veracity of personal data;
  5. bridging contexts, allowing data controllers to use personal data from disparate sources to profile data subjects;
  6. surveillance capabilities in physical and virtual spaces previously outside the capabilities of data controllers;
  7. the lack of control over personal data; and
  8. the direction of marketing messages to data subjects in new and unanticipated ways.

This section discusses each of these issues in turn. The last subsection discusses blockchain privacy issues.

As mentioned above, sometimes the synergies among these technologies give rise to privacy issues. Big Data collected by IoT sensors allows data controllers to use analytics and machine learning/artificial intelligence to analyze the data and take actions regarding a data subject.

Those IoT sensors may be in or on robots or automated vehicles. In these cases, it is the combination of advanced technologies that raises the privacy issues discussed in this section.

Greater Varieties of Personal Data Collected, Used, and Shared

New technologies make it possible for data controllers to collect a much larger variety of personal data than ever before. IoT devices collecting personal data range from the small scale — for instance, devices (eventually at nanoscale) ingested, injected, or embedded in the human body — to the worldwide views possible with services like Google Earth, as well as the myriad of devices at scales anywhere between these extremes.

We may give informed consent to allow devices into our body during surgical procedures, but Google collects data about our homes and streets (for Street View) without notifying us or allowing us to consent.

Businesses may minimize legal risk by providing additional notices, including location- and time-specific notifications. For instance, a building owner can mitigate the risk of invasion of privacy suits regarding IoT video cameras by being selective about the location of these cameras (e.g., avoiding sensitive locations) and posting notices about video recording in the public spaces where cameras are placed.

Employers can disclose to workers in employment manuals that they are using security cameras in non-sensitive areas of the office.

Businesses may disclose their collection of clickstream data or mobile device data in their privacy policies. Nonetheless, few consumers read privacy policies. Accordingly, educating users about privacy controls and showing consumers privacy notifications immediately before a new personal data collection process begins would mitigate legal risk.

Greater Volume of Personal Data

To some extent, the sheer volume of personal data collected creates its own set of privacy issues. For instance, one mobile purchase in isolation might not mean very much. Nonetheless, collecting entire purchase histories of a user and the user’s household would give a merchant a much clearer picture of purchasing interests and allow it to direct more targeted advertising to that household’s members.

The greater volumes of Big Data collected increase the possibility of leakage and inadvertent disclosure, even aside from the greater security risks. Moreover, large volumes of personal data about an individual in one context may make it more likely that de-identified data about that individual can be re-identified.

Businesses collecting, using, and disclosing large volumes of personal data should apply controls with rigour commensurate with the volume of data collected to manage its collection, use, and disclosure.

Greater transparency about the types and sources of data collected will promote trust in the data controller. Minimizing the volume of personal data collected or using de-identification will likely reduce the legal risks associated with personal data volume.
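The de-identification approach mentioned above can be sketched in code. The snippet below is an illustrative sketch only, not a legally sufficient de-identification standard; the field names and the salted-hash scheme are assumptions. It replaces direct identifiers with a salted hash token (so records can still be linked for analytics) and generalizes a quasi-identifier (the ZIP code) to reduce re-identification risk.

```python
import hashlib
import secrets

# Hypothetical record layout; field names are illustrative only.
record = {"name": "Alice Example", "email": "alice@example.com",
          "zip": "94110", "purchase": "running shoes"}

def pseudonymize(record, salt):
    """Replace direct identifiers with a salted hash so analytics can
    still link a subject's records without storing who they are."""
    token = hashlib.sha256(salt + record["email"].encode()).hexdigest()
    return {"subject_token": token,
            # Generalize the quasi-identifier: keep only the 3-digit ZIP prefix.
            "zip3": record["zip"][:3],
            "purchase": record["purchase"]}

salt = secrets.token_bytes(16)  # kept separate from the analytics store
safe = pseudonymize(record, salt)
assert "name" not in safe and "email" not in safe
```

Keeping the salt out of the analytics environment matters: without it, an attacker who obtains the pseudonymized records cannot easily confirm guesses about which token belongs to which person.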

Greater Velocity of Personal Data

The rapid analysis of Big Data (perhaps through machine learning) may make immediate action possible for a data controller in ways not possible in previous eras. For instance, consider the combination of geolocation and purchasing data history in a retail scenario. Let’s say that a shopper has an app on his or her phone that communicates with a smart retail kiosk in a mall.

When the shopper first walks into the mall, the hallway kiosk detects the shopper entering and performs a lookup. The mall’s retail system can review the shopper’s purchasing history and serve up a targeted display ad on the kiosk reflecting a special ad or discount offer based on the shopper’s interest.

The ad appears almost instantly as the shopper approaches and is about to pass by. The speed with which data can be collected and turned into action is increasing.
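The kiosk scenario above amounts to a simple detect-lookup-serve pipeline. The sketch below is hypothetical; the shopper IDs, history store, and offers are invented for illustration and stand in for whatever app-issued identifier and back-end systems a real mall operator would use.

```python
# Hypothetical kiosk pipeline: detect shopper -> look up history -> serve ad.
PURCHASE_HISTORY = {  # illustrative store keyed by an app-issued shopper ID
    "shopper-42": ["running shoes", "fitness tracker"],
}

OFFERS = {"running shoes": "20% off trail runners",
          "fitness tracker": "free strap with any watch"}

def ad_for(shopper_id):
    history = PURCHASE_HISTORY.get(shopper_id, [])
    for item in reversed(history):     # most recent interest first
        if item in OFFERS:
            return OFFERS[item]
    return "Welcome to the mall!"      # generic fallback for unknown shoppers

print(ad_for("shopper-42"))   # targeted offer based on most recent purchase
print(ad_for("shopper-99"))   # unknown shopper: generic ad
```

The privacy-relevant point is in the lookup itself: the entire history, not any single purchase, is what makes the ad targeted.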

Again, a business using such a system can clearly explain how it collects data, how it uses that data, and how that use impacts the consumer. Because consumers rarely read privacy policies, providing context-specific notifications and opt-out options would further reduce privacy risks.

Imagine a shopper who has been looking for an engagement ring to surprise his or her partner. In the absence of controls, the mall kiosk we are discussing might display engagement ring ads as the shopper walks in the door. Now, imagine that the shopper brings his or her partner to the mall.

That shopper would likely want to opt out of targeted ads right before entering the mall with his or her partner to avoid the possibility of seeing engagement ring ads that would spoil the surprise. Offering an easy way to opt out in specific contexts would help give the shopper control over the experience.
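One way such a context-specific opt-out might work is a time-limited suppression window set from the mall app. This is a hedged sketch; the window design, function names, and defaults are all assumptions.

```python
# Hypothetical opt-out layer over the kiosk lookup: the shopper suppresses
# targeted ads for a window of time before walking in with a companion.
import time

OPT_OUTS = {}  # shopper_id -> epoch seconds until which targeting is off

def opt_out(shopper_id, hours=2):
    """Record a temporary opt-out window for this shopper."""
    OPT_OUTS[shopper_id] = time.time() + hours * 3600

def serve_ad(shopper_id, targeted_ad, generic_ad="Seasonal sale today!"):
    if OPT_OUTS.get(shopper_id, 0) > time.time():
        return generic_ad          # respect the active opt-out window
    return targeted_ad

opt_out("shopper-42")
assert serve_ad("shopper-42", "Engagement ring sale!") == "Seasonal sale today!"
```

A time-boxed window, rather than a permanent toggle, matches the scenario: the shopper still wants targeted offers later, just not during this one visit.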

Issues of Veracity of Personal Data

With Big Data, questions arise concerning the veracity of personal data. Biased or incorrect data may lead to incorrect results of automated data processing. Intentional or unintentional corruption of data may also cause mistakes.

As mentioned in section II.C.1, the EU’s General Data Protection Regulation (GDPR) provides data subjects with rights to an explanation and to human intervention when automated processing of personal data affects a data subject.

These rights help correct mistakes caused by bias, incorrect data, or corruption of data. These GDPR rights are mandatory for GDPR-covered businesses. Nonetheless, offering similar explanation and human-intervention mechanisms will also minimize legal risks for businesses outside the scope of GDPR.

Issues with Bridging Contexts of Personal Data Collection and Use

In our era of Big Data collection, cloud computing, and interoperability, it is increasingly common for businesses to collect data sets from different sources and combine or compare them to create more comprehensive profiles of data subjects. Data subjects who consent to data collection in disparate contexts may be surprised to learn that the collecting businesses have combined data sets to reveal new patterns and correlations.

Businesses can use greater transparency in their notifications to explain the sources of personal data they rely upon and how they use different data sources. Forthright disclosures can defuse data subjects’ surprise.

Moreover, just-in-time and context-specific disclosures can provide additional notifications to data subjects, thereby reducing legal risk.

Greater Surveillance Capabilities

New IoT devices are observing data subjects in ways not possible in previous generations. Smart speakers and Siri chatbots are listening and, when triggered, record voice data. IoT cameras record video in increasingly large areas of public spaces, as well as workplaces, entertainment locations, businesses, and homes.

People are concerned that drones flying near our homes are recording private activities. Pervasive surveillance is shrinking the areas in which we used to feel free from intrusion.

Currently, affective computing systems are trying to watch individuals to determine their emotional states and act accordingly. At some point in the future, AI systems fed by IoT data may be able to read minds. We have always thought that the last bastion of privacy was our internal thoughts in our minds.

When the day comes that AI systems can read minds, even that last bastion will fall. This prospect is a scary one indeed. Fortunately, that day is not near, but we should at least monitor developments in AI to remain vigilant about the privacy of our mental states and thoughts.

In today’s world, private businesses can minimize legal risk by providing location-specific notifications of data recording. Businesses providing smart speakers can provide clear disclosures of when voice recording occurs, what voice data is captured, and how long it is retained.

Commentators have also talked about requiring device-specific interface mechanisms to warn people of data collection. For instance, drones with cameras could turn on a red light to warn people that video recording is taking place. Legislation may be necessary when market solutions fail to address specific privacy threats.

Lack of Control over Personal Data

Some privacy complaints stem from a lack of control. A prime example is the data collection, use, and disclosure practices of credit bureaus. Unless they use a credit bureau’s identity theft services, consumers have no direct contractual relationship with the credit bureaus that play a critical role in lending decisions about them. Federal statutes give consumers limited rights to correct information.

In the IoT context, a data subject may have no way of receiving notice of or opting out of personal data collection. Imagine a guest in the home of a consumer who bought a smart speaker, or a child talking into a playmate’s toy that collects voice data. The guest and the child in these examples have no relationship with the company selling the device.

They are simply bystanders whose data is collected. And as non-purchasers, they may have no rights under consumer protection laws to access collected data or insist on erasure.

It may be that in today’s life, people should be educated about greater data collection possibilities and simply exercise caution about what they say in areas in which they have no control. And it may be that legislation will become necessary to curb abuses.

In the meantime, privacy notices can raise the issue of bystanders and educate consumers about respect for the privacy of their friends and family members. In addition, interface mechanisms can promote transparency. For instance, smart speakers that light up when recording or toys that display a signal when recording is occurring can promote transparency to bystanders.

New Ways to Direct Marketing Messages to Data Subjects

Advertisers are always looking at new ways to target ads to consumers. Section III.C talks about smart kiosks in malls directing targeted ads to shoppers. We may expect retail environments to target us.

But there may be new and innovative ways to deliver ads in ways impossible in previous years. For instance, imagine that you are instructing your automated vehicle to drive to a business establishment from the airport.

That automated vehicle may calculate your route and notify you that the location of your favourite coffee shop is on the way to your destination and may ask if you want to stop there for refreshment, perhaps coupled with a special discount offer.

More immediately, imagine that your smart refrigerator starts delivering discount coupons to its display. Previous generations of refrigerators have never delivered ads to their owners.

If current trajectories hold, it may simply be another fact of life that we are going to face more targeted advertisements in previously ad-free locations and contexts.

Manufacturers and service providers can mitigate legal risks by notifying consumers about when they may deliver ads and offering them the ability to opt out. For instance, some presumably large swath of the population may never want to see ads coming from a refrigerator, even if doing so could save purchasers money.

Blockchain and Privacy

One privacy issue that creates an interesting dilemma is the effect of the GDPR and the California Consumer Privacy Act (CCPA) on blockchain technology. The blockchain technology used for public networks has inherent tensions with GDPR in that any personal data recorded on the blockchain is shared publicly.

Moreover, blockchain networks in which personal data is recorded cannot delete records without breaking the blockchain, while GDPR would (in the absence of an applicable exception) require the erasure of personal data upon the data subject’s demand. The CCPA would raise the same issue.

The EU wrote a helpful summary of the effect of GDPR on blockchain technology with suggestions for mechanisms and controls to try to resolve these dilemmas.

Examples include:

  • questioning whether a blockchain is needed at all for specific applications;
  • avoiding the storage of personal data on the blockchain, and instead using mechanisms to minimize personal data collection and use, such as blockchains storing only pointers to off-blockchain data, obfuscating personal data, or encrypting personal data (coupled with cryptographic key destruction upon an erasure request);
  • using private blockchain networks with access controls, rather than public networks;
  • developing technological solutions to allow the erasure of blockchain data without breaking the blockchain’s protection.
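Two of the listed mechanisms, storing only an off-chain pointer on the chain and destroying a cryptographic key upon an erasure request (sometimes called "crypto-shredding"), can be sketched together. This is illustrative only: the XOR "encryption" below is a stand-in for a real cipher such as AES, and the data stores are plain dictionaries rather than an actual ledger.

```python
# Hypothetical sketch: the chain holds only hash pointers; personal data
# lives off-chain, encrypted with a per-subject key. Destroying the key
# satisfies an erasure request without modifying the chain itself.
import hashlib
import secrets

chain = []            # append-only ledger of hash pointers
off_chain = {}        # pointer -> ciphertext
keys = {}             # subject_id -> one-time key (stand-in for real crypto)

def record(subject_id, personal_data: bytes):
    key = keys.setdefault(subject_id, secrets.token_bytes(len(personal_data)))
    ciphertext = bytes(a ^ b for a, b in zip(personal_data, key))
    pointer = hashlib.sha256(ciphertext).hexdigest()
    off_chain[pointer] = ciphertext
    chain.append(pointer)         # no personal data ever touches the chain
    return pointer

def erase(subject_id):
    keys.pop(subject_id, None)    # ciphertext becomes unreadable forever;
                                  # the blockchain is left intact

p = record("alice", b"alice@example.com")
erase("alice")
assert "alice" not in keys and chain == [p]
```

Whether key destruction counts as "erasure" under GDPR is itself a debated legal question; the sketch shows only the technical mechanism.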


Created by

Stephen Wu

Stephen Wu is an attorney and shareholder with Silicon Valley Law Group in San Jose, California. Steve advises clients concerning privacy, security, transactions, compliance, liability, and governance of emerging and mature information technologies, such as artificial intelligence, autonomous and connected vehicles, robotics, Big Data, the Internet of Things, and cloud computing. He negotiates technology agreements, resolves disputes for clients, and serves as an outside general counsel for emerging companies. Steve also advises clients on governing and assessing corporate programs to promote compliance and ethics. An author of seven data security legal books and numerous other publications, Steve is the current Chair of the American Bar Association Artificial Intelligence and Robotics National Institute. Also, Steve served as the 2010-11 Chair of the American Bar Association Science & Technology Law Section.






