Social Failure & 21st Century Design

Applied ethics crucial to realizing the benefit of new innovations


Tealfeed Guest Blog


Originally written by Mairead Matthews on Medium.

Canada Research Chair Jason Millar is an engineer and philosopher who studies social and ethical issues related to new innovations in technology. Below is an overview and discussion of some of his most recent work.

Following its product launch in 2013, Google Glass saw two years of poor sales and heavy criticism prior to being shelved officially in 2015. Alongside other social and ethical considerations, critics were concerned about personal privacy — most notably, that Google Glass gave users the ability to seamlessly record private conversations and interactions with others, as well as the ability to employ facial recognition software.

St. George’s Hospital Medical School designed a new computer program to automate the screening of medical school applicants in 1979. By 1988, St. George’s had been found guilty of racial and gender discrimination in its admissions process: the program had been trained on historical data sourced from a time when the school had openly discriminated against certain groups of applicants, and so had inadvertently been designed to reproduce discriminatory human biases.

March 2018 marked one of the most high-profile fatal accidents involving an autonomous vehicle to date. The US National Transportation Safety Board (NTSB) determined in November 2019 that the collision resulted from a series of decisions by Uber ATG, an organization which, according to the NTSB, had failed to make clear the abilities and limitations of its vehicles. Federal regulators have since been called upon to establish a formal review process before allowing companies to test automated vehicles on public roads.

In each of the cases above, individuals responsible for the design and deployment of new, innovative technologies failed to consider the full spectrum of social and ethical implications including, but not limited to, justice, bias, fairness, interpretability, explainability, control, power, gender, privacy, discrimination, truth, and equality (Millar, 2019).

St. George’s Hospital Medical School failed to consider the ethical implications of using biased historical data in their admissions process; Uber ATG failed to establish clear lines of responsibility and accountability before testing near-driverless cars; and Google failed to consider personal privacy in designing Google Glass.

With both an engineering and ethics background, Canada Research Chair Jason Millar is uniquely positioned to perform cutting-edge research in this area. Studying the various ways designers and engineers tend to overlook the ethical and social considerations of their work, Millar has found ethical and social analysis crucial to realizing the benefit of many new innovations like machine learning algorithms, driverless cars, and robots.

Baked into the practice of engineering is an in-depth understanding of the various ways materials and mechanical systems in technology fail: corrosion, erosion, fatigue, and overload, just to name a few. In engineering, these breakdowns are referred to as failure modes, generally classified as either material or mechanical in nature. From this body of knowledge, engineers have been able to develop an effective list of tools, codes, standards, risk assessments, and other best practices aimed at preventing future material or mechanical failures in engineering and design.

Alarmingly, Millar has found existing approaches to ethical analysis to be somewhat out of step with new and emerging risks. That is, unlike with material and mechanical failure, there are no universally accepted tools, codes, standards, or risk assessments aimed at preventing social and ethical problems related to AI, automation, and autonomous robots (though there have been ample efforts to establish a common set of high-level ethical principles to guide decision making around autonomous and intelligent systems). In response to this finding, Millar has developed a thoughtful set of tools and techniques for engineers and designers to incorporate into their daily practice, three of which are listed and explained below.

At the core of his research, Millar argues that in addition to being able to fail materially or mechanically, new technologies may also fail socially: social failure occurs when an artefact’s design conflicts with the accepted social norms of its users or environment to the extent that its intended use is prevented or diminished (Millar, 2019). In other words, products and tools may be designed in such a way that they transgress fundamental social norms and ethical expectations, ultimately causing their benefits to go unrealized. In line with this argument, Millar has begun compiling a list of common social failure modes for engineers and designers to use in creating tools, codes, standards, or for risk assessments.
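To make the analogy with mechanical failure modes concrete, one could imagine recording social failure modes in a simple risk register, the way a mechanical failure-mode worksheet records corrosion or fatigue risks. The sketch below is purely illustrative: the mode names, norms, and likelihood/severity scoring scale are assumptions for demonstration, not Millar's actual taxonomy.

```python
from dataclasses import dataclass

@dataclass
class SocialFailureMode:
    name: str            # e.g. "privacy transgression"
    norm_violated: str   # the social norm the design conflicts with
    likelihood: int      # 1 (rare) .. 5 (near certain)
    severity: int        # 1 (minor) .. 5 (benefit fully unrealized)

    @property
    def risk(self) -> int:
        # Conventional likelihood-times-severity risk score
        return self.likelihood * self.severity

# Hypothetical entries echoing the Google Glass and St. George's cases
register = [
    SocialFailureMode("privacy transgression",
                      "consent to being recorded", 4, 5),
    SocialFailureMode("bias reiteration",
                      "equal treatment of applicants", 3, 5),
]

# Review the highest-risk modes first
for mode in sorted(register, key=lambda m: m.risk, reverse=True):
    print(f"{mode.name}: risk {mode.risk}")
```

The point of such a register is not the numbers themselves but the discipline it imposes: naming the social norm at stake forces the designer to articulate what, exactly, the design might transgress.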

In hopes of establishing a practical way to conduct ethical analysis in engineering and design, Millar and his team at the University of Ottawa’s Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL) are developing value maps and other worksheets for designers and engineers to use in their daily practice. These worksheets are intended to guide engineers and designers through a process Millar calls value exploration. This process first seeks to identify the full range of stakeholders involved in the development of a given technology, along with their respective values, and then explore any existing value tensions that may need to be addressed during the engineering and design process.

One common example of value tension occurs in the context of automated decision-making systems. While some stakeholders may value transparency and the ability to understand how the algorithms behind automated decision-making systems work, others may value intellectual property rights and the ability to keep valuable and proprietary information private. In this context, value maps and other kinds of worksheets may assist designers and engineers in identifying the right amount of transparency and IP protection needed for their products.
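The value-exploration process described above can be sketched in code: map each stakeholder to the values they hold, then flag pairs of values known to pull against each other. Everything here is an illustrative assumption — the stakeholders, the values, and the tension table are invented for demonstration and are not CRAiEDL's actual worksheets.

```python
# Hypothetical value map: stakeholder -> values they hold
value_map = {
    "end users":  {"transparency", "privacy"},
    "vendor":     {"intellectual property", "profitability"},
    "regulators": {"transparency", "accountability"},
}

# Pairs of values that commonly conflict in automated decision systems
known_tensions = {
    frozenset({"transparency", "intellectual property"}),
    frozenset({"privacy", "accountability"}),
}

def find_tensions(value_map, known_tensions):
    """Return (stakeholder_a, value_a, stakeholder_b, value_b) tuples
    where two stakeholders hold values in known tension."""
    found = set()
    stakeholders = list(value_map)
    for i, a in enumerate(stakeholders):
        for b in stakeholders[i + 1:]:
            for va in value_map[a]:
                for vb in value_map[b]:
                    if frozenset({va, vb}) in known_tensions:
                        found.add((a, va, b, vb))
    return found

tensions = find_tensions(value_map, known_tensions)
```

Here the transparency valued by end users and regulators surfaces as a tension with the vendor's intellectual property — exactly the trade-off the paragraph above describes — giving the design team an explicit list of conflicts to resolve before committing to a design.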

Other tools developed by Millar are much more specific to their intended applications. For example, Millar developed a tool to evaluate automated ethical decision making in autonomous robots, such as autonomous vehicles, virtual assistants, or social robots. Millar sought to develop a tool that was user-centred and proportional in its approach, that acknowledged and accepted the psychology of user-robot relationships, that helped designers satisfy the principles contained in the human-robotics interaction (HRI) Code of Ethics, and that helped designers distinguish between acceptable and unacceptable design features (Millar, 2016). The result was a series of 12 questions for engineers, designers, and policymakers to consider when evaluating automated ethical decision-making systems.

The Government of Canada developed its own Algorithmic Impact Assessment tool in 2019, which was a series of questions designed to help public service employees assess and mitigate risks associated with deploying an automated decision system. Interestingly, Canada was the first country in the world to develop this kind of procedure.
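A questionnaire-based assessment like the one described above can be reduced, at its simplest, to weighted yes/no answers mapped to an impact level. The sketch below is illustrative only: the questions, weights, and level thresholds are invented for demonstration and do not reproduce the Government of Canada's actual Algorithmic Impact Assessment.

```python
# Hypothetical question weights; a negative weight marks a mitigating factor
questions = {
    "decisions affect legal rights":      3,
    "system uses personal information":   2,
    "decisions are fully automated":      2,
    "outcomes are reversible on appeal": -1,
}

def impact_level(answers):
    """Map yes/no answers to a hypothetical impact level I-IV."""
    score = sum(weight for q, weight in questions.items() if answers.get(q))
    if score <= 1:
        return "I"
    if score <= 3:
        return "II"
    if score <= 5:
        return "III"
    return "IV"

level = impact_level({
    "decisions affect legal rights": True,
    "system uses personal information": True,
    "outcomes are reversible on appeal": True,
})
```

In this toy version, higher impact levels would trigger proportionally stronger mitigation requirements — the same proportionality principle that runs through Millar's user-centred tools.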

As new technologies and new applications for existing technologies emerge over the coming years, it will be vital to continue to develop and perfect practical tools for ethical and social analysis in engineering and design.
