
Ethical Concepts

Why are we coming to ethics?

  • “Ethics provides tools for the moral evaluation of behaviors, institutions, and social structures.” (National Academies of Sciences, Engineering, and Medicine, 2022)
  • In technical courses/lessons, ethics is often positioned as an addendum instead of a foundational consideration. By embedding ethics into the engineering mindset, future creators of technology will be encouraged to understand and act upon the broader social and political ecosystem within which their technology is situated.
  • “The lack of diversity in tech, combined with employers prioritizing profit over consideration of the potential negative impacts of computing, has led to a ‘tech ethics crisis’ where software engineers see ethics as a ‘specialty’ rather than ‘foundation[al to] all design’ [Zunger cited in 12, par. 3], and where ‘moral weight [is] not on the work of engineers but instead the ad hoc uses of engineered artifacts’” (Ryoo et al., 2021)


What might different lenses of ethics offer? What are their limitations?


How might ethical concepts be influential on the scale of an individual, professional, community, or society? What are the implications of choosing a particular scope?

  • Access to Technology 
  • Accessibility of Technology
  • Accountability and responsibility
  • Bias
  • Community Engagement
  • Environmental Impact / Sustainability 
  • Equity
  • Fairness
  • Freedom of Speech
  • Inclusion / Participation
  • Justice
  • Labor
  • Liability
  • Mis/information
  • Mis/interpretability
  • Misuse
  • Privacy / Surveillance
  • Regulation / Law
  • Reliability
  • Risk / Safety
  • Transparency / Explainability
  • Trust
  • Vulnerabilities / Hacking 
  • Whistleblowing
  • Workplace Culture / Behavior


What do these resources offer? What are their limitations?

Fairness
  • Procedural fairness: it’s fair if the procedure it uses is fair
  • Distributive fairness: it’s fair if it leads to fair outcomes (one common operationalization is sketched after this list)
  • Representational fairness: it’s fair if it doesn’t reinforce the subordination of some groups along lines of identity (e.g. denigrate, stereotype, or fail to recognize a group)
  • Why it’s so damn hard to make AI fair and unbiased (Vox)
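
To make the distributive lens concrete, here is a minimal sketch (not from the sources above) of demographic parity, one common way of checking whether favorable outcomes are distributed evenly across groups. The 0.8 threshold echoes the informal “four-fifths rule” and is an assumption, not something the list above prescribes.

```python
# Hypothetical sketch of demographic parity, one operationalization of
# distributive fairness: compare favorable-outcome rates across groups.
from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, outcome) pairs; outcome 1 = favorable."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def parity_ratio(decisions):
    """Lowest group rate divided by highest; 1.0 means identical rates."""
    rates = positive_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Toy loan decisions: group A is approved 60% of the time, group B 35%.
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 35 + [("B", 0)] * 65
ratio = parity_ratio(decisions)
print(f"parity ratio: {ratio:.2f}")  # 0.58, well below the 0.8 rule of thumb
```

Note the limitation this exposes: a system can pass a parity check like this while still being procedurally or representationally unfair, which is why the three lenses above are distinct.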
Bias
  • #ethicalCS: Thinking about Bias in Computing classes
  • Algorithmic Types of Bias
    • Pre-existing, technical, emergent bias
    • Historical, representation, measurement, aggregation, evaluation, and deployment bias
  • When Bias Emerges
    • Training data (most commonly cited source; see the sketch below)
    • Model selection
    • Inadequate/narrow objective functions
    • Failure modes
    • Deployment
    • White supremacy in the tech world
  • Cognitive Biases (to be wary of when programming)
    • Confirmation bias:
      • Definition: seeking out evidence that supports pre-existing beliefs and taking it at face value, while critically judging evidence that disconfirms those beliefs
      • Implication: could make programmers less open to critique and cause them to overlook or fail to account for the negative ethical implications of their programs
    • Hindsight bias: 
      • Definition: a skewed, after-the-fact perception of how accurately you predicted an outcome
      • Implication: could distort how credit for work is claimed or assigned
    • Illusory correlation: 
      • Definition: overestimating the link between two variables, which can lead to prejudice about social groups
      • Implication: could cause programmers to misattribute bias, overlooking the details of their code that are actually causing it
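
To ground the “training data” point above, the following sketch (not from the sources above; all data and numbers are synthetic assumptions) shows how under-representation of one group in the training set produces a model that errs mostly on that group.

```python
# Hypothetical illustration of training-data (representation) bias: group B
# follows a different feature-label relationship than group A but is heavily
# under-represented, so one shared model learns A's pattern and fails on B.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, w):
    """Sample n points whose label follows the group's weight vector w."""
    X = rng.normal(size=(n, 2))
    y = (X @ w + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return X, y

# 1000 training examples from group A, only 50 from group B.
Xa, ya = make_group(1000, np.array([1.0, 1.0]))
Xb, yb = make_group(50, np.array([-1.0, 1.0]))
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh test samples per group: accuracy is high for A, near chance for B.
for name, w in [("A", [1.0, 1.0]), ("B", [-1.0, 1.0])]:
    Xt, yt = make_group(500, np.array(w))
    print(f"group {name} accuracy: {accuracy_score(yt, model.predict(Xt)):.2f}")
```

An aggregate accuracy computed over the skewed population would still look good here, which is one way inadequate/narrow objective functions (also listed above) can hide the harm.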
Accessibility
Justice
Privacy
  • Who will have access to the data that is collected? Will there be any restrictions on the purposes for which data is accessed, or with whom it is shared, or can those with access browse through the data whenever they want? How will requests for access by users, non-users, those accused of wrongdoing, media outlets, or others be handled? Is there any logging of access to the data, or other mechanisms for enforcing rules about sharing and access? (Source)
  • Seemingly insensitive data may act as a proxy for sensitive features; e.g., in US Census data, zip code correlates with socio-economic status and race. (Source)
  • Privacy concerns not just who owns the collected data but which rights can be transferred and what obligations collecting or receiving such data entails. (Source)
  • Users do not always know when their data is later used in ways they did not expect or desire [36]. Hence, project teams (and organizations) should not enter into confidentiality agreements that preclude explaining who their data partners are, and should make the data supply chain visible so that individuals and organizations can verify that no data misuse occurs. (Source)
  • It has been noted that people can be re-identified from anonymized data using zip code, birth date, and gender with 87% accuracy. (Source)
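
A toy sketch (not from the source; the records are invented) of why that re-identification works: the triple (zip code, birth date, gender) is unique for most people, so “anonymous” rows can be linked back to named individuals through any auxiliary dataset that contains the same fields.

```python
# Hypothetical illustration of re-identification via quasi-identifiers:
# count how many "anonymized" records share each (zip, birth date, gender)
# triple; any record with a unique triple can be linked to a person.
from collections import Counter

records = [
    ("02139", "1975-01-21", "F"),
    ("02139", "1975-01-21", "F"),  # two people share this triple: safer
    ("02139", "1980-06-02", "M"),
    ("90210", "1980-06-02", "F"),
    ("90210", "1991-11-30", "M"),
]

counts = Counter(records)
unique = sum(1 for r in records if counts[r] == 1)
print(f"{unique} of {len(records)} records have a unique triple "
      f"and are linkable to an individual")
```

This is the intuition behind k-anonymity: generalize or suppress the quasi-identifiers until every record shares its triple with at least k-1 others.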
Transparency
  • When transparency is needed, some have argued that models such as neural networks should not be used, and that one should instead use models simple enough to allow some explanation, such as explaining which covariate is driving a particular decision, perhaps even reducing to logistic regressions. Alternatively, one could consider institutional processes, documentation, and access to those documents as a way to explain the behavior of the models used [75]. In other words, if justification requires understanding why the model’s rules are what they are, one should seek explanations of the process behind a model’s development and use, not just explanations of the model itself [75]. (Source)
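
A minimal sketch (not from the source; the feature names and data are invented) of the “simple enough to explain” option above: with a logistic regression, the fitted coefficients directly show which covariate drives a decision.

```python
# Hypothetical illustration of covariate-level explanation with a simple model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt", "tenure"]
X = rng.normal(size=(500, 3))
# Synthetic ground truth in which "debt" dominates the outcome.
y = (0.2 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>6}: {coef:+.2f}")  # largest magnitude = driving covariate
```

The caveat from the passage still applies: this explains the model, not the process behind its development and use, so documentation and institutional processes remain necessary for justification [75].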
Labor