Abstract:
Global companies manifest a growing trend in the adoption of Artificial Intelligence,
mainly due to the economic benefits it brings to organizations, such as increased efficiency and productivity, resulting in an overall rise in profits. However, companies must also be aware of the ethical issues that the adoption of AI
raises. Unemployment, inequality, security, and transparency are only some of the
ethical risks companies have to deal with. It turns out that managers are
required to make decisions in so-called “ethical dilemma” situations, in which
their actions have different consequences for different segments of society. An action that appears right to one group of people may be regarded as an unethical
practice by others, and it is therefore not easy for managers to take a position. For
this reason, three normative ethical theories are applied in the context of AI use
within companies, as they provide some ethical guidelines. The three ethical theories taken into consideration are: the theory of responsibility by Hans Jonas, Kant’s
categorical imperative, and the capability approach of Sen and Nussbaum. In order
to operationalize and interconnect these three ethical theories, a business ethics
Canvas is developed. Finally, this model is applied to a real corporate case: the introduction of the AI software SO99+ at Fischer Italia.