Nael Barakat, Ph.D., P.Eng. FASME
University of Texas at Tyler

According to a report by Goldman Sachs, generative Artificial Intelligence (AI) has the potential to raise global GDP by 7% (Goldman Sachs Report).  Considering the recent vast and rapid advances in implementing AI in every aspect of life, qualifications related to engineering and STEM fields are likely to dominate the requirements of emerging jobs.

Meanwhile, many ethical concerns have been raised regarding AI.  As with every emerging technology, these concerns range from highly optimistic expectations (AI will cure cancer forever) to extremely pessimistic ones (AI will control our lives), along with concerns of intermediate relevance (automation will introduce health issues related to reduced physical activity) (Pew Research Center).  This range of concerns is a natural reaction to any new technology and to imagined forecasts of how it could impact humans and society, especially when the technology enters human lives so quickly, and given sobering historical incidents of technological failure (Fukushima and Chernobyl) and recent events involving AI (the Facebook-Cambridge Analytica scandal).  Moreover, characteristics of AI such as autonomy and the capacity for manipulation augment its potential impact on human life and the world.  Consequently, from the large mix of misinformation exchanged in public, a collection of relevant ethical concerns can be identified and grouped into three categories, as stated by B.C. Stahl (Artificial Intelligence for a Better Future): 1) ethical concerns arising from machine learning, 2) general issues related to living in a digital world, and 3) metaphysical issues.

The engineering educator is left with the challenge of adequately equipping engineering graduates to deal with AI at every level, from generation to utilization. Nevertheless, engineering ethics educators have accumulated relevant experience and evidence-based approaches that provide a flexible starting base and allow for creativity in accommodating an emerging technology such as AI. These experiences emphasize equipping engineering students with a robust understanding of the foundational ideas of professional ethics, together with an objective and technically sound understanding of the facts and context of each ethical situation.  The skills needed to reach a good understanding of the factual issues come from training with examples and case studies of ethical problems, which give engineers opportunities to navigate the plethora of AI-related ethical questions exchanged among the public and to apply their combined knowledge and skills.  Examples of AI ethical concerns include data bias and the idea of machines being unethical. Following the approach of always referring to foundational knowledge and then applying objective analysis, the data bias question can be handled by starting with the basics: the word bias implies a measurement within a frame of reference.  Collecting data within a frame A, with its particular constraints and conditions, and then using the same data set to resolve or judge an issue within a different frame B will automatically create a level of bias.  In this sense, data itself is never biased; rather, its utilization can be biased, and any algorithm built on that data inherits the bias.  Similarly, the idea of machine ethics ought to be connected back to the concept of moral agency as a prerequisite for ethical reasoning.
A relevant explanation of man-machine interaction, and of which side bears responsibility, can be extracted from Isaac Asimov's three basic laws of robotics (Isaac Asimov's Three Laws of Robotics).  Once misconceptions have been cleared and facts have been identified, engineers can confidently move forward to devise innovative solutions based on sound professional knowledge, which integrates both technical and ethical concepts.
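The frame-of-reference argument above can be sketched in code. The scenario below is entirely hypothetical (a temperature sensor whose readings are collected indoors and then reused outdoors); it is only a minimal illustration of how a data set that is a faithful record of frame A produces a systematic error, i.e. bias, when applied in a different frame B.

```python
# Hypothetical illustration of frame-of-reference bias: the data faithfully
# describes frame A; bias appears only when it is used outside that frame.
import random
import statistics

random.seed(0)

# Frame A: readings collected indoors, true mean temperature 20 C.
frame_a_data = [random.gauss(20.0, 1.0) for _ in range(1000)]

# A trivially simple "model": predict the mean of the training data.
model_prediction = statistics.mean(frame_a_data)

# Frame B: the same sensor deployed outdoors, true mean temperature 5 C.
frame_b_truth = [random.gauss(5.0, 1.0) for _ in range(1000)]

# Systematic error (bias) from applying frame-A data within frame B.
bias = model_prediction - statistics.mean(frame_b_truth)
print(f"prediction learned from frame A: {model_prediction:.1f}")
print(f"systematic bias in frame B: {bias:+.1f}")
```

The point the sketch makes is the one in the text: nothing is wrong with the recorded numbers themselves, and any algorithm trained on them will reproduce frame A accurately; the bias is created entirely by the human decision to reuse the data in frame B.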

Many similar concerns can be analyzed to establish their validity, and a resolution, with a creative and responsible engineering mindset that has the discussion of AI ethics (or the ethics of any emerging technology) ingrained in its DNA as a healthy, integral part of every stage of AI development and implementation.  Such engineers would find it seamless to identify the ethical benefits of technology as part of their awareness of its broader impact, not just its ethical concerns; this positive side is less often mentioned in engineering ethics education, or even in the interpretation of accreditation requirements for engineering programs (e.g., Criterion 3: S.O. 4, ABET EAC Accreditation Criteria 22-23).  This differs from what usually takes place when a discussion of ethics begins with ethical problems implied immediately, which biases the thought process and pushes it to the defensive side.  Establishing a balanced understanding of AI, including both its ethical benefits and its concerns, paves the way for discussing pragmatic ideas such as: 1) data collection conditions, data size, and data use conditions influence AI results, but these constraints are usually defined by a human; 2) AI ethical concerns should not be handled superficially or based on media presentations, and real ethical dilemmas should be distinguished from non-ethical problems (e.g., killing with AI); and 3) engineers' responsibility does not stop at ensuring that AI-based products are designed and built with the least possible risk of infringing on ethical standards, but extends to engaging with and educating the public, especially decision-making bodies, about the limitations of AI as a socio-technological system and the fact that its ultimate goal is defined by humans, who hold the moral agency that comes with responsibility and accountability.
