A consortium of four European universities – three of them SEFI members –, together with research institutes and AI companies, has established the Human-Centered AI Masters Programme (HCAIM), which launched last year at (in alphabetical order) Budapest University of Technology and Economics, HU University of Applied Sciences Utrecht, Technological University Dublin, and University of Naples Federico II (Héder, 2022).
Designed with an interdisciplinary aim, the curriculum incorporates the humanities – especially ethics and law – and the social sciences into the technical education of Artificial Intelligence.
What made the development of this activity challenging is the still-evolving body of knowledge, together with the social construction of the value system related to AI. Simply put, not enough time has passed since the current wave of end-user AI applications commenced, so both the social expectations and the technical possibilities are still evolving.
What we know for sure is that AI, like any software, is capable of creating technological lock-in situations, because the economic incentives for reusing pre-existing code are extremely strong. Our contemporary software tools for productivity, collaboration, AI and even science reflect this reality. Moreover, unless there are prohibiting factors (e.g., software that needs to run on-board), we tend to rely on the cloud. The ethical recommendation and regulation tsunami (Héder, 2020) surrounding AI is a reaction to this extreme lock-in potential (Héder, 2021).
This means that post-hoc product and service regulation alone has little hope of succeeding. Instead, we need engineers who can act morally in the design process, relying on a strong background in the humanities and social sciences. This is what the Human-Centered AI Masters focuses on. Unlike other engineering curricula, it includes a significant amount – at least 25%, possibly more depending on the thesis topic – of outright humanities, broadly construed, including ethical, legal, economic and social aspects.
Moreover, the more technology-specific subjects and all case studies also draw on foundations of ethics, social sciences and law. The development of the curriculum required ground-breaking work on incorporating topics such as social responsibility, transparency and explainability; value-based design; and bringing social fairness into these re-thought technical subjects.
Work done in the field of engineering education for the previous wave of regulation, mostly concerning privacy and data governance, is also incorporated. As a result, students can learn privacy-preserving machine learning, algorithmic justice and the prevention of bias, and forward-looking AI topics, including the singularity, robot rights movements, and human-machine biology, are also discussed.
The learning events, slides, lecture and lab notes and other teaching materials created by the consortium will be made publicly available before the end of 2023.
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.