Human-Centered AI Masters

In recent years, and especially against the backdrop of the COVID-19 pandemic, ethical considerations have risen to the forefront across all research domains, and artificial intelligence is no exception. Arguably, the most affected component of the AI ecosystem is data. Even before the outbreak of the pandemic, the European Data Strategy identified establishing the European Union as a role model for a society empowered by data as a key aim[i]. A central element of that strategy is doubling the number of data science professionals in Europe to over 10 million, which poses a significant challenge to the fields of data science and artificial intelligence, not least because meeting these market needs also requires the ethical collection, processing, and storage of data.


In the same vein, a High-Level Expert Group of the European Commission presented ethics guidelines for trustworthy AI, aiming to ensure ethical, trustworthy, and robust AI development[ii]. Globally, recognized universities such as Stanford and MIT have launched their own initiatives in human-centred artificial intelligence, recognizing the pivotal role of AI in the well-being and prosperity of society and citizens alike. However, as the High-Level Expert Group on Artificial Intelligence of the EU states, “ensuring Trustworthy AI requires us to build and maintain an ethical culture and mindset through public debate, education and practical learning”[iii], which in turn requires equal access to formal and informal education on ethical and trustworthy artificial intelligence.


To this end, a group of European universities, centres of excellence, and SMEs has come together to develop a joint AI programme addressing market and industry needs concerning human-centred artificial intelligence. The consortium consists of 10 partners from 5 European Member States (Bulgaria, Hungary, Ireland, Italy, and The Netherlands), representing four geographical areas of Europe (South, North, West, and Central-East). This breadth enables the partners to gather a holistic view of Europe's market and needs with relation to human-centred artificial intelligence and to use that knowledge as the backbone for the design of the Human-Centered AI Master’s Programme.


HCAIM believes in AI that is ethical and technically robust, enhancing our humanity while ensuring adherence to ethical principles and values. The programme is dedicated to developing analytical, design, and creative skills on sound foundations of AI and ethics, balancing different areas of expertise and integrating AI with human-centred systems and applications.


The goal of HCAIM is to develop a holistic master’s programme that supports the legal, regulatory, and ethical adoption of AI by creating resources that foster deep knowledge of AI and human-centred approaches to its application. Graduates of this programme will have not only a strong technological background but also knowledge of AI ethics and regulation, as well as the competencies needed to apply that knowledge in real-world situations.


The HCAIM project is co-financed by the Connecting Europe Facility (CEF) instrument of the European Commission.






[i] European Commission. A European Strategy for Data. February 2020. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Available online at:

[ii] European Commission. High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. April 2019. Available online at:

[iii] European Commission. High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. April 2019. Page 9. Available online at:
