We are soliciting expressions of interest in postdoctoral positions that will soon be available at the University of L’Aquila (2 years, with possible extension) within the projects summarized below.
The hired postdocs, together with the scientific advisors, will participate in one of the projects. The required competencies concern one or more of the following topics: agents and multi-agent systems, logic programming, automated reasoning, cognitive architectures, and neuro-symbolic AI. Implementation skills are required.
The expression of interest, together with a CV, should be sent to [email protected].
If you are still a Ph.D. student, please indicate the expected date of your Ph.D. defense.
TrustPACTX – Design of the Hybrid Society Humans-Autonomous Systems: Architecture, Trustworthiness, Trust, EthiCs, and
EXplainability (the case of Patient Care)
The project’s objective is the realization of a prototype instance, in the healthcare field, of a novel notion of “Hybrid Society” (HS) that we intend to propose, in which humans and autonomous systems (ASs, or “agents”) are coupled at multiple levels. The envisaged HS will encompass human users’ principles and standards. Users in the Hybrid Society will be supported by Personal Assistant Agents (PAs), which will connect each user to the HS. The HS will have various kinds of users, including patients, doctors, caregivers, healthcare professionals, and healthcare administrators. PAs will give users access to relevant services, each encapsulated in its own AS (e.g., medical doctors, emergency rooms, emergency transportation, specialist clinics, entertainment facilities). The aim is to define, implement, and test on real case studies the specialized HS, incorporating the principles and the ethical and conduct codes required in the healthcare field. The project will devise a working approach to the design and development of the healthcare HS by defining: (i) a technological infrastructure for the specification of the HS; (ii) methods to monitor and control the operation of ASs, so as to ensure their trustworthiness and the consistent behavior of the overall system; and (iii) strategies to embody and enact norms and ethics in ASs, and to explain their behavior, so as to build human trust in ASs. Simulations will study to what extent patients, physicians, families, etc. may accept, influence, or reject decisions taken by an AS (possibly embodied in a robot).
ADVISOR – ADaptiVe legIble robotS for trustwORthy health coaching
This project aims to advance the current state of the art in integrating robotic technologies to improve the quality, efficiency, and success of telehealth and at-home healthcare. It will investigate how to build trustworthy and transparent Socially Assistive Robots able to promote healthy lifestyle habits by engaging people in social interactions, using behavioral and social cues (verbal and non-verbal) as well as emotional and cognitive abilities adapted to each individual. An increasingly important issue for the acceptance of robots in human homes is not only the pertinence of a robot’s behaviors and application, but also the transparent interpretability of those behaviors and of the underlying decision-making processes. To fulfill this need, current approaches have to be extended toward systems that can analyze and modulate a robot’s behaviors to provide transparent, interpretable information to users. This is particularly relevant in a healthcare scenario, where we envisage robots able to adapt their behaviors to each user’s needs in terms of personality, cognitive profile, medical records, requirements, and requests. To endow people with the ability to understand and predict a robot’s behaviors, the project aims to develop a robotic Cognitive Architecture that integrates several emerging techniques for building legible and trustworthy robots, such as the robot’s ability to talk to itself (i.e., Inner Speech) and support for improving the quality and accuracy of the user’s mental model of the robot (i.e., Theory of Mind). To ensure adherence to the expected behaviors and the well-being of users, robots must be able to monitor their own state and the surrounding context, so as to dynamically adapt and recover from violations.