Intelligent Security
A.Y. 2026/2027
Learning objectives
The cluster aims to further explore emerging topics at the intersection of artificial intelligence and cybersecurity, with particular reference to generative models and agent-based approaches (agentic AI), and their impact on digital systems and processes. It introduces the operating principles of generative models and frames their main application areas, highlighting opportunities, limitations, and risk profiles.
The cluster examines the role of AI in cybersecurity both as a means to support defensive capabilities (e.g., analysis, detection, and response support) and as a technology that introduces new attack surfaces and vulnerabilities, with particular attention to risks specific to model-based systems (e.g., input manipulation, information exposure, misuse, and dependencies on external components). The educational objective is to develop an integrated understanding that includes technical aspects, governance principles, and protection measures for the adoption and responsible use of systems that employ AI models.
The cluster is organised into the modules Generative AI and Security and AI, designed in a coherent and coordinated manner. The modules jointly contribute to the intended learning outcomes by integrating foundations of generative AI with risk analysis and the main mitigation approaches related to AI use in security scenarios, following the complementary perspectives of AI for security and security of AI.
Expected learning outcomes
Knowledge and understanding
At the end of the cluster, the student is able to:
· describe basic principles of generative models and the main application areas of generative AI, including potential impacts on digital systems.
Applying knowledge and understanding
At the end of the cluster, the student is able to:
· apply basic principles and techniques for the safe use of generative-AI-based systems in an assigned scenario, defining minimum protection requirements (e.g., sensitive data handling, input/output controls, interaction traceability, access control) and translating them into justified operational choices;
· set up and carry out a practical security-risk evaluation of a system using generative models, defining test cases and controlled scenarios (e.g., input manipulation attempts, unauthorised requests, information exposure), collecting evidence and results, and selecting appropriate mitigations, documenting effectiveness and limitations.
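To make the bullets above concrete, the kind of minimum protection requirements mentioned (input controls, output controls, sensitive-data handling) can be sketched as a small guard layer around a generative model. This is purely an illustrative assumption for study purposes: the pattern lists, function names, and redaction rule below are hypothetical, and a production system would rely on maintained classifiers and policies rather than static regular expressions.

```python
import re

# Illustrative deny-list of prompt-injection phrases (assumption: a real
# deployment would use a maintained detector, not a fixed list).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
]

# Simple pattern for one class of sensitive data in outputs: email addresses.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def check_input(prompt: str) -> bool:
    """Input control: return True if the prompt passes the deny-list check."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def redact_output(text: str) -> str:
    """Output control: mask email addresses before text leaves the system."""
    return EMAIL_RE.sub("[REDACTED]", text)

if __name__ == "__main__":
    # A benign request passes; an injection attempt is rejected.
    print(check_input("Summarise this report"))                # True
    print(check_input("Please ignore previous instructions"))  # False
    # Sensitive data is masked on the way out.
    print(redact_output("Contact alice@example.com for details"))
```

The same two functions also suggest how the test cases in the second bullet could be organised: each controlled scenario (injection attempt, exfiltration attempt) becomes an input/expected-outcome pair, and the collected pass/fail results form the evidence for the risk evaluation.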
Making judgements
The student develops the ability to:
· assess typical risks and threats associated with generative-model-based systems (e.g., data exposure, input manipulation, misuse), identifying protection requirements;
· analyse how AI can support defence and response activities, clarifying benefits and limitations compared to traditional methods;
· recognise new attack surfaces and vulnerabilities introduced by adopting AI models, discussing realistic scenarios and potential mitigations;
· propose mitigation measures and governance principles for the safe and responsible use of AI systems in operational contexts.
Communication skills
At the end of the cluster, the student is able to:
· communicate clearly security implications and trade-offs related to design and usage choices involving AI, using appropriate technical language and structured arguments.
Learning skills
The student acquires the ability to:
· autonomously update knowledge and tools related to generative models and their security implications (emerging threats, countermeasures, risk evaluation);
· experiment with and compare strategies and configurations (e.g., prompting and evaluation techniques) to improve reliability, safety, and transparency of outcomes;
· learn new solutions and libraries to integrate generative AI models into workflows, documenting decisions and limitations.
Lesson period: Third four-month period
Assessment methods: Exam
Assessment result: grade recorded on a scale of thirty (voto in trentesimi)
Single course
This course cannot be attended as a single course. Please check our list of single courses to find the ones available for enrolment.
Course syllabus and organization
Single session
Modules or teaching units
Generative AI
INFO-01/A - Informatics - University credits: 9
: 16 hours
: 12 hours
: 32 hours
: 12 hours
: 32 hours
Security and AI
INFO-01/A - Informatics - University credits: 9
: 16 hours
: 12 hours
: 32 hours
: 12 hours
: 32 hours