Advanced Intelligent Systems
A.Y. 2025/2026
Learning objectives
The course provides an in-depth understanding of machine learning and artificial intelligence algorithms. Students will acquire the ability to analyze and model complex problems, together with a methodology for solving them.
Expected learning outcomes
Students will gain a deep understanding of the fundamentals of statistical learning, reinforcement learning, fuzzy systems, decision trees, neural networks, and genetic algorithms, also through the development of a project in one of these areas. The debate on the weak and strong positions in artificial intelligence will also be analyzed.
Lesson period: Second four-month period
Assessment methods: Exam
Assessment result: grade recorded on a 30-point scale
Single course
This course can be attended as a single course.
Course syllabus and organization
Single session
Responsible
Lesson period
Second four-month period
Course syllabus
Symbolic intelligence: the Turing machine and the Chinese room thought experiment. Weak and strong positions on AI. Collective intelligence. Fuzzy sets and fuzzy systems.
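As an illustration of the fuzzy-systems topic, a minimal Python sketch (not taken from the course material; the temperature values, set boundaries, and rule are hypothetical) of a triangular membership function and a single Mamdani-style rule evaluated with min/max operators:

def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic variables for a room-temperature controller.
temp = 27.0
hot = triangular(temp, 24.0, 30.0, 36.0)    # degree to which 27 °C is "hot"
warm = triangular(temp, 18.0, 24.0, 30.0)   # degree to which 27 °C is "warm"

# Rule: IF temperature is hot OR warm THEN fan is on (firing strength via max).
fan_on = max(hot, warm)
print(f"hot={hot:.2f}, warm={warm:.2f}, fan activation={fan_on:.2f}")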
Statistical learning: statistical distributions. Maximum likelihood and least squares. Analysis of variance. Bayesian estimation and its relationship to regularization.
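As an illustration of the comparison between the Bayesian estimate and regularization, a minimal Python sketch (synthetic data; the regularization weight lam is an arbitrary choice) showing that the ridge penalty added to least squares plays the role of a Gaussian prior in the MAP estimate:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=50)

# Maximum likelihood / least squares: w = (X^T X)^-1 X^T y
w_ml = np.linalg.solve(X.T @ X, X.T @ y)

# MAP estimate with a Gaussian prior (ridge): w = (X^T X + lam I)^-1 X^T y
lam = 1.0
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

print("ML / least squares:", np.round(w_ml, 3))
print("MAP / ridge       :", np.round(w_map, 3))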
Agent learning. Supervised, unsupervised, and reinforcement learning. Clustering and the associated metrics. K-means and quad-tree decomposition. Hierarchical clustering. Neural networks and the non-linear perceptron. Kohonen maps and competitive learning. Multi-scale regression. Applications.
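As an illustration of clustering with a Euclidean metric, a minimal Python sketch of Lloyd's k-means iteration on toy 2-D data (the data, the number of clusters, and the iteration count are arbitrary, not taken from the course material):

import numpy as np

rng = np.random.default_rng(1)
points = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 2))
                    for c in ([0, 0], [3, 3], [0, 3])])

k = 3
centroids = points[rng.choice(len(points), size=k, replace=False)]
for _ in range(20):
    # Assignment step: each point goes to its nearest centroid (Euclidean distance).
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each centroid becomes the mean of its assigned points.
    new_centroids = []
    for j in range(k):
        members = points[labels == j]
        new_centroids.append(members.mean(axis=0) if len(members) else centroids[j])
    centroids = np.array(new_centroids)

print("final centroids:\n", np.round(centroids, 2))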
Reinforcement learning. Associative and non-associative settings. Stationary and non-stationary problems. Greedy and epsilon-greedy policies. Markov models. Value function computation and the Bellman equations. Temporal-difference learning and Q-learning. Extending the temporal span with eligibility traces. Stochastic automata.
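As an illustration of temporal-difference learning, a minimal Python sketch of tabular Q-learning with an epsilon-greedy policy on a hypothetical five-state corridor (states, rewards, and hyperparameters are invented for the example):

import random

N_STATES, ACTIONS = 5, (-1, +1)          # states 0..4, actions: left, right
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    s = 0
    while s != N_STATES - 1:             # episode ends at the rightmost state
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda i: Q[s][i])
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Temporal-difference (Q-learning) update toward the Bellman target.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([round(max(q), 2) for q in Q])     # learned state values along the corridor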
Biological intelligence. The neuron: sub-threshold behavior and the action potential. Structure of the neuron and of the central nervous system. Mirror neurons. Genetic algorithms and evolutionary optimization. The role of parameters. Examples.
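As an illustration of genetic algorithms and the role of their parameters, a minimal Python sketch of selection, crossover, and mutation on the classic one-max problem (population size, mutation rate, and number of generations are arbitrary choices):

import random

def fitness(bits):
    return sum(bits)                      # one-max: count the 1s in the genome

def select(population):
    # Tournament selection: the fitter of two random individuals becomes a parent.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

POP, LENGTH, GENERATIONS, MUT_RATE = 30, 20, 60, 0.02
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    children = []
    for _ in range(POP):
        p1, p2 = select(population), select(population)
        cut = random.randrange(1, LENGTH)                                    # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [1 - g if random.random() < MUT_RATE else g for g in child]  # bit-flip mutation
        children.append(child)
    population = children

print("best fitness:", max(fitness(ind) for ind in population))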
Prerequisites for admission
None
Teaching methods
Lectures with exercises and open questions on theory. Laboratory sessions to prepare students for the final project.
Teaching Resources
Slides available at: https://ais-lab.di.unimi.it/Teaching/SIA/Programma.html
S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 2003.
R. Sutton and A. Barto, Reinforcement Learning: An Introduction, MIT Press, 2019.
B. Kosko, Neural Networks and Fuzzy Systems, Prentice Hall, 1991.
C. Bishop, Pattern Recognition and Machine Learning (Bayesian learning), Springer Verlag, 2006.
J. Hertz, A. Krogh and R. Palmer, Introduction to the Theory of Neural Computation, Addison Wesley, 1991.
Assessment methods and Criteria
Oral examination on the theory, plus a project.
Professor(s)