The course provides an in-depth understanding of machine learning and artificial intelligence algorithms. Students will acquire the ability to analyze and model complex problems, together with a methodology for solving them.
Expected learning outcomes
Students will gain a deep understanding of the fundamentals of statistical learning, reinforcement learning, fuzzy systems, decision trees, neural networks and genetic algorithms, reinforced by a project in one of these areas. The debate on the weak and strong positions in artificial intelligence will also be analyzed.
Lesson period: First semester
(In case of multiple editions, please check the period, as it may vary)
Symbolic intelligence: the Turing machine and the Chinese room thought experiment. Weak and strong positions on AI. Collective intelligence. Fuzzy sets and fuzzy systems.
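By way of illustration only (not official course material), fuzzy sets of the kind listed above are often introduced through triangular membership functions combined with Zadeh's min/max operators; a minimal sketch, with hypothetical linguistic labels "cold" and "warm":

```python
def tri(a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c], peak 1 at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

# Hypothetical temperature labels for illustration
cold = tri(-5.0, 5.0, 15.0)
warm = tri(10.0, 20.0, 30.0)

x = 12.0
# Zadeh operators: fuzzy intersection = min, fuzzy union = max
both = min(cold(x), warm(x))
either = max(cold(x), warm(x))
```

At x = 12, the point belongs partially to both sets; unlike crisp sets, membership is graded rather than binary.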
Statistical learning: statistical distributions. Maximum likelihood and least squares. Analysis of variance. Bayesian estimation and its comparison with regularization.
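As a small illustrative sketch (not part of the official syllabus), the link between maximum likelihood and least squares can be seen in fitting a line y = a·x + b: under Gaussian noise the maximum-likelihood estimate coincides with the least-squares solution of the normal equations.

```python
def least_squares(xs, ys):
    """Closed-form least-squares fit of y = a*x + b via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                           # intercept
    return a, b

# Toy data, roughly y = 2x + 1 with small perturbations
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.1, 4.9, 7.0]
a, b = least_squares(xs, ys)
```

The fitted slope and intercept land close to the generating values 2 and 1.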
Agent learning. Supervised, unsupervised and reinforcement learning. Clustering and associated metrics. K-means and quad-tree decomposition. Hierarchical clustering. Neural networks and the non-linear perceptron. Kohonen maps and competitive learning. Multi-scale regression. Applications.
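The k-means clustering mentioned above alternates two steps: assign each point to its nearest centroid, then recompute each centroid as the mean of its cluster. A toy one-dimensional sketch (for illustration only, not course material):

```python
import random

def kmeans(points, k, iters=20):
    """Plain k-means on 1-D data: alternate nearest-centroid assignment and mean update."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = {i: [] for i in range(k)}
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster; keep it if the cluster is empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in sorted(clusters.items())]
    return sorted(centroids)

random.seed(0)
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.3]
centers = kmeans(data, 2)
```

On this data the two centroids settle near 1.0 and 10.07, the means of the two obvious groups.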
Reinforcement learning. Associative and non-associative settings. Stationary and non-stationary problems. Greedy and epsilon-greedy policies. Markov models. Value function computation and the Bellman equations. Temporal-difference learning and Q-learning. Extending the temporal span through eligibility traces. Stochastic automata.
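Several of these ideas (epsilon-greedy policies, temporal differences, the Bellman target) come together in tabular Q-learning. A minimal sketch on a hypothetical chain environment, for illustration only: the agent moves left or right over a few states and receives reward 1 on reaching the rightmost state.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy chain MDP: actions 0=left, 1=right, goal at the right end."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection, with random tie-breaking
            if random.random() < eps or Q[s][0] == Q[s][1]:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward the Bellman target r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

random.seed(1)
Q = q_learning()
```

After training, the greedy policy moves right in every non-terminal state, and the state values decay geometrically (by gamma) with distance from the goal.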
Biological intelligence. The neuron. Sub-threshold behavior and the action potential. Structure of the neuron and of the central nervous system. Mirror neurons. Genetic algorithms and evolutionary optimization. The role of parameters. Examples.
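The genetic algorithms listed above evolve a population through selection, crossover and mutation. A toy sketch on the classic OneMax problem (maximize the number of 1-bits), for illustration only; the parameters shown (population size, mutation rate) are the kind whose role the course examines:

```python
import random

def one_max_ga(length=20, pop_size=30, generations=60, p_mut=0.02):
    """Toy genetic algorithm maximizing the number of 1-bits in a fixed-length bitstring."""
    fitness = sum  # fitness = count of ones
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection: the fitter of two random individuals, twice
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, length)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with per-bit probability p_mut
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

random.seed(2)
best = one_max_ga()
```

With these settings the best individual typically ends up at or very near the all-ones string.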
Prerequisites for admission
Russell and Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 2003. Sutton and Barto, Reinforcement Learning: An Introduction, MIT Press, 2018. B. Kosko, Neural Networks and Fuzzy Systems, Prentice Hall, 1991. C. Bishop, Pattern Recognition and Machine Learning (Bayesian learning), Springer Verlag, 2006. Hertz, Krogh and Palmer, Introduction to the Theory of Neural Computation, Addison Wesley, 1991.
Assessment methods and Criteria
Evaluation is performed through a written exam followed by a project. In the written exam, which lasts three hours, the student must solve exercises that require applying the concepts learnt in the course and answer some open questions.
The project requires applying techniques and methodologies from the course to solve a real problem.
Each exam is graded on a scale of thirty (in thirtieths). The evaluation takes into consideration the level and depth of knowledge and the clarity of language.