Policy design analysis and evaluation
A.A. 2025/2026
Learning objectives
This course trains researchers in political analysis and public policy. Thus, it contributes to delivering COM/DAPS&CO's learning outcomes in policy analysis and evaluation that will serve students in a variety of contexts including firms, public agencies, private and public interest associations, and research institutes.
Expected learning outcomes
This course equips students to shape and test causal claims about policy and institutional design. By the end of the course, students will acquire the key theoretical knowledge and skills to perform small- and large-scale analyses that establish the nature of the connection between relevant institutional features, behavior, and policy performance.
Period: First trimester
Assessment method: Exam
Grading: registered mark out of thirty
Single course
This course cannot be attended as a single course. You can find the available courses in the single-course catalogue.
Programme and teaching organization
Single edition
Coordinator
Period
First trimester
The course will utilize a dedicated website hosted on the Ariel platform, where you will find all essential materials, announcements, and resources. We will also use a Microsoft Teams channel for messaging, updates, and recordings.
In case of any emergency or unforeseen circumstance requiring a shift from in-person to remote instruction, course sessions will continue through the Ariel platform and Teams, ensuring that all students can remain engaged and up-to-date.
All prospective students, regardless of their attendance status, are warmly encouraged to email the instructor and request access to these resources.
Programme
Module A
Dissecting Policy Designs
Instructor: A. Damonte
This module equips students with the analytical tools needed to systematically break down and reconstruct policy designs. It introduces key theoretical perspectives — including institutional analysis, behavioral insights, and causal mechanisms — and shows how they can be used to unpack complex policy problems. Students will learn how to map causal structures using consolidated models such as Coleman's boat, articulate theories of change, and elaborate criteria for evaluating the match of policy instruments to policy problems. Through hands-on exercises, they will also consider how substantive and procedural tools may interact to shape policy outcomes, building practical skills in policy modelling and design.
Session 01. Introduction
Getting to know each other. Short debate on the course topics. Overview of the course structure, resources, and expectations.
Backing materials:
Easton, D. (1957). An Approach to the Analysis of Political Systems. World Politics, 9(3), 383-400. https://doi.org/10.2307/2008920
Robertson, D. B. (1984). Program implementation versus program design: which accounts for policy "failure"? Review of Policy Research, 3(3‐4), 391-405. https://doi.org/10.1111/j.1541-1338.1984.tb00133.x
Session 02. Policy design, logic models, and theory of change
Exploring the systematic approach to policy design through logic models and theories of change. Understanding how interventions connect to outcomes, building culturally responsive frameworks, and applying institutional analysis to policy development.
Backing materials:
Meyer, M. L., Louder, C. N., & Nicolas, G. (2021). Creating with, not for people: theory of change and logic models for culturally responsive community-based intervention. American Journal of Evaluation, 43(3), 378-393. https://doi.org/10.1177/10982140211016059
Polski, M.M., and E. Ostrom. (2017) An Institutional Framework for Policy Analysis and Design. In Cole, D.H. and M.D. McGinnis (eds.), Elinor Ostrom and the Bloomington School of Political Economy: Volume 3, A Framework for Policy Analysis. Lanham, MD: Lexington Books, 13-48.
Module A1.
Unpacking policy problems
Session 03. Policy problems and their structure
Analyzing how policy problems are defined, structured, and framed. Examining the social construction of target populations and how problem definition shapes policy solutions and political dynamics.
Backing materials:
Peters, B. G. (2018). Policy Problems and Policy Design. Edward Elgar, Ch. 1.
Schneider, A. and Ingram, H. (1993). Social construction of target populations: implications for politics and policy, American Political Science Review 87(2), 334-347. https://www.jstor.org/stable/2939044
Session 04. Theoretical foundations: old and new behavioralism
Tracing the evolution from rational choice theory to behavioral insights in policy studies. Examining bounded rationality, behavioral models, and prospect theory applications to political science and policy design.
Backing materials:
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99-118. https://doi.org/10.2307/1884852
Caraley, D. (1964). The political behavior approach: methodological advance or new formalism? A review article. Political Science Quarterly, 79(1), 96-108. https://doi.org/10.2307/2146576
Mercer, J., 2005. Prospect theory and political science. Annual Review of Political Science, 8(1), 1-21. https://doi.org/10.1146/annurev.polisci.8.082103.104911
Session 05. Theoretical foundations: neoinstitutionalisms
Understanding historical, rational choice, sociological, and discursive institutionalism.
Backing materials:
Immergut, E. M. (1998). The theoretical core of the new institutionalism. Politics & Society, 26(1), 5-34. https://doi.org/10.1177/0032329298026001002
Hall, P. A., & Taylor, R. C. R. (1996). Political science and the three new institutionalisms. Political Studies, 44(5), 936-957. https://doi.org/10.1111/j.1467-9248.1996.tb00343.x
Carstensen, M. B., & Schmidt, V. A. (2015). Power through, over, and in ideas: conceptualizing ideational power in discursive institutionalism. Journal of European Public Policy, 23(3), 318-337. https://doi.org/10.1080/13501763.2015.1115534
Session 06. Conceptual foundations: Coleman's boat
Policy problems' drivers: situational, action-formation, and transformational mechanisms.
Backing materials:
Hedström, P. and Ylikoski, P., 2010. Causal mechanisms in the social sciences. Annual Review of Sociology, 36(1), pp.49-67. https://doi.org/10.1146/annurev.soc.012809.102632
Raub, W., Buskens, V. and Van Assen M.A.L.M. (2011). Micro-macro links and microfoundations in sociology. The Journal of Mathematical Sociology, 35:1-3, 1-25. https://doi.org/10.1080/0022250X.2010.532263
Session 07. Group exercise
Select a current policy challenge. Unpack it in terms of Coleman's boat, with special attention to plausible situational, action-formation, and transformational mechanisms. Present your analysis to the class.
Deliverable to submit: a max 1500-word written report, including the visualization of your Coleman's boat, according to the following template:
a) title
b) description of the policy problem
c) identification of the key actors
d) analysis of each mechanism (situational → action formation → transformation)
e) Coleman's boat visualization
f) concluding reflections
The deliverable will be assigned:
- max 4 pts for comprehension and use of the Coleman's boat framework (correct, complete, clear, consistent discussion of the mechanisms);
- max 2 pts for analytical depth (credible justification of the mechanisms in light of course and external knowledge);
- max 1 pt for the quality of the visualization (accurate and clear diagram);
- max 1 pt for writing quality (organized and clear writing).
Module A2.
Changing people's behavior
Session 08. Policy tools: behavioral assumptions of addressees' compliance
Tweaking situational mechanisms.
Backing materials:
Vedung, E. (2017). Policy instruments: Typologies and theories. In Bemelmans-Videc, M.-L., Rist, R. C., & Vedung, E. (Eds.). Carrots, Sticks and Sermons (pp. 21-58). Routledge.
Schneider, A., & Ingram, H. (1990). Behavioral assumptions of policy tools. The Journal of Politics, 52(2), 510-529. https://doi.org/10.2307/2131904
McDonnell, L. M., & Elmore, R. F. (1987). Getting the job done: alternative policy instruments. Educational Evaluation and Policy Analysis, 9(2), 133-152. https://doi.org/10.2307/1163726
Strassheim, H. (2021). Behavioural mechanisms and public policy design: Preventing failures in behavioural public policy. Public Policy and Administration, 36(2), 187-204. https://doi.org/10.1177/0952076719827062
Session 09. Regulation
Examining regulation as a policy tool, from command-and-control to meta-regulation approaches. Understanding compliance behavior, enforcement strategies, and potential crowding-out effects of regulatory interventions.
Backing materials:
Lemaire, D. (2017). The stick: regulation as a tool of government. In Bemelmans-Videc, M. L., Rist, R. C., & Vedung, E. (Eds.). Carrots, Sticks and Sermons (pp. 59-76). Routledge.
Scott, C. (2003). Speaking softly without big sticks: Meta‐regulation and public sector audit. Law & Policy, 25(3), 203-219.
Reinders Folmer, C. P. (2021). Crowding-out effects of laws, policies, and incentives on compliant behaviour. In B. van Rooij & D. D. Sokol (Eds.), The Cambridge Handbook of Compliance (pp. 326-340). Cambridge University Press. https://doi.org/10.1017/9781108759458.023
Session 10. Taxation and expenditure
Analyzing fiscal tools as policy instruments. Exploring tax expenditures, subsidies, and direct spending programs. Understanding how economic incentives shape behavior and are expected to achieve policy goals.
Backing materials:
McIlroy-Young, B., Henstra, D., & Thistlethwaite, J. (2022). Treasure tools: using public funds to achieve policy objectives. In Howlett, M, (Ed). The Routledge Handbook of Policy Tools (pp. 332-344). Routledge.
Hakelberg, L., & Seelkopf, L. (2021). Introduction. In Id. (Eds.), Handbook on the Politics of Taxation. Edward Elgar, pp. 1-15. https://doi.org/10.4337/9781788979429.00008
Guerra, A., & Harrington, B. (2021). Why do people pay taxes? Explaining tax compliance by individuals. In Hakelberg, L., & Seelkopf, L. (Eds.). Handbook on the Politics of Taxation. Edward Elgar, pp. 355-373. https://doi.org/10.4337/9781788979429.00036
Burton, M., & Sadiq, K. (2013). Tax Expenditure Management: A Critical Assessment. Cambridge University Press, Ch. 2. https://doi.org/10.1017/CBO9780511910142.002
Clements, B., Hugounenq, R., & Schwartz, G. (1995). Government subsidies: concepts, international trends, and reform options. IMF Working Paper No. 95/91. https://ssrn.com/abstract=883238
Pope, K. R. (2017). All the Queen's Horses. https://www.dailymotion.com/video/x9hl3zg
Session 11. Information
Information as a policy instrument: from public campaigns to educational testing. Understanding persuasion mechanisms, framing effects, and the strategic use of information in policy effectiveness.
Backing materials:
Vedung, E., & van der Doelen F.C.J. (2017). The sermon: information programs in the public policy process—choice, effects, and evaluation. In Bemelmans-Videc, M.-L., Rist, R.C., & Vedung, E. (Eds.). Carrots, Sticks and Sermons (pp. 103-128). Routledge.
McDonnell, L.M. (2004). Politics, Persuasion, and Educational Testing, Harvard University Press, Ch.2. https://doi.org/10.4159/9780674040786-005
Druckman, J. N. (2022). A framework for the study of persuasion. Annual Review of Political Science, 25(1), 65-88. https://doi.org/10.1146/annurev-polisci-051120-110428
Session 12. Nudges and shoves
Applying behavioral insights to policy design through choice architecture. Examining ethical dimensions of libertarian paternalism, manipulation concerns, and the effectiveness of behavioral interventions across policy domains.
Backing materials:
Sunstein, C. R. (2020). Behavioral Science and Public Policy. Cambridge University Press. https://doi.org/10.1017/9781108973144
Miller, D. & Prentice, D. (2013). Psychological levers of behavior change. In E. Shafir (Ed.), The Behavioral Foundations of Public Policy (pp. 301-309). Princeton University Press. https://doi.org/10.1515/9781400845347-021
John, P. (2013). All tools are informational now: how information and persuasion define the tools of government. SSRN. http://dx.doi.org/10.2139/ssrn.2141382
Weber, E. (2013). Doing the right thing willingly: Using the insights of behavioral decision research for better environmental decisions. In E. Shafir (Ed.), The Behavioral Foundations of Public Policy (pp. 380-397). Princeton University Press. https://doi.org/10.1515/9781400845347-026
Thaler, R., Sunstein, C. & Balz, J. (2013). Choice architecture. In E. Shafir (Ed.), The Behavioral Foundations of Public Policy (pp. 428-439). Princeton University Press. https://doi.org/10.1515/9781400845347-029
Lichtenberg, J. (2013). Paternalism, manipulation, freedom, and the good. In E. Shafir (Ed.), The Behavioral Foundations of Public Policy (pp. 494-498). Princeton University Press. https://doi.org/10.1515/9781400845347-034
Malanowski, S. C., Baima, N. R., & Kennedy, A. G. (2022, July). Science, shame, and trust: against shaming policies. In: Resch, M.M., Formánek, N., Joshy, A., Kaminski, A. (eds) The Science and Art of Simulation (pp. 147-160). Springer. https://doi.org/10.1007/978-3-031-68058-8_10
Session 13. Group exercise
Building on Session 7 analysis, revisit and refine your Coleman's boat. Develop a credible theory of change. Select the policy tools that would activate the hypothesized mechanisms. Create a visual logic model. Present your model in class, and engage in peer discussion.
Deliverable to submit: A visual logic model and a max 1000-word written justification explaining your chosen theory of change, a selection of substantive policy tools, and a discussion of the behavioral assumptions that promise to cushion, reduce, or eliminate the policy problem you set in the previous deliverable, according to the following template:
a) Title
b) Theory of change
c) Substantive tools
d) Logic model (figure)
e) Ethical and/or practical reflections
Your deliverable will be graded according to the following rubric:
- max 3 pts to the coherence of your theory of change (logical, well-justified chain of reasoning to expected outcomes);
- max 3 pts to the appropriateness of tool selection (behavioral assumptions credibly triggering the identified change);
- max 1 pt to the logic model (complete, accurate, visually clear);
- max 1 pt to writing quality (clarity, structure).
Module A3.
Making substantive tools work
Session 14. Procedural policy tools
Managing governance through process: consultation mechanisms, participatory instruments, and deliberative approaches. Understanding how procedural tools complement substantive policy instruments in modern governance.
Backing materials:
Howlett, M. (2000). Managing the "hollow state": Procedural policy instruments and modern governance. Canadian Public Administration, 43(4), 412-431. https://doi.org/10.1111/j.1754-7121.2000.tb01152.x
Fung, A. (2006). Varieties of participation in complex governance. Public Administration Review, 66, 66-75. https://doi.org/10.1111/j.1540-6210.2006.00667.x
Fraussen, B. (2022). Consultation tools and agenda-setting. In Howlett, M. (Ed). The Routledge Handbook of Policy Tools (pp. 149-159). Routledge. https://hdl.handle.net/1887/3502352
Wagner, W., West, W., McGarity, T., & Peters, L. (2021). Deliberative rulemaking. Administrative Law Review, 73(3), 609-687. https://www.jstor.org/stable/27178501
Balla, S. J., Beck, A. R., Cubbison, W. C., & Prasad, A. (2019). Where's the spam? Interest groups and mass comment campaigns in agency rulemaking. Policy & Internet, 11(4), 460-479. https://doi.org/10.1002/poi3.224
Session 15. Accountability tools
Transparency, monitoring, and evaluation as policy instruments. Examining fiscal openness, corruption control, and trust-building mechanisms. Understanding how accountability tools enhance policy effectiveness and legitimacy.
Backing materials:
De Renzio, P., & Wehner, J. (2017). The impacts of fiscal openness. The World Bank Research Observer, 32(2), 185-210. https://doi.org/10.1093/wbro/lkx004
Hollyer, J. R., Rosendorff, B. P., & Vreeland, J. R. (2015). Transparency, protest, and autocratic instability. The American Political Science Review, 109(4), 764-784. http://www.jstor.org/stable/24809509
Bourgeois, I., & Maltais, S. (2023). Translating evaluation policy into practice in government organizations. American Journal of Evaluation, 44(3), 353-373. https://doi.org/10.1177/10982140221079837
Olken, B. A. (2007). Monitoring corruption: evidence from a field experiment in Indonesia. Journal of Political Economy, 115(2), 200-249. https://www.jstor.org/stable/10.1086/517935
Ostrom, E. (2009). Building trust to solve commons dilemmas: taking small steps to test an evolving theory of collective action. In: Levin, S.A. (ed.) Games, Groups, and the Global Good (pp. 207-228). Springer. https://doi.org/10.1007/978-3-540-85436-4_13; https://ssrn.com/abstract=1304695
Session 16. Guided group exercise
Embed substantive and procedural tools within an action situation.
Deliverable to submit: Building on the results of Session 13, elaborate a complete action situation scheme that integrates substantive and procedural policy tools and a max 1500-word narrative discussing how the action situation setting can secure the expected policy outcomes, following this template:
a) Title
b) Procedural and substantive tools as the seven elements of the action situation
c) The action situation (figure)
d) Why the joint constraints promise effectiveness
e) Reflections on limitations
Your deliverable will be assigned:
- max 2 pts to the diagram of the action situation (complete, clear)
- max 3 pts to the discussion of the seven elements (complete and consistent rendering of procedural and substantive tools)
- max 2 pts to the explanation of constraints' effectiveness (consistency, credibility)
- max 1 pt to the identification of limits (credibility)
- max 1 pt to the writing quality (structure, clarity)
Module B.
Establishing policy design effectiveness
Instructor: A. Damonte
This module introduces students to methodological strategies for assessing whether and how policy designs produce intended causal effects. It presents both design-driven approaches (based on counterfactual reasoning and quasi-experiments) and model-driven approaches (focused on causal mechanisms and graphical models). Special emphasis is placed on the operationalization of causal claims through adequate conceptualization and measurement strategies. The module prepares students to design policy evaluations that can credibly test the effectiveness claims of policy interventions.
Session 17. Design-driven strategies for establishing causation
Applying Mill's methods, counterfactual reasoning, and natural experiments to policy evaluation. Understanding strengths and limitations of quasi-experimental approaches to causal inference.
Backing materials:
Ducheyne, S. (2008). J.S. Mill's canons of induction: from true causes to provisional ones. History and Philosophy of Logic, 29(4), 361-376. http://doi.org/10.1080/01445340802164377
Fearon, J. D. (1991). Counterfactuals and hypothesis testing in political science. World Politics, 43(2), 169-195. https://doi.org/10.2307/2010470
Dunning, T. (2008). Improving causal inference: Strengths and limitations of natural experiments. Political Research Quarterly, 61(2), 282-293. https://doi.org/10.1177/1065912907306470
Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945-960. https://doi.org/10.1080/01621459.1986.10478354
Session 18. Model-driven strategies for establishing causation
From Aristotelian causation to modern mechanistic explanations. Understanding causal mechanisms, INUS conditions, and causal graphs. Identifying good and bad controls in causal research design.
Backing materials:
Moravcsik, J. M. E. (1974). Aristotle on adequate explanations. Synthese, 28(1), 3-17. http://www.jstor.org/stable/20114949
Glennan, S., Illari, P. & Weber, E. (2022). Six theses on mechanisms and mechanistic science. Journal for General Philosophy of Science 53, 143-161. https://doi.org/10.1007/s10838-021-09587-x
Mackie, J. L. (1965). Causes and conditions. American Philosophical Quarterly, 2(4), 245-264. https://www.jstor.org/stable/20009173
Cinelli, C., Forney, A., & Pearl, J. (2024). A crash course in good and bad controls. Sociological Methods & Research, 53(3), 1071-1104. https://doi.org/10.1177/00491241221099552
Session 19. Building and finding indicators
Constructing valid and reliable indicators, addressing measurement challenges, and linking indicators to theories of change and logic models.
Backing materials:
Goertz, G. (2020). Concept structure: aggregation and substitutability. In Id., Social Science Concepts and Measurement. Princeton University Press, Ch. 6.
Adcock, R., & Collier, D. (2001). Measurement validity: a shared standard for qualitative and quantitative research. American Political Science Review, 95(3), 529-546. https://doi.org/10.1017/S0003055401003100
Session 20. Capstone
Students integrate constraints, institutional structure, and actor motivations into a complete design blueprint, ready for empirical testing.
Deliverable to submit: a research proposal (max 1500 words) that includes
- A clear causal question or hypothesis;
- The policy design to be tested;
- The proposed methods for establishing causal inference;
- Indicators for measuring design features and outcomes.
The deliverable will be assigned:
- max 2 pts to the causal question (well-formulated, clear, consistent with the course concepts)
- max 3 pts to the selected methodology (appropriate, consistent with the driving question, well-justified)
- max 2 pts to the indicators (consistent and appropriate constructs)
- max 1 pt to writing quality (well-structured, clear)
Module C.
X-QCA
Instructor: A. Damonte
This module guides students in designing an effective configurational strategy to assess the strength of explanatory claims. It introduces the core steps of Qualitative Comparative Analysis for explanatory research — from case and condition selection, through calibration and the analysis of necessity and sufficiency, to robustness testing. Particular attention is given to the practical challenges of replicability and transparency in published QCA, equipping students to apply configurational methods rigorously and critically in policy design research.
Session 21. Where to begin: variable selection and calibration
This session introduces key issues in case selection, variable selection, and calibration when designing a QCA. Students will learn how to select conditions and outcomes that reflect relevant theoretical claims, and how to calibrate raw variables into crisp or fuzzy sets. The session will also explore the epistemological underpinnings of calibration as a central act of conceptualization.
Backing materials:
Amenta, E., & Poulsen, J. D. (1994). Where to begin: a survey of five approaches to selecting independent variables for Qualitative Comparative Analysis. Sociological Methods & Research, 23(1), 22-53. https://doi.org/10.1177/0049124194023001002
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Duşa A. (2018). QCA with R: A Comprehensive Resource. Springer, Cham, Ch.4. https://bookdown.org/dusadrian/QCAbook/calibration.html
Session 22. Hands-on
In this session you will learn how to turn variables into conditions with R.
Working in small groups, students will use a provided dataset and a script with incomplete comments. They will be invited to employ AI agents (LLMs), crafting prompts that comment the functions, possibly improve them, and correct arguments where needed. The accuracy and appropriateness of these prompts and comments will be shared and discussed in class to foster learning and critical reflection on tool usage.
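For a taste of what the hands-on work involves, here is a minimal sketch of direct calibration with the QCA package used in the course readings (Duşa 2018); the data frame, variable names, and thresholds are invented for illustration and are not the course dataset.

```r
# Minimal calibration sketch (QCA package); data and thresholds are invented.
library(QCA)

raw <- data.frame(
  spend = c(2.1, 4.8, 7.9, 11.3, 15.2),  # raw variable, e.g. policy spending
  OUT   = c(0, 0, 1, 1, 1)               # crisp-set outcome
)

# Direct method: full exclusion at 3, crossover at 8, full inclusion at 13
raw$SPEND <- calibrate(raw$spend, type = "fuzzy",
                       thresholds = "e=3, c=8, i=13")
raw  # SPEND now holds fuzzy membership scores in [0, 1]
```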
Session 23. Parameters of fit and the Analysis of individual Necessity
The session focuses on the theoretical meaning and empirical use of consistency, coverage, and relevance in the analysis of necessity. Students will learn to compute and interpret these parameters, and to distinguish between trivial and meaningful necessary conditions.
Backing materials:
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Duşa A. (2018). QCA with R: A Comprehensive Resource. Springer, Cham, Ch.5. https://bookdown.org/dusadrian/QCAbook/analysisofnecessity.html
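As a pointer to what the session covers in practice, the following is a minimal sketch of an analysis of individual necessity with the QCA package; the conditions A and B, the outcome OUT, and the cut-offs are hypothetical.

```r
# Necessity sketch (QCA package); conditions, outcome, and cut-offs invented.
library(QCA)

dat <- data.frame(
  A   = c(1, 1, 0, 1, 1, 1),
  B   = c(0, 1, 1, 1, 0, 0),
  OUT = c(1, 1, 0, 1, 0, 1)
)

# Consistency, coverage, and relevance of A as necessary for OUT
pof("A", outcome = "OUT", data = dat, relation = "necessity")

# Search for (combinations of) necessary conditions above the cut-offs
superSubset(dat, outcome = "OUT", incl.cut = 0.9, ron.cut = 0.6)
```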
Session 24. Parameters of fit and the Truth Table
Students will learn how to construct and interpret truth tables for sufficiency analysis. The session will clarify the different types of configurations in a truth table and how consistency, coverage, and proportional reduction in inconsistency (PRI) parameters can point to violations of the claim of joint sufficiency.
Backing materials:
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Duşa A. (2018). QCA with R: A Comprehensive Resource. Springer, Cham, Ch.6, 7. https://bookdown.org/dusadrian/QCAbook/analysisofsufficiency.html
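For orientation, a minimal truth-table sketch with the QCA package follows; the dataset and cut-offs are invented, and one configuration (~A*B) is deliberately contradictory to show how the parameters flag it.

```r
# Truth-table sketch (QCA package); data are invented, with one deliberately
# contradictory configuration (~A*B appears with both OUT = 1 and OUT = 0).
library(QCA)

dat <- data.frame(
  A   = c(1, 1, 0, 1, 0, 1, 0, 0),
  B   = c(0, 1, 1, 1, 0, 0, 1, 0),
  OUT = c(1, 1, 0, 1, 0, 1, 1, 0)
)

# incl.cut sets the consistency threshold for coding a row as sufficient;
# inclusion and PRI scores point to violations of joint sufficiency
tt <- truthTable(dat, outcome = "OUT", incl.cut = 0.8,
                 show.cases = TRUE, sort.by = "incl")
tt
```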
Session 25. Hands-on
In this session we will learn how to check for trivial conditions and inconsistencies in the truth table, and how to manage them.
As in the previous hands-on session, student groups will be given a dataset and an uncommented script, and invited to use any AI agent to elaborate prompts that obtain comments on the functions' meaning and, possibly, improve them or correct their arguments. The accuracy and appropriateness of these comments will be shared and discussed in class.
Session 26. Getting to solutions
This session explains how to minimize truth tables and derive complex, parsimonious, and intermediate solutions. Students will learn the special counterfactual thinking embedded in QCA, the different causal assumptions beneath each type of solution, and how to manage model ambiguity.
Backing materials:
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Duşa A. (2018). QCA with R: A Comprehensive Resource. Springer, Cham, Ch.8. https://bookdown.org/dusadrian/QCAbook/minimize.html
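The three solution types can be previewed with the minimal sketch below, which builds on the hypothetical truth table `tt` from the sketch after Session 24; the directional expectations are likewise invented, and the dir.exp syntax follows Duşa (2018).

```r
# Minimization sketch (QCA package); `tt` is the hypothetical truth table
# from the earlier sketch, and the directional expectations are invented.
library(QCA)

sol_c <- minimize(tt, details = TRUE)                 # conservative/complex
sol_p <- minimize(tt, include = "?", details = TRUE)  # parsimonious: remainders
sol_i <- minimize(tt, include = "?", dir.exp = "A, B",
                  details = TRUE)                     # intermediate: only easy
                                                      # counterfactuals admitted
sol_c; sol_p; sol_i
```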
Session 27. Hands-on
Building on the previous sessions, students will work in groups to obtain complex, parsimonious, and possibly intermediate solutions using R.
Again, groups will be given a dataset and an uncommented script, and invited to use AI agents to elaborate prompts that obtain comments on the functions' meaning and, possibly, improve them or correct their arguments. The accuracy and appropriateness of these comments will be shared and discussed in class.
Session 28. Robustness tests
The session will introduce the Oana-Schneider robustness protocol and alternative strategies. The discussion will address when different robustness tests are appropriate and how their results can be used to support or qualify explanatory claims.
Backing materials:
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Oana, I.-E., & Schneider, C. Q. (2021). A robustness test protocol for applied QCA: Theory and R software application. Sociological Methods & Research, 53(1), 57-88. https://doi.org/10.1177/00491241211036158
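The Oana-Schneider protocol is implemented in the SetMethods R package (e.g., its rob.fit() and rob.case() functions). As a simpler illustration of the underlying logic only, the hypothetical sketch below re-runs the analysis under shifted consistency thresholds and compares the resulting solutions, reusing the invented `dat` from the truth-table sketch above.

```r
# Simple sensitivity probe (QCA package); reuses the invented `dat` from the
# truth-table sketch. The full protocol lives in SetMethods (rob.fit, rob.case).
library(QCA)

for (cut in c(0.75, 0.80, 0.85)) {
  tt_r <- truthTable(dat, outcome = "OUT", incl.cut = cut)
  cat("\nincl.cut =", cut, "\n")
  print(minimize(tt_r, include = "?"))
}
# Solution terms that survive across plausible cut-offs count as robust
```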
Session 29. Hands-on
In this session we will learn how to perform different robustness tests and establish when they are relevant.
As in the earlier hands-on sessions, student groups will use AI agents to annotate and improve the provided scripts. The accuracy and appropriateness of these comments will be shared and discussed in class.
Session 30. Q&A session
This session is dedicated to clarifying technical and methodological aspects of applying QCA for probing explanatory theories in preparation for the module's final deliverable. Students are invited to bring all their questions and doubts for collective discussion before replicating a published QCA.
The deliverable asks students to produce an annotated R script that answers the following questions about a selected published QCA:
1) the model: is it configurational?
2) case and raw variable selections: do they afford proper analysis?
3) calibration: is it replicable?
4) directional expectations: are they empirically supported?
5) truth table: is there any inconsistent primitive?
6) solutions: are the deserving ones discussed?
7) are solutions 'robust'?
The deliverable will demonstrate students' twofold capacity to:
- accurately run the main steps of the original QCA using the original raw data and appropriate R functions (replication);
- critically assess whether the published article is transparent and replicable, by comparing their outputs to the published results and commenting on any discrepancies, ambiguities, or omissions (critical assessment).
The deliverable will be assigned:
- max 15 pts for technical execution of properly set R functions (complete, accurate, possibly creative);
- max 15 pts for critical thinking about the replicability of the published QCA (insightful, rigorous, balanced);
- max 3 pts for clarity in commenting and proper documentation (readable, clear, professional script and documentation).
Module D.
The Statistics of Causal Inference
Instructor: A. De Angelis
This module guides students in designing an effective identification strategy to assess causal effects in their own empirical projects. Drawing on the Potential Outcome Framework, an influential theoretical account of causal inference, this course introduces the most popular research methods for causal inference, including experimental and quasi-experimental designs such as Instrumental Variables, Difference-in-Differences estimation, and Regression Discontinuity Design.
Session 31. Introduction and review
Introduction, organization, and rules of the seminar. What are we going to learn? Why are we using R in this seminar? How can we work effectively? Short refresher on statistics and multivariate regression.
Session 32. Directed Acyclic Graphs
DAGs for causal identification. Exercise with replication paper. Colliders & confounders. Working reproducibly with data science projects.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 118-141. https://mixtape.scunning.com
Imbens, G. W. (2020). Potential Outcome and Directed Acyclic Graph approaches to causality: relevance for empirical practice in economics. Journal of Economic Literature, 58(4), 1129-1179. https://www.nber.org/system/files/working_papers/w26104/w26104.pdf
Other assignment:
Record one or two published papers that use one of the causal identification strategies covered in this course in the shared sheet file, under the tab "list of replication papers". Read the instructions before proceeding.
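For a taste of the session's toolkit, here is a minimal DAG sketch with the dagitty R package; the toy graph, with treatment D, outcome Y, confounder Z, and collider C, is invented for illustration.

```r
# DAG sketch (dagitty package); the graph is a toy example.
library(dagitty)

g <- dagitty("dag {
  D -> Y
  Z -> D
  Z -> Y
  D -> C
  Y -> C
}")

# Z is a confounder (condition on it); C is a collider (leave it alone)
adjustmentSets(g, exposure = "D", outcome = "Y")  # returns { Z }
```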
Session 33. The Potential Outcome Framework
Randomization and experimental design. Potential outcomes. Switching equation. ATE and ATT. SDO and selection bias. Independence assumption. SUTVA assumption.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 142-168. https://mixtape.scunning.com
Other assignments:
Start thinking of a potential research question and hypotheses for your master thesis: 1. draft your research question; 2. develop one or two hypotheses based on your question; 3. find 2-3 scientific references that connect your question to existing theories and debates in the field. Follow the instructions below. Upload a simple .txt file named `SURNAME-RQ.txt` before class.
Instructions: a good research question should start from your intrinsic interests and spark your passion. Strong research questions are: 1. clear; 2. specific (i.e., focused and narrow; too broad or general a question becomes vague); 3. researchable (answerable with actual data collection and analysis rather than abstract reasoning alone); 4. impactful (relevant to meaningful issues with real-world or theoretical implications); 5. rooted in ongoing scientific debates and the existing literature of a specific field of studies. Strong research hypotheses are: 1. measurable and testable; 2. specific and directed (i.e., indicating the type and direction of the expected relationship, e.g., positive, negative, causal); 3. grounded in existing theories and prior research.
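To preview the session's core quantities, the base-R simulation below (all numbers invented) contrasts the true ATE with the simple difference in observed outcomes (SDO) when treatment selects on the untreated potential outcome, so that the independence assumption fails.

```r
# Potential outcomes simulation (base R); all data are simulated.
set.seed(1)
n  <- 10000
y0 <- rnorm(n, mean = 10)             # potential outcome without treatment
y1 <- y0 + 2                          # constant treatment effect of 2
d  <- rbinom(n, 1, plogis(y0 - 10))   # selection on y0: independence fails
y  <- d * y1 + (1 - d) * y0           # switching equation

ate <- mean(y1 - y0)                         # true ATE = 2
sdo <- mean(y[d == 1]) - mean(y[d == 0])     # naive comparison
c(ATE = ate, SDO = sdo, selection.bias = sdo - ate)
```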
Session 34. Matching and Subclassification
Subclassification and conditional independence. Curse of dimensionality. Exact matching.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 204-242. Quickly skim the sections "Some background" (pp. 205-211) and "Subclassification exercise: Titanic data set" (pp. 212-218). Skip the section "Bias correction" (pp. 232-239). https://mixtape.scunning.com
Other assignments:
- Sign up for the mandatory office hours to discuss the replication paper and career orientation.
- Think about a potential empirical strategy that uses matching or re-weighting to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
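As a preview of such a strategy, here is a minimal exact-matching sketch with the MatchIt R package; the data, covariates, and effect size are simulated for illustration, with treatment assignment confounded by age.

```r
# Exact-matching sketch (MatchIt package); all data are simulated.
library(MatchIt)

set.seed(2)
age   <- sample(c("young", "old"), 500, replace = TRUE)
educ  <- sample(c("low", "high"), 500, replace = TRUE)
treat <- rbinom(500, 1, ifelse(age == "old", 0.6, 0.2))  # selection on age
y     <- 1 + 0.5 * treat + (age == "old") + rnorm(500)   # true effect = 0.5
d     <- data.frame(treat, age, educ, y)

m  <- matchit(treat ~ age + educ, data = d, method = "exact")
md <- match.data(m)   # matched sample with balancing weights
summary(m)

# Weighted difference in means on the matched sample (~0.5)
with(md, weighted.mean(y[treat == 1], weights[treat == 1]) -
         weighted.mean(y[treat == 0], weights[treat == 0]))
```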
Session 35. Regression Discontinuity
Cutoff point and running variable. Continuity assumption. Local Average Treatment Effect (LATE).
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 278-291. https://mixtape.scunning.com
Other assignments:
- Think about a potential empirical strategy that uses RDD to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
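For orientation, a minimal sharp RDD sketch with the rdrobust R package follows; the data are simulated, with the cutoff at zero on the running variable.

```r
# Sharp RDD sketch (rdrobust package); data simulated, cutoff at 0.
library(rdrobust)

set.seed(3)
x <- runif(1000, -1, 1)          # running variable
d <- as.numeric(x >= 0)          # treatment assigned at the cutoff
y <- 1 + 0.5 * x + 2 * d + rnorm(1000, sd = 0.5)  # true LATE at cutoff = 2

summary(rdrobust(y, x, c = 0))   # local polynomial estimate of the LATE
rdplot(y, x, c = 0)              # binned scatter around the cutoff
```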
Session 36. Instrumental Variables
Intuition of IVs. Exclusion restriction. Homogeneous Treatment Effects. Two-Stage Least Squares (2SLS), Heterogeneous Treatment Effects.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 371-383, and then pp. 401-407. https://mixtape.scunning.com
Other assignments:
- Think about a potential empirical strategy that uses IVs to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
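As a preview, the following minimal sketch runs two-stage least squares with AER::ivreg on simulated data, where d is endogenous and z is a valid instrument by construction.

```r
# 2SLS sketch (AER package); data simulated so z satisfies the exclusion
# restriction by construction.
library(AER)

set.seed(4)
n <- 2000
u <- rnorm(n)                     # unobserved confounder
z <- rbinom(n, 1, 0.5)            # instrument: affects y only through d
d <- 0.8 * z + 0.5 * u + rnorm(n) # first stage
y <- 1 + 2 * d + u + rnorm(n)     # true effect of d is 2

summary(lm(y ~ d))                # OLS: biased upward by u
summary(ivreg(y ~ d | z))         # 2SLS: recovers ~2
```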
Session 37. Panel Data
Panel data structure. Estimations (POLS and FEs). Identifying assumptions.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 445-456 ("Data Exercise" excluded). https://mixtape.scunning.com
Other assignments:
- Think about a potential empirical strategy that uses panel data to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
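A minimal sketch of pooled OLS versus fixed effects with the fixest R package follows; the panel is simulated with time-invariant unit heterogeneity that correlates with the regressor.

```r
# Panel FE sketch (fixest package); data simulated with unit-level confounding.
library(fixest)

set.seed(5)
id   <- rep(1:100, each = 5)
year <- rep(2019:2023, times = 100)
a    <- rnorm(100)[id]                 # time-invariant unit heterogeneity
x    <- 0.5 * a + rnorm(500)           # regressor correlated with unit effect
y    <- 1 + 2 * x + a + rnorm(500)     # true effect of x is 2
pd   <- data.frame(id, year, x, y)

feols(y ~ x, data = pd)                # pooled OLS: biased upward
feols(y ~ x | id + year, data = pd)    # two-way fixed effects: recovers ~2
```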
Session 38. Difference-in-Differences
Diff-in-diffs design. John Snow's falsification of miasma theory. Estimation. Parallel trends assumption.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 467-488. https://mixtape.scunning.com
Other assignments:
- Think about a potential empirical strategy that uses DiD to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
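For orientation, here is a minimal 2x2 difference-in-differences sketch in base R; the data are simulated so that parallel trends hold by construction.

```r
# 2x2 DiD sketch (base R); parallel trends hold by construction.
set.seed(6)
g    <- rep(c(0, 1), each = 500)        # treated-group indicator
post <- rep(c(0, 1), times = 500)       # post-period indicator
y    <- 1 + 0.5 * g + 1.0 * post + 2 * g * post + rnorm(1000)

summary(lm(y ~ g * post))  # the g:post coefficient is the DiD estimate (~2)
```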
Session 39. Synthetic Control
Comparative case study and synthetic control model. Picking synthetic controls. The case of California's tobacco control law.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 584-616. https://mixtape.scunning.com
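To preview the estimator's logic, the toy base-R sketch below chooses convex weights on donor units to match a treated unit's pre-treatment path; real applications use dedicated packages (e.g., Synth), and all data here are simulated.

```r
# Toy synthetic control (base R): convex donor weights minimizing the
# pre-treatment MSPE. Illustrative only; all data are simulated.
set.seed(7)
T0      <- 20                                       # pre-treatment periods
donors  <- matrix(rnorm(T0 * 5, mean = 10), T0, 5)  # 5 donor units
w_true  <- c(0.5, 0.3, 0.2, 0, 0)
treated <- donors %*% w_true + rnorm(T0, sd = 0.1)

mspe <- function(p) {               # softmax keeps weights positive, sum to 1
  w <- exp(p) / sum(exp(p))
  mean((treated - donors %*% w)^2)
}
opt <- optim(rep(0, 5), mspe)
round(exp(opt$par) / sum(exp(opt$par)), 2)  # approximately recovers w_true
```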
Session 40. Final Presentation
Final presentation on the status of replication work (significance, progress, intended goals).
Deliverable: a final short replication paper (2/3 of the grade). The short paper must critically replicate a quasi-experimental analysis from a published peer-reviewed study. The published paper must first be recorded in the shared class list. The resulting short replication paper should run from a minimum of 2,000 words to ideally around 4,000 words and be structured as follows:
a. Abstract. A 150-word abstract of the replication study. The title page should include name, surname, date of submission, and word count (all text excluding references, tables, figure notes, and footnotes).
b. Introduction. Background on the study, a brief summary, and an explanation of why its replication is important.
c. Literature review. Outline key debates, position the study in the broader literature, and cite related studies to emphasize its importance.
d. Methods and replication strategy. A short critical assessment of the identification strategy, linked to a description of the study in terms of the potential outcomes framework. A description of the original data, methods, and models. A clear statement of the scope of the replication: direct replication, robustness checks, or alternative specifications?
e. Results. A side-by-side comparison of the published and the replicated figures and tables; additional analysis discussing the implications for the stability of the findings; significant differences should be connected to potential explanations.
f. Discussion. A clear interpretation of the findings, clarifying how the replication results support, refine, or challenge the original conclusions. Implications of the replication findings for theory and practice. A paragraph on the limitations of your replication (data, methods, scope). A concluding statement with an overall assessment of the replication work and its implications for our understanding of the underlying phenomena.
g. References. Including the replicated article in Harvard reference style, along with any other materials and published papers referenced in the text.
Additionally, students must deliver the full replication project folder with all the reproducible and commented code.
Rubric: excellent replication papers move beyond mere replication by including extensions of the analysis (e.g., alternative operationalizations, measures, robustness tests) and/or by discussing proposals for tackling identification problems or untested assumptions.
Dissecting Policy Designs
Instructor: A. Damonte
This module equips students with the analytical tools needed to systematically break down and reconstruct policy designs. It introduces key theoretical perspectives — including institutional analysis, behavioral insights, and causal mechanisms — and shows how they can be used to unpack complex policy problems. Students will learn how to map causal structures using consolidated models such as Coleman's boat, articulate theories of change, and elaborate criteria for evaluating the match of policy instruments to policy problems. Through hands-on exercises, they will also consider how substantive and procedural tools may interact to shape policy outcomes, building practical skills in policy modelling and design.
Session 01. Introduction
Getting to know each other. Short debate on the course topics. Overview of the course structure, resources, and expectations.
Backing materials:
Easton, D. (1957). An Approach to the Analysis of Political Systems. World Politics, 9(3), 383-400. https://doi.org/10.2307/2008920
Robertson, D. B. (1984). Program implementation versus program design: which accounts for policy "failure"? Review of Policy Research, 3(3‐4), 391-405. https://doi.org/10.1111/j.1541-1338.1984.tb00133.x
Session 02. Policy design, logic models, and theory of change
Exploring the systematic approach to policy design through logic models and theories of change. Understanding how interventions connect to outcomes, building culturally responsive frameworks, and applying institutional analysis to policy development.
Backing materials:
Meyer, M. L., Louder, C. N., & Nicolas, G. (2021). Creating with, not for people: theory of change and logic models for culturally responsive community-based intervention. American Journal of Evaluation, 43(3), 378-393. https://doi.org/10.1177/10982140211016059
Polski, M.M., and E. Ostrom. (2017) An Institutional Framework for Policy Analysis and Design. In Cole, D.H. and M.D. McGinnis (eds.), Elinor Ostrom and the Bloomington School of Political Economy: Volume 3, A Framework for Policy Analysis. Lanham, MD: Lexington Books, 13-48.
Module A1.
Unpacking policy problems
Session 03. Policy problems and their structure
Analyzing how policy problems are defined, structured, and framed. Examining the social construction of target populations and how problem definition shapes policy solutions and political dynamics.
Backing materials:
Peters, B. G. (2018). Policy Problems and Policy Design, Edwar Elgar, Ch. 1
Schneider, A. and Ingram, H. (1993). Social construction of target populations: implications for politics and policy, American Political Science Review 87(2), 334-347. https://www.jstor.org/stable/2939044
Session 04. Theoretical foundations: old and new behavioralism
Tracing the evolution from rational choice theory to behavioral insights in policy studies. Examining bounded rationality, behavioral models, and prospect theory applications to political science and policy design.
Backing materials:
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99-118. https://doi.org/10.2307/1884852
Caraley, D. (1964). The political behavior approach: methodological advance or new formalism?-- a review article. Political Science Quarterly, 79(1), 96-108. https://doi.org/10.2307/2146576
Mercer, J., 2005. Prospect theory and political science. Annual Review of Political Science, 8(1), 1-21. https://doi.org/10.1146/annurev.polisci.8.082103.104911
Session 05. Theoretical foundations: neoinstitutionalisms
Understanding historical, rational, sociological, and discursive institutionalism .
Backing materials:
Immergut, E. M. (1998). The theoretical core of the new institutionalism. Politics & Society, 26(1), 5-34. https://doi.org/10.1177/0032329298026001002
Hall, P. A., & Taylor, R. C. R. (1996). Political science and the three new institutionalisms. Political Studies, 44(5), 936-957. https://doi.org/10.1111/j.1467-9248.1996.tb00343.x
Carstensen, M. B., & Schmidt, V. A. (2015). Power through, over, and in ideas: conceptualizing ideational power in discursive institutionalism. Journal of European Public Policy, 23(3), 318-337. https://doi.org/10.1080/13501763.2015.1115534
Session 06. Conceptual foundations: Coleman's boat
Policy problems' drivers: situational, action-formation, and transformational mechanisms.
Backing materials:
Hedström, P. and Ylikoski, P., 2010. Causal mechanisms in the social sciences. Annual Review of Sociology, 36(1), pp.49-67. https://doi.org/10.1146/annurev.soc.012809.102632
Raub, W., Buskens, V. and Van Assen M.A.L.M. (2011). Micro-macro links and microfoundations in sociology. The Journal of Mathematical Sociology, 35:1-3, 1-25. https://doi.org/10.1080/0022250X.2010.532263
Session 07. Group exercise
Select a current policy challenge. Unpack it in terms of Coleman's boat, with special attention to plausible situational, action formation, and aggregation mechanisms. Present your analysis to class.
Deliverable to submit: a max 1500-word written report, including the visualization of your Coleman's boat, according to the following template:
a) title
b) description of the policy problem
c) identification of the key actors
d) analysis of each mechanism (situational → action formation → transformation)
e) Coleman's boat visualization
f) concluding reflections
The deliverable will be assigned:
- max 4 pts for comprehension and use of the Coleman's boat framework (correct, complete, clear, consistent discussion of the mechanisms);
- max 2 pts for analytical depth (credible justification of the mechanism in light of course and external knowledge)
- max 1 pts for the quality of the visualization (accurate and clear diagram);
- max 1 pts for writing quality (organized and clear writing).
Module A2.
Changing people's behavior
Session 08. Policy tools: behavioral assumptions of addressees' compliance
Tweaking situational mechanisms.
Backing materials:
Vedung, E. (2017). Policy instruments: Typologies and theories. In Bemelmans-Videc, M.-L., Rist, R. C., & Vedung, E. (Eds.). Carrots, Sticks and Sermons (pp. 21-58). Routledge.
Schneider, A., & Ingram, H. (1990). Behavioral assumptions of policy tools. The Journal of Politics, 52(2), 510-529. https://doi.org/10.2307/2131904
McDonnell, L. M., & Elmore, R. F. (1987). Getting the job done: alternative policy instruments. Educational Evaluation and Policy Analysis, 9(2), 133-152. https://doi.org/10.2307/1163726
Strassheim, H. (2021). Behavioural mechanisms and public policy design: Preventing failures in behavioural public policy. Public Policy and Administration, 36(2), 187-204. https://doi.org/10.1177/0952076719827062
Session 09. Regulation
Examining regulation as a policy tool, from command-and-control to meta-regulation approaches. Understanding compliance behavior, enforcement strategies, and potential crowding-out effects of regulatory interventions.
Backing materials:
Lemaire, D. (2017). The stick: regulation as a tool of government. In Bemelmans-Videc, M. L., Rist, R. C., & Vedung, E. (Eds.). Carrots, Sticks and Sermons (pp. 59-76). Routledge.
Scott, C. (2003). Speaking softly without big sticks: Meta‐regulation and public sector audit. Law & Policy, 25(3), 203-219.
Reinders Folmer, C. P. (2021). Crowding-out effects of laws, policies, and incentives on compliant behaviour. In B. van Rooij & D. D. Sokol (Eds.), The Cambridge Handbook of Compliance (pp. 326-340). Cambridge University Press. https://doi.org/10.1017/9781108759458.023
Session 10. Taxation and expenditure
Analyzing fiscal tools as policy instruments. Exploring tax expenditures, subsidies, and direct spending programs. Understanding how economic incentives shape behavior and are expected to achieve policy goals.
Backing materials:
McIlroy-Young, B., Henstra, D., & Thistlethwaite, J. (2022). Treasure tools: using public funds to achieve policy objectives. In Howlett, M, (Ed). The Routledge Handbook of Policy Tools (pp. 332-344). Routledge.
Hakelberg, L., & Seelkopf, L. (2021). Introduction. In Id. (ds.), Handbook on the Politics of Taxation. Edward Elgar, pp. 1-15. https://doi.org/10.4337/9781788979429.00008
Guerra, A., & Harrington, B. (2021). Why do people pay taxes? Explaining tax compliance by individuals. In Hakelberg, L., & Seelkopf, L. (Eds.). Handbook on the Politics of Taxation. Edward Elgar, pp. 355-373. https://doi.org/10.4337/9781788979429.00036
Burton, M., & Sadiq, K. (2013). Tax Expenditure Management: A Critical Assessment Cambridge University Press, Ch.2. https://doi.org/10.1017/CBO9780511910142.002
Clements, B. and Hugounenq, R. and Schwartz, G., (1995). Government subsidies: concepts, international trends, and reform options . IMF Working Paper No. 95/91, https://ssrn.com/abstract=883238
Pope, K. R. (2017). All the Queen's Horses. https://www.dailymotion.com/video/x9hl3zg
Session 11. Information
Information as a policy instrument: from public campaigns to educational testing. Understanding persuasion mechanisms, framing effects, and the strategic use of information in policy effectiveness.
Backing materials:
Vedung, E., & van der Doelen F.C.J. (2017). The sermon: information programs in the public policy process—choice, effects, and evaluation. In Bemelmans-Videc, M.-L., Rist, R.C., & Vedung, E. (Eds.). Carrots, Sticks and Sermons (pp. 103-128). Routledge.
McDonnell, L.M. (2004). Politics, Persuasion, and Educational Testing, Harvard University Press, Ch.2. https://doi.org/10.4159/9780674040786-005
Druckman, J. N. (2022). A framework for the study of persuasion. Annual Review of Political Science, 25(1), 65-88. https://doi.org/10.1146/annurev-polisci-051120-110428
Session 12. Nudges and shoves
Applying behavioral insights to policy design through choice architecture. Examining ethical dimensions of libertarian paternalism, manipulation concerns, and the effectiveness of behavioral interventions across policy domains.
Backing materials:
Sunstein, C. R. (2020). Behavioral Science and Public Policy. Cambridge University Press. https://doi.org/10.1017/9781108973144
Miller, D. & Prentice, D. (2013). Psychological levers of behavior change. In E. Shafir (Ed.), The Behavioral Foundations of Public Policy (pp. 301-309). Princeton University Press. https://doi.org/10.1515/9781400845347-021
John, P. (2013). All tools are informational now: how information and persuasion define the tools of government. SSRN. http://dx.doi.org/10.2139/ssrn.2141382
Weber, E. (2013). Doing the right thing willingly: Using the insights of behavioral decision research for better environmental decisions. In E. Shafir (Ed.), The Behavioral Foundations of Public Policy (pp. 380-397). Princeton University Press. https://doi.org/10.1515/9781400845347-026
Thaler, R., Sunstein, C. & Balz, J. (2013). Choice architecture. In E. Shafir (Ed.), The Behavioral Foundations of Public Policy (pp. 428-439). Princeton University Press. https://doi.org/10.1515/9781400845347-029
Lichtenberg, J. (2013). Paternalism, manipulation, freedom, and the good. In E. Shafir (Ed.), The Behavioral Foundations of Public Policy (pp. 494-498). Princeton University Press. https://doi.org/10.1515/9781400845347-034
Malanowski, S. C., Baima, N. R., & Kennedy, A. G. (2022, July). Science, shame, and trust: against shaming policies. In: Resch, M.M., Formánek, N., Joshy, A., Kaminski, A. (eds) The Science and Art of Simulation (pp. 147-160). Springer. https://doi.org/10.1007/978-3-031-68058-8_10
Session 13. Group exercise
Building on Session 7 analysis, revisit and refine your Coleman's boat. Develop a credible theory of change. Select the policy tools that would activate the hypothesized mechanisms. Create a visual logic model. Present your model in class, and engage in peer discussion.
Deliverable to submit: A visual logic model and a max 1000-word written justification explaining your chosen theory of change, a selection of substantive policy tools, and a discussion of the behavioral assumptions that promise to cushion, reduce, or eliminate the policy problem you set in the previous deliverable, according to the following template:
a) Title
b) Theory of change
c) Substantive tools
d) Logic model (figure)
e) Ethical and/or practical reflections
Your deliverable will be graded according to the following rubric:
- max. 3 pts to the coherence of your theory of change (logical, well-justified chain-of-reasoning to expected outcomes)
- max 3 pts to the appropriateness of tool selection (behavioral assumptions credibly triggering the identified change);
- max 1 pt to the logic model (complete, accurate, visually clar);
- max 1 pt to writing quality (clarity, structure)
Module A3.
Making substantive tools work
Session 14. Procedural policy tools
Managing governance through process: consultation mechanisms, participatory instruments, and deliberative approaches. Understanding how procedural tools complement substantive policy instruments in modern governance.
Backing materials:
Howlett, M. (2000). Managing the "hollow state": Procedural policy instruments and modern governance. Canadian Public Administration, 43(4), 412-431. https://doi.org/10.1111/j.1754-7121.2000.tb01152.x
Fung, A. (2006). Varieties of participation in complex governance. Public Administration Review, 66, 66-75. https://doi.org/10.1111/j.1540-6210.2006.00667.x
Fraussen, B. (2022). Consultation tools and agenda-setting. In Howlett, M. (Ed). The Routledge Handbook of Policy Tools (pp. 149-159). Routledge. https://hdl.handle.net/1887/3502352
Wagner, W., West, W., McGarity, T., & Peters, L. (2021). Deliberative rulemaking. Administrative Law Review, 73(3), 609-687. https://www.jstor.org/stable/27178501
Balla, S. J., Beck, A. R., Cubbison, W. C., & Prasad, A. (2019). Where's the spam? Interest groups and mass comment campaigns in agency rulemaking. Policy & Internet, 11(4), 460-479. https://doi.org/10.1002/poi3.224
Session 15. Accountability tools
Transparency, monitoring, and evaluation as policy instruments. Examining fiscal openness, corruption control, and trust-building mechanisms. Understanding how accountability tools enhance policy effectiveness and legitimacy.
Backing materials:
De Renzio, P., & Wehner, J. (2017). The impacts of fiscal openness. The World Bank Research Observer, 32(2), 185-210. https://doi.org/10.1093/wbro/lkx004
Hollyer, J. R., Rosendorff, B. P., & Vreeland, J. R. (2015). Transparency, protest, and autocratic instability. The American Political Science Review, 109(4), 764-784. http://www.jstor.org/stable/24809509
Bourgeois, I., & Maltais, S. (2023). Translating evaluation policy into practice in government organizations. American Journal of Evaluation, 44(3), 353-373. https://doi.org/10.1177/10982140221079837
Olken, B. A. (2007). Monitoring corruption: evidence from a field experiment in Indonesia. Journal of Political Economy, 115(2), 200-249. https://www.jstor.org/stable/10.1086/517935
Ostrom, E. (2009). Building trust to solve commons dilemmas: taking small steps to test an evolving theory of collective action. In: Levin, S.A. (ed.) Games, Groups, and the Global Good (pp. 207-228). Springer. https://doi.org/10.1007/978-3-540-85436-4_13; https://ssrn.com/abstract=1304695
Session 16. Guided group exercise
Embed substantive and procedural tools within an action situation.
Deliverable to submit: Building on the results of Session 13, elaborate a complete action situation scheme that integrates substantive and procedural policy tools and a max 1500-word narrative discussing how the action situation setting can secure the expected policy outcomes, following this template:
a) Title
b) Procedural and substantive tools as the seven elements of the action situation
c) The action situation (figure)
d) Why these joint constraints promise effectiveness?
e) Reflections on limitations
Your deliverable will be assigned:
- max 2 pts to the diagram of the action situation (complete, clear)
- max 3 pts to the discussion of the seven elements (complete and consistent rendering of procedural and substantive tools)
- max 2 pts to the explanation of constraints' effectiveness (consistency, credibility)
- max 1 pt to the identification of limits (credibility)
- max 1 pt to the writing quality (structure, clarity)
Module B.
Establishing policy design effectiveness
Instructor: A. Damonte
This module introduces students to methodological strategies for assessing whether and how policy designs produce intended causal effects. It presents both design-driven approaches (based on counterfactual reasoning and quasi-experiments) and model-driven approaches (focused on causal mechanisms and graphical models). Special emphasis is placed on the operationalization of causal claims through adequate conceptualization and measurement strategies. The module prepares students to design policy evaluations that can credibly test the effectiveness claims of policy interventions.
Session 17. Design-driven strategies for establishing causation
Applying Mill's methods, counterfactual reasoning, and natural experiments to policy evaluation. Understanding strengths and limitations of quasi-experimental approaches to causal inference.
Backing materials:
Ducheyne, S. (2008): J.S. Mill's canons of induction: from true causes to provisional ones, History and Philosophy of Logic, 29:4, 361-376. http://doi.org/10.1080/01445340802164377
Fearon, J. D. (1991). Counterfactuals and hypothesis testing in political science. World Politics, 43(2), 169-195. https://doi.org/10.2307/2010470
Dunning, T. (2008). Improving causal inference: Strengths and limitations of natural experiments. Political Research Quarterly, 61(2), 282-293. https://doi.org/10.1177/1065912907306470
Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945-960. https://doi.org/10.1080/01621459.1986.10478354
Session 18. Model-driven strategies for establishing causation
From Aristotelian causation to modern mechanistic explanations. Understanding causal mechanisms, INUS conditions, and causal graphs. Identifying good and bad controls in causal research design.
Backing materials:
Moravcsik, J. M. E. (1974). Aristotle on adequate explanations. Synthese, 28(1), 3-17. http://www.jstor.org/stable/20114949
Glennan, S., Illari, P. & Weber, E. (2022). Six theses on mechanisms and mechanistic science. Journal for General Philosophy of Science 53, 143-161. https://doi.org/10.1007/s10838-021-09587-x
Mackie, J. L. (1965). Causes and conditions. American Philosophical Quarterly, 2(4), 245-264. https://www.jstor.org/stable/20009173
Cinelli, C., Forney, A., & Pearl, J. (2024). A crash course in good and bad controls. Sociological Methods & Research, 53(3), 1071-1104. https://doi.org/10.1177/00491241221099552
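To make the idea of good and bad controls concrete, here is a minimal R sketch (illustrative only; the graph and variable names are hypothetical, not taken from the readings) that uses the dagitty package to encode a causal graph and query it for valid adjustment sets:

# Good and bad controls on a toy causal graph (dagitty package)
# Hypothetical variables: D = treatment, Y = outcome, Z = confounder, C = collider
library(dagitty)

g <- dagitty("dag {
  Z -> D
  Z -> Y
  D -> Y
  D -> C
  Y -> C
}")

adjustmentSets(g, exposure = "D", outcome = "Y")
# Returns { Z }: conditioning on Z blocks the back-door path (good control),
# while conditioning on C would open a spurious path through the collider (bad control).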
Session 19. Building and finding indicators
Constructing valid and reliable indicators, addressing measurement challenges, and linking indicators to theories of change and logic models.
Backing materials:
Goertz, G. (2020). Concept structure: Aggregation and substitutability. In Social science concepts and measurement (Ch. 6). Princeton University Press.
Adcock, R., & Collier, D. (2001). Measurement validity: a shared standard for qualitative and quantitative research. American Political Science Review, 95(3), 529-546. https://doi.org/10.1017/S0003055401003100
Session 20. Capstone
Students integrate constraints, institutional structure, and actor motivations into a complete design blueprint, ready for empirical testing.
Deliverable to submit: a research proposal (max 1500 words) that includes
- A clear causal question or hypothesis
- The policy design to be tested
- The proposed methods for establishing causal inference
- Indicators for measuring design features and outcomes
The deliverable will be assigned:
- max 2 pts to the causal question (well-formulated, clear, consistent with the course concepts)
- max 3 pts to the selected methodology (appropriate, consistent with the driving question, well-justified)
- max 2 pts to the indicators (consistent and appropriate constructs)
- max 1 pt to writing quality (well-structured, clear)
Module C.
X-QCA
Instructor: A. Damonte
This module guides students in designing an effective configurational strategy to assess the strength of explanatory claims. It introduces the core steps of Qualitative Comparative Analysis for explanatory research — from case and condition selection, through calibration and the analysis of necessity and sufficiency, to robustness testing. Particular attention is given to the practical challenges of replicability and transparency in published QCA, equipping students to apply configurational methods rigorously and critically in policy design research.
Session 21. Where to begin: variable selection and calibration
This session introduces key issues in case selection, variable selection, and calibration when designing a QCA. Students will learn how to select conditions and outcomes that reflect relevant theoretical claims, and how to calibrate raw variables into crisp or fuzzy sets. The session will also explore the epistemological underpinnings of calibration as a central act of conceptualization.
Backing materials:
Amenta, E., & Poulsen, J. D. (1994). Where to begin: a survey of five approaches to selecting independent variables for Qualitative Comparative Analysis. Sociological Methods & Research, 23(1), 22-53. https://doi.org/10.1177/0049124194023001002
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Duşa A. (2018). QCA with R: A Comprehensive Resource. Springer, Cham, Ch.4. https://bookdown.org/dusadrian/QCAbook/calibration.html
Session 22. Hands-on
In this session you will learn how to turn variables into conditions with R.
Working in small groups, students will use a provided dataset and a script with incomplete comments. They will be invited to employ AI agents (LLMs), crafting prompts that generate comments on the functions, improve them where possible, and correct their arguments where needed. The accuracy and appropriateness of these prompts and comments will be shared and discussed in class to foster learning and critical reflection on tool use.
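For students who want a preview, here is a minimal, self-contained R sketch of what "turning variables into conditions" looks like with the QCA package (the raw scores and thresholds below are made up for illustration, not taken from the course dataset):

# Calibrating a raw variable into crisp and fuzzy sets (QCA package)
library(QCA)

raw <- c(1.2, 3.5, 4.8, 6.1, 7.9, 9.3)   # hypothetical raw scores

# Direct fuzzy calibration: e = full exclusion, c = crossover, i = full inclusion
fuzzy <- calibrate(raw, type = "fuzzy", thresholds = "e=2, c=5, i=8")
round(fuzzy, 2)

# Crisp calibration: a single threshold dichotomizes the variable
crisp <- calibrate(raw, type = "crisp", thresholds = 5)
crisp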
Session 23. Parameters of fit and the Analysis of individual Necessity
The session focuses on the theoretical meaning and empirical use of consistency, coverage, and relevance in the analysis of necessity. Students will learn to compute and interpret these parameters, and to distinguish between trivial and meaningful necessary conditions.
Backing materials:
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Duşa A. (2018). QCA with R: A Comprehensive Resource. Springer, Cham, Ch.5. https://bookdown.org/dusadrian/QCAbook/analysisofnecessity.html
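As a preview of the computation, the sketch below uses the fuzzy-set Lipset data (LF) bundled with the QCA package, where SURV stands for democratic survival; the choice of condition is purely illustrative:

# Parameters of fit for individual necessity (QCA package, bundled LF data)
library(QCA)
data(LF)

# Is development (DEV) a necessary condition for survival (SURV)?
pof("DEV", outcome = "SURV", data = LF, relation = "necessity")
# The output reports inclusion (consistency), coverage, and relevance of
# necessity (RoN): high consistency with low RoN flags a trivial condition.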
Session 24. Parameters of fit and the Truth Table
Students will learn how to construct and interpret truth tables for sufficiency analysis. The session will clarify the different types of configurations in a truth table and how the consistency, coverage, and proportional reduction in inconsistency (PRI) parameters can point to violations of the claim of joint sufficiency.
Backing materials:
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Duşa A. (2018). QCA with R: A Comprehensive Resource. Springer, Cham, Ch.6, 7. https://bookdown.org/dusadrian/QCAbook/analysisofsufficiency.html
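The following minimal sketch (again on the QCA package's bundled LF data, with an illustrative consistency threshold) shows the basic truth-table construction discussed in the session:

# Building a truth table for the analysis of sufficiency (QCA package)
library(QCA)
data(LF)

ttLF <- truthTable(LF, outcome = "SURV",
                   conditions = "DEV, URB, LIT, IND, STB",
                   incl.cut = 0.8,      # illustrative consistency threshold
                   show.cases = TRUE)
ttLF
# Each row is a configuration of conditions; the incl and PRI columns
# help spot violations of the claim of joint sufficiency.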
Session 25. Hands-on
In this session we will learn how to check for trivial conditions and inconsistencies in the truth table, and how to manage them.
As in the previous hands-on session, student groups will be given a dataset and an uncommented script, and will be invited to use any AI agent to craft prompts that explain the functions' meaning, improve them where possible, or correct their arguments. The accuracy and appropriateness of these comments will be shared and discussed during class.
Session 26. Getting to solutions
This session explains how to minimize truth tables and derive complex, parsimonious, and intermediate solutions. Students will learn the special counterfactual thinking embedded in QCA, the different causal assumptions beneath each type of solution, and how to manage model ambiguity.
Backing materials:
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Duşa A. (2018). QCA with R: A Comprehensive Resource. Springer, Cham, Ch.8. https://bookdown.org/dusadrian/QCAbook/minimize.html
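By way of preview, this minimal sketch derives the complex and parsimonious solutions from the truth table of the bundled LF data; the threshold is illustrative:

# Boolean minimization of a truth table (QCA package, bundled LF data)
library(QCA)
data(LF)
ttLF <- truthTable(LF, outcome = "SURV", incl.cut = 0.8)

minimize(ttLF, details = TRUE)                 # complex (conservative) solution
minimize(ttLF, include = "?", details = TRUE)  # parsimonious: remainders allowed
# Intermediate solutions additionally require directional expectations,
# supplied through the dir.exp argument of minimize().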
Session 27. Hands-on
Building on the previous sessions, students will work in groups to obtain complex, parsimonious, and possibly intermediate solutions using R.
Again, groups will be given a dataset and an uncommented script, and will be invited to use AI agents to craft prompts that explain the functions' meaning, improve them where possible, or correct their arguments. The accuracy and appropriateness of these comments will be shared and discussed during class.
Session 28. Robustness tests
The session will introduce the Oana-Schneider robustness protocol and alternative strategies. The discussion will address when different robustness tests are appropriate and how their results can be used to support or qualify explanatory claims.
Backing materials:
Damonte, A. (2023). Testing joint sufficiency twice: Explanatory Qualitative Comparative Analysis. In: Damonte, A., Negri, F. (eds) Causality in Policy Studies. Texts in Quantitative Political Analysis. Springer, Cham. https://doi.org/10.1007/978-3-031-12982-7_7
Oana, I.-E., & Schneider, C. Q. (2021). A robustness test protocol for applied QCA: Theory and R software application. Sociological Methods & Research, 53(1), 57-88. https://doi.org/10.1177/00491241211036158
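The full Oana-Schneider protocol is implemented in their SetMethods package; as a bare-bones illustration of the underlying logic only, the sketch below re-runs the minimization of the bundled LF data under alternative consistency thresholds:

# A minimal sensitivity check: do solution terms survive threshold changes?
# (Illustrative only; not the full Oana-Schneider protocol.)
library(QCA)
data(LF)

for (cut in c(0.75, 0.80, 0.85)) {
  tt <- truthTable(LF, outcome = "SURV", incl.cut = cut)
  cat("incl.cut =", cut, "\n")
  print(minimize(tt, include = "?"))
}
# Stable terms across plausible thresholds support the explanatory claim;
# shifting terms call for qualification.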
Session 29. Hands-on
In this session we will learn how to perform different robustness tests and establish when they are relevant.
As in earlier practical sessions, student groups will use AI agents to annotate and improve scripts. The accuracy and appropriateness of these comments will be shared and discussed during class.
Session 30. Q&A session
This session is dedicated to clarifying technical and methodological aspects of applying QCA for probing explanatory theories in preparation for the module's final deliverable. Students are invited to bring all their questions and doubts for collective discussion before replicating a published QCA.
The deliverable asks students to produce an annotated R script that answers the following questions about a selected published QCA:
1) the model: is it configurational?
2) case and raw variable selections: do they afford proper analysis?
3) calibration: is it replicable?
4) directional expectations: are they empirically supported?
5) truth table: is there any inconsistent primitive?
6) solutions: are the deserving ones discussed?
7) are solutions 'robust'?
The deliverable will demonstrate students' twofold capacity to:
- accurately run the main steps of the original QCA using the original raw data and appropriate R functions (replication);
- critically assess whether the published article is transparent and replicable, by comparing their outputs to the published results and commenting on any discrepancies, ambiguities, or omissions (critical assessment).
The deliverable will be assigned:
- max 15 pts for technical execution of properly set R functions (complete, accurate, possibly creative);
- max 15 pts for critical thinking about the replicability of the published QCA (insightful, rigorous, balanced);
- max 3 pts for clarity in commenting and proper documentation (readable, clear, professional script and documentation).
Module D.
The Statistics of Causal Inference
Instructor: A. De Angelis
This module guides students in designing an effective identification strategy to assess causal effects in their own empirical projects. Drawing on the Potential Outcome Framework, an influential theoretical account of causal inference, it introduces the most popular research methods for causal inference, including experimental and quasi-experimental designs such as Instrumental Variables, Difference-in-Differences estimation, and Regression Discontinuity Design.
Session 31. Introduction and review
Introduction, organization, and rules of the seminar. What are we going to learn? Why are we using R in this seminar? How can we work effectively? Short refresher on statistics and multivariate regression.
Session 32. Directed Acyclic Graphs
DAGs for causal identification. Exercise with replication paper. Colliders & confounders. Working reproducibly with data science projects.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 118-141. https://mixtape.scunning.com
Imbens, G. W. (2020). Potential outcome and directed acyclic graph approaches to causality: Relevance for empirical practice in economics. Journal of Economic Literature, 58(4), 1129-1179. https://www.nber.org/system/files/working_papers/w26104/w26104.pdf
Other assignment:
Log one or two published papers that use one of the causal identification strategies covered in this course in the shared sheet file, under the tab "list of replication papers". Read the instructions before proceeding.
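To see why colliders matter for identification, the following base-R simulation (purely illustrative, with made-up coefficients) shows how conditioning on a collider biases an otherwise clean estimate:

# Collider bias in miniature (base R, simulated data)
set.seed(42)
n <- 10000
d <- rnorm(n)                 # treatment, as-if randomly assigned
y <- 0.5 * d + rnorm(n)       # true effect of d on y is 0.5
c <- d + y + rnorm(n)         # collider: caused by both d and y

coef(lm(y ~ d))["d"]          # ~0.5: unbiased
coef(lm(y ~ d + c))["d"]      # biased: c is a bad control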
Session 33. The Potential Outcome Framework
Randomization and experimental design. Potential outcomes. Switching equation. ATE and ATT. SDO and selection bias. Independence assumption. SUTVA assumption.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 142-168. https://mixtape.scunning.com
Other assignments:
Start thinking of a potential research question and hypotheses for your master's thesis: 1. Draft your research question; 2. Develop one or two hypotheses based on your question; 3. Find 2-3 scientific references that connect your question to existing theories/debates in the field. Follow the instructions below. Upload a simple .txt file named `SURNAME-RQ.txt` before class.
Instructions: a good research question should start from your intrinsic interests and fuel your passion. Strong research questions are: 1. Clear; 2. Specific (i.e., focused and narrow; overly broad or general questions become vague); 3. Researchable (answerable with actual data collection/analysis rather than abstract reasoning alone); 4. Impactful (relevant to meaningful issues with real-world or theoretical implications); 5. Rooted in ongoing scientific debates and the existing literature of a specific field of study. Strong research hypotheses are: 1. Measurable and testable; 2. Specific and directed (i.e., indicating the type and direction of the expected relationship, e.g., positive, negative, causal); 3. Grounded in existing theories and prior research.
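As a preview of the session's core identities, this base-R simulation (made-up numbers) shows that the simple difference in outcomes (SDO) mixes the treatment effect with selection bias, and that randomization removes the bias:

# Potential outcomes, SDO, and selection bias (base R, simulated data)
set.seed(1)
n  <- 100000
y0 <- rnorm(n)          # potential outcome under control
y1 <- y0 + 2            # potential outcome under treatment (ATE = 2)

# Selection into treatment by units with high y0
d <- as.numeric(y0 > 0)
mean(y1[d == 1]) - mean(y0[d == 0])   # SDO > 2: ATE plus selection bias

# Randomized assignment: independence of (y0, y1) and d restores the ATE
d <- rbinom(n, 1, 0.5)
mean(y1[d == 1]) - mean(y0[d == 0])   # ~2

Observed outcomes follow the switching equation y = d*y1 + (1-d)*y0, so each comparison above only uses outcomes that would actually be observed under the corresponding assignment.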
Session 34. Matching and Subclassification
Subclassification and Conditional Independence. Curse of dimensionality. Exact Matching.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 204-242. Quickly skim the sections "Some background" (pp. 205-211) and "Subclassification exercise: Titanic data set" (pp. 212-218). Skip the section "Bias correction" (pp. 232-239). https://mixtape.scunning.com
Other assignments:
- Sign up for mandatory office hour to discuss the replication paper and career orientation.
- Think about a potential empirical strategy that uses matching or re-weighting to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
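A minimal base-R sketch of subclassification (simulated data, illustrative coefficients): stratify on a discrete confounder, estimate within strata, and reweight by stratum shares:

# Subclassification / exact matching in miniature (base R)
set.seed(2)
n <- 5000
x <- sample(1:3, n, replace = TRUE)            # discrete confounder
d <- rbinom(n, 1, prob = c(0.2, 0.5, 0.8)[x])  # treatment depends on x
y <- 1 * d + 2 * x + rnorm(n)                  # true effect of d is 1

mean(y[d == 1]) - mean(y[d == 0])              # naive contrast: confounded

strata <- sapply(1:3, function(s)
  mean(y[d == 1 & x == s]) - mean(y[d == 0 & x == s]))
sum(strata * table(x) / n)                     # reweighted: ~1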
Session 35. Regression Discontinuity
Cutoff point and running variable. Continuity assumption. Local Average Treatment Effect (LATE).
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 278-291. https://mixtape.scunning.com
Other assignments:
- Think about a potential empirical strategy that uses RDD to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
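A minimal sharp-RDD sketch in base R (all numbers illustrative): fit a local linear regression with different slopes on each side of the cutoff and read the jump off the treatment coefficient:

# Sharp regression discontinuity in miniature (base R, simulated data)
set.seed(3)
n <- 5000
r <- runif(n, -1, 1)                      # running variable, cutoff at 0
d <- as.numeric(r >= 0)                   # treatment assigned at the cutoff
y <- 0.5 * r + 1 * d + rnorm(n, sd = 0.3) # true discontinuity (LATE) = 1

h <- 0.2                                  # illustrative bandwidth
fit <- lm(y ~ d * r, subset = abs(r) < h)
coef(fit)["d"]                            # estimated jump at the cutoff, ~1
# Dedicated packages such as rdrobust automate bandwidth choice and inference.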
Session 36. Instrumental Variables
Intuition of IVs. Exclusion restriction. Homogeneous Treatment Effects. Two-Stage Least Squares (2SLS). Heterogeneous Treatment Effects.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 371-383, and then pp. 401-407. https://mixtape.scunning.com
Other assignments:
- Think about a potential empirical strategy that uses IVs to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
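A minimal two-stage least squares sketch in base R (simulated data, illustrative coefficients), with the AER package one-liner noted for comparison:

# Instrumental variables / 2SLS in miniature (base R, simulated data)
set.seed(4)
n <- 10000
u <- rnorm(n)                      # unobserved confounder
z <- rbinom(n, 1, 0.5)             # instrument, independent of u
d <- 0.8 * z + 0.5 * u + rnorm(n)  # first stage: z shifts d
y <- 1 * d + u + rnorm(n)          # true effect of d is 1

coef(lm(y ~ d))["d"]               # OLS: biased upward by u

d_hat <- fitted(lm(d ~ z))         # first stage
coef(lm(y ~ d_hat))["d_hat"]       # second stage: ~1
# Note: this two-step shortcut gives correct point estimates but invalid
# standard errors; AER::ivreg(y ~ d | z) handles inference properly.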
Session 37. Panel Data
Panel data structure. Estimation strategies (pooled OLS and fixed effects). Identifying assumptions.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 445-456 ("Data Exercise" excluded). https://mixtape.scunning.com
Other assignments:
- Think about a potential empirical strategy that uses panel data to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
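A minimal base-R contrast between pooled OLS and fixed effects on a simulated panel (illustrative numbers), using the least-squares-dummy-variables form of the within estimator:

# Pooled OLS vs. fixed effects (base R, simulated panel)
set.seed(5)
N <- 200; T <- 10
id    <- rep(1:N, each = T)
alpha <- rep(rnorm(N), each = T)       # time-invariant unit heterogeneity
x     <- alpha + rnorm(N * T)          # regressor correlated with alpha
y     <- 1 * x + alpha + rnorm(N * T)  # true effect of x is 1

coef(lm(y ~ x))["x"]                   # pooled OLS (POLS): biased
coef(lm(y ~ x + factor(id)))["x"]      # FE via unit dummies (LSDV): ~1
# Dedicated packages (plm, fixest) implement the within estimator directly.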
Session 38. Difference-in-Differences
Diff-in-diffs design. John Snow's falsification of miasma theory. Estimation. Parallel trends assumption.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 467-488. https://mixtape.scunning.com
Other assignments:
- Think about a potential empirical strategy that uses DiD to test the hypotheses in your research design. Write one paragraph, use max. 20 minutes.
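A minimal two-group, two-period difference-in-differences sketch in base R (simulated data, illustrative effect size): under parallel trends, the interaction coefficient recovers the treatment effect:

# Difference-in-differences in miniature (base R, simulated data)
set.seed(6)
n     <- 4000
treat <- rbinom(n, 1, 0.5)   # treated-group indicator
post  <- rbinom(n, 1, 0.5)   # post-period indicator
y     <- 2 * treat + 1 * post + 1.5 * treat * post + rnorm(n)  # effect = 1.5

coef(lm(y ~ treat * post))["treat:post"]   # ~1.5
# Equals (E[y|treated,post] - E[y|treated,pre])
#      - (E[y|control,post] - E[y|control,pre])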
Session 39. Synthetic Control
Comparative case study and synthetic control model. Picking synthetic controls. The case of California's tobacco control law.
Reading materials:
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press, pp. 584-616. https://mixtape.scunning.com
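A stripped-down sketch of the core synthetic-control idea in base R (simulated data; the full method, as implemented in packages such as Synth, adds covariates, a weighting matrix, and inference): find convex donor weights that track the treated unit before treatment:

# The heart of synthetic control (base R, simulated pre-period data)
set.seed(7)
Tpre <- 20; J <- 8
Y0 <- matrix(rnorm(Tpre * J, mean = 5), Tpre, J)  # donor outcomes, pre-period
w_true <- c(0.6, 0.4, rep(0, J - 2))
y1 <- Y0 %*% w_true + rnorm(Tpre, sd = 0.1)       # treated unit, pre-period

# Weights constrained to the simplex via a softmax reparameterization
loss <- function(theta) {
  w <- exp(theta) / sum(exp(theta))
  sum((y1 - Y0 %*% w)^2)
}
theta_hat <- optim(rep(0, J), loss, method = "BFGS")$par
w_hat <- exp(theta_hat) / sum(exp(theta_hat))
round(w_hat, 2)   # approximately (0.60, 0.40, 0, ...): the synthetic control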
Session 40. Final Presentation
Final presentation on the status of replication work (significance, progress, intended goals).
Deliverable: a final short replication paper (2/3 of the grade). The short paper must critically replicate a quasi-experimental analysis from a published peer-reviewed study. The chosen paper must first be logged in the shared class list. The resulting short replication paper should run to a minimum of about 2,000 and ideally around 4,000 words and be structured as follows:
a. Abstract. A 150-word abstract of the replication study. The title page should include name, surname, date of submission, and word count (all text excluding references, tables, figure notes, and footnotes).
b. Introduction. Background on the study, a brief summary, and an explanation of why its replication is important.
c. Literature review. Outline key debates and position the study in the broader literature and cite related studies to emphasize its importance.
d. Methods and replication strategy. Short critical assessment of the identification strategy linking to the description of the study in terms of potential outcomes framework. Description of original data, methods, and models. Clear statement of the scope of the replication: direct, robustness checks, or alternative specifications?
e. Results. A side-by-side comparison of the published and the replicated figures and tables; additional analysis discussing the implications for the stability of the findings; significant differences should be connected to potential explanations.
f. Discussion. A clear interpretation of the findings clarifying how the replication results support/refine/challenge the original conclusions. Implications of the replication findings for theory and practice. A paragraph with the limitations in your replication—data, methods, scope. A concluding statement with an overall assessment of the replication work and its implications for our understanding of the underlying phenomena.
g. References. Including the replicated article, in Harvard reference style, plus any other materials and published papers cited in the text.
Additionally, students must deliver the full replication project folder with all the reproducible and commented code.
Rubric: Excellent replication papers will move beyond replication by including extensions of the analysis (e.g., alternative operationalizations, measures, robustness tests) and/or by proposing ways to tackle identification problems or untested assumptions.
Prerequisites
This course is designed to be accessible to students with a wide range of backgrounds. Support and guidance will be provided throughout the course and during office hours to help all students build the necessary skills.
However, students who have previously taken "Multivariate Analysis for Social Scientists" may find Module D more manageable.
In addition, from Module C onward, the course includes practical, hands-on components using R. Students are strongly encouraged to bring their laptops to class and ensure that both R and RStudio are installed and updated to the latest versions.
Teaching methods
The teaching methods of this course are carefully designed to promote deep learning, active engagement, and the progressive development of theoretical, analytical, and applied competences in the analysis and evaluation of policy design.
The course is structured in four fully integrated modules (Modules A-D), covering 40 sessions, and combines a variety of teaching approaches to ensure that students acquire both knowledge and the capacity to apply that knowledge autonomously across diverse policy contexts.
Teaching methods explicitly address the heterogeneity of student backgrounds and learning styles, and are aligned with the expected learning outcomes and the structure of deliverables.
Lectures and guided readings provide the theoretical foundations and conceptual frameworks necessary to analyze complex policy problems, understand policy tools and their behavioral assumptions, and master the logic of causal inference (Modules A-D; Sessions 1-6, 17-19, 21, 23, 24, 26, 28, 31-34, 36-39). Lectures are always supported by carefully selected readings (classic and cutting-edge), which students are expected to engage with critically.
Problem-oriented and interactive sessions promote active learning and the ability to apply concepts to real-world policy issues. These include group exercises for diagnosing policy problems and designing interventions (Module A, Sessions 7, 13, 16), peer discussion and feedback activities (Module C, Session 30; Module D, Session 40), and structured discussions around ethical and normative implications of policy choices (Sessions 12, 13, 16, 38, 40).
Hands-on sessions in Modules C and D (Sessions 22, 25, 27, 29, 32-39) develop students' technical competence in applying causal inference methods and conducting transparent, reproducible analyses using the R programming language. During these sessions, students practice the preparation of replicable, well-documented R scripts for both configurational and statistical approaches, supporting the development of lifelong learning skills and the capacity to engage with evolving research tools.
The course experiments with the integration of AI tools and agents. In selected hands-on sessions (Module C, Sessions 22, 25, 27, 29), students are guided to use AI agents (LLMs) to support learning of R functions and script construction. This component fosters digital literacy, critical engagement with emerging tools, and autonomous technical learning — key competences in contemporary evidence-based policy research.
The course ensures progressive and cumulative learning. The structure of deliverables (D1-D4, Module C and D replications) is carefully designed to promote cumulative skill acquisition. Students progressively move from diagnosing policy problems (Deliverable 1), to selecting and justifying policy tools (Deliverable 2), to designing complete intervention logics (Deliverable 3), and finally to applying causal inference strategies to evaluate designs (Deliverable 4), supported by advanced replication exercises in Modules C and D. This progressive structure ensures that knowledge and skills are consolidated and applied in increasingly complex and authentic tasks.
The course encourages an open selection of topics: Students are encouraged to select their own policy topics for deliverables and replications, fostering personal engagement, the ability to transfer skills across policy domains, and the development of independent critical judgement — essential for future professional practice and lifelong learning.
Moreover, the course supports different learning paths. Teaching explicitly accommodates both students with strong prior methodological training and those with weaker backgrounds. Foundational theoretical and methodological concepts are reviewed systematically (Sessions 1-6; Module D, Session 31), and individual guidance is provided through office hours and feedback on deliverables. The use of AI-supported coding sessions allows students to develop technical capacities at their own pace, with instructor supervision.
A dedicated Ariel platform and Teams channel ensure full accessibility of all materials (slides, readings, datasets, code), announcements, and recordings. This allows students to work flexibly and supports participation for those unable to attend specific sessions.
In case of emergency, the course can be fully delivered online, with synchronous and asynchronous activities designed to preserve active engagement and the integrity of learning outcomes.
Overall, the teaching methods are explicitly aligned with the goal of preparing students for independent, evidence-based, and methodologically pluralist policy analysis and evaluation, and support the development of lifelong learning capacities — including critical reflection on the use of causal evidence in public policy designs.
Reference materials
In addition to backing and reading materials, anyone wishing to improve their familiarity with policy design, or to approach the course topics from a different perspective, can refer to:
Peters, B. G. (2018). Policy Problems and Policy Design. Edward Elgar Publishing.
Knowlton, L. W., & Phillips, C. C. (2012). The Logic Model Guidebook: Better Strategies For Great Results. Sage.
Ostrom, E. (2005). Understanding Institutional Diversity. Princeton University Press.
Damonte, A., & Negri, F. (2023). Causality In Policy Studies: A Pluralist Toolbox. Springer. https://doi.org/10.1007/978-3-031-12982-7
Angrist, J. D., & Pischke, J.-S. (2014). Mastering 'Metrics: The Path from Cause to Effect. Princeton University Press.
Please note that course materials will be made available through the Ariel website. Changes may occur to better fit interests and needs.
Assessment methods and evaluation criteria
The evaluation of student learning is designed to promote engagement with course content, encourage the application of theoretical concepts, and foster critical thinking and technical skills. Recognizing the diversity of student backgrounds and circumstances, the course employs a flexible and inclusive assessment strategy. Participation in classroom discussions is not required for success in the course, but it is warmly encouraged, as it will greatly improve students' deliverables.
The evaluation components are:
* Modules A+B (max. 33 pts.):
- Deliverable 1 (max 8 pts): Mechanistic analysis. It consists of a written report (max 1500 words) unpacking a selected policy challenge through Coleman's boat, with visualization of hypothesized mechanisms.
- Deliverable 2 (max. 8 pts): Logic model and tool justification. It consists of a visual logic model plus a written justification (max 1000 words), elaborating a theory of change and corresponding selection of substantive policy tools.
- Deliverable 3 (max. 9 pts): Action situation design. It consists of the scheme of one or more action situations incorporating substantive and procedural policy tools, accompanied by a written narrative (max. 1500 words).
- Deliverable 4 (max 8 pts): Research proposal sketch. It consists of a short research proposal (max 1500 words) for testing causal claims about policy designs.
* Module C :
- Deliverable 5 (max. 33 pts): produce an annotated R script that answers the following questions about a selected published QCA:
1) the model: is it configurational?
2) case and raw variable selections: do they afford proper analysis?
3) calibration: is it replicable?
4) directional expectations: are they empirically supported?
5) truth table: is there any inconsistent primitive?
6) solutions: are the deserving ones discussed?
7) are solutions 'robust'?
* Module D (max. 33 pts):
- Active Participation and read + think (max. 11 pts.). Students must study the readings online using Perusall, a social annotation platform that lets students interact and support each other's learning. To complete an assignment, students must critically engage with the text(s) by posting a minimum of five short annotations per assignment on Perusall (e.g., asking questions, answering questions from other students, posing criticisms, creating examples, adding links to additional resources, or presenting applications). Class absences do not exempt students from reading assignments.
- A final short replication paper (max 22 pts). The short paper must critically replicate a quasi-experimental analysis from a published peer-reviewed study. The chosen paper must first be logged in a shared class list. The resulting short replication paper should run to a minimum of about 2,000 and ideally around 4,000 words and must include: a. Title page; b. Abstract; c. Introduction; d. Literature review; e. Methods and replication strategy; f. Results; g. Discussion; h. References. Additionally, students must deliver the full replication project folder with all the reproducible and commented code.
Excellent replication papers will move beyond replication by including extensions of the analysis (e.g., alternative operationalizations, measures, robustness tests) and/or by proposing ways to tackle identification problems or untested assumptions.
* Final colloquium (±3 points). A short individual conversation will focus on clarifying concepts, discussing methodological choices, and offering feedback on students' deliverables. It can adjust the baseline score (calculated as the average of Modules AB, Module C, and Module D) according to the following criteria:
+3 pts: Demonstrated clarity, integration of concepts, and methodological awareness.
0 pts: Adequate demonstration of written outcomes.
-3 pts: Significant misunderstandings or superficial engagement with course material.
More details on the deliverables will be provided through the Ariel and Teams websites.
Overall grading rubric:
30L (A+): Exceptional understanding and application of concepts, plus original elaboration of the course and external knowledge
30 (A): Excellent understanding and application of the course concepts
27-29 (B): Good understanding and application with minor gaps
24-26 (C): Adequate understanding and application with significant gaps
18-23 (D): Perfunctory performance
F (<18): Failure to meet basic requirements
Together with the teaching methods and materials, these deliverables ensure that, by the end of the course, students will be able to:
* demonstrate advanced knowledge and understanding - specifically,
- Understand and explain the theoretical foundations of policy design, including institutional, behavioral, and mechanistic perspectives, and their relevance for analyzing public policy processes. These conceptual foundations are developed in Module A (Sessions 1-6), reinforced throughout Modules B-D (Sessions 17-40), and assessed across Deliverables 1-4, Module C and D replication work, and the Final Colloquium.
- Understand and apply frameworks for diagnosing and deconstructing complex policy problems, including Coleman's boat, logic models, and theories of change, and explain their role in structuring effective policy designs. These competencies are developed in Module A (Sessions 2-3, 6-7, 13, 16), and systematically assessed through Deliverables 1 (Coleman's boat analysis), 2 (logic model), and 3 (action situation).
- Understand and critically assess the relationship between policy instruments (substantive and procedural), behavioral assumptions of compliance, institutional contexts, and policy outcomes. These concepts are developed in Module A (Sessions 8-16), further explored in Modules B-D, and assessed through Deliverables 2, 3, 4, and Final Colloquium.
- Understand and evaluate the ethical dimensions and normative trade-offs involved in policy design — including considerations of legitimacy, democratic accountability, and individual autonomy — and reflect on their implications for policy effectiveness and public trust. These competencies are developed in Module A (Sessions 12-13, 16), reinforced in comparative methodological discussions in Modules B, C, D, and assessed in Deliverables 3 and 4, replication papers, and Final Colloquium.
- Understand and explain core principles and strategies of causal inference for policy evaluation, including design-driven approaches — experimental, quasi-experimental, and natural experiments (Modules B and D: Sessions 17-20, 31-39); model-driven approaches — mechanistic reasoning, causal graphs (Module B: Sessions 18-19); configurational comparative methods — QCA (Module C: Sessions 21-30). Mastery of these concepts is assessed in Deliverable 4, Module C critical QCA replication, Module D replication paper, and Final Colloquium.
- Understand the epistemological foundations of different methodological paradigms (behavioral, institutional, experimental, configurational), and critically evaluate their implications for the validity, generalizability, and robustness of causal policy knowledge. These concepts are developed across all modules (A-D), reinforced in comparative sessions (B18-19, C28-30, D38-40), and assessed through all deliverables and Final Colloquium.
- Understand the importance of transparency, replicability, and reproducibility in empirical policy research, and explain the role of open scientific practices in strengthening evidence-based policy. These concepts are developed through Module C and D hands-on sessions (C22, 25, 27, 29, 30; D32-40), and assessed through replication deliverables in Modules C and D, and Final Colloquium.
- Understand how to transfer and apply these theoretical and methodological approaches flexibly across diverse policy areas and institutional contexts, demonstrating capacity to adapt knowledge to different public policy challenges. This goal is developed through the open selection of policy topics for Deliverables 1-4, Modules C and D replication assignments, and critically discussed in the Final Colloquium.
* apply their knowledge and understanding - specifically,
- Diagnose and deconstruct complex policy problems, using advanced frameworks (Coleman's boat, logic models, theories of change) to identify causal mechanisms, actors, and structural factors influencing policy outcomes. These skills are developed in Module A (Sessions 2, 3, 6, 7, 13, 16); assessed through Deliverable 1 (Coleman's boat analysis), Deliverable 2 (logic model), Deliverable 3 (action situation).
- Design and justify coherent, effective, and context-sensitive policy interventions, selecting and combining appropriate substantive and procedural tools based on sound causal reasoning and fit to institutional and behavioral contexts. These competencies are developed in Module A (Sessions 8-16); reinforced through group exercises (Sessions 13, 16); assessed in Deliverables 2 and 3.
- Operationalize policy designs by developing valid, reliable, and theoretically grounded indicators of causal mechanisms and expected outcomes, enabling subsequent empirical testing and evaluation. These competencies are developed in Module B (Sessions 19, 20); Module C (Sessions 21, 24, 26); assessed in Deliverable 4 (Capstone proposal), Module C replication deliverable, and Module D final replication paper.
- Apply a broad repertoire of causal inference strategies to evaluate the effects of policy interventions, including design-driven approaches — experimental, quasi-experimental, and natural experiment designs (Modules B and D: Sessions 17-20, 31-39); model-driven approaches — mechanistic reasoning, graphical causal models (Module B: Sessions 18-19); configurational methods — Qualitative Comparative Analysis (Module C: Sessions 21-30). These competencies are systematically assessed through Deliverable 4; Module C critical QCA replication; and Module D final replication paper.
- Critically replicate and transparently document published policy research, using evolving computational tools (R), to strengthen skills in transparent, reproducible research and to critically evaluate existing evidence bases in policy studies. The goal is developed through practical hands-on sessions in Module C (Sessions 22, 25, 27, 29, 30) and Module D (Sessions 32-40); assessed in Module C critical QCA replication and Module D final replication paper.
- Integrate multiple methodological approaches — combining diagnostic frameworks, experimental and quasi-experimental designs, and configurational analysis — to provide comprehensive, evidence-based evaluations of policy designs. This integrative competence is developed across Modules A-D; cumulative mastery is assessed through Deliverables 1-4, Module C and D replication assignments, and the Final Colloquium.
- Apply knowledge flexibly across a range of substantive policy areas and institutional contexts, demonstrating the capacity to adapt analytical tools and causal inference strategies to diverse public policy challenges. The goal is supported through the open choice of policy problems for Deliverables 1-4 and for replication assignments in Modules C and D; discussed in Final Colloquium.
* Capacity to make judgements:
- Critically assess the adequacy, effectiveness, and ethical implications of different policy tools and interventions, considering the fit between instruments, behavioral assumptions, institutional contexts, and policy objectives. The goal is developed in Module A2 and A3 (Sessions 8-16); Deliverables 2, 3; discussions in group exercises (Sessions 13, 16); Final Colloquium.
- Evaluate the methodological rigor, robustness, and transparency of causal claims in published empirical studies, identifying possible limitations, untested assumptions, and weaknesses in research design. The goal is practiced in Module C (Sessions 24, 26, 28, 30); Module D (Sessions 35-39); assessed through Module C deliverable (critical QCA replication) and Module D final replication paper.
- Formulate informed, well-reasoned methodological choices when designing causal analyses of policy effectiveness, balancing methodological rigor with practical constraints and contextual considerations. The goal is developed in Module B (Sessions 17-20); Module D (Sessions 31-39); assessed through Deliverable 4 (Capstone proposal), Module D final replication paper, and Final Colloquium.
- Exercise reflexivity in the interpretation of causal findings, recognizing the limits of available evidence, the influence of normative assumptions, and the potential impact on policy decisions. The goal is fostered in discussions in Module B (Sessions 19-20); ethical and reflexive discussions in Module A (Sessions 12, 13, 16), Module C (Sessions 28, 30), and Module D (Sessions 38, 40); assessed in Deliverables 3 and 4, Module D replication paper, and Final Colloquium.
- Balance competing criteria in policy evaluation, such as technical effectiveness, democratic accountability, and normative values, when formulating or recommending policy designs. The goal is developed across Modules A, B, C, D and explicitly addressed in Module A (Sessions 12, 13, 16), Module C (Sessions 28, 30), Module D (Sessions 38, 40); assessed in Deliverables 3, 4, replication assignments, and Final Colloquium.
* Communication Skills:
- Present policy designs, causal models, and evaluation strategies using professional and effective formats, including visual representations (e.g. Coleman's boat diagrams, logic models, action situation diagrams) and structured written reports. The goal is developed in Module A (Sessions 7, 13, 16); Deliverables 1, 2, 3.
- Communicate methodological reasoning and causal inference strategies clearly and appropriately to different audiences (specialists, policy practitioners, and academics), both in written form (Deliverable 4, Module D replication paper) and orally (Final Colloquium, Session 40 presentation). The goal is supported by the Module B Capstone proposal (Session 20); Module D final replication paper (Session 40); Final Colloquium.
- Prepare transparent, reproducible, and clearly annotated R scripts to communicate technical procedures and results in an accessible and professional manner. The goal is developed in Module C (Sessions 22, 25, 27, 29, 30); Module C deliverable (critical replication of QCA); Module D (Sessions 32-40); Module D replication paper.
- Engage effectively in group discussions and peer feedback activities, demonstrating the ability to articulate, explain, and defend methodological choices in a collaborative environment. This is practiced during group exercises (Sessions 7, 13, 16); hands-on collaborative sessions (Module C: Sessions 22, 25, 27, 29; Module D: Sessions 32-39); Session 30 Q&A; Session 40 presentation and discussion.
- Critically reflect on the communication of causal claims and policy recommendations, assessing the clarity, transparency, and ethical implications of different modes of presenting evidence and findings. The goal is fostered throughout comparative methodological sessions (Modules B, C, D); ethics-focused sessions (Sessions 12, 13, 16, 28, 30, 38, 40); Final Colloquium.
* Learning Skills:
- Conduct independent research on emerging approaches to policy design and causal inference, drawing on the conceptual frameworks and methodological approaches introduced across Modules A, B, C, and D (Sessions 1-40). The goal is supported by: assigned readings; Deliverables 1-4; Module C and D replication assignments.
- Critically replicate published empirical studies using evolving software tools (R), strengthening autonomous technical learning capacity through practical experience in transparent, reproducible coding and workflows. The goal is supported by: Module C (Sessions 22, 25, 27, 29, 30); Module D (Sessions 32-40); Module C deliverable (critical replication of QCA); Module D final replication paper.
- Synthesize insights from multiple disciplinary perspectives (political science, public policy, behavioral economics, evaluation research, causal inference), to frame research problems and design evidence-based policy solutions. The goal is developed through: Modules A and B (Sessions 1-20); Deliverables 1-4; Capstone proposal (Session 20); Final colloquium.
- Develop a reflexive and pluralist methodological orientation that supports engagement with innovative and emerging practices in evidence-based policy making, by critically evaluating the strengths and limits of alternative approaches (behavioral, institutional, experimental, configurational). The goal is supported by comparative coverage across Modules B, C, D; sessions on ethical and normative trade-offs (Sessions 12, 13, 16, 28, 30, 38, 40); Final colloquium.
- Cultivate skills for lifelong learning and self-directed professional development through the systematic use of transparent, reproducible workflows and critical reflection on evolving bodies of research evidence. The goal is embedded in ongoing hands-on sessions (Module C, Module D); replication assignments; development of annotated R scripts (Module C); final replication paper (Module D); Final colloquium discussion.
The evaluation components are:
* Modules A+B (max. 33 pts.):
- Deliverable 1 (max 8 pts): Mechanistic analysis. It consists of a written report (max 1500 words) unpacking a selected policy challenge through Coleman's boat, with visualization of hypothesized mechanisms.
- Deliverable 2 (max. 8 pts): Logic model and tool justification. It consists of a visual logic model plus a written justification (max 1000 words), elaborating a theory of change and corresponding selection of substantive policy tools.
- Deliverable 3 (max. 9 pts): Action situation design. It consists of the scheme of one or more action situations incorporating substantive and procedural policy tools, accompanied by a written narrative (max. 1500 words).
- Deliverable 4 (max 8 pts): Research proposal sketch. It consists of a short research proposal (max 1500 words) for testing causal claims about policy designs.
* Module C :
- Deliverable 5 (max. 33 pts): produce an annotated R script that answers the following questions about a selected published QCA:
1) the model: is it configurational?
2) case and raw variable selections: do they afford proper analysis?
3) calibration: is it replicable?
4) directional expectations: are they empirically supported?
5) truth table: is there any inconsistent primitive?
6) solutions: are the deserving ones discussed?
7) are solutions 'robust'?
* Module D (max. 33 pts):
- Active Participation and read + think (max. 11 pts.). Students must study online using Perusall, a platform designed to let students interact, supporting each other to learn better. To complete an assignment, students must critically engage with the text(s) posting a minimum of five short annotations for each assignment on Perusall (e.g., asking questions, addressing questions from other students, posing criticisms, creating examples, adding links to additional resources, or presenting applications). Class absences do not exempt from reading assignments.
- A final short replication paper (max 22 pts). The short paper must critically replicate a quasi-experimental analysis of a published peer-reviewed study. The published paper must be first coded in a shared class list. The resulting short replication paper should consist of about minimum 2,000 and ideally 4,000 words and must include: a. Title page, b. Abstract. c. Introduction. d. Literature review. e. Methods and replication strategy. f. Results. g. Discussion. h. References. Additionally, students must deliver the full replication project folder with all the reproducible and commented code.
Excellent replication papers will try to move beyond the replication including extensions of the analysis (e.g., alternative operationalizations, measures, robustness tests ) and/or discussing proposals about how to tackle identification problems or untested assumptions.
* Final colloquium (±3 points). A short individual conversation will focus on clarifying concepts, discussing methodological choices, and offering feedback on students' deliverables. It can adjust the baseline score (calculated as the average of Modules AB, Module C, and Module D) according to the following criteria:
+3 pts: Proof of clarity, integration of concepts, and methodological awareness.
0 pts: Adequate demonstration of written outcomes.
-3 pts: Significant misunderstandings or superficial engagement with course material.
More details on the deliverables will be provided through the Ariel and Teams websites.
Overall grading rubric:
30L (A+): Exceptional understanding and application of concepts, plus original elaboration of the course and external knowledge
30 (A): Excellent understanding and application of the course concepts
27-29 (B): Good understanding and application with minor gaps
24-26 (C): Adequate understanding and application with significant gaps
18-23 (D): Perfunctory performance
F (<18): Failure to meet basic requirements
With learning methods and materials, these deliverables ensure that, by the end of the course, students will be able to:
* demonstrate advanced knowledge and understanding - specifically,
- Understand and explain the theoretical foundations of policy design, including institutional, behavioral, and mechanistic perspectives, and their relevance for analyzing public policy processes. These conceptual foundations are developed in Module A (Sessions 1-6), reinforced throughout Modules B-D (Sessions 17-40), and assessed across Deliverables 1-4, Module C and D replication work, and the Final Colloquium.
- Understand and apply frameworks for diagnosing and deconstructing complex policy problems, including Coleman's boat, logic models, and theories of change, and explain their role in structuring effective policy designs. These competencies are developed in Module A (Sessions 2-3, 6-7, 13, 16), and systematically assessed through Deliverables 1 (Coleman's boat analysis), 2 (logic model), and 3 (action situation).
- Understand and critically assess the relationship between policy instruments (substantive and procedural), behavioral assumptions of compliance, institutional contexts, and policy outcomes. These concepts are developed in Module A (Sessions 8-16), further explored in Modules B-D, and assessed through Deliverables 2, 3, 4, and Final Colloquium.
- Understand and evaluate the ethical dimensions and normative trade-offs involved in policy design — including considerations of legitimacy, democratic accountability, and individual autonomy — and reflect on their implications for policy effectiveness and public trust. These competencies are developed in Module A (Sessions 12-13, 16), reinforced in comparative methodological discussions in Modules B, C, D, and assessed in Deliverables 3 and 4, replication papers, and Final Colloquium.
- Understand and explain core principles and strategies of causal inference for policy evaluation, including design-driven approaches — experimental, quasi-experimental, and natural experiments (Modules B and D: Sessions 17-20, 31-39); model-driven approaches — mechanistic reasoning, causal graphs (Module B: Sessions 18-19); configurational comparative methods — QCA (Module C: Sessions 21-30). Mastery of these concepts is assessed in Deliverable 4, Module C critical QCA replication, Module D replication paper, and Final Colloquium.
- Understand the epistemological foundations of different methodological paradigms (behavioral, institutional, experimental, configurational), and critically evaluate their implications for the validity, generalizability, and robustness of causal policy knowledge. These concepts are developed across all modules (A-D), reinforced in comparative sessions (B18-19, C28-30, D38-40), and assessed through all deliverables and Final Colloquium.
- Understand the importance of transparency, replicability, and reproducibility in empirical policy research, and explain the role of open scientific practices in strengthening evidence-based policy. These concepts are developed through Module C and D hands-on sessions (C22, 25, 27, 29, 30; D32-40), and assessed through replication deliverables in Modules C and D, and Final Colloquium.
- Understand how to transfer and apply these theoretical and methodological approaches flexibly across diverse policy areas and institutional contexts, demonstrating capacity to adapt knowledge to different public policy challenges. This goal is developed through the open selection of policy topics for Deliverables 1-4, Modules C and D replication assignments, and critically discussed in the Final Colloquium.
* apply their knowledge and understanding - specifically,
- Diagnose and deconstruct complex policy problems, using advanced frameworks (Coleman's boat, logic models, theories of change) to identify causal mechanisms, actors, and structural factors influencing policy outcomes. These skills are developed in Module A (Sessions 2, 3, 6, 7, 13, 16); assessed through Deliverable 1 (Coleman's boat analysis), Deliverable 2 (logic model), Deliverable 3 (action situation).
- Design and justify coherent, effective, and context-sensitive policy interventions, selecting and combining appropriate substantive and procedural tools based on sound causal reasoning and fit to institutional and behavioral contexts. These competencies are developed in Module A (Sessions 8-16); reinforced through group exercises (Sessions 13, 16); assessed in Deliverables 2 and 3.
- Operationalize policy designs by developing valid, reliable, and theoretically grounded indicators of causal mechanisms and expected outcomes, enabling subsequent empirical testing and evaluation. These competencies are developed in Module B (Sessions 19, 20); Module C (Sessions 21, 24, 26); assessed in Deliverable 4 (Capstone proposal), Module C replication deliverable, and Module D final replication paper.
- Apply a broad repertoire of causal inference strategies to evaluate the effects of policy interventions, including design-driven approaches — experimental, quasi-experimental, and natural experiment designs (Modules B and D: Sessions 17-20, 31-39); model-driven approaches — mechanistic reasoning, graphical causal models (Module B: Sessions 18-19); configurational methods — Qualitative Comparative Analysis (Module C: Sessions 21-30). These competencies are systematically assessed through Deliverable 4; Module C critical QCA replication; and Module D final replication paper.
- Critically replicate and transparently document published policy research, using evolving computational tools (R), to strengthen skills in reproducible research practices and to critically evaluate existing evidence bases in policy studies (a minimal example of such a replication workflow is sketched after this list). The goal is developed through practical hands-on sessions in Module C (Sessions 22, 25, 27, 29, 30) and Module D (Sessions 32-40); assessed in the Module C critical QCA replication and the Module D final replication paper.
- Integrate multiple methodological approaches — combining diagnostic frameworks, experimental and quasi-experimental designs, and configurational analysis — to provide comprehensive, evidence-based evaluations of policy designs. This integrative competence is developed across Modules A-D; cumulative mastery is assessed through Deliverables 1-4, Module C and D replication assignments, and the Final Colloquium.
- Apply knowledge flexibly across a range of substantive policy areas and institutional contexts, demonstrating the capacity to adapt analytical tools and causal inference strategies to diverse public policy challenges. The goal is supported through the open choice of policy problems for Deliverables 1-4 and for the replication assignments in Modules C and D, and discussed in the Final Colloquium.
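As a taste of the Module C replication exercises, here is a minimal sketch of a fuzzy-set QCA run in R. It is a hypothetical illustration, assuming the QCA package and its bundled fuzzy-set Lipset data (LF); the 0.8 consistency threshold is illustrative, not an endorsed specification.

    # Minimal illustrative fsQCA run, assuming the QCA package and its
    # bundled fuzzy-set Lipset data (conditions DEV, URB, LIT, IND, STB;
    # outcome SURV).
    library(QCA)
    data(LF)

    # Build the truth table for the outcome SURV, using an illustrative
    # 0.8 consistency cut-off and showing which cases fall in each row.
    tt <- truthTable(LF, outcome = "SURV", incl.cut = 0.8, show.cases = TRUE)

    # Derive the parsimonious solution (include = "?" admits logical
    # remainders); details = TRUE reports consistency and coverage.
    minimize(tt, include = "?", details = TRUE)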
* Capacity to make judgements:
- Critically assess the adequacy, effectiveness, and ethical implications of different policy tools and interventions, considering the fit between instruments, behavioral assumptions, institutional contexts, and policy objectives. The goal is developed in Modules A2 and A3 (Sessions 8-16) and assessed through Deliverables 2 and 3, the group-exercise discussions (Sessions 13, 16), and the Final Colloquium.
- Evaluate the methodological rigor, robustness, and transparency of causal claims in published empirical studies, identifying possible limitations, untested assumptions, and weaknesses in research design. The goal is practiced in Module C (Sessions 24, 26, 28, 30) and Module D (Sessions 35-39); assessed through the Module C deliverable (critical QCA replication) and the Module D final replication paper.
- Formulate informed, well-reasoned methodological choices when designing causal analyses of policy effectiveness, balancing methodological rigor with practical constraints and contextual considerations. The goal is developed in Module B (Sessions 17-20); Module D (Sessions 31-39); assessed through Deliverable 4 (Capstone proposal), Module D final replication paper, and Final Colloquium.
- Exercise reflexivity in the interpretation of causal findings, recognizing the limits of available evidence, the influence of normative assumptions, and the potential impact on policy decisions. The goal is fostered in discussions in Module B (Sessions 19-20); ethical and reflexive discussions in Module A (Sessions 12, 13, 16), Module C (Sessions 28, 30), and Module D (Sessions 38, 40); assessed in Deliverables 3 and 4, Module D replication paper, and Final Colloquium.
- Balance competing criteria in policy evaluation, such as technical effectiveness, democratic accountability, and normative values, when formulating or recommending policy designs. The goal is developed across Modules A, B, C, D and explicitly addressed in Module A (Sessions 12, 13, 16), Module C (Sessions 28, 30), Module D (Sessions 38, 40); assessed in Deliverables 3, 4, replication assignments, and Final Colloquium.
* Communication Skills:
- Present policy designs, causal models, and evaluation strategies using professional and effective formats, including visual representations (e.g. Coleman's boat diagrams, logic models, action situation diagrams) and structured written reports. The goal is developed in Module A (Sessions 7, 13, 16); assessed through Deliverables 1, 2, and 3.
- Communicate methodological reasoning and causal inference strategies clearly and appropriately to different audiences (specialist, policy-practitioner, and academic), both in written form (Deliverable 4, Module D replication paper) and orally (Final Colloquium, Session 40 presentation). The goal is supported by the Module B Capstone proposal (Session 20), the Module D final replication paper (Session 40), and the Final Colloquium.
- Prepare transparent, reproducible, and clearly annotated R scripts to communicate technical procedures and results in an accessible and professional manner (a minimal script skeleton is sketched after this list). The goal is developed in Module C (Sessions 22, 25, 27, 29, 30) and Module D (Sessions 32-40); assessed through the Module C deliverable (critical replication of a QCA study) and the Module D replication paper.
- Engage effectively in group discussions and peer feedback activities, demonstrating the ability to articulate, explain, and defend methodological choices in a collaborative environment. This is practiced during group exercises (Sessions 7, 13, 16); hands-on collaborative sessions (Module C: Sessions 22, 25, 27, 29; Module D: Sessions 32-39); Session 30 Q&A; Session 40 presentation and discussion.
- Critically reflect on the communication of causal claims and policy recommendations, assessing the clarity, transparency, and ethical implications of different modes of presenting evidence and findings. The goal is fostered throughout comparative methodological sessions (Modules B, C, D); ethics-focused sessions (Sessions 12, 13, 16, 28, 30, 38, 40); Final Colloquium.
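To make that expectation concrete, below is a minimal, hypothetical skeleton of the kind of annotated, reproducible R script the deliverables call for. File names, paths, and variable names are placeholders, and it uses only base R.

    # replication.R -- hypothetical skeleton of a reproducible analysis script.
    # Every step is annotated so that a reader can rerun and audit the analysis.

    # 1. Fix the random seed so any stochastic steps are reproducible.
    set.seed(2025)

    # 2. Read the data from a relative path (placeholder file name).
    dat <- read.csv("data/policy_cases.csv")

    # 3. Run the analysis (placeholder: a simple linear model).
    fit <- lm(outcome ~ instrument + context, data = dat)
    summary(fit)

    # 4. Save results alongside the script, never overwriting the raw data.
    saveRDS(fit, file = "output/fit.rds")

    # 5. Record the software environment for future replicators.
    sessionInfo()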
* Learning Skills:
- Conduct independent research on emerging approaches to policy design and causal inference, drawing on the conceptual frameworks and methodological approaches introduced across Modules A, B, C, and D (Sessions 1-40). The goal is supported by: assigned readings; Deliverables 1-4; Module C and D replication assignments.
- Critically replicate published empirical studies using evolving software tools (R), strengthening autonomous technical learning capacity through practical experience in transparent, reproducible coding and workflows. The goal is supported by: Module C (Sessions 22, 25, 27, 29, 30); Module D (Sessions 32-40); Module C deliverable (critical replication of QCA); Module D final replication paper.
- Synthesize insights from multiple disciplinary perspectives (political science, public policy, behavioral economics, evaluation research, causal inference) to frame research problems and design evidence-based policy solutions. The goal is developed through Modules A and B (Sessions 1-20); Deliverables 1-4; the Capstone proposal (Session 20); and the Final Colloquium.
- Develop a reflexive and pluralist methodological orientation that supports engagement with innovative and emerging practices in evidence-based policy making, by critically evaluating the strengths and limits of alternative approaches (behavioral, institutional, experimental, configurational). The goal is supported by comparative coverage across Modules B, C, and D; sessions on ethical and normative trade-offs (Sessions 12, 13, 16, 28, 30, 38, 40); and the Final Colloquium.
- Cultivate skills for lifelong learning and self-directed professional development through the systematic use of transparent, reproducible workflows and critical reflection on evolving bodies of research evidence. The goal is embedded in ongoing hands-on sessions (Modules C and D); the replication assignments; the development of annotated R scripts (Module C); the final replication paper (Module D); and the Final Colloquium discussion.
INF/01 - INFORMATICA - CFU: 6
SPS/04 - SCIENZA POLITICA - CFU: 6
Lezioni: 80 ore
Docenti:
Damonte Alessia, De Angelis Andrea
Ricevimento:
Friday, 13.30-14.30 (students); 14.30-16.30 (thesis and doctoral students)
Sopralzo (raised wing), 2nd floor, room 12 | VirtualOffice in Teams
Ricevimento:
To be arranged via email