Danil Dmitriev
Welcome to my website! I am a postdoctoral associate in the Democratic Innovations program at the Institution for Social and Policy Studies, Yale University, during the 2024-2025 academic year.
I am a microeconomic theorist with a focus on learning, organizational economics, and political economy. Here are my research statement and teaching statement.
Job Market Paper
Truthful information can make rational agents abandon the truth forever.
This paper studies how strategic information disclosure can consistently lead rational agents to abandon their initially correct model of the world in favor of a misspecified one. We study a dynamic game between a biased sender and an agent. Over an infinite horizon, the agent chooses between two "bandit arms" (representing alternative policies, projects, and the like) with uncertain success rates, while the sender discloses verifiable information to sway the agent towards the sender's preferred (inferior) arm. The agent initially assumes that the sender is biased but also entertains an alternative (incorrect) model in which the sender is unbiased. The agent updates their beliefs and switches models when the Bayes factor is sufficiently high. We show how the sender can successfully mislead the agent and convince them to choose the sender-preferred arm in the long run. Moreover, we characterize when the sender can achieve this outcome with certainty.
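A rough illustration of the switching rule (my notation, not necessarily the paper's): let $m_0$ denote the agent's initial model, in which the sender is biased, and $m_1$ the alternative model, in which the sender is unbiased. After observing the history of disclosures $h_t$, the Bayes factor in favor of the alternative model is
\[ \mathrm{BF}_t = \frac{P(h_t \mid m_1)}{P(h_t \mid m_0)}, \]
and, under one natural reading of the rule, the agent switches to $m_1$ the first time $\mathrm{BF}_t$ exceeds a fixed threshold $\bar{B} > 1$.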
Publications
Learning from Shared News: When Abundant Information Leads to Belief Polarization (with Renee Bowen and Simone Galperti)
📄 PDF (Quarterly Journal of Economics, Vol. 138, No. 2, May 2023)
Can polarization of opinions arise purely due to how people share verifiable news?
We study learning via shared news. Each period, agents receive the same quantity and quality of first-hand information and can share it with friends. Some friends (possibly few) share selectively, generating heterogeneous news diets across agents akin to echo chambers. Agents are aware of selective sharing and update beliefs by Bayes' rule. Contrary to standard learning results, we show that beliefs can diverge in this environment, leading to polarization. This requires that (i) agents hold misperceptions (even minor) about friends' sharing and (ii) information quality is sufficiently low. Polarization can worsen when agents' social connections expand. When the quantity of first-hand information becomes large, agents can hold opposite extreme beliefs, resulting in severe polarization. We find that news aggregators can curb polarization caused by news sharing. Our results hold without media bias or fake news, so eliminating these is not sufficient to reduce polarization. When fake news is included, it can lead to polarization, but only through misperceived selective sharing. We apply our theory to shed light on the evolution of public opinions about climate change in the US.
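A schematic version of the mechanism (my notation, under simplifying assumptions): with a binary state $\theta \in \{A, B\}$, a Bayesian agent's log posterior odds after a history $h_t$ of shared signals are
\[ \log \frac{P(\theta = A \mid h_t)}{P(\theta = B \mid h_t)} = \log \frac{P(\theta = A)}{P(\theta = B)} + \sum_{s \in h_t} \log \frac{\hat{P}(s \mid A)}{\hat{P}(s \mid B)}, \]
where the hatted likelihoods encode the agent's perception of friends' selective sharing. If that perception is even slightly wrong, the increments can have nonzero expected drift under the true signal process, so agents with different news diets can be pushed toward opposite extremes instead of converging to the truth.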
Working Papers
How should a principal motivate an agent to frequently experiment with new ideas?
How should one incentivize creativity when acting creatively is costly? We address this question using a model of delegated bandit experimentation. A principal wants an agent to constantly switch to new arms to maximize the chance of success, while the agent faces a fixed cost of switching. We show that the principal's optimal bonus scheme is maximally uncertain: the agent receives transfers for success, but their distribution has extreme variance. Despite being stationary, this bonus scheme achieves the principal's first-best outcome. We also show that the joint surplus is strictly increasing in the agent's outside value when that value is low. To illuminate the value of opaque incentives in practice, we apply our results to the YouTube Partner Program. We argue that it uses inefficiently transparent bonuses and that better experimentation and larger profits can be achieved with appropriately opaque bonuses.
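To see how a bonus can have extreme variance while the expected transfer stays fixed (an illustrative example of mine, not the paper's scheme): suppose each success pays a bonus $B$ with probability $\varepsilon = w/B$ and nothing otherwise, so the expected payment per success is $w$ regardless of $B$. The variance is
\[ B^2 \varepsilon (1 - \varepsilon) = wB\left(1 - \tfrac{w}{B}\right) \longrightarrow \infty \quad \text{as } B \to \infty, \]
so the bonus distribution can be made arbitrarily dispersed without changing what the agent expects to earn.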
How does one keep a voting mechanism from being hijacked by external saboteurs?
Online voting mechanisms (e.g., polls) are a potentially powerful, cost-effective means of collecting large amounts of data about preferences. However, such large-scale data collection has proven vulnerable to sabotage (e.g., by internet trolls) if proper precautions are not taken. We consider the problem of designing a voting mechanism that is robust to derailment by external groups. We show that plurality voting and other standard mechanisms are often not robust to sabotage; in fact, it is sometimes preferable not to run any poll at all. The optimal voting mechanism makes saboteurs indifferent between each alternative they can vote for, since this undermines their ability to adversely affect the designer's predictions of other voters' preferences.
Is the use of commitment devices correlated with actual dynamic inconsistency in the supply of effort over time?
We present a laboratory experiment designed to measure both actual and perceived dynamic inconsistency using a novel convex commitment device. Participants supply effort in the form of unpleasant tasks over time, and can commit to future effort at a cost. We find that participants demand a great deal of commitment, implying they believe they are significantly dynamically inconsistent, despite evidence of little to no actual dynamic inconsistency. The results suggest caution when employing commitment devices, as their usage may be unrelated to the problem they are meant to solve.
Work in Progress
Policy Experimentation under Disagreement
How do governments experiment with new policies when parties disagree about them?
Influence of Money in Elections
How does the size of an election affect its susceptibility to wealthy interests?