BAYESIAN OBSERVATIONAL LEARNING: WHEN MORE INFORMATION CAN FAIL
Agents on online platforms often have to make purchase decisions in the face of uncertainty about an item's true value. Observing other agents' actions and/or reviews is one common way to learn about an item. However, such observational learning can lead to an information cascade, in which agents ignore their private information and blindly follow the actions of others. Even though individually optimal, information cascades can result in agents choosing an inferior action with positive probability, leading to a loss in social welfare. This phenomenon has long been studied using models of Bayesian observational learning.

This thesis considers the impact of three different information structures imposed on such models: 1) information is suppressed in the form of noise, in particular by introducing observation errors; 2) additional information is introduced in the form of reviews; and 3) information is suppressed by introducing uncertainty in the randomness of agents' arrivals in discrete time slots.

We analyze the first case by studying a simple random walk and show that with noise, both correct and incorrect cascades occur, with the same level of fragility; with noise, however, it is harder to overturn a cascade from one direction to the other. We show, somewhat surprisingly, that in certain cases, increasing the observation error rate (i.e., degrading the information quality) can lead to higher welfare for all but a finite number of agents.

In the second case, we use a combination of a martingale and a Markov chain formulation to study the convergence behavior of agents' actions as a function of the signal quality and the review strength. We find that for a good state, the probability of a wrong cascade is not monotonic in the signal quality, the review strength, or the fraction of reviews used. For a bad state, the expected time until a correct cascade is decreasing in the review strength and the fraction of reviews used, but not in the signal quality.

In the third case, we show that introducing some uncertainty, by reducing the probability that an agent arrives in each time slot, can reduce the probability of a wrong cascade.

In all three models, we uncover fundamentally counter-intuitive results: more information does not universally lead to improvements. In particular, less observation noise, more reviews, better reviews, or less uncertainty in the agents' arrivals does not necessarily reduce the probability that agents make the wrong decision.
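As a rough illustration of the first information structure, the sketch below runs a Monte Carlo simulation of a classic BHW-style sequential-learning round in which each observed action is misread with probability `eps`. The decision rule (follow the observed majority once it leads by two, otherwise follow one's own signal) is the standard noiseless cascade heuristic, and the function and parameter names (`simulate_cascade`, `p`, `eps`) are illustrative assumptions, not the thesis's exact model.

```python
import random

def simulate_cascade(n_agents, p, eps, seed=None):
    """Simulate one round of BHW-style sequential learning.

    The true state is fixed to 'good' (1). Each agent receives a private
    signal that matches the state with probability p, observes all
    predecessors' actions through a noisy channel that flips each action
    with probability eps, and applies the noiseless majority rule.
    Returns the last agent's action (1 = correct).
    """
    rng = random.Random(seed)
    actions = []
    for _ in range(n_agents):
        signal = 1 if rng.random() < p else 0
        # Observe each predecessor's action, misread with probability eps.
        observed = [a if rng.random() >= eps else 1 - a for a in actions]
        diff = sum(1 if a == 1 else -1 for a in observed)
        if diff >= 2:
            action = 1        # perceived up-cascade: ignore own signal
        elif diff <= -2:
            action = 0        # perceived down-cascade
        else:
            action = signal   # no clear majority: follow own signal
        actions.append(action)
    return actions[-1]

def wrong_cascade_rate(trials, n_agents, p, eps):
    """Estimate the probability that the last agent acts incorrectly."""
    wrong = sum(1 - simulate_cascade(n_agents, p, eps, s)
                for s in range(trials))
    return wrong / trials
```

Varying `eps` in `wrong_cascade_rate` lets one probe the abstract's claim numerically: with `eps > 0` observed majorities can dissolve, so cascades in either direction remain fragile rather than locking in permanently.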