What is the Russia Watcher?
The Russia Watcher is a survey project designed to collect high-frequency public opinion data in Russia. It was created in response to the war in Ukraine as a mechanism for understanding how public attitudes toward the conflict were developing and why Russians were continuing to support the war. Its high frequency allows us to observe the ways in which Russians react to events as they develop on the ground.
Who runs the Russia Watcher?
The Russia Watcher is run by a team of political scientists at Princeton University: Grigore Pop-Eleches, Isabelle DeSisto, Laura Howells, and Jacob Tucker.
Who funds the Russia Watcher?
The Russia Watcher is generously funded by a variety of sponsors, including the Liechtenstein Institute on Self-Determination at Princeton University, the Princeton School of Public and International Affairs, the Mamdouha S. Bobst Center for Peace and Justice, the Program in Russian, East European and Eurasian Studies at the Princeton Institute for International and Regional Studies, and the Data Driven Social Science Initiative at Princeton University.
Can we trust public opinion data in Russia?
Since the beginning of Russia's full-scale invasion of Ukraine on February 24, 2022, this question has often been posed in response to surveys showing high levels of support for the conflict. When it comes to understanding Russian public opinion using survey data, there are two potential sources of bias: preference falsification and sampling bias. The former, preference falsification, occurs when respondents feel that they cannot answer survey questions honestly and therefore give what they think is the socially or politically acceptable answer. The latter, sampling bias, occurs when a survey is conducted in a way that does not yield a representative sample of the population. In designing this project, we take each of these concerns seriously:
DO RUSSIANS HONESTLY ANSWER QUESTIONS ABOUT THE WAR IN UKRAINE?
The evidence we have collected so far suggests that they do. We have tested the hypothesis that people are misreporting their opinions in a number of ways, including list experiments and a series of questions about the degree to which respondents feel they can express their political opinions openly. List experiments use random assignment and aggregated item counts to anonymize responses, allowing respondents to answer honestly without worrying that their answers will become known to the surveyor. They are commonly used to estimate support for sensitive issues. None of these tests has produced evidence that our respondents are falsifying their preferences. The details of these tests can be found on our methods page.
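For readers unfamiliar with the mechanics, the snippet below is a minimal sketch of the difference-in-means estimator commonly used to analyze a list experiment. All of the numbers in it are simulated placeholders, not Russia Watcher data; our actual designs and estimates are documented on the methods page.

```python
import numpy as np

# Simulated example only: group sizes and counts are hypothetical placeholders.
rng = np.random.default_rng(0)
control_counts = rng.integers(0, 4, size=500)    # control list: 3 non-sensitive items (counts 0-3)
treatment_counts = rng.integers(0, 5, size=500)  # treatment list: same 3 items plus the sensitive item (counts 0-4)

# Difference-in-means estimator: each respondent reports only how many items
# they agree with, so no individual answer to the sensitive item is revealed,
# yet the gap between group means estimates the share endorsing it.
estimate = treatment_counts.mean() - control_counts.mean()
se = np.sqrt(treatment_counts.var(ddof=1) / len(treatment_counts)
             + control_counts.var(ddof=1) / len(control_counts))

print(f"Estimated prevalence of the sensitive attitude: {estimate:.3f} (SE {se:.3f})")
```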
IS THE SURVEY SAMPLE REPRESENTATIVE OF THE RUSSIAN POPULATION?
There are many ways to draw a survey sample of a population, and each has advantages and disadvantages. We sample using an internet-based method called Random Device Engagement (RDE), which was developed as the online counterpart to Random Digit Dialing (RDD). RDE sampling has been shown to perform as well as, if not better than, many traditional sampling techniques, and our own tests suggest that the samples we have collected perform comparably. We have sought to validate the samples in a number of ways, including comparisons with known characteristics of the Russian population and comparisons with other surveys conducted at the same time. The details of these tests can be found on the methods page.
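To give a concrete, purely illustrative sense of what a benchmark comparison involves, the sketch below checks sample shares against population targets. Every category label and figure here is a hypothetical placeholder, not our data or official statistics.

```python
# Hypothetical benchmark check: compare the share of respondents in a few
# demographic groups to published population figures. All numbers are
# illustrative placeholders.
sample_shares = {"female": 0.57, "age_18_34": 0.30, "higher_education": 0.41}
population_shares = {"female": 0.54, "age_18_34": 0.26, "higher_education": 0.31}

for group, pop_share in population_shares.items():
    gap = sample_shares[group] - pop_share
    print(f"{group}: sample {sample_shares[group]:.2f}, "
          f"population {pop_share:.2f}, gap {gap:+.2f}")
```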
Even though some sampling methods perform better than others, no survey sample is perfect. To account for observable sampling imbalances, we use reweighting techniques in all of the results that we present. In many of our research articles we use traditional raking weights. To accurately estimate change over time, we estimate daily support using a dynamic multilevel regression with poststratification. The full documentation for these methods can be found on the methods page.
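As an illustration of the simpler of these two approaches, the sketch below implements raking (iterative proportional fitting) on a toy two-variable example. The cell counts and target margins are hypothetical, and the dynamic multilevel regression with poststratification we use for daily estimates is more involved than this sketch.

```python
import numpy as np

# Toy raking example: rows are gender (male, female), columns are age group
# (18-34, 35-54, 55+). Counts and target shares are hypothetical placeholders.
sample_counts = np.array([[80.0, 90.0, 60.0],
                          [100.0, 110.0, 60.0]])
total = sample_counts.sum()
target_row_margins = np.array([0.46, 0.54]) * total        # gender targets
target_col_margins = np.array([0.26, 0.35, 0.39]) * total  # age targets

# Iterative proportional fitting: alternately rescale rows and columns until
# the weighted table matches both sets of population margins.
weights = sample_counts.copy()
for _ in range(100):
    weights *= (target_row_margins / weights.sum(axis=1))[:, None]
    weights *= target_col_margins / weights.sum(axis=0)

# Per-respondent weights are the ratio of the raked cell total to the raw count.
cell_weights = weights / sample_counts
print(np.round(cell_weights, 3))
```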