
Posts

Posting Elsewhere

FYI: most of my blog-style writing on software development and political economy is now on Medium, with the political economy pieces also appearing at Bruegel.
Recent posts

End of 2015 Blog Roundup

Over the past few months I've mostly been blogging at a number of other venues. These include:

- A piece with Mark Hallerberg in Democracy Audit UK summarising our research on how, despite previous findings, democratic governments run bank bailout tabs just as sizable as those of autocracies. This wasn't noticed in previous work because democratic governments have an incentive (the possibility of losing elections) to shift the realisation of these costs into the future.
- A post over at Bruegel introducing the Financial Supervisory Transparency Index that Mark Copelovitch, Mark Hallerberg, and I created. We also discuss supervisory transparency's implications for a European capital markets union.
- At VoxUkraine, I discuss the causes of, and possible solutions to, brawling in the Ukrainian parliament, based on my recent research in the Journal of Peace Research.
- I didn't write this one, but my co-author Tom Pepinsky wrote a nice piece about a new working paper we have on the (dif…

More Corrections to the DPI's yrcurnt Election Timing Variable: OECD Edition

Previously on The Political Methodologist, I posted updates to the Database of Political Institutions' election timing variable: yrcurnt. That set of corrections was only for the current 28 EU member states. I've now expanded the corrections to include most other OECD countries.[1] Again, there were many missing elections:

Change list
- Australia: corrects the missing 1998 election year.
- Canada: corrects the missing 2000, 2006, 2008, and 2011 election years.
- Iceland: corrects the missing 2009 election year.
- Ireland: corrects the missing 2011 election.
- Japan: corrects the missing 2005 and 2012 elections; also corrects the misclassification of the 2003 and 2009 elections, which were originally erroneously labeled as being in 2004 and 2008, respectively.

Import into R
To import the most recent corrected version of the data into R, simply use: election_time <- rio::import('https://raw.githubusercontent.com/christophergandrud/yrcurnt_corrected/master/data/yrcurnt…
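The URL in the excerpt above is cut off. As a minimal sketch of the import, assuming the corrected data is a CSV in the repository's data directory (the exact file name below is a guess; check the christophergandrud/yrcurnt_corrected repository for the current path):

# Sketch: import the corrected yrcurnt data into R with rio.
# The file name 'yrcurnt_corrected.csv' is an assumption, not confirmed by the post.
url <- paste0('https://raw.githubusercontent.com/christophergandrud/',
              'yrcurnt_corrected/master/data/',
              'yrcurnt_corrected.csv')

election_time <- rio::import(url, format = 'csv')

# quick sanity check of the corrected election timing variable
head(election_time)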

A Link Between topicmodels LDA and LDAvis

Carson Sievert and Kenny Shirley have put together the really nice LDAvis R package. It provides a Shiny-based interactive interface for exploring the output from Latent Dirichlet Allocation topic models. If you've never used it, I highly recommend checking out their XKCD example (this paper also has some nice background). LDAvis doesn't fit topic models; it just visualises the output. As such, it is agnostic about which package you use to fit your LDA topic model. They have a useful example of how to use output from the lda package. I wanted to use LDAvis with output from the topicmodels package, which works really nicely with texts preprocessed using the tm package. The trick is extracting the information LDAvis requires from the model and placing it into a specifically structured JSON-formatted object. To make the conversion from topicmodels output to LDAvis JSON input easier, I created a linking function called topicmodels_json_ldavis. The full function is below. To…
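The excerpt is cut off before the function itself. As a rough sketch of what such a linking function might look like (not necessarily the code from the original post), using topicmodels::posterior() to pull out the fitted distributions and LDAvis::createJSON() to build the JSON object:

# Sketch: convert a topicmodels::LDA fit plus its tm DocumentTermMatrix
# into the JSON object LDAvis expects. Names here are illustrative.
library(topicmodels)
library(LDAvis)
library(slam)

topicmodels_json_ldavis <- function(fitted, dtm) {
    # posterior() returns the estimated term and topic distributions
    post  <- topicmodels::posterior(fitted)
    phi   <- post$terms    # topic-term probabilities (topics x terms)
    theta <- post$topics   # document-topic probabilities (docs x topics)

    # document lengths and corpus-wide term frequencies from the DTM
    doc_length <- slam::row_sums(dtm)
    term_freq  <- slam::col_sums(dtm)
    vocab      <- colnames(phi)

    LDAvis::createJSON(phi = phi, theta = theta,
                       vocab = vocab,
                       doc.length = doc_length,
                       term.frequency = term_freq)
}

# usage (assuming lda_fit is a topicmodels::LDA object fit on dtm):
# json <- topicmodels_json_ldavis(lda_fit, dtm)
# LDAvis::serVis(json)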

Simulated or Real: What type of data should we use when teaching social science statistics?

I just finished teaching a new course on collaborative data science to social science students. The materials are on GitHub if you're interested. What did we do and why? Maybe the most unusual thing about this class, from a statistics pedagogy perspective, was that it was entirely focused on real-world data: data that the students gathered themselves. I gave them virtually no instruction on what data to gather; they gathered data they felt would help them answer their research questions. Students directly confronted the data warts that usually consume a large proportion of researchers' actual time. My intention was that the students systematically learn tools and best practices for addressing these warts. This is in contrast to much of social scientists' statistics education. Typically, students are presented with pre-arranged data. They are then asked to perform some statistical function with it. The end. This leaves students underprepared for actually using statist…

Set up R/Stan on Amazon EC2

A few months ago I posted the script that I use to set up my R/JAGS working environment on an Amazon EC2 instance. Since then I've largely transitioned to using R/Stan to estimate my models. So, I've updated my setup script (see below). There are a few other changes:

- I don't install/use RStudio on Amazon EC2. Instead, I just use R from the terminal. Don't get me wrong, I love RStudio. But since what I'm doing on EC2 is just running simulations (I handle the results on my local machine), RStudio is overkill.
- I don't install git anymore. Instead, I use source_url (from devtools) and source_data (from repmis) to source scripts from GitHub. Again, all of the manipulation I'm doing to these scripts is on my local machine.
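For the GitHub-sourcing part, a minimal sketch of what this could look like inside an R session on the EC2 instance (the repository URLs below are placeholders, not real paths):

# Sketch: pull scripts and data straight from GitHub instead of cloning with git.
library(devtools)
library(repmis)

# run an analysis script hosted on GitHub (placeholder URL)
devtools::source_url('https://raw.githubusercontent.com/USER/REPO/master/analysis/run_stan_model.R')

# load a data set hosted on GitHub (placeholder URL)
my_data <- repmis::source_data('https://raw.githubusercontent.com/USER/REPO/master/data/my_data.csv')

This keeps the EC2 instance stateless: the scripts and data live on GitHub, the simulations run remotely, and all editing happens on the local machine.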