Fed2Vec

Description: In this project, I present Fed2Vec, a Doc2Vec model trained on speeches by members of the Federal Reserve Board of Governors. The model learns word and document embeddings of Federal Reserve language, capturing its semantics and context. The model can be used to study a range of research questions related to central banks, monetary policy, and public policy in general. Some potential uses include:

  1. Find the closest speeches between different speakers to compare monetary policy stances (see the sketch after this list).
  2. Compute embeddings for speeches by other central banks and compare them against the Federal Reserve's.
  3. Compare distances between speeches by Federal Reserve Chairs and individual governors.
  4. Use speeches to predict financial market fluctuations, macroeconomic outcomes, or future monetary policy stances.
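
As an illustration of the first and third uses, the sketch below trains a Doc2Vec model on a handful of speech texts and retrieves the speeches closest to a given one in embedding space. It is a minimal example built on gensim; the `speeches` list, the speaker/date tags, and the hyperparameters are placeholders rather than the exact setup used in this project.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

# Placeholder corpus: (tag, speech text) pairs; in practice these would be
# full speech transcripts scraped from the Federal Reserve website.
speeches = [
    ("powell_2019_06_04", "Monetary policy remains accommodative ..."),
    ("yellen_2016_08_26", "The case for an increase in the federal funds rate ..."),
    ("brainard_2020_02_21", "The labor market continues to strengthen ..."),
]

# Tag each speech so its document vector can be looked up later.
corpus = [
    TaggedDocument(words=simple_preprocess(text), tags=[tag])
    for tag, text in speeches
]

# Train Doc2Vec; hyperparameters here are illustrative, not tuned.
model = Doc2Vec(corpus, vector_size=100, window=5, min_count=1, epochs=40)

# Retrieve the speeches most similar to a given speech, e.g. to compare
# the monetary policy stances of different speakers.
print(model.dv.most_similar("powell_2019_06_04", topn=2))
```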

This project features web scraping, deep learning for NLP, dimensionality reduction, and the visualization and analysis of word and document embeddings.
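
For the dimensionality reduction and visualization step, one common approach is to project the learned document vectors into two dimensions and plot them. The sketch below does this with scikit-learn's t-SNE and matplotlib, assuming a trained `model` like the one above; whether this project uses t-SNE, PCA, or another method is not specified here.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Document vectors and their tags from a trained Doc2Vec model (see above).
tags = model.dv.index_to_key
vectors = model.dv.vectors

# Project the high-dimensional speech embeddings into 2D.
# Perplexity must be smaller than the number of documents.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(vectors)

# Scatter plot of speeches; nearby points suggest semantically similar speeches.
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(coords[:, 0], coords[:, 1])
for (x, y), tag in zip(coords, tags):
    ax.annotate(tag, (x, y), fontsize=8)
ax.set_title("t-SNE projection of Fed speech embeddings")
plt.show()
```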

The GitHub repository for this project can be found here.

Ancil Crayton
Senior Research Scientist

My research interests lie at the intersection of machine learning, economic analysis, and public policy.