Description: In this project, I present Fed2Vec, a Doc2Vec model trained on speeches by members of the Federal Reserve Board of Governors. The model learns word and document embeddings that capture the semantics and context of Federal Reserve language. These embeddings can support many interesting research questions related to central banks, monetary policy, and public policy in general. Some potential uses of this model include:
- Find the closest speeches across different speakers to compare monetary policy stances.
- Infer embeddings for speeches by other central banks and see how they compare with the Federal Reserve's.
- Compare distances between speeches by Federal Reserve chairs and other governors.
- Use speeches to predict financial market fluctuations, macroeconomic outcomes, or future monetary policy stances.
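As a sketch of the first use case: once document embeddings have been inferred with a trained Doc2Vec model, the closest speech to a given one can be found by cosine similarity. The vectors and speech identifiers below are made up for illustration and are not real Fed2Vec outputs.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def closest_speech(query_id, embeddings):
    """Return the id of the speech most similar to query_id (excluding itself)."""
    query_vec = embeddings[query_id]
    return max(
        (sid for sid in embeddings if sid != query_id),
        key=lambda sid: cosine_similarity(query_vec, embeddings[sid]),
    )

# Hypothetical 3-dimensional document vectors, standing in for
# Doc2Vec embeddings inferred from Federal Reserve speeches.
speeches = {
    "powell_2019": [0.9, 0.1, 0.3],
    "yellen_2016": [0.8, 0.2, 0.35],
    "greenspan_2001": [0.1, 0.9, 0.5],
}

print(closest_speech("powell_2019", speeches))
```

In practice the vectors would come from the trained model (e.g. gensim's `Doc2Vec.infer_vector`), and the same distance comparison extends naturally to speeches from other central banks.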
This project features web scraping, deep learning for NLP, dimensionality reduction, and the visualization and analysis of word and document embeddings.
The GitHub repository for this project can be found here.