With the advent of Internet technologies, users now have huge collections of text, audio, and video at their fingertips. Although this explosion of information has its benefits, it has brought serious challenges for the retrieval and discovery of such content. Given recent advances in text information retrieval, especially probabilistic topic models, it would be desirable to apply these successful techniques to media other than text. However, the fundamental differences between the observations in text and in other media have made the use of these models quite limited. There are various proposals for overcoming this problem; G-LDA is one such model. G-LDA is a probabilistic topic model for audio documents that extends standard LDA to continuous spaces. In G-LDA, topic distributions are assumed to be Gaussian, an assumption that may not hold across different applications. In this thesis, we propose an extension of G-LDA to Gaussian mixtures, called GM-LDA. Replacing Gaussian topics with mixtures enables the model to learn multi-modal, non-Gaussian topics as well. This change allows better modeling of documents in a more compact topic space. We also explore the use of this model in document modeling, genre classification, and song auto-tagging tasks, with acceptable results. Compared with G-LDA, the topics learned by GM-LDA perform 11% better in genre classification while reducing the number of topics by 40%. In the song auto-tagging task, the performance of the learned topics is comparable with G-LDA and other models.

Key Words: Music information retrieval, Probabilistic topic modeling, Gaussian mixtures, Latent Dirichlet allocation, Continuous feature space
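To illustrate the modeling difference the abstract describes, the following is a minimal sketch (not the thesis implementation): a G-LDA style topic is a single Gaussian over the continuous feature space, while a GM-LDA style topic is a Gaussian mixture that can place probability mass around several modes. All means, variances, and weights below are illustrative assumptions, not learned values.

```python
import numpy as np

rng = np.random.default_rng(0)

# G-LDA style topic: a single Gaussian over a 1-D continuous feature.
# Parameters are illustrative, not learned from audio data.
def sample_gaussian_topic(n, mean=0.0, std=1.0):
    return rng.normal(mean, std, size=n)

# GM-LDA style topic: a Gaussian mixture, able to capture
# multi-modal, non-Gaussian feature distributions.
def sample_mixture_topic(n, means=(-3.0, 3.0), stds=(1.0, 1.0), weights=(0.5, 0.5)):
    # Pick a mixture component per sample, then draw from that Gaussian.
    components = rng.choice(len(means), size=n, p=weights)
    return rng.normal(np.take(means, components), np.take(stds, components))

g = sample_gaussian_topic(10_000)
m = sample_mixture_topic(10_000)

# The single Gaussian is unimodal around its mean; the mixture
# splits its mass between the two modes at -3 and +3.
print(round(float(g.mean()), 1))
print(round(float((m < 0).mean()), 2), round(float((m > 0).mean()), 2))
```

A single Gaussian forced to cover both modes would need a large variance and would put spurious mass between them; the mixture avoids this, which is the intuition behind GM-LDA needing fewer topics.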