Music Modeling and Generation


Music is the ultimate language. Many amazing composers throughout history have written pieces that are both creative and deliberate. Composers such as Bach were well known for crafting precise pieces with a great deal of underlying musical structure. Is it possible, then, for a computer to learn to create such musical structure?


Automatic music generation is currently one of the hot topics in AI research, with big companies such as Sony investing in reviving old classics from artists like the Beatles and Michael Jackson. The problem of music generation is similar to that of language/text generation, but much more difficult: it is hard to generate likable, good-sounding music, and generating music with long-term structure remains one of the main challenges in the field of automatic composition.


Over the years, many different techniques have been proposed, some relying on standard NLP methods such as n-grams and Hidden Markov Models, others on more complex deep-learning-based methods. Standard NLP techniques, such as constructing language models over musical notes, are often unable to capture the essence of music, in which notes far apart from each other may still be related and complement one another.
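To make the n-gram idea concrete, here is a minimal sketch of a bigram (first-order Markov) model over note names. The note sequence and pitch labels are illustrative assumptions, not taken from any particular dataset; in practice the sequence would come from parsed MIDI files. The example also shows the technique's weakness: the next note depends only on the current one, so long-range structure is lost.

```python
import random
from collections import defaultdict

# Toy note sequence (in a real system this would come from parsed MIDI)
notes = ["C4", "E4", "G4", "C4", "E4", "G4", "B3", "C4", "E4", "G4"]

# Count bigram transitions: how often each note follows another
transitions = defaultdict(lambda: defaultdict(int))
for cur, nxt in zip(notes, notes[1:]):
    transitions[cur][nxt] += 1

def sample_next(note, rng):
    """Sample the next note in proportion to bigram counts."""
    choices = transitions[note]
    total = sum(choices.values())
    r = rng.random() * total
    for nxt, count in choices.items():
        r -= count
        if r <= 0:
            return nxt
    return nxt

# Generate a short continuation from a seed note
rng = random.Random(0)
generated = ["C4"]
for _ in range(8):
    generated.append(sample_next(generated[-1], rng))
print(generated)
```

Because the model conditions on only one previous note, any phrase-level or piece-level structure in the training melody cannot be reproduced, which is exactly the limitation the paragraph above describes.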


A popular approach to this problem is a special kind of recurrent neural network called the LSTM. Long Short-Term Memory networks, usually just called "LSTMs", are a special kind of RNN capable of learning long-term dependencies. They were introduced by Hochreiter & Schmidhuber (1997), work tremendously well on a large variety of problems, and are now widely used in music-generation research.
All recurrent neural networks have the form of a chain of repeating neural-network modules. LSTMs are explicitly designed to avoid the long-term dependency problem, so remembering information for long periods of time is practically their default behavior.
Different ways of encoding music for use with LSTMs have been proposed. Some works build a vocabulary, and subsequently a language, from instrument notes; others focus on converting raw music files into a text-representable form for further processing.
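One common text-style encoding turns each note event into a single token, so that a language model (n-gram or LSTM) can treat music like words. Below is a minimal sketch assuming a hypothetical `pitch_duration` token format; the note names and durations are illustrative and not taken from any specific system.

```python
# Each (pitch, duration) event becomes one token such as "C4_quarter",
# giving a discrete vocabulary a sequence model can consume.
events = [("C4", "quarter"), ("E4", "quarter"), ("G4", "half"),
          ("C4", "quarter"), ("E4", "quarter"), ("G4", "half")]

# Encode events as tokens and build an index-based vocabulary
tokens = [f"{pitch}_{dur}" for pitch, dur in events]
vocab = sorted(set(tokens))
token_to_id = {tok: i for i, tok in enumerate(vocab)}

# Integer sequence suitable for an LSTM's embedding layer
encoded = [token_to_id[tok] for tok in tokens]

def decode(ids):
    """Map integer ids back to (pitch, duration) events."""
    return [tuple(vocab[i].split("_")) for i in ids]

print(tokens)
print(encoded)
```

The decode step makes the representation lossless for these events, which matters because generated token sequences must eventually be converted back into playable notes.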




Music generation is still an active area of research. The following are some areas where more work is required:
  • Creating music with musical rhythm, more complex structure, and a larger vocabulary that uses all types of notes
  • Creating a model capable of learning long-term structure, with the ability to build off a melody and return to it throughout the piece


Some popular systems for automatic music generation are MorpheuS and GRUV.

