Additional BiMPM research and scikit-learn
At the beginning of this week, the server we needed to run the BiMPM source code for natural language sentence matching was still down (one of the many effects of Hurricane Irma), so I mainly focused on doing more research on the topic.
We wanted to use Wang, Hamza, and Florian's work for this research project because their "matching-aggregation" framework differs from previous ones: their model matches two sentences, P and Q, in both directions (P -> Q and Q -> P), and in each direction it matches the sentences from multiple perspectives. They evaluated the model on three tasks, namely paraphrase identification (using the "Quora Question Pairs" dataset), natural language inference, and answer selection, and reported state-of-the-art performance on all of them. Their ablation results also showed that removing any one of the matching strategies hurt performance significantly.
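To make the "multiple perspectives" idea concrete, here is a minimal NumPy sketch of the kind of multi-perspective cosine matching function the BiMPM paper describes: a trainable weight matrix W with one row per perspective re-weights the two vectors before each cosine similarity. This is an illustrative simplification (the function names, shapes, and the toy data are my own, not from the paper's code):

```python
import numpy as np

def multi_perspective_match(v1, v2, W):
    """Multi-perspective cosine match, in the style of BiMPM.

    v1, v2: d-dimensional vectors (e.g. one time step of the encodings
            of sentences P and Q).
    W:      (l, d) weight matrix; row W[k] re-weights the dimensions
            before a cosine similarity, giving l "perspectives".
    Returns an l-dimensional vector m with m[k] = cos(W[k]*v1, W[k]*v2).
    """
    a = W * v1  # broadcast: each row of W element-wise scales v1
    b = W * v2
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / np.maximum(den, 1e-8)  # guard against division by zero

# Toy check: matching a vector against itself should give cosine 1
# in every perspective.
rng = np.random.default_rng(0)
d, l = 4, 3
W = rng.normal(size=(l, d))
v = rng.normal(size=d)
m = multi_perspective_match(v, v, W)
print(m.shape)  # one similarity score per perspective
```

In the full model this is applied in both directions (P -> Q and Q -> P) with several matching strategies, and the resulting match vectors are aggregated before the final prediction layer.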
For the rest of the week, I began working through scikit-learn tutorials in order to learn machine learning in Python.
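The first examples in those tutorials follow a common fit/score pattern; a minimal version (my own choice of dataset and classifier, not a specific tutorial's) looks like:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the classic iris dataset that ships with scikit-learn
X, y = load_iris(return_X_y=True)

# Hold out a quarter of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a k-nearest-neighbors classifier and score it on held-out data
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on the test split
```

The same `fit`/`predict`/`score` interface carries over to nearly every estimator in the library, which is what makes it a good entry point.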