Week Four: 9/25 – 9/29

(Finally) Running BiMPM source code

Our server is back up! This week, I was finally able to start running the BiMPM source code and begin training models on a real dataset.

To start off, I downloaded the Quora Question Pairs dataset that was also used in the paper.

I had to use the command below to run it with Python and TensorFlow (the long prefix sets up the library and toolchain paths on our server):

Boost_DIR=/opt/boost-1.63.0-gcc4.9/ OpenCV_DIR=/opt/opencv2.4-gcc4.9/ PKG_CONFIG_PATH=/opt/libzip-1.2.0-gcc4.9/lib/pkgconfig PATH=/opt/openexr-2.2.0-bin/bin:/opt/protobuf-3.2.0-gcc4.9/bin/:/opt/python2.7-gcc4.9/bin:$PATH PYTHONPATH=/opt/dlib-19.3-gcc4.9/:/opt/opencv2.4-gcc4.9/lib:/opt/opencv2.4-gcc4.9/lib/python2.7/site-packages/ LD_LIBRARY_PATH=/opt/openexr-2.2.0-bin/lib:/opt/python2.7-gcc4.9/lib/:/opt/boost-1.63.0-gcc4.9/lib/:/opt/opencv2.4-gcc4.9/lib/:/opt/protobuf-3.2.0-gcc4.9/lib:/opt/glog-gcc4.9-bin/lib:/opt/gflags-gcc4.9-bin/lib:/opt/snappy-gcc4.9-bin/lib/:/opt/cuda/lib64/:/opt/libzip-1.2.0-gcc4.9/lib/ python src/SentenceMatchTrainer.py

It seems, however, that more configuration is still needed to make this command work.
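Retyping that environment prefix for every run is error-prone, so one option is a small launcher that builds the environment once. The sketch below copies the paths from the command above into a Python wrapper; `run_trainer` is a hypothetical convenience function, not part of BiMPM itself.

```python
import os
import subprocess

# Fixed overrides, copied verbatim from the long command above.
BIMPM_SET = {
    "Boost_DIR": "/opt/boost-1.63.0-gcc4.9/",
    "OpenCV_DIR": "/opt/opencv2.4-gcc4.9/",
    "PKG_CONFIG_PATH": "/opt/libzip-1.2.0-gcc4.9/lib/pkgconfig",
    "PYTHONPATH": "/opt/dlib-19.3-gcc4.9/:/opt/opencv2.4-gcc4.9/lib:"
                  "/opt/opencv2.4-gcc4.9/lib/python2.7/site-packages/",
    "LD_LIBRARY_PATH": "/opt/openexr-2.2.0-bin/lib:/opt/python2.7-gcc4.9/lib/:"
                       "/opt/boost-1.63.0-gcc4.9/lib/:/opt/opencv2.4-gcc4.9/lib/:"
                       "/opt/protobuf-3.2.0-gcc4.9/lib:/opt/glog-gcc4.9-bin/lib:"
                       "/opt/gflags-gcc4.9-bin/lib:/opt/snappy-gcc4.9-bin/lib/:"
                       "/opt/cuda/lib64/:/opt/libzip-1.2.0-gcc4.9/lib/",
}
# PATH keeps whatever was already there, with the toolchain dirs prepended.
BIMPM_PATH_PREFIX = ("/opt/openexr-2.2.0-bin/bin:/opt/protobuf-3.2.0-gcc4.9/bin/:"
                     "/opt/python2.7-gcc4.9/bin")

def bimpm_env(base=None):
    """Return a copy of the environment with the BiMPM toolchain paths applied."""
    env = dict(os.environ if base is None else base)
    env.update(BIMPM_SET)
    env["PATH"] = BIMPM_PATH_PREFIX + ":" + env.get("PATH", "")
    return env

def run_trainer(extra_args):
    """Launch the trainer under that environment (hypothetical wrapper)."""
    cmd = ["python", "src/SentenceMatchTrainer.py"] + list(extra_args)
    return subprocess.call(cmd, env=bimpm_env())
```

With this in place, a run becomes `run_trainer(["--train_path", "train.tsv", ...])` instead of the full one-line prefix.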

I used the command below (in addition to the one above) to train my model:

python BiMPM/src/SentenceMatchTrainer.py --train_path train.tsv --dev_path dev.tsv --test_path test.tsv --word_vec_path wordvec.txt --suffix sample --fix_word_vec --model_dir models --MP_dim 20
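Before a long training run, it is worth sanity-checking the TSV files. If I am reading BiMPM's data loader correctly, each line should be label<TAB>sentence1<TAB>sentence2, with 0/1 labels for Quora; the helper below is only a sketch that encodes that assumption.

```python
def bad_lines(lines):
    """Report (line_number, problem) pairs for rows that don't fit the
    assumed label<TAB>sentence1<TAB>sentence2 layout (labels 0/1)."""
    problems = []
    for i, line in enumerate(lines, 1):
        cols = line.rstrip("\n").split("\t")
        if len(cols) < 3:
            problems.append((i, "expected at least 3 tab-separated columns"))
        elif cols[0] not in ("0", "1"):
            problems.append((i, "label should be 0 or 1"))
    return problems

# usage: with open("train.tsv") as f: print(bad_lines(f)[:10])
```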

I had issues importing rnn_cell.
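The rnn_cell import failure looks like a TensorFlow version mismatch: that module has lived at different paths across releases (e.g. `tensorflow.python.ops.rnn_cell` in 0.x versus `tensorflow.contrib.rnn` in early 1.x). A generic probe like the sketch below can locate whichever path the installed version provides; `first_importable` is a hypothetical helper of my own, not BiMPM code.

```python
import importlib

def first_importable(candidates):
    """Return the first module in candidates that imports cleanly, else None."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    return None

# Candidate locations rnn_cell has lived at across TensorFlow releases;
# rnn_cell stays None if none of them is available.
rnn_cell = first_importable([
    "tensorflow.python.ops.rnn_cell",  # TF 0.x
    "tensorflow.contrib.rnn",          # early TF 1.x
])
```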

I still have to experiment with the other arguments to get better performance on this dataset, for example the configuration Wang used below:

{
    "dropout_rate": 0.1,
    "suffix": "quora",
    "NER_dim": 20,
    "highway_layer_num": 1,
    "with_match_highway": true,
    "optimize_type": "adam",
    "with_highway": true,
    "max_epochs": 10,
    "with_aggregation_highway": true,
    "with_filter_layer": false,
    "lex_decompsition_dim": -1,
    "aggregation_layer_num": 1,
    "max_char_per_word": 10,
    "wo_maxpool_match": false,
    "context_layer_num": 1,
    "wo_full_match": false,
    "lambda_l2": 0.0,
    "fix_word_vec": true,
    "wo_left_match": false,
    "with_NER": false,
    "aggregation_lstm_dim": 300,
    "context_lstm_dim": 100,
    "POS_dim": 20,
    "with_lex_decomposition": false,
    "learning_rate": 0.001,
    "with_POS": false,
    "wo_right_match": false,
    "MP_dim": 10,
    "max_sent_length": 100,
    "batch_size": 60,
    "wo_max_attentive_match": false,
    "wo_char": false,
    "wo_attentive_match": false,
    "char_emb_dim": 20,
    "char_lstm_dim": 100,
    "word_level_MP_dim": -1,
    "base_dir": "./quora"
}
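To try Wang's settings without retyping every flag, the key/value pairs above could be converted into command-line arguments. The sketch below assumes the JSON keys match the trainer's argparse option names and that booleans are plain on/off flags; that second point is my guess, so the exact handling should be checked against src/SentenceMatchTrainer.py.

```python
def config_to_flags(cfg):
    """Turn a {name: value} config dict into a flat argv list.

    Assumption: boolean True -> bare --flag, False -> omitted,
    everything else -> --key value.
    """
    argv = []
    for key, value in sorted(cfg.items()):
        if isinstance(value, bool):
            if value:
                argv.append("--" + key)
        else:
            argv.extend(["--" + key, str(value)])
    return argv

# usage: subprocess-style argv for a saved config
# config_to_flags(json.load(open("quora.config.json")))
```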

Furthermore, I have also tried testing the model using the command below (again, in addition to the environment setup in the first snippet above):

python BiMPM/src/SentenceMatchDecoder.py --in_path test.tsv --word_vec_path wordvec.txt --mode prediction --model_prefix models/SentenceMatch.sample --out_path test.prediction
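Once the decoder writes test.prediction, the next step is to score it against the gold labels from test.tsv. I have not verified the output format yet, so the sketch below just assumes one predicted label per line, aligned row-for-row with the input:

```python
def accuracy(gold, predicted):
    """Fraction of rows where the predicted label matches the gold label."""
    if len(gold) != len(predicted):
        raise ValueError("prediction/label count mismatch")
    if not gold:
        return 0.0
    return sum(g == p for g, p in zip(gold, predicted)) / float(len(gold))

# usage sketch (assumes label is the first TSV column):
# gold = [line.split("\t")[0] for line in open("test.tsv")]
# pred = [line.strip() for line in open("test.prediction")]
# print(accuracy(gold, pred))
```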

Although I still have some work and research to do to make this fully functional, we are getting there!