 

How to use textsum?

I've been following this link to use textsum. I've trained the model using the command provided, but I don't see any 'train' folder in the 'textsum/log_root/' directory. Since training is done on a sample file, will the model work on real test data? If not, how can I make my own training data and train the model? And most importantly, how can I test / use the model to see the resulting summarization?

asked Aug 29 '16 by Selva Saravana Er


1 Answer

I honestly can't say why you wouldn't see a train folder in the log_root directory if you passed all your parameters correctly. One other thing to check is whether you've simply waited long enough. When you kick off your training run with textsum, do you see any verbose logs reporting an error, such as no file list being found? If so, the path being passed to one of the params is probably off. The paths are also relative to the directory you launch from, so make sure you are at the root path where your WORKSPACE file is.
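For reference, this is roughly what my training invocation looks like, run from the repo root where the WORKSPACE file lives. The flag names are the ones from the textsum README as I remember them, and the data/vocab paths below are just placeholders for wherever your files actually sit, so double-check against your own checkout:

    # Build once (drop --config=cuda if you are running CPU only).
    bazel build -c opt --config=cuda textsum/...

    # Train. All paths are relative to the directory you launch from.
    bazel-bin/textsum/seq2seq_attention \
      --mode=train \
      --article_key=article \
      --abstract_key=abstract \
      --data_path=data/training-* \
      --vocab_path=data/vocab \
      --log_root=textsum/log_root \
      --train_dir=textsum/log_root/train

In my experience, if one of those paths is wrong the job complains about an empty file list (or just sits there) rather than failing loudly, which is exactly the situation where the train folder never shows up.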

Another thing: are you using the CPU or GPU? If you are on the CPU, it takes a while for the model to get to the point where it is even able to write out data. On a GPU this is much faster, but either way you need to wait until you see the "average_loss" logs start printing to your screen. Once you notice those, there is a good chance you will see your "train" folder with data in it.
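If you want to see whether anything is being written without staring at the console, you can also point TensorBoard at the log_root and list the train directory. This assumes the same log_root layout as the command above:

    # Watch the loss curves; events appear once summaries start being written.
    tensorboard --logdir=textsum/log_root

    # Checkpoint files land here once the model starts saving.
    ls textsum/log_root/train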

As for the "real-time" test data, I am still looking into this myself, and now that I have my current data training in the model, I am going to start on that as well. The direction, as I understand it so far, is that once you have trained your model and have your pickle file or whatever it is, you can then "serve" it using the info here: https://tensorflow.github.io/serving/

At that point your model is trained, you can query against it, and you can feed in new requests so that over time your model gets smarter. Again, I have not proven this yet with an example, but it is the approach I am going to start on soon.

With regards to "testing the model", you can pretty much follow the instructions provided on the textsum git: re-generate the vocab file, then train. After you get your average loss down to a small enough value, you can run decode against the data. In your log_root decode folder you will then see the generated headlines and their associated reference files (what the actual headline was). Hope this helps, and good luck!
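To make that concrete, here is roughly how I run the decode step once training has brought the loss down. Again, the flag names are from the textsum README as I remember them; the data_convert_example.py script (for converting your own plain-text data to the binary format the model reads, and back) was in my checkout, but verify it in yours since I am going from memory on its flags and the file paths are placeholders:

    # Optional: convert your own text data to the binary format the model
    # reads (use binary_to_text to inspect the sample data). Flags from memory.
    python textsum/data_convert_example.py \
      --command text_to_binary \
      --in_file data/my_text_data \
      --out_file data/my_binary_data

    # Decode: writes generated headlines plus the reference headlines
    # into textsum/log_root/decode.
    bazel-bin/textsum/seq2seq_attention \
      --mode=decode \
      --article_key=article \
      --abstract_key=abstract \
      --data_path=data/test-* \
      --vocab_path=data/vocab \
      --log_root=textsum/log_root \
      --decode_dir=textsum/log_root/decode \
      --beam_size=8

As far as I recall, the vocab file is just a plain-text list of word/count pairs, so when you swap in your own training data you regenerate it from that data before training.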

answered Oct 10 '22 by xtr33me