How was the exampleWordEmbedding example in the text analytics toolbox trained, in detail?
The documentation for readWordEmbedding ships with a pre-trained example embedding, saying only that it was "derived by analyzing text from Wikipedia".
How was it trained?
Should we consider it a 'high quality' word embedding, better than anything a user could generate without extensive work and CPU time? Or is it a quick and dirty starting point, and we are encouraged to train our own for better performance?
Answers (1)
Christopher Creutzig
9 Mar 2020
The embedding is rather low-dimensional (50 dimensions) and has a small vocabulary (9,999 words). It is unlikely to be "high quality" unless your analysis happens to need precisely this dataset.
For production use, you will likely find fastTextWordEmbedding more useful; it downloads the pre-trained fastText English embedding from https://www.mathworks.com/matlabcentral/fileexchange/66229-text-analytics-toolbox-model-for-fasttext-english-16-billion-token-word-embedding for you.
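For comparison, a minimal sketch of loading both embeddings (assuming Text Analytics Toolbox is installed; the fastText call additionally requires the linked support package):

```matlab
% Load the small example embedding shipped with the toolbox
emb = readWordEmbedding("exampleWordEmbedding.vec");
emb.Dimension        % 50 dimensions, as noted above

% Load the pre-trained fastText English embedding
% (downloads the support package data on first use)
embFT = fastTextWordEmbedding;
embFT.Dimension      % 300 dimensions, much larger vocabulary

% Look up a word vector in either embedding
vec = word2vec(embFT, "matlab");
```

The fastText embedding trades a larger download for far better coverage and vector quality than the example file.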