BERT encoding is very slow - Help

8 views (last 30 days)
Zzz on 7 May 2021
Answered: Ralf Elsas on 26 Feb 2023
I've been following this GitHub repo: https://github.com/matlab-deep-learning/transformer-models which is the MATLAB implementation of BERT.
While trying to encode my text with the tokenizer, following this script, I noticed that BERT encoding takes a very long time on my dataset.
My dataset contains 1000+ text entries, each roughly 1000 characters long. I noticed that the example CSV used in the GitHub repo contains only very short description text. My question is: how can we perform text preprocessing using BERT encoding? And how can we speed up the encoding process?
Thanks!

Accepted Answer

Divya Gaddipati on 13 May 2021
Here are a few things that you can try to speed up the tokenizer, which were suggested by the GitHub repo author (you can also find this information here):
1. Remove redundant white-space tokenization in BasicTokenizer
2. Convert basic tokenized tokens to UTF32 in one call in FullTokenizer, and modify WordPieceTokenizer to accept UTF32 as input.
3. Only call substring once in WordPieceTokenizer.
4. Remove input validation in WhitespaceTokenizer which may be called many times.
If the issue still exists, you could also create a new issue on the GitHub page itself.
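To make point 2 concrete, here is a hypothetical MATLAB sketch (not the repo's actual code; the tokenizer's internal function names may differ) of why batching a conversion into a single call helps: converting each token separately pays per-call overhead, while joining the tokens, converting once, and splitting by the recorded lengths does the same work in one call.

```matlab
% Illustrative only: batching a per-token conversion into one call.
tokens = ["hello", "world", "bert"];   % pretend these are basic-tokenized tokens

% Slow pattern: one conversion call per token
slow = cellfun(@(t) double(char(t)), cellstr(tokens), 'UniformOutput', false);

% Faster pattern: join once, convert once, then split by recorded lengths
joined = double(char(join(tokens, "")));
lens = strlength(tokens);
fast = mat2cell(joined, 1, lens);

isequal(slow, fast)   % true: identical results, far fewer conversion calls
```

The same principle applies to the other suggestions: hoisting validation and string extraction out of per-token loops removes overhead that is negligible for short example texts but dominates on 1000-character entries.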

More Answers (1)

Ralf Elsas on 26 Feb 2023
Hello! For everybody dealing with this issue: it can be solved easily with fastBERTtokens
