Does the selfAttentionLayer also perform softmax and scaling?

Chih on 3 April 2023
A self-attention layer computes single-head or multi-head self-attention of its input.
The layer:
  1. Computes the queries, keys, and values from the input
  2. Computes the scaled dot-product attention across heads using the queries, keys, and values
  3. Merges the results from the heads
  4. Performs a linear transformation on the merged result
I wonder whether the layer also applies the softmax and the scaling (i.e., divides Q*K' by sqrt(d_k)). My understanding is that, within step 2, both the scaling and the softmax should happen; the sketch below shows the computation I mean.
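To be concrete, this is the scaled dot-product attention I expect inside step 2, sketched for a single head with toy sizes (my own illustration in plain MATLAB, not the toolbox source):

    dk = 4;                                     % key/query dimension
    T  = 3;                                     % sequence length
    Q  = randn(T, dk);                          % queries
    K  = randn(T, dk);                          % keys
    V  = randn(T, dk);                          % values
    scores  = (Q * K') ./ sqrt(dk);             % scaling: divide dot products by sqrt(dk)
    expS    = exp(scores - max(scores, [], 2)); % subtract row max for numerical stability
    weights = expS ./ sum(expS, 2);             % row-wise softmax over the keys
    attnOut = weights * V;                      % attention-weighted sum of the values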
Please clarify this for me and for other users.
Thanks.

Accepted Answer

Rohit on 20 April 2023
I understand that you want to know whether ‘selfAttentionLayer’ performs the softmax and scaling operations involved in computing the attention scores.
Yes, the layer performs both: it scales the dot products of the queries and keys (dividing by the square root of the key dimension) to obtain the attention scores, and then applies softmax to them, as required by the scaled dot-product attention mechanism.
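For reference, here is a minimal usage sketch (assuming the R2023a signature selfAttentionLayer(numHeads,numKeyChannels); the sizes below are arbitrary). The scaling and the attention softmax both happen inside the layer, so you only specify the number of heads and the channel sizes:

    layers = [
        sequenceInputLayer(12)        % 12 input features per time step
        selfAttentionLayer(4, 64)     % 4 heads, 64 key/query channels
        fullyConnectedLayer(5)
        softmaxLayer];                % classifier softmax, separate from the
                                      % attention softmax inside selfAttentionLayer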

Other Answers (1)

cui,xingxing on 11 January 2024
Edited: 27 April 2024, 2:05
Please check out the details of the code I wrote here: link.
-------------------------Off-topic interlude, 2024-------------------------------
I am currently looking for a job in CV algorithm development, based in Shenzhen, Guangdong, China, or a remote support position. I would be very grateful if anyone is willing to offer me a job or make a recommendation. My preliminary resume can be found at: https://cuixing158.github.io/about/ . Thank you!
Email: cuixingxing150@gmail.com
