Source: NewBeeNLP
Editor: 极市平台 (Jishi Platform)
Editor's note
How can attention be made more efficient? This post collects the relevant papers and summarizes each one's citation count, code implementation, algorithmic complexity, and key idea, so they are easy to compare.
A few days ago, while browsing GitHub, I came across the awesome-fast-attention list, which gathers a series of papers on efficient attention variants together with their citation counts, source implementations, algorithmic complexity, and key highlights.
GitHub repository:
https://github.com/Separius/awesome-fast-attention
Efficient Attention
| Paper (citations) | Implementation | Main idea |
|---|---|---|
| Generating Wikipedia by Summarizing Long Sequences[1] (208) | memory-compressed-attention[2] | compresses keys and values + blocked attention |
| CBAM: Convolutional Block Attention Module[3] (677) | attention-module[4] | combines SE attention with a per-pixel (local) weight |
| CCNet: Criss-Cross Attention for Semantic Segmentation[5] (149) | CCNet[6] | each pixel attends to its row and column simultaneously |
| Efficient Attention: Attention with Linear Complexities[7] (2) | efficient-attention[8] | Softmax(Q)*(Softmax(K^T)*V) (see the linear-attention sketch after this table) |
| Star-Transformer[9] (24) | fastNLP[10] | uses a relay (global) node and attends to/from that node |
| Generating Long Sequences with Sparse Transformers[11] (139) | torch-blocksparse[12] | sparse block-based attention |
| GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond[13] (96) | GCNet[14] | squeeze-and-excitation with attention pooling (instead of global average pooling) |
| SCRAM: Spatially Coherent Randomized Attention Maps[15] (1) | - | uses PatchMatch to find close keys |
| Interlaced Sparse Self-Attention for Semantic Segmentation[16] (13) | IN_PAPER | combines short-range attention with long-range (dilated) attention |
| Permutohedral Attention Module for Efficient Non-Local Neural Networks[17] (2) | Permutohedral_attention_module[18] | uses the permutohedral lattice approximation algorithm to approximate the attention output |
| Large Memory Layers with Product Keys[19] (28) | XLM[20] | searches for nearest-neighbor keys |
| Expectation-Maximization Attention Networks for Semantic Segmentation[21] (38) | EMANet[22] | applies expectation maximization to cluster keys into k clusters |
| Compressive Transformers for Long-Range Sequence Modelling[23] (20) | compressive-transformer-pytorch[24] | compresses distant tokens instead of just stop_grad()-ing them; a more efficient version of Transformer-XL |
| BP-Transformer: Modelling Long-Range Context via Binary Partitioning[25] (8) | BPT[26] | attends to distant tokens coarsely and to close tokens in a more fine-grained manner |
| Axial Attention in Multidimensional Transformers[27] (5) | axial-attention[28] | applies attention along each axis separately |
| Reformer: The Efficient Transformer[29] (69) | trax[30] | uses LSH to find close keys |
| Transformer on a Diet[31] (2) | transformer-on-diet[32] | dilated transformer, WaveNet-style |
| Sparse Sinkhorn Attention[33] (4) | sinkhorn-transformer[34] | uses a cost matrix to limit attention between buckets |
| SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive Connection[35] (1) | - | learns the q, k connections, i.e. dynamically creates a sparse attention matrix |
| Efficient Content-Based Sparse Attention with Routing Transformers[36] (11) | routing-transformer[37] | computes attention among same-cluster tokens (clustered by online k-means) |
| Longformer: The Long-Document Transformer[38] (15) | longformer[39] | global + blocked attention |
| Neural Architecture Search for Lightweight Non-Local Networks[40] (2) | AutoNL[41] | computes Q(KV) and also downsamples q, k, v in both the spatial and channel dimensions |
| ETC: Encoding Long and Structured Data in Transformers[42] (2) | - | combines global attention (Star-Transformer with multiple global tokens) with local attention |
| Multi-scale Transformer Language Models[43] (1) | IN_PAPER | UNet-like; its retina attention is close to BP-Transformer |
| Synthesizer: Rethinking Self-Attention in Transformer Models[44] (5) | - | does not compute pairwise interactions |
| Jukebox: A Generative Model for Music[45] (9) | jukebox[46] | improved attention patterns from Sparse Transformer |
| GMAT: Global Memory Augmentation for Transformers[47] (0) | gmat[48] | adds global tokens |
| Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers[49] (0) | google-research[50] | calculates an unbiased stochastic approximation of the attention matrix |
| Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer[51] (0) | - | does not compute pairwise interactions and uses fixed mask patterns |
| Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention[52] (1) | fast-transformers[53] | uses phi(q)(phi(k)v) and also improves the sequential sampling step (see the linear-attention sketch after this table) |
| Linformer: Self-Attention with Linear Complexity[54] (3) | linformer-pytorch[55] | projects keys and values from length n down to a fixed length k (see the Linformer sketch after this table) |
| Real-time Semantic Segmentation with Fast Attention[56] (0) | - | l2_norm(q)*(l2_norm(k)*v) |
| Fast Transformers with Clustered Attention[57] (0) | fast-transformers[58] | groups queries together with LSH |
| Big Bird: Transformers for Longer Sequences[59] (0) | - | ETC with random connections |
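To make the shared idea behind Efficient Attention [7] (Softmax(Q)*(Softmax(K^T)*V)) and Transformers are RNNs [52] (phi(q)(phi(k)v)) concrete, here is a minimal non-causal PyTorch sketch of linear attention. It is an illustration written for this post, not code taken from either repository: by associating the product as Q'(K'^T V) instead of (Q' K'^T)V, the cost drops from O(n^2 d) to O(n d^2).

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, feature_map="softmax"):
    """q, k: (batch, n, d); v: (batch, n, d_v). Non-causal, no masking."""
    if feature_map == "softmax":
        # Efficient Attention [7]: softmax over d for queries and over n for keys,
        # so no extra normalization term is needed afterwards.
        q = F.softmax(q, dim=-1)
        k = F.softmax(k, dim=-2)
        kv = torch.einsum("bnd,bne->bde", k, v)   # (batch, d, d_v), built once
        return torch.einsum("bnd,bde->bne", q, kv)
    # Transformers are RNNs [52]: positive feature map phi(x) = elu(x) + 1.
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    kv = torch.einsum("bnd,bne->bde", k, v)       # (batch, d, d_v)
    # Normalizer: phi(q) dotted with the sum of phi(k) over positions.
    z = torch.einsum("bnd,bd->bn", q, k.sum(dim=1)).clamp_min(1e-6)
    return torch.einsum("bnd,bde->bne", q, kv) / z.unsqueeze(-1)

# Same interface as ordinary attention, but time and memory grow linearly in n.
q, k, v = (torch.randn(2, 1024, 64) for _ in range(3))
print(linear_attention(q, k, v).shape)                      # torch.Size([2, 1024, 64])
print(linear_attention(q, k, v, feature_map="elu").shape)   # torch.Size([2, 1024, 64])
```

For the causal (autoregressive) case, the single kv summary is replaced by a running cumulative sum over positions, which is what gives rise to the "transformers are RNNs" view in [52].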
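In the same spirit, here is a toy sketch of the Linformer [54] idea, again just an illustration and not the linformer-pytorch API: two learned matrices project the length-n key and value sequences down to a fixed length k, so the attention matrix is only n x k instead of n x n.

```python
import torch
import torch.nn as nn

class LinformerSelfAttention(nn.Module):
    """Single-head toy version; n_max is the (fixed) maximum sequence length."""
    def __init__(self, d_model, n_max, k=256):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Learned projections along the sequence axis: length n_max -> length k.
        self.E = nn.Parameter(torch.randn(k, n_max) / n_max ** 0.5)
        self.F = nn.Parameter(torch.randn(k, n_max) / n_max ** 0.5)
        self.scale = d_model ** -0.5

    def forward(self, x):
        # x: (batch, n, d_model), with n == n_max for simplicity.
        q, k, v = self.q(x), self.k(x), self.v(x)
        k = torch.einsum("kn,bnd->bkd", self.E, k)   # (batch, k, d)
        v = torch.einsum("kn,bnd->bkd", self.F, v)   # (batch, k, d)
        attn = torch.softmax(q @ k.transpose(-1, -2) * self.scale, dim=-1)  # (batch, n, k)
        return attn @ v                              # (batch, n, d)

x = torch.randn(2, 1024, 64)
print(LinformerSelfAttention(64, n_max=1024, k=128)(x).shape)  # torch.Size([2, 1024, 64])
```

Because E and F have a fixed input length n_max, a real implementation has to pad shorter sequences or slice the projection matrices accordingly.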
References
[1] Generating Wikipedia by Summarizing Long Sequences: https://arxiv.org/abs/1801.10198v1
[2] memory-compressed-attention: https://github.com/lucidrains/memory-compressed-attention
[3] CBAM: Convolutional Block Attention Module: https://arxiv.org/abs/1807.06521v2
[4] attention-module: https://github.com/Jongchan/attention-module
[5] CCNet: Criss-Cross Attention for Semantic Segmentation: https://arxiv.org/abs/1811.11721v2
[6] CCNet: https://github.com/speedinghzl/CCNet
[7] Efficient Attention: Attention with Linear Complexities: https://arxiv.org/abs/1812.01243v8
[8] efficient-attention: https://github.com/cmsflash/efficient-attention
[9] Star-Transformer: https://arxiv.org/abs/1902.09113v2
[10] fastNLP: https://github.com/fastnlp/fastNLP/blob/master/fastNLP/modules/encoder/star_transformer.py
[11] Generating Long Sequences with Sparse Transformers: https://arxiv.org/abs/1904.10509v1
[12] torch-blocksparse: https://github.com/ptillet/torch-blocksparse
[13] GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond: https://arxiv.org/abs/1904.11492v1
[14] GCNet: https://github.com/xvjiarui/GCNet
[15] SCRAM: Spatially Coherent Randomized Attention Maps: https://arxiv.org/abs/1905.10308v1
[16] Interlaced Sparse Self-Attention for Semantic Segmentation: https://arxiv.org/abs/1907.12273v2
[17] Permutohedral Attention Module for Efficient Non-Local Neural Networks: https://arxiv.org/abs/1907.00641v2
[18] Permutohedral_attention_module: https://github.com/SamuelJoutard/Permutohedral_attention_module
[19] Large Memory Layers with Product Keys: https://arxiv.org/abs/1907.05242v2
[20] XLM: https://github.com/facebookresearch/XLM
[21] Expectation-Maximization Attention Networks for Semantic Segmentation: https://arxiv.org/abs/1907.13426v2
[22] EMANet: https://github.com/XiaLiPKU/EMANet
[23] Compressive Transformers for Long-Range Sequence Modelling: https://arxiv.org/abs/1911.05507v1
[24] compressive-transformer-pytorch: https://github.com/lucidrains/compressive-transformer-pytorch
[25] BP-Transformer: Modelling Long-Range Context via Binary Partitioning: https://arxiv.org/abs/1911.04070v1
[26] BPT: https://github.com/yzh119/BPT
[27] Axial Attention in Multidimensional Transformers: https://arxiv.org/abs/1912.12180v1
[28] axial-attention: https://github.com/lucidrains/axial-attention
[29] Reformer: The Efficient Transformer: https://arxiv.org/abs/2001.04451v2
[30] trax: https://github.com/google/trax/tree/master/trax/models/reformer
[31] Transformer on a Diet: https://arxiv.org/abs/2002.06170v1
[32] transformer-on-diet: https://github.com/cgraywang/transformer-on-diet
[33] Sparse Sinkhorn Attention: https://arxiv.org/abs/2002.11296v1
[34] sinkhorn-transformer: https://github.com/lucidrains/sinkhorn-transformer
[35] SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive Connection: https://arxiv.org/abs/2003.09833v2
[36] Efficient Content-Based Sparse Attention with Routing Transformers: https://arxiv.org/abs/2003.05997v1
[37] routing-transformer: https://github.com/lucidrains/routing-transformer
[38] Longformer: The Long-Document Transformer: https://arxiv.org/abs/2004.05150v1
[39] longformer: https://github.com/allenai/longformer
[40] Neural Architecture Search for Lightweight Non-Local Networks: https://arxiv.org/abs/2004.01961v1
[41] AutoNL: https://github.com/LiYingwei/AutoNL
[42] ETC: Encoding Long and Structured Data in Transformers: https://arxiv.org/abs/2004.08483v2
[43] Multi-scale Transformer Language Models: https://arxiv.org/abs/2005.00581v1
[44] Synthesizer: Rethinking Self-Attention in Transformer Models: https://arxiv.org/abs/2005.00743v1
[45] Jukebox: A Generative Model for Music: https://arxiv.org/abs/2005.00341v1
[46] jukebox: https://github.com/openai/jukebox
[47] GMAT: Global Memory Augmentation for Transformers: https://arxiv.org/abs/2006.03274v1
[48] gmat: https://github.com/ag1988/gmat
[49] Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers: https://arxiv.org/abs/2006.03555v1
[50] google-research: https://github.com/google-research/google-research/tree/master/performer/fast_self_attention
[51] Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer: https://arxiv.org/abs/2006.05174v1
[52] Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention: https://arxiv.org/abs/2006.16236v2
[53] fast-transformers: https://github.com/idiap/fast-transformers
[54] Linformer: Self-Attention with Linear Complexity: https://arxiv.org/abs/2006.04768v3
[55] linformer-pytorch: https://github.com/tatp22/linformer-pytorch
[56] Real-time Semantic Segmentation with Fast Attention: https://arxiv.org/abs/2007.03815v2
[57] Fast Transformers with Clustered Attention: https://arxiv.org/abs/2007.04825v1
[58] fast-transformers: https://github.com/idiap/fast-transformers
[59] Big Bird: Transformers for Longer Sequences: https://arxiv.org/abs/2007.14062v1
[60] A Survey of Long-Term Context in Transformers: https://www.pragmatic.ml/a-survey-of-methods-for-incorporating-long-term-context/