Preface

For length reasons, the code implementation has been split out into this article, mainly for my own reference 😄

refer1:缓存与效果的极限拉扯:从MHA、MQA、GQA到MLA

refer2: 博客分享:从MHA、MQA、GQA到MLA

1. Code Implementation

The MLA implementation is easiest to follow alongside the training-phase and inference-phase MLA formulas from Su Jianlin's blog, together with the MLA diagram from the DeepSeek paper.

Training-phase MLA formulas:

$$
\begin{gathered}
\boldsymbol{o}_t = \left[\boldsymbol{o}_t^{(1)}, \boldsymbol{o}_t^{(2)}, \cdots, \boldsymbol{o}_t^{(h)}\right] \\[10pt]
\boldsymbol{o}_t^{(s)} = Attention\left(\boldsymbol{q}_t^{(s)}, \boldsymbol{k}_{\leq t}^{(s)}, \boldsymbol{v}_{\leq t}^{(s)}\right) \triangleq \frac{\sum_{i\leq t}\exp\left(\boldsymbol{q}_t^{(s)} \boldsymbol{k}_i^{(s)\top}\right)\boldsymbol{v}_i^{(s)}}{\sum_{i\leq t}\exp\left(\boldsymbol{q}_t^{(s)} \boldsymbol{k}_i^{(s)\top}\right)} \\[15pt]
\boldsymbol{q}_i^{(s)} = \left[\boldsymbol{c}_i'\boldsymbol{W}_{qc}^{(s)}, \boldsymbol{c}_i'\boldsymbol{W}_{qr}^{(s)}\boldsymbol{\mathcal{R}}_i\right]\in\mathbb{R}^{d_k + d_r},\quad \boldsymbol{W}_{qc}^{(s)}\in\mathbb{R}^{d_c'\times d_k},\ \boldsymbol{W}_{qr}^{(s)}\in\mathbb{R}^{d_c'\times d_r} \\
\boldsymbol{k}_i^{(s)} = \left[\boldsymbol{c}_i\boldsymbol{W}_{kc}^{(s)}, \boldsymbol{x}_i\boldsymbol{W}_{kr}\boldsymbol{\mathcal{R}}_i\right]\in\mathbb{R}^{d_k + d_r},\quad \boldsymbol{W}_{kc}^{(s)}\in\mathbb{R}^{d_c\times d_k},\ \boldsymbol{W}_{kr}\in\mathbb{R}^{d\times d_r} \\
\boldsymbol{v}_i^{(s)} = \boldsymbol{c}_i\boldsymbol{W}_v^{(s)}\in\mathbb{R}^{d_v},\quad \boldsymbol{W}_v^{(s)}\in\mathbb{R}^{d_c\times d_v} \\[10pt]
\boldsymbol{c}_i' = \boldsymbol{x}_i \boldsymbol{W}_c'\in\mathbb{R}^{d_c'},\quad \boldsymbol{W}_c'\in\mathbb{R}^{d\times d_c'} \\
\boldsymbol{c}_i = \boldsymbol{x}_i \boldsymbol{W}_c\in\mathbb{R}^{d_c},\quad \boldsymbol{W}_c\in\mathbb{R}^{d\times d_c}
\end{gathered}
$$

Note that $\boldsymbol{W}_{kr}$ carries no head superscript: the RoPE part of the key is shared by all heads.

Inference-phase MLA formulas:

$$
\begin{gathered}
\boldsymbol{o}_t = \left[\boldsymbol{o}_t^{(1)}\boldsymbol{W}_v^{(1)}, \boldsymbol{o}_t^{(2)}\boldsymbol{W}_v^{(2)}, \cdots, \boldsymbol{o}_t^{(h)}\boldsymbol{W}_v^{(h)}\right] \\[10pt]
\boldsymbol{o}_t^{(s)} = Attention\left(\boldsymbol{q}_t^{(s)}, \boldsymbol{k}_{\leq t}, \boldsymbol{c}_{\leq t}\right) \triangleq \frac{\sum_{i\leq t}\exp\left(\boldsymbol{q}_t^{(s)} \boldsymbol{k}_i^{\top}\right)\boldsymbol{c}_i}{\sum_{i\leq t}\exp\left(\boldsymbol{q}_t^{(s)} \boldsymbol{k}_i^{\top}\right)} \\[15pt]
\boldsymbol{q}_i^{(s)} = \left[\boldsymbol{c}_i'\boldsymbol{W}_{qc}^{(s)}\boldsymbol{W}_{kc}^{(s)\top}, \boldsymbol{c}_i'\boldsymbol{W}_{qr}^{(s)}\boldsymbol{\mathcal{R}}_i\right]\in\mathbb{R}^{d_c + d_r} \\
\boldsymbol{k}_i = \left[\boldsymbol{c}_i, \boldsymbol{x}_i\boldsymbol{W}_{kr}\boldsymbol{\mathcal{R}}_i\right]\in\mathbb{R}^{d_c + d_r} \\
\boldsymbol{W}_{qc}^{(s)}\in\mathbb{R}^{d_c'\times d_k},\ \boldsymbol{W}_{kc}^{(s)}\in\mathbb{R}^{d_c\times d_k},\ \boldsymbol{W}_{qr}^{(s)}\in\mathbb{R}^{d_c'\times d_r},\ \boldsymbol{W}_{kr}\in\mathbb{R}^{d\times d_r} \\[10pt]
\boldsymbol{c}_i' = \boldsymbol{x}_i \boldsymbol{W}_c'\in\mathbb{R}^{d_c'},\quad \boldsymbol{W}_c'\in\mathbb{R}^{d\times d_c'} \\
\boldsymbol{c}_i = \boldsymbol{x}_i \boldsymbol{W}_c\in\mathbb{R}^{d_c},\quad \boldsymbol{W}_c\in\mathbb{R}^{d\times d_c}
\end{gathered}
$$

Here $\boldsymbol{k}_i$ and $\boldsymbol{W}_{kr}$ carry no head superscript either: the cached key is shared by all heads.
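The inference-phase form follows from a one-line expansion of the training-phase dot product; the only ingredient is associativity of matrix multiplication:

$$
\boldsymbol{q}_t^{(s)}\boldsymbol{k}_i^{(s)\top}
= \left(\boldsymbol{c}_t'\boldsymbol{W}_{qc}^{(s)}\right)\left(\boldsymbol{c}_i\boldsymbol{W}_{kc}^{(s)}\right)^{\top} + \left(\boldsymbol{c}_t'\boldsymbol{W}_{qr}^{(s)}\boldsymbol{\mathcal{R}}_t\right)\left(\boldsymbol{x}_i\boldsymbol{W}_{kr}\boldsymbol{\mathcal{R}}_i\right)^{\top}
= \left(\boldsymbol{c}_t'\boldsymbol{W}_{qc}^{(s)}\boldsymbol{W}_{kc}^{(s)\top}\right)\boldsymbol{c}_i^{\top} + \left(\boldsymbol{c}_t'\boldsymbol{W}_{qr}^{(s)}\boldsymbol{\mathcal{R}}_t\right)\left(\boldsymbol{x}_i\boldsymbol{W}_{kr}\boldsymbol{\mathcal{R}}_i\right)^{\top}
$$

So per token only the shared $\boldsymbol{c}_i$ ($d_c$ dims) and $\boldsymbol{x}_i\boldsymbol{W}_{kr}\boldsymbol{\mathcal{R}}_i$ ($d_r$ dims) need to be cached, independent of the number of heads.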

MLA diagram from the DeepSeek paper:

[Figure: MLA implementation diagram from the DeepSeek paper]

The code implementation is as follows:

import math
import torch
import torch.nn as nn

def apply_rope(q_rope, base_freq=10000.0, pos=0):
    # TODO: Implement the RoPE function
    return q_rope

# Multi-head Latent Attention
class MLA(nn.Module):

    def __init__(self, config):
        super().__init__()
        self.d_model = config.d_model
        self.num_heads = config.num_heads
        self.dim_k = config.dim_k
        self.dim_v = config.dim_v
        self.dim_c = config.dim_c
        self.dim_c_prime = config.dim_c_prime
        self.dim_rope = config.dim_rope

        # Down-/up-projection matrices
        self.W_c = nn.Linear(self.d_model, self.dim_c)
        self.W_c_prime = nn.Linear(self.d_model, self.dim_c_prime)

        # bias=False on W_qc / W_kc so that W_kc can be exactly absorbed into W_qc at inference time
        self.W_qc = nn.ModuleList([nn.Linear(self.dim_c_prime, self.dim_k, bias=False) for _ in range(self.num_heads)])
        self.W_qrope = nn.ModuleList([nn.Linear(self.dim_c_prime, self.dim_rope) for _ in range(self.num_heads)])

        self.W_kc = nn.ModuleList([nn.Linear(self.dim_c, self.dim_k, bias=False) for _ in range(self.num_heads)])
        self.W_krope = nn.Linear(self.d_model, self.dim_rope)

        self.W_v = nn.ModuleList([nn.Linear(self.dim_c, self.dim_v) for _ in range(self.num_heads)])

        self.out_proj = nn.Linear(self.num_heads * self.dim_v, self.d_model)

        # KV Cache
        self.c_cache = None
        self.k_rope_cache = None

        # Inference phase
        self.W_qc_kc = None

    def forward(self, x):
        if not self.training and self.W_qc_kc is None:
            self.precompute()
        
        bsz, seq_len, _ = x.shape

        c = self.W_c(x) # (bsz, seq_len, dim_c)
        c_prime = self.W_c_prime(x)

        k_rope = self.W_krope(x) # (bsz, seq_len, dim_rope)
        k_rope = apply_rope(k_rope)

        # KV Cache
        if not self.training:
            if self.c_cache is None:
                self.c_cache = c
                self.k_rope_cache = k_rope
            else:
                # c_cache (1, N, dim_c)
                # c (1, 1, dim_c)
                # concatenate along the seq_len dimension
                c = torch.concat([self.c_cache, c], dim=1) # (1, N+1, dim_c)
                self.c_cache = c

                k_rope = torch.concat([self.k_rope_cache, k_rope], dim=1)
                self.k_rope_cache = k_rope
        
        outputs = []
        # TODO: Parallel
        for i in range(self.num_heads):
            if self.training:
                q_nope = self.W_qc[i](c_prime) # (bsz, seq_len, dim_k)
                k_nope = self.W_kc[i](c)
            else:
                q_nope = torch.matmul(c_prime, self.W_qc_kc[i]) # (bsz, seq_len, dim_c), W_kc absorbed into W_qc
                k_nope = c
            
            q_rope = self.W_qrope[i](c_prime) # (bsz, seq_len, dim_rope)
            q_rope = apply_rope(q_rope) # (bsz, seq_len, dim_rope)

            q = torch.concat([q_nope, q_rope], dim=-1) # (bsz, seq_len, dim_k + dim_rope)
            k = torch.concat([k_nope, k_rope.clone()], dim=-1)

            v = self.W_v[i](c)

            attn_score = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.dim_k + self.dim_rope)
            attn_score = torch.softmax(attn_score, dim=-1)
            # TODO: Attention Mask and Dropout
            attn_output = torch.matmul(attn_score, v)
            outputs.append(attn_output)
        
        outputs = torch.concat(outputs, dim=-1)
        outputs = self.out_proj(outputs)

        return outputs

    def precompute(self):
        # Absorb W_kc into W_qc: (dim_c_prime, dim_k) @ (dim_k, dim_c) -> (dim_c_prime, dim_c)
        self.W_qc_kc = [self.W_qc[i].weight.t() @ self.W_kc[i].weight for i in range(self.num_heads)]

Note: The code is adapted from https://github.com/preacher-1/MLA_tutorial
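The apply_rope function above is deliberately left as a TODO. As a reference point only, here is my own minimal rotate-half style sketch (not the implementation from the tutorial repository), which rotates pairs of dimensions by position-dependent angles:

import torch

def apply_rope(x, base_freq=10000.0, pos=0):
    # x: (bsz, seq_len, dim); dim is assumed to be even
    bsz, seq_len, dim = x.shape
    half = dim // 2
    # One frequency per pair of dimensions; angles are offset by the starting position `pos`
    freqs = base_freq ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = (pos + torch.arange(seq_len, dtype=torch.float32)).unsqueeze(-1) * freqs  # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) pair by its position-dependent angle
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)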

A few points worth noting about this MLA implementation:

  • $\boldsymbol{W}_{kr}$: the RoPE part of $K$ is projected from the input hidden state $\mathbf{h}_t$ (i.e. $\boldsymbol{x}_t$), not from the compressed latent
  • During inference the $\boldsymbol{W}_{kc}$ matrix is absorbed into the $\boldsymbol{W}_{qc}$ matrix (see the check right after this list)
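A quick self-contained sanity check of the absorption identity (toy shapes of my own choosing; no biases, matching the formulas):

import torch

# (C' Wq)(C Wk)^T == (C' (Wq Wk^T)) C^T, i.e. W_kc can be folded into W_qc
d_c_prime, d_c, d_k = 6, 4, 3
Wq = torch.randn(d_c_prime, d_k)   # maps c' to the q_nope space
Wk = torch.randn(d_c, d_k)         # maps c  to the k_nope space
c_prime = torch.randn(2, 5, d_c_prime)
c = torch.randn(2, 5, d_c)

train_scores = (c_prime @ Wq) @ (c @ Wk).transpose(-1, -2)        # training path: project both sides
infer_scores = (c_prime @ (Wq @ Wk.t())) @ c.transpose(-1, -2)    # inference path: absorbed weight, cache only c
assert torch.allclose(train_scores, infer_scores, atol=1e-5)

This is exactly what precompute builds per head (W_qc[i].weight.t() @ W_kc[i].weight), so that at inference k_nope can simply be the cached c.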

2. DeepSeek-V3 Official Code

Finally, let's compare against DeepSeek's official implementation:

"""
This MLA class implements multi-head attention with low-rank projections (referred to as LoRA in the code) and rotary position embeddings (RoPE).
Specifically:
- Low-rank projection (LoRA): introduces low-rank matrices to reduce the number of parameters and improve efficiency
- Rotary position embedding (RoPE): encodes relative position information, strengthening the model's handling of long-range dependencies
- Multi-head attention: multiple independent heads capture information from different subspaces, increasing expressive power
- Distributed support: the world_size and rank parameters enable multi-process training, with each process handling only a subset of the heads
- Caching: a KV Cache is used to speed up inference
"""

class MLA(nn.Module):
    """
    Multi-Headed Attention Layer (MLA).

    Attributes:
        dim (int): Dimensionality of the input features.
        n_heads (int): Number of attention heads.
        n_local_heads (int): Number of local attention heads for distributed systems.
        q_lora_rank (int): Rank for low-rank query projection.
        kv_lora_rank (int): Rank for low-rank key/value projection.
        qk_nope_head_dim (int): Dimensionality of non-positional query/key projections.
        qk_rope_head_dim (int): Dimensionality of rotary-positional query/key projections.
        qk_head_dim (int): Total dimensionality of query/key projections.
        v_head_dim (int): Dimensionality of value projections.
        softmax_scale (float): Scaling factor for softmax in attention computation.
    """
    def __init__(self, args: ModelArgs):
        super().__init__()
        # Initialize parameters
        # 2048
        self.dim = args.dim # input feature dimension
        # 16
        self.n_heads = args.n_heads # total number of attention heads
        # 16
        self.n_local_heads = args.n_heads // world_size # number of heads handled by this process
        # 0
        self.q_lora_rank = args.q_lora_rank # low-rank dimension for the query projection
        # 512
        self.kv_lora_rank = args.kv_lora_rank # low-rank dimension for the key/value projection
        # 128
        self.qk_nope_head_dim = args.qk_nope_head_dim # per-head query/key dimension without position encoding
        # 64
        self.qk_rope_head_dim = args.qk_rope_head_dim # per-head query/key dimension carrying rotary position encoding
        # 192
        self.qk_head_dim = args.qk_nope_head_dim + args.qk_rope_head_dim # total per-head query/key dimension
        # 128
        self.v_head_dim = args.v_head_dim # per-head value dimension

        # Query projection, i.e. $W_Q$
        if self.q_lora_rank == 0:
            self.wq = ColumnParallelLinear(self.dim, self.n_heads * self.qk_head_dim)
        else:
            self.wq_a = Linear(self.dim, self.q_lora_rank)
            self.q_norm = RMSNorm(self.q_lora_rank)
            self.wq_b = ColumnParallelLinear(self.q_lora_rank, self.n_heads * self.qk_head_dim)

        # Key/value projections, i.e. $W_K, W_V$
        self.wkv_a = Linear(self.dim, self.kv_lora_rank + self.qk_rope_head_dim)
        self.kv_norm = RMSNorm(self.kv_lora_rank)
        self.wkv_b = ColumnParallelLinear(self.kv_lora_rank, self.n_heads * (self.qk_nope_head_dim + self.v_head_dim))

        # Output projection
        self.wo = RowParallelLinear(self.n_heads * self.v_head_dim, self.dim)

        # Softmax scaling factor, i.e. $\frac{1}{\sqrt{D}}$
        self.softmax_scale = self.qk_head_dim ** -0.5
        if args.max_seq_len > args.original_seq_len:
            # YaRN: https://github.com/jquesnelle/yarn
            # $1+0.1\log{\frac{L_{test}}{L_{train}}}$
            mscale = 0.1 * args.mscale * math.log(args.rope_factor) + 1.0
            self.softmax_scale = self.softmax_scale * mscale * mscale

        # Register buffers for the KV Cache
        if attn_impl == "naive":
            self.register_buffer("k_cache", torch.zeros(args.max_batch_size, args.max_seq_len, self.n_local_heads, self.qk_head_dim), persistent=False)
            self.register_buffer("v_cache", torch.zeros(args.max_batch_size, args.max_seq_len, self.n_local_heads, self.v_head_dim), persistent=False)
        else:
            # torch.Size([8, 16384, 512])
            self.register_buffer("kv_cache", torch.zeros(args.max_batch_size, args.max_seq_len, self.kv_lora_rank), persistent=False)
            # torch.Size([8, 16384, 64])
            self.register_buffer("pe_cache", torch.zeros(args.max_batch_size, args.max_seq_len, self.qk_rope_head_dim), persistent=False)

    def forward(self, x: torch.Tensor, start_pos: int, freqs_cis: torch.Tensor, mask: Optional[torch.Tensor]):
        """
        Forward pass for the Multi-Headed Attention Layer (MLA).

        Args:
            x (torch.Tensor): Input tensor of shape (batch_size, seq_len, dim).
            start_pos (int): Starting position in the sequence for caching.
            freqs_cis (torch.Tensor): Precomputed complex exponential values for rotary embeddings.
            mask (Optional[torch.Tensor]): Mask tensor to exclude certain positions from attention.

        Returns:
            torch.Tensor: Output tensor with the same shape as the input.
        """
        # torch.Size([2, 128, 2048])
        bsz, seqlen, _ = x.size()  # batch size, sequence length, hidden dimension
        # 128
        end_pos = start_pos + seqlen # end position of the current chunk in the cache

        # Compute the query vector q, i.e. $Q=XW_Q$
        if self.q_lora_rank == 0:
            # torch.Size([2, 128, 3072])
            q = self.wq(x) # single linear projection
        else:
            q = self.wq_b(self.q_norm(self.wq_a(x))) # two-step low-rank projection
        
        # Reshape the query vector q
        # torch.Size([2, 128, 16, 192])
        q = q.view(bsz, seqlen, self.n_local_heads, self.qk_head_dim)

        # Split into the non-positional part and the rotary-positional part
        # torch.Size([2, 128, 16, 128]), torch.Size([2, 128, 16, 64])
        q_nope, q_pe = torch.split(q, [self.qk_nope_head_dim, self.qk_rope_head_dim], dim=-1)

        # Apply rotary position embedding to the query
        # torch.Size([2, 128, 16, 64])
        q_pe = apply_rotary_emb(q_pe, freqs_cis)

        # Compute the compressed key/value representation kv
        # torch.Size([2, 128, 576])
        kv = self.wkv_a(x)

        # Split into the compressed key/value part and the shared RoPE key part
        # torch.Size([2, 128, 512]), torch.Size([2, 128, 64])
        kv, k_pe = torch.split(kv, [self.kv_lora_rank, self.qk_rope_head_dim], dim=-1)

        # Apply rotary position embedding to the key
        # torch.Size([2, 128, 1, 64])
        k_pe = apply_rotary_emb(k_pe.unsqueeze(2), freqs_cis)

        # Handle the keys/values according to the attention implementation
        if attn_impl == "naive":
            q = torch.cat([q_nope, q_pe], dim=-1) # merge the two parts of the query
            kv = self.wkv_b(self.kv_norm(kv)) # up-project the compressed key/value vector
            # Reshape the key/value tensor
            kv = kv.view(bsz, seqlen, self.n_local_heads, self.qk_nope_head_dim + self.v_head_dim)

            # Split into the key part and the value part
            k_nope, v = torch.split(kv, [self.qk_nope_head_dim, self.v_head_dim], dim=-1)

            # Merge the two parts of the key (the RoPE key is shared across heads)
            k = torch.cat([k_nope, k_pe.expand(-1, -1, self.n_local_heads, -1)], dim=-1)

            # Update the KV Cache
            self.k_cache[:bsz, start_pos:end_pos] = k
            self.v_cache[:bsz, start_pos:end_pos] = v

            # Compute the attention scores, i.e. $\frac{QK^T}{\sqrt{D}}$
            scores = torch.einsum("bshd,bthd->bsht", q, self.k_cache[:bsz, :end_pos]) * self.softmax_scale
        else:
            # torch.Size([4096, 512])
            wkv_b = self.wkv_b.weight if self.wkv_b.scale is None else weight_dequant(self.wkv_b.weight, self.wkv_b.scale, block_size) 
            # torch.Size([16, 256, 512])
            wkv_b = wkv_b.view(self.n_local_heads, -1, self.kv_lora_rank)

            # Absorb the key projection into the query: multiply q_nope by the k_nope part of wkv_b
            # torch.Size([2, 128, 16, 512])
            q_nope = torch.einsum("bshd,hdc->bshc", q_nope, wkv_b[:, :self.qk_nope_head_dim])

            # Update the compressed KV Cache and the RoPE key cache
            # torch.Size([2, 128, 512])
            self.kv_cache[:bsz, start_pos:end_pos] = self.kv_norm(kv)
            # torch.Size([2, 128, 64])
            self.pe_cache[:bsz, start_pos:end_pos] = k_pe.squeeze(2)

            # Compute the attention scores as the sum of a content term and a RoPE term
            # torch.Size([2, 128, 16, 128])
            scores = (torch.einsum("bshc,btc->bsht", q_nope, self.kv_cache[:bsz, :end_pos]) +
                      torch.einsum("bshr,btr->bsht", q_pe, self.pe_cache[:bsz, :end_pos])) * self.softmax_scale
        
        # Apply the attention mask
        if mask is not None:
            # torch.Size([2, 128, 16, 128])
            scores += mask.unsqueeze(1)

        # Softmax over the key positions, then take the weighted sum of the values
        # torch.Size([2, 128, 16, 128])
        scores = scores.softmax(dim=-1, dtype=torch.float32).type_as(x)
        if attn_impl == "naive":
            x = torch.einsum("bsht,bthd->bshd", scores, self.v_cache[:bsz, :end_pos])
        else:
            # torch.Size([2, 128, 16, 512])
            x = torch.einsum("bsht,btc->bshc", scores, self.kv_cache[:bsz, :end_pos])
            # torch.Size([2, 128, 16, 128])
            x = torch.einsum("bshc,hdc->bshd", x, wkv_b[:, -self.v_head_dim:])
        
        # Final output projection
        # torch.Size([2, 128, 2048])
        x = self.wo(x.flatten(2))
        return x

Note: The code is taken from https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/model.py
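One detail of the non-naive branch that is easy to miss: the attention score is computed as the sum of a content term and a RoPE term rather than by concatenating q and k first. A small self-contained check (toy shapes and names of my own choosing) shows the two forms are equivalent:

import torch

# Toy shapes: batch, query len, key len, heads, latent dim, rope dim (made up for illustration)
b, s, t, h, c, r = 2, 3, 5, 4, 8, 2
q_nope = torch.randn(b, s, h, c)  # query content part, already projected into the latent space
q_pe   = torch.randn(b, s, h, r)  # query RoPE part
kv     = torch.randn(b, t, c)     # kv_cache: compressed latent, shared by all heads
pe     = torch.randn(b, t, r)     # pe_cache: RoPE key, shared by all heads

# Two-term form used in the official code
two_terms = (torch.einsum("bshc,btc->bsht", q_nope, kv) +
             torch.einsum("bshr,btr->bsht", q_pe, pe))

# Equivalent single dot product over the concatenated vectors
concat = torch.einsum("bshd,btd->bsht",
                      torch.cat([q_nope, q_pe], dim=-1),
                      torch.cat([kv, pe], dim=-1))

assert torch.allclose(two_terms, concat, atol=1e-5)

Keeping the two terms separate is what lets kv_cache store only the kv_lora_rank-dimensional latent and pe_cache store a single shared RoPE key, instead of per-head keys and values.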

The overall flow and the tensor shape changes can be seen in the figure below:

[Figure: overall MLA forward flow with tensor shape changes]

Note: The figure comes from the comment section of the video 完全从零实现DeepSeek MLA算法(MultiHead Latent Attention)

Conclusion

MLA can be seen as an optimization of GQA. It replaces GQA's splitting and copying operations with projection matrices, exploits an identity transformation so that the projections can be absorbed at inference time to further compress the KV Cache, and adopts a hybrid scheme that adds a few extra dimensions to stay compatible with RoPE. Overall, MLA is a very practical attention variant.
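For a rough sense of the savings, here is a back-of-the-envelope comparison per token and per layer, using the dimensions annotated in the official code above:

n_heads, qk_head_dim, v_head_dim = 16, 192, 128   # from the "naive" k_cache / v_cache buffers
kv_lora_rank, qk_rope_head_dim = 512, 64          # from the kv_cache / pe_cache buffers

naive_cache = n_heads * (qk_head_dim + v_head_dim)   # full per-head K and V: 5120 elements
mla_cache = kv_lora_rank + qk_rope_head_dim          # compressed latent + shared RoPE key: 576 elements
print(naive_cache, mla_cache, round(naive_cache / mla_cache, 1))  # 5120 576 8.9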

For a deeper understanding, I recommend reading more of Su Jianlin's articles 🤗
