A Brief Look at the Usage of nn.Linear in PyTorch


Viewing the source code

The initialization part of Linear:

class Linear(Module):
    ...
    __constants__ = ['bias']

    def __init__(self, in_features, out_features, bias=True):
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        # weight is stored with shape (out_features, in_features)
        self.weight = Parameter(torch.Tensor(out_features, in_features))
        if bias:
            self.bias = Parameter(torch.Tensor(out_features))
        else:
            # still expose a 'bias' attribute, but as None
            self.register_parameter('bias', None)
        self.reset_parameters()
    ...
 

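As a quick check of what this sets up (a minimal sketch; the variable name layer is mine), the parameter shapes can be inspected directly:

import torch

layer = torch.nn.Linear(100, 50)   # in_features=100, out_features=50
print(layer.weight.shape)          # torch.Size([50, 100]) -- (out_features, in_features)
print(layer.bias.shape)            # torch.Size([50])      -- (out_features,)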
What the forward pass implements:

@weak_script_method
def forward(self, input):
    return F.linear(input, self.weight, self.bias)

This returns input × weightᵀ + bias: because weight is stored as (out_features, in_features), F.linear multiplies by its transpose.
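As a sanity check (a minimal sketch; the names m, x, and manual are my own), the forward result can be reproduced by hand:

import torch

m = torch.nn.Linear(3, 2)
x = torch.randn(4, 3)
manual = x @ m.weight.t() + m.bias  # input times weight transpose, plus bias
assert torch.allclose(m(x), manual)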

For weight:

weight: the learnable weights of the module of shape
  :math:`(\text{out\_features}, \text{in\_features})`. The values are
  initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
  :math:`k = \frac{1}{\text{in\_features}}`

For bias:

bias:  the learnable bias of the module of shape :math:`(\text{out\_features})`.
    If :attr:`bias` is ``True``, the values are initialized from
    :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
    :math:`k = \frac{1}{\text{in\_features}}`
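These initialization bounds can be verified empirically (a small sketch assuming the default initialization described above; layer and bound are illustrative names):

import math
import torch

layer = torch.nn.Linear(100, 50)
bound = math.sqrt(1 / layer.in_features)  # sqrt(k) with k = 1/in_features
assert layer.weight.min() >= -bound and layer.weight.max() <= bound
assert layer.bias.min() >= -bound and layer.bias.max() <= bound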

Example

For instance:

>>> import torch
>>> nn1 = torch.nn.Linear(100, 50)
>>> input1 = torch.randn(140, 100)
>>> output1 = nn1(input1)
>>> output1.size()
torch.Size([140, 50])
 

The tensor size changes from 140 × 100 to 140 × 50.

The operation performed is (weight, of shape [50, 100], is transposed before the multiplication):

[140, 100] × [100, 50] = [140, 50]
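Note that nn.Linear also accepts inputs with extra leading dimensions; only the last dimension has to equal in_features. A quick illustration (nn2 and input2 are made-up names):

>>> nn2 = torch.nn.Linear(100, 50)
>>> input2 = torch.randn(8, 140, 100)
>>> nn2(input2).size()
torch.Size([8, 140, 50])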

That concludes this brief look at the usage of nn.Linear in PyTorch.
