Neural Networks
Fully connected neural networks:
nn.Linear() is PyTorch's linear module, i.e. the fully connected layer. Its main parameters are:
in_features: the size of each input sample
out_features: the size of each output sample
bias: a bool indicating whether to add a bias term, default True
Example:
from torch import nn

class FullConnectionNet(nn.Module):
    def __init__(self, inputs_size, outputs_size):
        super(FullConnectionNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Linear(inputs_size, 32, bias=False),  # bias: whether to use a bias term, default True
            nn.BatchNorm1d(32),  # nn.BatchNorm1d: batch normalization
            nn.ReLU(True))
        self.layer2 = nn.Sequential(
            nn.Linear(32, 64),
            nn.BatchNorm1d(64),
            nn.ReLU(True))
        self.layer3 = nn.Sequential(nn.Linear(64, outputs_size))

    def forward(self, inputs):
        net = self.layer1(inputs)
        net = self.layer2(net)
        out = self.layer3(net)
        return out
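A quick sanity check of the network above (a minimal sketch; the feature sizes 10 and 3 and the batch size 4 are arbitrary choices for illustration, not from the book):

import torch

net = FullConnectionNet(inputs_size=10, outputs_size=3)
x = torch.randn(4, 10)   # a batch of 4 samples with 10 features each (BatchNorm1d needs batch size > 1 in training mode)
print(net(x).shape)      # torch.Size([4, 3])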
Convolutional neural networks:
nn.Conv2d() is PyTorch's 2D convolution module. Its common parameters, in order:
in_channels: the number of input channels (usually fixed by the number of channels in the image)
out_channels: the number of output channels, i.e. the number of convolution kernels (set freely, or by reference to classic network architectures)
kernel_size: the height and width of the kernel; kernels with equal height and width are the norm, and (3, 2) means height 3, width 2
stride: the step size with which the kernel slides over the image, default 1
padding: padding=1 zero-pads one pixel on every side, and so on
nn.MaxPool2d() is PyTorch's max-pooling module; its common parameters kernel_size, stride, and padding behave the same as above and are not repeated here.
Example (using LeNet; a usage check follows the code):

from torch import nn

class LeNet(nn.Module):
    def __init__(self, input_channels, n_class):
        super(LeNet, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(input_channels, 6, kernel_size=5, stride=1, padding=1, bias=True),
            nn.AvgPool2d(kernel_size=2, stride=2, padding=1))
        self.conv2 = nn.Sequential(
            nn.Conv2d(6, 6, kernel_size=5, stride=1, padding=1, bias=True),
            nn.AvgPool2d(kernel_size=2, stride=2, padding=1))
        self.conv3 = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=1, bias=True))
        self.linear = nn.Sequential(
            nn.Linear(400, 120),  # 16 * 5 * 5 = 400 for a 28x28 input (e.g. MNIST)
            nn.Linear(120, 84),
            nn.Linear(84, n_class))

    def forward(self, x):
        net = self.conv1(x)
        net = self.conv2(net)
        net = self.conv3(net)
        net = net.view(net.size(0), -1)  # flatten to (batch, 400)
        out = self.linear(net)
        return out
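A quick check of the shape bookkeeping above (a minimal sketch; the single-channel 28x28 MNIST-style input and the class count 10 are assumptions for illustration):

import torch

model = LeNet(input_channels=1, n_class=10)
x = torch.randn(4, 1, 28, 28)  # batch of 4 single-channel 28x28 images (assumed MNIST-style)
print(model(x).shape)          # torch.Size([4, 10]); the flattened conv output is indeed 400

Tracing the sizes for a 28x28 input: conv1 gives 26 then pooling gives 14, conv2 gives 12 then pooling gives 7, and conv3 gives 5, so the flattened size is 16 * 5 * 5 = 400, matching nn.Linear(400, 120).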
Reference: 《深度学习入门之PyTorch》