Neural Networks: Zero2Hero 3 - Tanh, Gradients, and BatchNorm


Zero2Hero 3 - Tanh, Gradients, and BatchNorm

  • Continuing from the previous post, the MLP model is modified further by adding a BatchNorm layer and an activation function.
  • We look inside a deeper network: the activations, the gradients flowing backward, and the pitfalls of random initialization.
  • The role of BatchNorm.
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt # for making figures
from matplotlib.font_manager import FontProperties
font = FontProperties(fname='../chinese_pop.ttf', size=10)

Load the dataset

The data is a corpus of Chinese names.

words = open('../Chinese_Names_Corpus.txt', 'r').read().splitlines()
# the corpus contains over a million names; filter a single surname (3-character names starting with '王') for the experiments
names = [name for name in words if name[0] == '王' and len(name) == 3]
len(names)
52127
# build the char-to-index and index-to-char mappings; the vocabulary size is 1651 (1650 characters plus the '.' padding character):
chars = sorted(list(set(''.join(names))))
char2i = {s:i+1 for i,s in enumerate(chars)}
char2i['.'] = 0               # padding character
i2char = {i:s for s,i in char2i.items()}
len(chars)
1650

Build the training data

block_size = 2  

def build_dataset(names):  
    X, Y = [], []
    for w in names:
        context = [0] * block_size
        for ch in w + '.':
            ix = char2i[ch]
            X.append(context)
            Y.append(ix)
            context = context[1:] + [ix] # crop and append

    X = torch.tensor(X)
    Y = torch.tensor(Y)
    print(X.shape, Y.shape)
    return X, Y
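
To make the windowing concrete, here is a small illustration of how a single name becomes (context, target) pairs; '王小明' is a made-up example and is assumed to consist of characters present in char2i.

# illustration only: walk one (hypothetical) name through the same sliding window
example = '王小明'          # assumed to be covered by the vocabulary
context = [0] * block_size  # start with '.' padding
for ch in example + '.':
    ix = char2i[ch]
    print([i2char[i] for i in context], '-->', i2char[ix])
    context = context[1:] + [ix]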

Split the data:

import random
random.seed(42)
random.shuffle(names)
n1 = int(0.8*len(names))

Xtr, Ytr = build_dataset(names[:n1])
Xte, Yte = build_dataset(names[n1:])
torch.Size([166804, 2]) torch.Size([166804])
torch.Size([41704, 2]) torch.Size([41704])

MLP model

  • Model structure: input layer → embedding layer → hidden layer → BatchNorm layer → activation (tanh) → output layer.

Initialize the model parameters:

vocab_size = len(char2i)
n_embd = 2                    # embedding dimension
n_hidden = 200                # number of hidden-layer neurons

g = torch.Generator().manual_seed(2147483647)  
C  = torch.randn((vocab_size, n_embd),  generator=g)
W1 = torch.randn((n_embd * block_size, n_hidden), generator=g)  #* (5/3)/((n_embd * block_size)**0.5) #* 0.2
b1 = torch.randn(n_hidden,                        generator=g)  #* 0.01
W2 = torch.randn((n_hidden, vocab_size),          generator=g)  #* 0.01
b2 = torch.randn(vocab_size,                      generator=g)  #* 0

# BatchNorm parameters
bngain = torch.ones((1, n_hidden))
bnbias = torch.zeros((1, n_hidden))
bnmean_running = torch.zeros((1, n_hidden))
bnstd_running = torch.ones((1, n_hidden))

parameters = [C, W1, W2, b2, bngain, bnbias]   # note: b1 is not trained here; BatchNorm's mean subtraction cancels it anyway
print(sum(p.nelement() for p in parameters)) # number of parameters in total
for p in parameters:
    p.requires_grad = True
336353

Train the model:

# same optimization as last time
max_steps = 20000
batch_size = 32
lossi = []

for i in range(max_steps):
    # random batch data
    ix = torch.randint(0, Xtr.shape[0], (batch_size,), generator=g)
    Xb, Yb = Xtr[ix], Ytr[ix]                        

    # forward pass
    emb = C[Xb]                         # embed the characters into vectors
    embcat = emb.view(emb.shape[0], -1) # concatenate the vectors
    # Linear layer
    hpreact = embcat @ W1 + b1          # hidden layer pre-activation
    # BatchNorm layer
    bnmeani = hpreact.mean(0, keepdim=True)   # (1, n_hidden)
    bnstdi = hpreact.std(0, keepdim=True)     # (1, n_hidden)
    hpreact = bngain * (hpreact - bnmeani) / bnstdi + bnbias
    with torch.no_grad():
        bnmean_running = 0.999 * bnmean_running + 0.001 * bnmeani
        bnstd_running = 0.999 * bnstd_running + 0.001 * bnstdi
    # -------------------------------------------------------------
    # Non-linearity
    h = torch.tanh(hpreact)   
    # output layer   
    logits = h @ W2 + b2  
    loss = F.cross_entropy(logits, Yb) # loss function
    # backward pass
    for p in parameters:
        p.grad = None
    loss.backward()
  
    # update
    lr = 0.1 if i < 10000 else 0.01   
    for p in parameters:
        p.data += -lr * p.grad
    lossi.append(loss.log10().item())

Training/test loss:

with torch.no_grad():
    # pass the training set through
    emb = C[Xtr]
    embcat = emb.view(emb.shape[0], -1)
    hpreact = embcat @ W1  + b1
    # measure the mean/std over the entire training set (an alternative to the running estimates; not used by split_loss below)
    bnmean = hpreact.mean(0, keepdim=True)
    bnstd = hpreact.std(0, keepdim=True)
@torch.no_grad() # this decorator disables gradient tracking
def split_loss(split):
    x,y = {'train': (Xtr, Ytr),
           'test': (Xte, Yte),}[split]
    emb = C[x] # (N, block_size, n_embd)
    embcat = emb.view(emb.shape[0], -1) # concat into (N, block_size * n_embd)
    hpreact = embcat @ W1  + b1
    #hpreact = bngain * (hpreact - hpreact.mean(0, keepdim=True)) / hpreact.std(0, keepdim=True) + bnbias
    hpreact = bngain * (hpreact - bnmean_running) / bnstd_running + bnbias
    h = torch.tanh(hpreact)    # (N, n_hidden)
    logits = h @ W2 + b2       # (N, vocab_size)
    loss = F.cross_entropy(logits, y)
    print(split, loss.item())

split_loss('train')
split_loss('test')
train 3.2291476726531982
test 3.237765312194824

Randomly initialize the parameters and scale them down:

# scale the randomly initialized parameters down to smaller values
g = torch.Generator().manual_seed(2147483647)  
C  = torch.randn((vocab_size, n_embd),  generator=g)
W1 = torch.randn((n_embd * block_size, n_hidden), generator=g)  * (5/3)/((n_embd * block_size)**0.5) #* 0.2
b1 = torch.randn(n_hidden,                        generator=g)  * 0.01
W2 = torch.randn((n_hidden, vocab_size),          generator=g)  * 0.01
b2 = torch.randn(vocab_size,                      generator=g)  * 0.01

# BatchNorm parameters
bngain = torch.ones((1, n_hidden))
bnbias = torch.zeros((1, n_hidden))
bnmean_running = torch.zeros((1, n_hidden))
bnstd_running = torch.ones((1, n_hidden))

parameters = [C, W1, W2, b2, bngain, bnbias]   # note: b1 is not trained here; BatchNorm's mean subtraction cancels it anyway
print(sum(p.nelement() for p in parameters)) # number of parameters in total
for p in parameters:
    p.requires_grad = True
336353

Train the model:

# same optimization as last time
max_steps = 20000
batch_size = 32
scaled_lossi = []

for i in range(max_steps):
    # random batch data
    ix = torch.randint(0, Xtr.shape[0], (batch_size,), generator=g)
    Xb, Yb = Xtr[ix], Ytr[ix]                        

    # forward pass
    emb = C[Xb]                         # embed the characters into vectors
    embcat = emb.view(emb.shape[0], -1) # concatenate the vectors
    # Linear layer
    hpreact = embcat @ W1 + b1          # hidden layer pre-activation
    # BatchNorm layer
    bnmeani = hpreact.mean(0, keepdim=True)   # (1, n_hidden)
    bnstdi = hpreact.std(0, keepdim=True)     # (1, n_hidden)
    hpreact = bngain * (hpreact - bnmeani) / bnstdi + bnbias
    with torch.no_grad():
        bnmean_running = 0.999 * bnmean_running + 0.001 * bnmeani
        bnstd_running = 0.999 * bnstd_running + 0.001 * bnstdi
    # -------------------------------------------------------------
    # Non-linearity
    h = torch.tanh(hpreact)   
    # output layer   
    logits = h @ W2 + b2  
    loss = F.cross_entropy(logits, Yb) # loss function
    # backward pass
    for p in parameters:
        p.grad = None
    loss.backward()
  
    # update
    lr = 0.1 if i < 10000 else 0.01   
    for p in parameters:
        p.data += -lr * p.grad
    scaled_lossi.append(loss.log10().item())

Training/test loss:

with torch.no_grad():
    # pass the training set through
    emb = C[Xtr]
    embcat = emb.view(emb.shape[0], -1)
    hpreact = embcat @ W1  + b1
    # measure the mean/std over the entire training set (an alternative to the running estimates; not used by split_loss below)
    bnmean = hpreact.mean(0, keepdim=True)
    bnstd = hpreact.std(0, keepdim=True)
@torch.no_grad() # this decorator disables gradient tracking
def split_loss(split):
    x,y = {'train': (Xtr, Ytr),
           'test': (Xte, Yte),}[split]
    emb = C[x] # (N, block_size, n_embd)
    embcat = emb.view(emb.shape[0], -1) # concat into (N, block_size * n_embd)
    hpreact = embcat @ W1  + b1
    #hpreact = bngain * (hpreact - hpreact.mean(0, keepdim=True)) / hpreact.std(0, keepdim=True) + bnbias
    hpreact = bngain * (hpreact - bnmean_running) / bnstd_running + bnbias
    h = torch.tanh(hpreact)    # (N, n_hidden)
    logits = h @ W2 + b2       # (N, vocab_size)
    loss = F.cross_entropy(logits, y)
    print(split, loss.item())

split_loss('train')
split_loss('test')
train 3.085115909576416
test 3.104541540145874
plt.figure(figsize=(10, 5))
plt.plot(lossi, label='Unscaled parameters')
plt.plot(scaled_lossi, alpha=0.5, label='Scaled parameters')
plt.legend()

[Figure: log10 loss curves, unscaled vs. scaled initialization]

Scaling down the randomly initialized weights significantly reduces the model's initial loss.

Log loss comparison

  • base
    • test : 3.3062
  • add batch norm
    • train : 3.2291
    • test : 3.2377
  • add batch norm and scaled parameters
    • train : 3.0851
    • test : 3.1045

Why normalize, and why scale down the weights?

First, look at the quantity directly tied to the loss: the predicted logits.

# suppose these are the outputs of the output layer
logits = torch.rand((1, 10))*10
logits
tensor([[0.6693, 1.1769, 4.6489, 6.4311, 8.7869, 5.6321, 0.4762, 7.6668, 5.5291,
         4.9612]])
loss = F.cross_entropy(logits, torch.tensor([1]))
loss
tensor(8.0425)
# loss after scaling the logits down
loss = F.cross_entropy(logits*0.01, torch.tensor([1]))
loss
tensor(2.3372)

At initialization the logits are random, so the larger their magnitude, the more confidently wrong the predictions are and the larger the loss. Since logits = h @ W2 + b2, shrinking W2 and b2 shrinks the logits, which significantly reduces the model's initial loss.

In this example, (5/3)/((n_embd * block_size)**0.5) ≈ 0.83 (the tanh gain 5/3 divided by the square root of the fan-in), which is essentially also a shrinking of the randomly initialized weights.
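
For reference, if the logits were all (near) zero at initialization, the prediction would be uniform over the vocabulary and the cross-entropy would be about log(1651) ≈ 7.4; a quick sanity check:

# expected initial loss for a uniform prediction: -log(1/vocab_size) = log(vocab_size)
import math
print(math.log(vocab_size))                                                    # ~7.41
print(F.cross_entropy(torch.zeros(1, vocab_size), torch.tensor([1])).item())   # same value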

Next, look at hpreact, the hidden-layer pre-activation: hpreact = embcat @ W1 + b1.

# suppose this is the hidden-layer pre-activation, for a hidden layer with 20 neurons
hpreact = torch.randn((32, 20))*10
hpreact[0]
tensor([  5.4474,   0.8826,  -9.8720,  12.3268, -19.7285,   2.5135,  -9.5221,
          7.9822, -11.6153, -10.5080, -10.6796,   3.6791,  -0.7050,  14.4790,
          7.3994, -18.2474,  11.5146,   0.6579,  -6.6393,  -6.7630])
# hidden-layer output after the tanh activation
h = torch.tanh(hpreact)
h[0]
tensor([ 1.0000,  0.7077, -1.0000,  1.0000, -1.0000,  0.9870, -1.0000,  1.0000,
        -1.0000, -1.0000, -1.0000,  0.9987, -0.6076,  1.0000,  1.0000, -1.0000,
         1.0000,  0.5770, -1.0000, -1.0000])
# fraction of activations with absolute value >= 0.99
torch.sum(torch.abs(h) >= 0.99)/(20*32)
tensor(0.7875)

After the tanh activation, roughly 79% of the outputs have an absolute value of about 1, which is alarming. Below is the tanh function, implemented as a method of the hand-written Value autograd class:

def tanh(self):
    x = self.data
    t = (math.exp(2*x) - 1)/(math.exp(2*x) + 1)
    out = Value(t, (self, ), 'tanh')
    
    def _backward():
      self.grad += (1 - t**2) * out.grad
    out._backward = _backward
    
    return out

In the backward pass, self.grad += (1 - t**2) * out.grad, where t is the tanh output. If most values of t are close to -1 or 1, then (1 - t**2) ≈ 0 for most of them, so hardly any gradient flows back through those neurons and the layer cannot train properly.
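
The same effect can be checked directly with PyTorch autograd; a minimal sketch with made-up inputs:

# gradient of tanh at non-saturated vs. saturated inputs
x = torch.tensor([0.1, 3.0, -10.0], requires_grad=True)
t = torch.tanh(x)
t.sum().backward()
print(t)        # ~[ 0.0997,  0.9951, -1.0000]
print(x.grad)   # 1 - t**2: ~[0.9901, 0.0099, 0.0000] -> saturated units receive almost no gradient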

How can this be fixed?

  1. Normalize hpreact
# normalizing hpreact has roughly the same effect as scaling W1 and b1
# a fresh hidden-layer pre-activation
hpreact = torch.randn((32, 20))*10
hpreact[0]
tensor([ -1.6678,  -5.1004,   4.6603,  -6.7397,  11.6537, -12.1372,  12.5041,
         -6.4717,  -8.0874,  12.1796,  -2.7098, -13.1736,   9.8013,  -2.1097,
          4.5570, -10.4803,  -4.0452,  11.1274,  11.3966,   3.9012])
# normalize hpreact before the activation
hpreact = (hpreact - hpreact.mean(axis=0, keepdim=True))/hpreact.std(axis=0, keepdim=True)
hpreact[0]
tensor([-0.0923, -0.7857,  0.4576, -0.5444,  1.2959, -1.0164,  1.3767, -0.5830,
        -0.4439,  1.0640, -0.0931, -1.0887,  0.9777, -0.2024,  0.4199, -1.4186,
        -0.1238,  1.2435,  1.3699,  0.3593])
# hidden-layer output after the tanh activation
h = torch.tanh(hpreact)
h[0]
tensor([-0.0920, -0.6560,  0.4281, -0.4963,  0.8607, -0.7684,  0.8802, -0.5248,
        -0.4169,  0.7872, -0.0929, -0.7964,  0.7521, -0.1997,  0.3968, -0.8893,
        -0.1231,  0.8465,  0.8787,  0.3446])
# fraction of activations with absolute value >= 0.99
torch.sum(torch.abs(h) >= 0.99)/(20*32)
tensor(0.0063)

After BatchNorm, most neurons receive meaningful gradient updates.
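
The scaled initialization used earlier works for the same reason: multiplying W1 by the tanh gain 5/3 divided by the square root of the fan-in keeps the pre-activation spread moderate, so far fewer units saturate. A small sketch with made-up sizes:

# shrink the weights instead of normalizing the pre-activation
fan_in = 20
x = torch.randn(32, fan_in)
W = torch.randn(fan_in, 20) * (5/3) / fan_in**0.5
h = torch.tanh(x @ W)
print((h.abs() >= 0.99).float().mean())   # far fewer saturated units than the ~79% seen above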

DNN model

# fully connected layer
class Linear:

    def __init__(self, fan_in, fan_out, bias=True):
        self.weight = torch.randn((fan_in, fan_out), generator=g) / fan_in**0.5
        self.bias = torch.zeros(fan_out) if bias else None
  
    def __call__(self, x):
        self.out = x @ self.weight
        if self.bias is not None:
            self.out += self.bias
        return self.out
  
    def parameters(self):
        return [self.weight] + ([] if self.bias is None else [self.bias])
# batch normalization layer
class BatchNorm1d:
  
    def __init__(self, dim, eps=1e-5, momentum=0.1):
        self.eps = eps
        self.momentum = momentum
        self.training = True
        # parameters (trained with backprop)
        self.gamma = torch.ones(dim)
        self.beta = torch.zeros(dim)
        # buffers (trained with a running 'momentum update')
        self.running_mean = torch.zeros(dim)
        self.running_var = torch.ones(dim)
  
    def __call__(self, x):
        # calculate the forward pass
        if self.training:
            xmean = x.mean(0, keepdim=True) # batch mean
            xvar = x.var(0, keepdim=True)   # batch variance
        else:
            xmean = self.running_mean
            xvar = self.running_var
        xhat = (x - xmean) / torch.sqrt(xvar + self.eps) # normalize to unit variance
        self.out = self.gamma * xhat + self.beta
        # update the buffers
        if self.training:
            with torch.no_grad():
                self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * xmean
                self.running_var = (1 - self.momentum) * self.running_var + self.momentum * xvar
        return self.out
  
    def parameters(self):
        return [self.gamma, self.beta]
class Tanh:
    def __call__(self, x):
        self.out = torch.tanh(x)
        return self.out
    def parameters(self):
        return []
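
A quick, illustrative sanity check of these hand-written layers on random data (sizes chosen arbitrarily): a Linear → BatchNorm1d → Tanh stack should produce activations with roughly zero mean and a standard deviation below 1.

# illustrative sanity check of the layer classes (arbitrary sizes)
_lin = Linear(10, 30, bias=False)
_bn  = BatchNorm1d(30)
_act = Tanh()
_h = _act(_bn(_lin(torch.randn(64, 10))))
print(_h.shape, _h.mean().item(), _h.std().item())   # torch.Size([64, 30]), mean ~0, std < 1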

Initialize the model parameters:

n_embd = 2      
n_hidden = 100  
vocab_size = len(char2i) 
g = torch.Generator().manual_seed(2147483647) # for reproducibility
C = torch.randn((vocab_size, n_embd), generator=g)
layers = [
    Linear(n_embd * block_size, n_hidden, bias=False), BatchNorm1d(n_hidden), Tanh(),
    Linear(           n_hidden, n_hidden, bias=False), BatchNorm1d(n_hidden), Tanh(),
    Linear(           n_hidden, n_hidden, bias=False), BatchNorm1d(n_hidden), Tanh(),
    Linear(           n_hidden, vocab_size, bias=False)]

with torch.no_grad():
    # last layer: make less confident
    #layers[-1].gamma *= 0.1
    #layers[-1].weight *= 0.1
    # all other layers: apply gain
    for layer in layers[:-1]:
        if isinstance(layer, Linear):
            layer.weight *= 0.01 #5/3
parameters = [C] + [p for layer in layers for p in layer.parameters()]
print(sum(p.nelement() for p in parameters)) # number of parameters in total
for p in parameters:
    p.requires_grad = True
189402

Train the DNN model:

# same optimization as last time
max_steps = 20000
batch_size = 32
lossi = []
ud = []

for i in range(max_steps):
    # minibatch data
    ix = torch.randint(0, Xtr.shape[0], (batch_size,), generator=g)
    Xb, Yb = Xtr[ix], Ytr[ix] # batch X,Y
  
    # forward pass
    emb = C[Xb] # embed the characters into vectors
    x = emb.view(emb.shape[0], -1) # concatenate the vectors
    for layer in layers:
        x = layer(x)
    loss = F.cross_entropy(x, Yb) # loss function
  
    # backward pass
    for layer in layers:
        layer.out.retain_grad()   # AFTER_DEBUG: would take out retain_grad
    for p in parameters:
        p.grad = None
    loss.backward()
  
    # update
    lr = 0.1 if i < 15000 else 0.01 # step learning rate decay
    for p in parameters:
        p.data += -lr * p.grad

     
    lossi.append(loss.log10().item())
    with torch.no_grad():
        ud.append([((lr*p.grad).std() / p.data.std()).log10().item() for p in parameters])

    #if i >= 1000:
    #    break # AFTER_DEBUG: would take out obviously to run full optimization

Visualize activations, gradients, and parameters:

# visualize activation histograms
plt.figure(figsize=(10, 3)) # width and height of the plot
legends = []
for i, layer in enumerate(layers[:-1]): # note: exclude the output layer
    if isinstance(layer, Tanh):
        t = layer.out
        print('layer %d (%10s): mean %+.2f, std %.2f, saturated: %.2f%%' % (i, layer.__class__.__name__, t.mean(), t.std(), (t.abs() > 0.97).float().mean()*100))
        hy, hx = torch.histogram(t, density=True)
        plt.plot(hx[:-1].detach(), hy.detach())
        legends.append(f'layer {i} ({layer.__class__.__name__})')
plt.legend(legends)
plt.title('activation distribution')
layer 2 (      Tanh): mean -0.01, std 0.66, saturated: 1.62%
layer 5 (      Tanh): mean +0.00, std 0.68, saturated: 1.28%
layer 8 (      Tanh): mean -0.02, std 0.70, saturated: 0.44%

[Figure: activation distribution of the Tanh layers]

# visualize gradient histograms
plt.figure(figsize=(10, 3)) # width and height of the plot
legends = []
for i, layer in enumerate(layers[:-1]): # note: exclude the output layer
    if isinstance(layer, Tanh):
        t = layer.out.grad
        print('layer %d (%10s): mean %+f, std %e' % (i, layer.__class__.__name__, t.mean(), t.std()))
        hy, hx = torch.histogram(t, density=True)
        plt.plot(hx[:-1].detach(), hy.detach())
        legends.append(f'layer {i} ({layer.__class__.__name__})')
plt.legend(legends)
plt.title('gradient distribution')
layer 2 (      Tanh): mean +0.000000, std 1.148749e-03
layer 5 (      Tanh): mean -0.000000, std 1.178951e-03
layer 8 (      Tanh): mean -0.000058, std 2.413830e-03

[Figure: gradient distribution of the Tanh layers]

# visualize histograms
plt.figure(figsize=(10, 3)) # width and height of the plot
legends = []
for i,p in enumerate(parameters):
    t = p.grad
    if p.ndim == 2:
        print('weight %10s | mean %+f | std %e | grad:data ratio %e' % (tuple(p.shape), t.mean(), t.std(), t.std() / p.std()))
        hy, hx = torch.histogram(t, density=True)
        plt.plot(hx[:-1].detach(), hy.detach())
        legends.append(f'{i} {tuple(p.shape)}')
plt.legend(legends)
plt.title('weights gradient distribution')
weight  (1651, 2) | mean -0.000000 | std 5.618064e-04 | grad:data ratio 5.536448e-04
weight   (4, 100) | mean -0.000148 | std 5.627263e-03 | grad:data ratio 1.135445e-02
weight (100, 100) | mean -0.000013 | std 7.010635e-04 | grad:data ratio 2.180403e-03
weight (100, 100) | mean -0.000004 | std 1.754580e-03 | grad:data ratio 6.728885e-03
weight (100, 1651) | mean +0.000000 | std 2.069748e-03 | grad:data ratio 1.988948e-02

[Figure: weight-gradient distribution]
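
The ud list collected during training (the log10 of the update-size to data-size ratio for each parameter) can be plotted in the same style; a common heuristic is that these ratios should sit around 1e-3, i.e. near -3 on the log10 scale. A sketch:

# visualize the update:data ratios collected in `ud` during training
plt.figure(figsize=(10, 3))
legends = []
for i, p in enumerate(parameters):
    if p.ndim == 2:
        plt.plot([ud[j][i] for j in range(len(ud))])
        legends.append(f'param {i} {tuple(p.shape)}')
plt.plot([0, len(ud)], [-3, -3], 'k')   # heuristic target: ratios around 1e-3
plt.legend(legends)
plt.title('update:data ratio (log10) over training')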

Evaluation

@torch.no_grad() # this decorator disables gradient tracking
def split_loss(split):
    x,y = {'train': (Xtr, Ytr),'test': (Xte, Yte),}[split]
    emb = C[x] # (N, block_size, n_embd)
    x = emb.view(emb.shape[0], -1) # concat into (N, block_size * n_embd)
    for layer in layers:
        x = layer(x)
    loss = F.cross_entropy(x, y)
    print(split, loss.item())

# put layers into eval mode
for layer in layers:
    layer.training = False
split_loss('train')
split_loss('test')
train 3.086639881134033
test 3.101759433746338
# sample from the model
g = torch.Generator().manual_seed(2147483647 + 10)

for _ in range(10):
    out = []
    context = [0] * block_size # initialize with all ...
    while True:
        # forward pass the neural net
        emb = C[torch.tensor([context])] # (1,block_size,n_embd)
        x = emb.view(emb.shape[0], -1) # concatenate the vectors
        for layer in layers:
            x = layer(x)
        logits = x
        probs = F.softmax(logits, dim=1)
        # sample from the distribution
        ix = torch.multinomial(probs, num_samples=1, generator=g).item()
        # shift the context window and track the samples
        context = context[1:] + [ix]
        out.append(ix)
        # if we sample the special '.' token, break
        if ix == 0:
            break
    print(''.join(i2char[i] for i in out)) # decode and print the generated word
王才新.
王继东.
王忠营.
王志存.
王胜滨.
王其旗.
王章章.
王铁江.
王三生.
王柏健.
