[Computer Vision] DINOv2 (Vision Foundation Model): Code Usage and Testing (Complete Source Code)


Table of Contents

  • 1. Environment Setup
  • 2. Loading the Input Image
    • 2.1 Using the vit_s14 model
  • 3. Using Other Models
    • 3.1 Using the vit_b14 model
    • 3.2 Using the vit_l14 model
    • 3.3 Using the vit_g14 model

1. Environment Setup

!git clone https://ghproxy.com/https://github.com/facebookresearch/dinov2.git

The output is:

Cloning into 'dinov2'...
remote: Enumerating objects: 141, done.
remote: Counting objects: 100% (96/96), done.
remote: Compressing objects: 100% (74/74), done.
remote: Total 141 (delta 40), reused 31 (delta 22), pack-reused 45
Receiving objects: 100% (141/141), 101.01 KiB | 348.00 KiB/s, done.
Resolving deltas: 100% (42/42), done.

This is a Git command that clones the "dinov2" repository. The "ghproxy.com" prefix routes the request through a mirror proxy, which speeds up cloning from GitHub where direct access is slow.
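
If your environment can reach GitHub directly, the proxy prefix can simply be dropped:

!git clone https://github.com/facebookresearch/dinov2.git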

!pip install -r /kaggle/working/dinov2/requirements.txt


!pip install scikit-learn -i https://pypi.tuna.tsinghua.edu.cn/simple


2. Loading the Input Image

%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

image = mpimg.imread('/kaggle/input/demo-image/1 (4).png')

plt.imshow(image)
plt.axis('off')
plt.show()

# Print the image dimensions
print("Image size: {} x {} x {}".format(image.shape[0], image.shape[1], image.shape[2]))

[Figure: the demo image rendered with matplotlib]

Image size: 1376 x 920 x 3

Next, we switch the working directory to the cloned repository so that torch.hub can later load the model from the local source:

import os

input_path = "/kaggle/working/dinov2"
os.chdir(input_path)
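
As a side note, when the network can reach GitHub directly, the local clone is not strictly required: torch.hub can fetch the repository and the pretrained weights by itself, which is the usage documented in the DINOv2 README:

import torch

# Downloads the repo and the pretrained ViT-S/14 weights on first use
dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')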

2.1 Using the vit_s14 model

import torch
import torchvision.transforms as T
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.image as mpimg 
from PIL import Image
from sklearn.decomposition import PCA
import matplotlib
 
patch_h = 75
patch_w = 50
feat_dim = 384
 
transform = T.Compose([
    T.GaussianBlur(9, sigma=(0.1, 2.0)),
    T.Resize((patch_h * 14, patch_w * 14)),
    T.CenterCrop((patch_h * 14, patch_w * 14)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
 
dinov2_vits14 = torch.hub.load('', 'dinov2_vits14', source='local').cuda()

features = torch.zeros(4, patch_h * patch_w, feat_dim)
imgs_tensor = torch.zeros(4, 3, patch_h * 14, patch_w * 14).cuda()

img_path = '/kaggle/input/demo-image/1 (4).png'
img = Image.open(img_path).convert('RGB')
imgs_tensor[0] = transform(img)[:3]
with torch.no_grad():
    features_dict = dinov2_vits14.forward_features(imgs_tensor)
    features = features_dict['x_norm_patchtokens']
    
features = features.reshape(4 * patch_h * patch_w, feat_dim).cpu()
pca = PCA(n_components=3)
pca.fit(features)
pca_features = pca.transform(features)
pca_features[:, 0] = (pca_features[:, 0] - pca_features[:, 0].min()) / (pca_features[:, 0].max() - pca_features[:, 0].min())
 
pca_features_fg = pca_features[:, 0] > 0.3
pca_features_bg = ~pca_features_fg
 
b = np.where(pca_features_bg)

pca.fit(features[pca_features_fg])
pca_features_rem = pca.transform(features[pca_features_fg])
for i in range(3):
    pca_features_rem[:, i] = (pca_features_rem[:, i] - pca_features_rem[:, i].min()) / (pca_features_rem[:, i].max() - pca_features_rem[:, i].min())
    # transform using mean and std, I personally found this transformation gives a better visualization
    # pca_features_rem[:, i] = (pca_features_rem[:, i] - pca_features_rem[:, i].mean()) / (pca_features_rem[:, i].std() ** 2) + 0.5

pca_features_rgb = pca_features.copy()
pca_features_rgb[pca_features_fg] = pca_features_rem
pca_features_rgb[b] = 0

pca_features_rgb = pca_features_rgb.reshape(4, patch_h, patch_w, 3)
plt.imshow(pca_features_rgb[0][...,::-1])
plt.savefig('features.png')
plt.show()
plt.close()

Here is a line-by-line walkthrough of the code:

import torch
import torchvision.transforms as T
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.image as mpimg 
from PIL import Image
from sklearn.decomposition import PCA
import matplotlib

# Patch grid height and width
patch_h = 75
patch_w = 50
# Feature dimension of ViT-S/14
feat_dim = 384

# Define the image preprocessing pipeline
transform = T.Compose([
    T.GaussianBlur(9, sigma=(0.1, 2.0)),  # Gaussian blur
    T.Resize((patch_h * 14, patch_w * 14)),  # resize
    T.CenterCrop((patch_h * 14, patch_w * 14)),  # center crop
    T.ToTensor(),  # convert to a tensor
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),  # normalize
])

# Load the dinov2_vits14 model via torch.hub and move it to the GPU
dinov2_vits14 = torch.hub.load('', 'dinov2_vits14', source='local').cuda()

# Allocate zero tensors for the features and the image batch
features = torch.zeros(4, patch_h * patch_w, feat_dim)
imgs_tensor = torch.zeros(4, 3, patch_h * 14, patch_w * 14).cuda()

# Image path
img_path = '/kaggle/input/demo-image/1 (4).png'
# Open the image and convert it to RGB
img = Image.open(img_path).convert('RGB')
# Apply the transform and store the result in the first batch slot
imgs_tensor[0] = transform(img)[:3]

# Disable gradient computation
with torch.no_grad():
    # Run the image batch through dinov2_vits14 to extract features
    features_dict = dinov2_vits14.forward_features(imgs_tensor)
    features = features_dict['x_norm_patchtokens']

# Reshape the features to (4 * patch_h * patch_w, feat_dim)
features = features.reshape(4 * patch_h * patch_w, feat_dim).cpu()

# Create a PCA object and fit it to the features
pca = PCA(n_components=3)
pca.fit(features)

# Project the features and min-max normalize the first component
pca_features = pca.transform(features)
pca_features[:, 0] = (pca_features[:, 0] - pca_features[:, 0].min()) / (pca_features[:, 0].max() - pca_features[:, 0].min())

# Split foreground and background with a threshold
pca_features_fg = pca_features[:, 0] > 0.3
pca_features_bg = ~pca_features_fg

# Indices of the background patches
b = np.where(pca_features_bg)

# Run PCA again on the foreground features only
pca.fit(features[pca_features_fg])
pca_features_rem = pca.transform(features[pca_features_fg])

# Min-max normalize each channel of the foreground features
for i in range(3):
    pca_features_rem[:, i] = (pca_features_rem[:, i] - pca_features_rem[:, i].min()) / (pca_features_rem[:, i].max() - pca_features_rem[:, i].min())
    # Alternative: scale with mean and variance; the author found this gives a better visualization
    # pca_features_rem[:, i] = (pca_features_rem[:, i] - pca_features_rem[:, i].mean()) / (pca_features_rem[:, i].std() ** 2) + 0.5

# Build the RGB feature array
pca_features_rgb = pca_features.copy()

# Replace the foreground entries with the re-projected features
pca_features_rgb[pca_features_fg] = pca_features_rem

# Zero out the background entries
pca_features_rgb[b] = 0

# Reshape to (4, patch_h, patch_w, 3)
pca_features_rgb = pca_features_rgb.reshape(4, patch_h, patch_w, 3)

# Display the RGB visualization of the first image
plt.imshow(pca_features_rgb[0][..., ::-1])
plt.savefig('features.png')
plt.show()
plt.close()

This code preprocesses the input image, extracts DINOv2 patch features, and reduces them to three dimensions with PCA. A threshold on the first principal component then separates foreground from background; the foreground features are projected again with PCA and rendered as an RGB image. Note that the specific values and paths may need to be adjusted for your own data and environment.

[Figure: PCA visualization of the ViT-S/14 patch features]
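
The 0.3 cut on the first principal component is an empirical choice. For a different image it can help to look at the component's distribution before picking a threshold; this is only a suggestion, not part of the original pipeline:

# Histogram of the (min-max normalized) first PCA component,
# useful for eyeballing a foreground/background threshold
plt.hist(pca_features[:, 0], bins=100)
plt.xlabel('first PCA component')
plt.ylabel('patch count')
plt.show()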

print(features)
print(features.shape)

The output is:

tensor([[-1.3500, -4.8793, -1.4393,  ...,  2.3347,  1.6834, -2.9632],
        [-0.4650, -6.4163, -1.5503,  ...,  2.2055,  2.5527, -3.2553],
        [-0.6371, -6.2615, -0.7516,  ...,  3.1827,  2.3861, -2.6838],
        ...,
        [ 1.9385,  0.0726, -0.5395,  ...,  0.3876, -1.4914, -4.5422],
        [ 1.6399, -0.0860,  0.4701,  ...,  1.0180, -0.8897, -5.2614],
        [ 1.6084, -0.0669,  0.7341,  ...,  1.0633, -0.9713, -5.3548]])
torch.Size([15000, 384])
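
The 15000 rows follow directly from the patch grid: each image contributes patch_h * patch_w = 3750 patch tokens and the batch holds 4 images. A quick sanity check:

patch_h, patch_w, batch_size = 75, 50, 4
tokens_per_image = patch_h * patch_w   # 3750 patch tokens per image
print(batch_size * tokens_per_image)   # 15000, matching features.shape[0]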

The PCA-reduced features:

print(pca_features)
print(pca_features.shape)

The output (note that only the first column was min-max normalized above, which is why the other two columns still hold raw projection values):

[[  0.81004055   2.458559    12.11051576]
 [  0.79562888   5.65071716  10.84007045]
 [  0.82050109   5.55007889   9.05274001]
 ...
 [  0.27618588 -18.96898667  19.48198916]
 [  0.31861323 -12.21414371  14.19802898]
 [  0.34356016 -10.82144825  13.74648131]]
(15000, 3)

Let's look at the composition of the dictionary returned by forward_features:

features_dict

{'x_norm_clstoken': tensor([[ 2.2549, -1.5661,  4.4978,  ...,  1.4984, -5.8642, -0.8560],
         [ 1.8816,  2.4343,  1.4931,  ..., -1.3401, -2.5460,  1.3967],
         [ 1.8816,  2.4343,  1.4931,  ..., -1.3401, -2.5460,  1.3967],
         [ 1.8816,  2.4343,  1.4931,  ..., -1.3401, -2.5460,  1.3967]],
        device='cuda:0'),
 'x_norm_patchtokens': tensor([[[-1.3500, -4.8793, -1.4393,  ...,  2.3347,  1.6834, -2.9632],
          [-0.4650, -6.4163, -1.5503,  ...,  2.2055,  2.5527, -3.2553],
          [-0.6371, -6.2615, -0.7516,  ...,  3.1827,  2.3861, -2.6838],
          ...,
          [-0.8778, -0.0251, -0.2867,  ...,  4.7801, -2.0887, -4.5910],
          [-1.2309,  0.2852,  0.7693,  ...,  5.0635, -1.1529, -6.0175],
          [-1.7551,  1.1333, -0.0898,  ...,  4.1885, -3.3197, -5.7227]],
 
         [[ 0.9131, -4.9736, -0.6238,  ...,  0.2835, -0.3494, -0.4916],
          [ 1.0967, -6.0392, -0.7900,  ...,  0.2323,  0.0510,  0.0176],
          [ 1.3852, -5.8056, -1.2573,  ...,  0.0549, -0.3270, -0.4510],
          ...,
          [ 1.9385,  0.0726, -0.5395,  ...,  0.3877, -1.4914, -4.5422],
          [ 1.6399, -0.0860,  0.4701,  ...,  1.0180, -0.8897, -5.2614],
          [ 1.6084, -0.0669,  0.7341,  ...,  1.0633, -0.9713, -5.3548]],
 
         [[ 0.9131, -4.9736, -0.6238,  ...,  0.2835, -0.3494, -0.4916],
          [ 1.0967, -6.0392, -0.7900,  ...,  0.2323,  0.0510,  0.0176],
          [ 1.3852, -5.8056, -1.2573,  ...,  0.0549, -0.3270, -0.4510],
          ...,
          [ 1.9385,  0.0726, -0.5395,  ...,  0.3877, -1.4914, -4.5422],
          [ 1.6399, -0.0860,  0.4701,  ...,  1.0180, -0.8897, -5.2614],
          [ 1.6085, -0.0669,  0.7341,  ...,  1.0633, -0.9713, -5.3548]],
 
         [[ 0.9131, -4.9736, -0.6238,  ...,  0.2835, -0.3494, -0.4916],
          [ 1.0967, -6.0392, -0.7900,  ...,  0.2323,  0.0510,  0.0176],
          [ 1.3852, -5.8056, -1.2573,  ...,  0.0549, -0.3270, -0.4511],
          ...,
          [ 1.9385,  0.0726, -0.5395,  ...,  0.3876, -1.4914, -4.5422],
          [ 1.6399, -0.0860,  0.4701,  ...,  1.0180, -0.8897, -5.2614],
          [ 1.6084, -0.0669,  0.7341,  ...,  1.0633, -0.9713, -5.3548]]],
        device='cuda:0'),
 'x_prenorm': tensor([[[ 4.7546e-01, -3.4794e-02,  1.1905e+00,  ...,  3.3896e-01,
           -1.2591e+00, -8.1724e-03],
          [-5.2994e-01, -3.0311e-01, -2.0162e-01,  ...,  9.4372e-01,
            8.7399e-01, -3.2527e-01],
          [-1.5728e-01, -3.9359e-01, -2.1482e-01,  ...,  9.0485e-01,
            1.2325e+00, -3.3923e-01],
          ...,
          [-4.9091e-01,  1.1081e-02,  1.9814e-01,  ...,  2.0630e+00,
           -8.5562e-01, -7.6588e-01],
          [-6.0861e-01,  5.2204e-02,  6.6299e-01,  ...,  2.1127e+00,
           -3.8590e-01, -9.7335e-01],
          [-9.3785e-01,  1.2485e-01,  3.0359e-01,  ...,  1.9137e+00,
           -1.5223e+00, -1.0352e+00]],
 
         [[ 4.4059e-01,  1.4807e-01,  5.9425e-01,  ..., -3.4851e-01,
           -6.1687e-01,  2.0463e-01],
          [ 3.1511e-01, -3.3073e-01,  9.0955e-02,  ...,  1.3627e-01,
            1.8562e-02,  4.2850e-02],
          [ 3.8695e-01, -4.1345e-01,  2.8734e-02,  ...,  1.1916e-01,
            1.8061e-01,  1.2469e-01],
          ...,
          [ 6.3855e-01,  1.9967e-03,  5.6187e-02,  ...,  1.0780e-01,
           -5.0606e-01, -6.6095e-01],
          [ 5.6617e-01,  4.9071e-03,  4.8375e-01,  ...,  3.7527e-01,
           -2.6194e-01, -7.9524e-01],
          [ 5.6790e-01,  1.4408e-02,  6.0538e-01,  ...,  4.0537e-01,
           -2.9182e-01, -8.1226e-01]],
 
         [[ 4.4059e-01,  1.4807e-01,  5.9424e-01,  ..., -3.4851e-01,
           -6.1687e-01,  2.0463e-01],
          [ 3.1511e-01, -3.3073e-01,  9.0957e-02,  ...,  1.3627e-01,
            1.8564e-02,  4.2850e-02],
          [ 3.8695e-01, -4.1345e-01,  2.8733e-02,  ...,  1.1916e-01,
            1.8061e-01,  1.2469e-01],
          ...,
          [ 6.3855e-01,  1.9971e-03,  5.6186e-02,  ...,  1.0780e-01,
           -5.0606e-01, -6.6095e-01],
          [ 5.6617e-01,  4.9067e-03,  4.8375e-01,  ...,  3.7527e-01,
           -2.6194e-01, -7.9524e-01],
          [ 5.6790e-01,  1.4408e-02,  6.0538e-01,  ...,  4.0536e-01,
           -2.9182e-01, -8.1226e-01]],
 
         [[ 4.4059e-01,  1.4807e-01,  5.9424e-01,  ..., -3.4851e-01,
           -6.1687e-01,  2.0463e-01],
          [ 3.1511e-01, -3.3073e-01,  9.0956e-02,  ...,  1.3627e-01,
            1.8562e-02,  4.2849e-02],
          [ 3.8695e-01, -4.1344e-01,  2.8735e-02,  ...,  1.1916e-01,
            1.8061e-01,  1.2469e-01],
          ...,
          [ 6.3855e-01,  1.9964e-03,  5.6189e-02,  ...,  1.0780e-01,
           -5.0607e-01, -6.6095e-01],
          [ 5.6617e-01,  4.9066e-03,  4.8375e-01,  ...,  3.7527e-01,
           -2.6194e-01, -7.9524e-01],
          [ 5.6790e-01,  1.4408e-02,  6.0538e-01,  ...,  4.0537e-01,
           -2.9182e-01, -8.1226e-01]]], device='cuda:0'),
 'masks': None}
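
Rather than dumping the whole dictionary, we can also print just the keys and tensor shapes (a small sketch; masks is None here):

for key, value in features_dict.items():
    if value is None:
        print(key, '-> None')
    else:
        print(key, '->', tuple(value.shape))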

Note that the last three entries in the batch are identical: slots 1-3 of imgs_tensor were left as all-zero images, and only slot 0 holds the real picture. Now let's try a different visualization, scaling the foreground features with their mean and variance instead of min-max normalization:

patch_h = 75
patch_w = 50
feat_dim = 384
 
transform = T.Compose([
    T.GaussianBlur(9, sigma=(0.1, 2.0)),
    T.Resize((patch_h * 14, patch_w * 14)),
    T.CenterCrop((patch_h * 14, patch_w * 14)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
 
dinov2_vits14 = torch.hub.load('', 'dinov2_vits14', source='local').cuda()

features = torch.zeros(4, patch_h * patch_w, feat_dim)
imgs_tensor = torch.zeros(4, 3, patch_h * 14, patch_w * 14).cuda()

img_path = '/kaggle/input/demo-image/1 (4).png'
img = Image.open(img_path).convert('RGB')
imgs_tensor[0] = transform(img)[:3]
with torch.no_grad():
    features_dict = dinov2_vits14.forward_features(imgs_tensor)
    features = features_dict['x_norm_patchtokens']
    
features = features.reshape(4 * patch_h * patch_w, feat_dim).cpu()
pca = PCA(n_components=3)
pca.fit(features)
pca_features = pca.transform(features)
pca_features[:, 0] = (pca_features[:, 0] - pca_features[:, 0].min()) / (pca_features[:, 0].max() - pca_features[:, 0].min())
 
pca_features_fg = pca_features[:, 0] > 0.3
pca_features_bg = ~pca_features_fg
 
b = np.where(pca_features_bg)

pca.fit(features[pca_features_fg])
pca_features_rem = pca.transform(features[pca_features_fg])
for i in range(3):
    # transform using mean and std, I personally found this transformation gives a better visualization
    pca_features_rem[:, i] = (pca_features_rem[:, i] - pca_features_rem[:, i].mean()) / (pca_features_rem[:, i].std() ** 2) + 0.5

pca_features_rgb = pca_features.copy()
pca_features_rgb[pca_features_fg] = pca_features_rem
pca_features_rgb[b] = 0

pca_features_rgb = pca_features_rgb.reshape(4, patch_h, patch_w, 3)
plt.imshow(pca_features_rgb[0][...,::-1])
plt.savefig('features.png')
plt.show()
plt.close()

[Figure: PCA visualization with mean/variance scaling of the foreground features]
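
One caveat with the mean/variance scaling: unlike min-max normalization, it does not guarantee values in [0, 1]. plt.imshow silently clips float RGB data to that range, so the plot still renders; to make the clipping explicit, you can optionally add before plotting:

# Optional: make the implicit [0, 1] clipping explicit
pca_features_rgb = np.clip(pca_features_rgb, 0.0, 1.0)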

3. Using Other Models

3.1 Using the vit_b14 model

patch_h = 75
patch_w = 50
feat_dim = 768
 
transform = T.Compose([
    T.GaussianBlur(9, sigma=(0.1, 2.0)),
    T.Resize((patch_h * 14, patch_w * 14)),
    T.CenterCrop((patch_h * 14, patch_w * 14)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
 
dinov2_vitb14 = torch.hub.load('', 'dinov2_vitb14', source='local').cuda()

features = torch.zeros(4, patch_h * patch_w, feat_dim)
imgs_tensor = torch.zeros(4, 3, patch_h * 14, patch_w * 14).cuda()

img_path = '/kaggle/input/demo-image/1 (4).png'
img = Image.open(img_path).convert('RGB')
imgs_tensor[0] = transform(img)[:3]
with torch.no_grad():
    features_dict = dinov2_vitb14.forward_features(imgs_tensor)
    features = features_dict['x_norm_patchtokens']
    
features = features.reshape(4 * patch_h * patch_w, feat_dim).cpu()
pca = PCA(n_components=3)
pca.fit(features)
pca_features = pca.transform(features)
pca_features[:, 0] = (pca_features[:, 0] - pca_features[:, 0].min()) / (pca_features[:, 0].max() - pca_features[:, 0].min())
 
pca_features_fg = pca_features[:, 0] > 0.3
pca_features_bg = ~pca_features_fg
 
b = np.where(pca_features_bg)

pca.fit(features[pca_features_fg])
pca_features_rem = pca.transform(features[pca_features_fg])
for i in range(3):
    # transform using mean and std, I personally found this transformation gives a better visualization
    pca_features_rem[:, i] = (pca_features_rem[:, i] - pca_features_rem[:, i].mean()) / (pca_features_rem[:, i].std() ** 2) + 0.5

pca_features_rgb = pca_features.copy()
pca_features_rgb[pca_features_fg] = pca_features_rem
pca_features_rgb[b] = 0

pca_features_rgb = pca_features_rgb.reshape(4, patch_h, patch_w, 3)
plt.imshow(pca_features_rgb[0][...,::-1])
plt.savefig('features.png')
plt.show()
plt.close()

[Figure: PCA visualization with the ViT-B/14 model]

3.2 Using the vit_l14 model

patch_h = 75
patch_w = 50
feat_dim = 1024
 
transform = T.Compose([
    T.GaussianBlur(9, sigma=(0.1, 2.0)),
    T.Resize((patch_h * 14, patch_w * 14)),
    T.CenterCrop((patch_h * 14, patch_w * 14)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
 
dinov2_vitl14 = torch.hub.load('', 'dinov2_vitl14', source='local').cuda()

features = torch.zeros(4, patch_h * patch_w, feat_dim)
imgs_tensor = torch.zeros(4, 3, patch_h * 14, patch_w * 14).cuda()

img_path = '/kaggle/input/demo-image/1 (4).png'
img = Image.open(img_path).convert('RGB')
imgs_tensor[0] = transform(img)[:3]
with torch.no_grad():
    features_dict = dinov2_vitl14.forward_features(imgs_tensor)
    features = features_dict['x_norm_patchtokens']
    
features = features.reshape(4 * patch_h * patch_w, feat_dim).cpu()
pca = PCA(n_components=3)
pca.fit(features)
pca_features = pca.transform(features)
pca_features[:, 0] = (pca_features[:, 0] - pca_features[:, 0].min()) / (pca_features[:, 0].max() - pca_features[:, 0].min())
 
pca_features_fg = pca_features[:, 0] > 0.3
pca_features_bg = ~pca_features_fg
 
b = np.where(pca_features_bg)

pca.fit(features[pca_features_fg])
pca_features_rem = pca.transform(features[pca_features_fg])
for i in range(3):
    # transform using mean and std, I personally found this transformation gives a better visualization
    pca_features_rem[:, i] = (pca_features_rem[:, i] - pca_features_rem[:, i].mean()) / (pca_features_rem[:, i].std() ** 2) + 0.5

pca_features_rgb = pca_features.copy()
pca_features_rgb[pca_features_fg] = pca_features_rem
pca_features_rgb[b] = 0

pca_features_rgb = pca_features_rgb.reshape(4, patch_h, patch_w, 3)
plt.imshow(pca_features_rgb[0][...,::-1])
plt.savefig('features.png')
plt.show()
plt.close()

[Figure: PCA visualization with the ViT-L/14 model]

3.3 Using the vit_g14 model

patch_h = 75
patch_w = 50
feat_dim = 1536
 
transform = T.Compose([
    T.GaussianBlur(9, sigma=(0.1, 2.0)),
    T.Resize((patch_h * 14, patch_w * 14)),
    T.CenterCrop((patch_h * 14, patch_w * 14)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
 
dinov2_vitg14 = torch.hub.load('', 'dinov2_vitg14', source='local').cuda()

features = torch.zeros(4, patch_h * patch_w, feat_dim)
imgs_tensor = torch.zeros(4, 3, patch_h * 14, patch_w * 14).cuda()

img_path = '/kaggle/input/demo-image/1 (4).png'
img = Image.open(img_path).convert('RGB')
imgs_tensor[0] = transform(img)[:3]
with torch.no_grad():
    features_dict = dinov2_vitg14.forward_features(imgs_tensor)
    features = features_dict['x_norm_patchtokens']
    
features = features.reshape(4 * patch_h * patch_w, feat_dim).cpu()
pca = PCA(n_components=3)
pca.fit(features)
pca_features = pca.transform(features)
pca_features[:, 0] = (pca_features[:, 0] - pca_features[:, 0].min()) / (pca_features[:, 0].max() - pca_features[:, 0].min())
 
pca_features_fg = pca_features[:, 0] > 0.3
pca_features_bg = ~pca_features_fg
 
b = np.where(pca_features_bg)

pca.fit(features[pca_features_fg])
pca_features_rem = pca.transform(features[pca_features_fg])
for i in range(3):
    # transform using mean and std, I personally found this transformation gives a better visualization
    pca_features_rem[:, i] = (pca_features_rem[:, i] - pca_features_rem[:, i].mean()) / (pca_features_rem[:, i].std() ** 2) + 0.5

pca_features_rgb = pca_features.copy()
pca_features_rgb[pca_features_fg] = pca_features_rem
pca_features_rgb[b] = 0

pca_features_rgb = pca_features_rgb.reshape(4, patch_h, patch_w, 3)
plt.imshow(pca_features_rgb[0][...,::-1])
plt.savefig('features.png')
plt.show()
plt.close()

[Figure: PCA visualization with the ViT-G/14 model]
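
The four runs above differ only in the hub entry point and the feature dimension (dinov2_vits14/384, dinov2_vitb14/768, dinov2_vitl14/1024, dinov2_vitg14/1536). To avoid repeating the whole block, the logic can be wrapped in a helper. This is just a hedged refactor of the same steps; the name visualize_dinov2_pca is introduced here for convenience and is not part of the DINOv2 API:

import torch
import torchvision.transforms as T
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.decomposition import PCA

def visualize_dinov2_pca(model_name, feat_dim, img_path,
                         patch_h=75, patch_w=50, threshold=0.3):
    # Same preprocessing as above
    transform = T.Compose([
        T.GaussianBlur(9, sigma=(0.1, 2.0)),
        T.Resize((patch_h * 14, patch_w * 14)),
        T.CenterCrop((patch_h * 14, patch_w * 14)),
        T.ToTensor(),
        T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])

    # Load the requested DINOv2 variant from the local clone
    model = torch.hub.load('', model_name, source='local').cuda()

    # Batch of 4 with only the first slot holding the real image
    imgs_tensor = torch.zeros(4, 3, patch_h * 14, patch_w * 14).cuda()
    imgs_tensor[0] = transform(Image.open(img_path).convert('RGB'))[:3]

    with torch.no_grad():
        features = model.forward_features(imgs_tensor)['x_norm_patchtokens']
    features = features.reshape(4 * patch_h * patch_w, feat_dim).cpu().numpy()

    # First PCA over all patches; threshold component 0 for the foreground
    pca = PCA(n_components=3)
    pca_features = pca.fit_transform(features)
    first = pca_features[:, 0]
    pca_features[:, 0] = (first - first.min()) / (first.max() - first.min())
    fg = pca_features[:, 0] > threshold

    # Second PCA on the foreground patches, scaled with mean/variance
    rem = pca.fit_transform(features[fg])
    rem = (rem - rem.mean(axis=0)) / (rem.std(axis=0) ** 2) + 0.5

    rgb = pca_features.copy()
    rgb[fg] = rem
    rgb[~fg] = 0
    rgb = rgb.reshape(4, patch_h, patch_w, 3)
    plt.imshow(rgb[0][..., ::-1])
    plt.axis('off')
    plt.show()

# Usage, matching the runs above:
# visualize_dinov2_pca('dinov2_vitb14', 768, '/kaggle/input/demo-image/1 (4).png')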

