CMU-Multimodal SDK Version 1.2.0 (mmsdk): Windows Setup and Usage + a PyTorch Demo

I recently needed the CMU-MOSI dataset for some experiments, and tutorials online are scarce. After a day of exploration I finally got the dataset installed and configured, so this post documents the complete procedure for downloading and using it.

Environment: Windows 10, Anaconda

1. What to download

Step 1: Download the official SDK from GitHub: CMU-MultiComp-Lab/CMU-MultimodalSDK (github.com)

Step 2: Unzip it and keep a note of the extraction path; it is needed in the next section.

2. Anaconda environment setup

The official GitHub README describes the required environment setup, but its commands target Linux; on Windows, follow the steps below instead.

Step 1: In your Anaconda virtual environment, go to Lib\site-packages and create a text file named 'mypkpath'. Add the SDK path from the previous section to this file, save it, then change the file extension to '.pth'.
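For example, if the SDK was extracted to D:\CMU-MultimodalSDK-main (a placeholder path, substitute your own), mypkpath.pth contains exactly one line:

D:\CMU-MultimodalSDK-main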

Step 2: Install mmsdk's dependencies with the following command:

pip install h5py validators tqdm numpy argparse requests

Step 3: Windows and Linux use different path separators, so two files inside mmsdk need small edits.

File 1, at the following path:

****CMU-MultimodalSDK-main\mmsdk\mmdatasdk\computational_sequence\download_ops.py

At the end of this file, add an underscore between 'read' and 'URL' so the name becomes read_URL.

File 2 sits in the same directory: computational_sequence.py

****CMU-MultimodalSDK-main\mmsdk\mmdatasdk\computational_sequence\computational_sequence.py

Around line 68, change the separator from os.sep to '/'.
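For reference, the two edits look roughly like this (a sketch only; the exact surrounding code and line numbers may differ between SDK revisions):

# download_ops.py, last line: make the name match the read_URL function defined above
#   before: readURL(url, destination)
#   after:  read_URL(url, destination)

# computational_sequence.py, around line 68: download URLs always use '/',
# while os.sep is '\' on Windows, so splitting on os.sep yields the wrong filename
#   before: ... resource.split(os.sep)[-1] ...
#   after:  ... resource.split('/')[-1] ...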

Step 4: Activate the conda virtual environment and import mmsdk to verify the installation; if no error is raised, the setup succeeded:

from mmsdk import mmdatasdk as md

3. Downloading the dataset

Step 1: Download the dataset. DATA_PATH is the directory where the downloaded .csd files are stored:

from mmsdk import mmdatasdk as md

DATASET = md.cmu_mosi
DATA_PATH = './data'
try:
    md.mmdataset(DATASET.highlevel, DATA_PATH)
except Exception:
    # the SDK raises once the files already exist in DATA_PATH
    print('highlevel features have already been downloaded')
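Step 3 below also needs CMU_MOSI_Opinion_Labels.csd. If it is missing from DATA_PATH, the label set can be fetched the same way (md.cmu_mosi exposes a labels recipe alongside highlevel):

try:
    md.mmdataset(DATASET.labels, DATA_PATH)
except Exception:
    print('labels have already been downloaded')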

Step 2: Load the .csd files:

import os
from mmsdk import mmdatasdk as md

DATASET = md.cmu_mosi
DATA_PATH = './data'

data_files = os.listdir(DATA_PATH)
print('\n'.join(data_files))

visual_field = 'CMU_MOSI_Visual_Facet_41'
acoustic_field = 'CMU_MOSI_COVAREP'
text_field = 'CMU_MOSI_ModifiedTimestampedWords'

features = [
    text_field,
    visual_field,
    acoustic_field
]

recipe = {feat: os.path.join(DATA_PATH, feat) + '.csd' for feat in features}
dataset = md.mmdataset(recipe)
print(list(dataset[text_field].keys())[55])
print('done!')
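Each computational sequence maps a key to two numpy arrays, 'features' and 'intervals' (the same structure the training code in section 4 reads). A quick way to inspect one entry:

vid = list(dataset[text_field].keys())[55]
print(dataset[text_field][vid]['features'].shape)   # (num_words, 1), words stored as byte strings
print(dataset[text_field][vid]['intervals'].shape)  # (num_words, 2), start/end timestamps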

Step 3: Align the sequences and save the result. The aligned .csd files are written to the './deployed' folder, so next time the data can be loaded straight from 'deployed' instead of being downloaded and aligned again.

import numpy as np

def avg(intervals: np.array, features: np.array) -> np.array:
    try:
        return np.average(features, axis=0)
    except Exception:
        # features that cannot be averaged are passed through unchanged
        return features

# first we align to words with averaging; collapse_functions receives a list of functions
dataset.align(text_field, collapse_functions=[avg])
label_field = 'CMU_MOSI_Opinion_Labels'

# we add the labels and align to them to obtain labeled segments
# this time we don't apply collapse functions so that the temporal sequences are preserved
label_recipe = {label_field: os.path.join(DATA_PATH, label_field + '.csd')}
dataset.add_computational_sequences(label_recipe, destination=None)
dataset.align(label_field, replace=True)
print(list(dataset[text_field].keys())[55])
print('done!')
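After this second alignment, the dataset keys are segment-level ids of the form videoID[index]; the regular expression in section 4 relies on this format to recover the video ID when assigning segments to the train/dev/test splits.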


# save the aligned dataset so it can be reloaded directly next time
deploy_files = {x: x for x in dataset.computational_sequences.keys()}
dataset.deploy("./deployed", deploy_files)
aligned_cmumosi_highlevel = md.mmdataset('./deployed')

4. A small end-to-end training run

import os
import re
from collections import defaultdict

import numpy as np
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence
from torch.optim import Adam
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score
from tqdm import tqdm_notebook

from mmsdk import mmdatasdk

# word2id assigns the next free id to every new word on first lookup
word2id = defaultdict(lambda: len(word2id))
UNK = word2id['<unk>']
PAD = word2id['<pad>']
# def return_unk():
#     return UNK
# word2id.default_factory = return_unk
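The defaultdict trick means any new word receives an incrementing id on first access; once the vocabulary is built, splittraindevtest below switches default_factory so unseen words map to UNK instead. A quick sanity check (note this adds 'hello' to the vocabulary, so only run it when experimenting):

print(UNK, PAD)          # 0 1
print(word2id['hello'])  # 2, assigned on first access and stable afterwards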
def splittraindevtest(train_split, dev_split, test_split, dataset, features):
    EPS = 0  # zero-variance divisions below are cleaned up by np.nan_to_num
    text_field, visual_field, acoustic_field, label_field = features
    # word2id (defined above) grows automatically as new words are encountered
    # placeholders for the final train/dev/test dataset
    train = []
    dev = []
    test = []
    # define a regular expression to extract the video ID out of the keys
    pattern = re.compile(r'(.*)\[.*\]')
    num_drop = 0  # a counter to count how many data points went into some processing issues
    for segment in dataset[label_field].keys():

        # get the video ID and the features out of the aligned dataset
        vid = re.search(pattern, segment).group(1)
        label = dataset[label_field][segment]['features']
        _words = dataset[text_field][segment]['features']
        _visual = dataset[visual_field][segment]['features']
        _acoustic = dataset[acoustic_field][segment]['features']

        # if the sequences are not same length after alignment, there must be some problem with some modalities
        # we should drop it or inspect the data again
        if not _words.shape[0] == _visual.shape[0] == _acoustic.shape[0]:
            print(
                f"Encountered datapoint {vid} with text shape {_words.shape}, visual shape {_visual.shape}, acoustic shape {_acoustic.shape}")
            num_drop += 1
            continue

        # remove nan values
        label = np.nan_to_num(label)
        _visual = np.nan_to_num(_visual)
        _acoustic = np.nan_to_num(_acoustic)

        # remove speech pause tokens - this is in general helpful
        # we should remove speech pauses and corresponding visual/acoustic features together
        # otherwise modalities would no longer be aligned
        words = []
        visual = []
        acoustic = []
        for i, word in enumerate(_words):
            if word[0] != b'sp':
                words.append(word2id[word[0].decode('utf-8')])  # SDK stores strings as bytes, decode into strings here
                visual.append(_visual[i, :])
                acoustic.append(_acoustic[i, :])

        words = np.asarray(words)
        visual = np.asarray(visual)
        acoustic = np.asarray(acoustic)

        # z-normalization per instance and remove nan/infs
        visual = np.nan_to_num((visual - visual.mean(0, keepdims=True)) / (EPS + np.std(visual, axis=0, keepdims=True)))
        acoustic = np.nan_to_num(
            (acoustic - acoustic.mean(0, keepdims=True)) / (EPS + np.std(acoustic, axis=0, keepdims=True)))

        if vid in train_split:
            train.append(((words, visual, acoustic), label, segment))
        elif vid in dev_split:
            dev.append(((words, visual, acoustic), label, segment))
        elif vid in test_split:
            test.append(((words, visual, acoustic), label, segment))
        else:
            print(f"Found video that doesn't belong to any splits: {vid}")

    def return_unk():
        return UNK

    word2id.default_factory = return_unk
    print(f"Total number of {num_drop} datapoints have been dropped.")
    return train, dev, test

def multi_collate(batch):
    '''
    Collate functions assume batch = [Dataset[i] for i in index_set]
    '''
    # for later use we sort the batch in descending order of length
    batch = sorted(batch, key=lambda x: x[0][0].shape[0], reverse=True)

    # get the data out of the batch - use pad sequence util functions from PyTorch to pad things
    labels = torch.cat([torch.from_numpy(sample[1]) for sample in batch], dim=0)
    sentences = pad_sequence([torch.LongTensor(sample[0][0]) for sample in batch], padding_value=PAD)
    visual = pad_sequence([torch.FloatTensor(sample[0][1]) for sample in batch])
    acoustic = pad_sequence([torch.FloatTensor(sample[0][2]) for sample in batch])

    # lengths are useful later in using RNNs
    lengths = torch.LongTensor([sample[0][0].shape[0] for sample in batch])
    return sentences, visual, acoustic, labels, lengths
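Note that pad_sequence defaults to batch_first=False, so sentences, visual, and acoustic come out as (max_len, batch, dim); the LSTMs below rely on that layout. A minimal shape check:

toy = [torch.zeros(5, 3), torch.zeros(2, 3)]  # two sequences of lengths 5 and 2
print(pad_sequence(toy).shape)                # torch.Size([5, 2, 3]), i.e. (max_len, batch, dim)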


class LFLSTM(nn.Module):
    def __init__(self, input_sizes, hidden_sizes, fc1_size, output_size, dropout_rate):
        super(LFLSTM, self).__init__()
        self.input_size = input_sizes
        self.hidden_size = hidden_sizes
        self.fc1_size = fc1_size
        self.output_size = output_size
        self.dropout_rate = dropout_rate

        # defining modules - two layer bidirectional LSTM with layer norm in between
        self.embed = nn.Embedding(len(word2id), input_sizes[0])
        self.trnn1 = nn.LSTM(input_sizes[0], hidden_sizes[0], bidirectional=True)
        self.trnn2 = nn.LSTM(2 * hidden_sizes[0], hidden_sizes[0], bidirectional=True)

        self.vrnn1 = nn.LSTM(input_sizes[1], hidden_sizes[1], bidirectional=True)
        self.vrnn2 = nn.LSTM(2 * hidden_sizes[1], hidden_sizes[1], bidirectional=True)

        self.arnn1 = nn.LSTM(input_sizes[2], hidden_sizes[2], bidirectional=True)
        self.arnn2 = nn.LSTM(2 * hidden_sizes[2], hidden_sizes[2], bidirectional=True)

        self.fc1 = nn.Linear(sum(hidden_sizes) * 4, fc1_size)
        self.fc2 = nn.Linear(fc1_size, output_size)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(dropout_rate)
        self.tlayer_norm = nn.LayerNorm((hidden_sizes[0] * 2,))
        self.vlayer_norm = nn.LayerNorm((hidden_sizes[1] * 2,))
        self.alayer_norm = nn.LayerNorm((hidden_sizes[2] * 2,))
        self.bn = nn.BatchNorm1d(sum(hidden_sizes) * 4)

    def extract_features(self, sequence, lengths, rnn1, rnn2, layer_norm):
        # sequence: (max_len, batch, dim); lengths must already be sorted in descending order
        packed_sequence = pack_padded_sequence(sequence, lengths.cpu())
        packed_h1, (final_h1, _) = rnn1(packed_sequence)
        padded_h1, _ = pad_packed_sequence(packed_h1)
        normed_h1 = layer_norm(padded_h1)
        packed_normed_h1 = pack_padded_sequence(normed_h1, lengths.cpu())
        _, (final_h2, _) = rnn2(packed_normed_h1)
        return final_h1, final_h2

    def fusion(self, sentences, visual, acoustic, lengths):
        batch_size = lengths.size(0)
        sentences = self.embed(sentences)

        # extract features from text modality
        final_h1t, final_h2t = self.extract_features(sentences, lengths, self.trnn1, self.trnn2, self.tlayer_norm)

        # extract features from visual modality
        final_h1v, final_h2v = self.extract_features(visual, lengths, self.vrnn1, self.vrnn2, self.vlayer_norm)

        # extract features from acoustic modality
        final_h1a, final_h2a = self.extract_features(acoustic, lengths, self.arnn1, self.arnn2, self.alayer_norm)

        # simple late fusion -- concatenation + normalization
        h = torch.cat((final_h1t, final_h2t, final_h1v, final_h2v, final_h1a, final_h2a),
                      dim=2).permute(1, 0, 2).contiguous().view(batch_size, -1)
        return self.bn(h)

    def forward(self, sentences, visual, acoustic, lengths):
        batch_size = lengths.size(0)
        h = self.fusion(sentences, visual, acoustic, lengths)
        h = self.fc1(h)
        h = self.dropout(h)
        h = self.relu(h)
        o = self.fc2(h)
        return o
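A quick shape-level smoke test of the model wiring (a sketch with toy hidden sizes and random inputs; it assumes word2id holds at least the <unk>/<pad> entries created above):

lengths = torch.LongTensor([5, 3])       # must be sorted in descending order
t = torch.zeros(5, 2, dtype=torch.long)  # (max_len, batch) word ids, here all <unk>
v = torch.randn(5, 2, 47)                # visual features
a = torch.randn(5, 2, 74)                # acoustic features
model = LFLSTM([300, 47, 74], [16, 8, 8], 32, 1, 0.25)
print(model(t, v, a, lengths).shape)     # torch.Size([2, 1])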
def load_emb(w2i, path_to_embedding, embedding_size=300, embedding_vocab=2196017, init_emb=None):
    if init_emb is None:
        emb_mat = np.random.randn(len(w2i), embedding_size)
    else:
        emb_mat = init_emb
    found = 0
    with open(path_to_embedding, 'r', encoding='utf-8') as f:
        for line in tqdm_notebook(f, total=embedding_vocab):
            content = line.strip().split()
            # the word itself may contain spaces, so split off the last embedding_size floats
            vector = np.asarray([float(x) for x in content[-embedding_size:]])
            word = ' '.join(content[:-embedding_size])
            if word in w2i:
                idx = w2i[word]
                emb_mat[idx, :] = vector
                found += 1
    print(f"Found {found} words in the embedding file.")
    return torch.tensor(emb_mat).float()
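The embedding cache that run() below tries to load can be produced once like this (a sketch; the GloVe path and output filename are placeholders, and the default embedding_vocab matches the 2,196,017 lines of glove.840B.300d):

pretrained_emb = load_emb(word2id, 'glove.840B.300d.txt')
# save a plain dict: a defaultdict backed by a lambda cannot be pickled
torch.save((pretrained_emb, dict(word2id)), 'embedding_and_mapping.pt')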
def run(train_loader, dev_loader, test_loader):
    torch.manual_seed(123)
    torch.cuda.manual_seed_all(123)

    CUDA = torch.cuda.is_available()
    MAX_EPOCH = 5

    text_size = 300
    visual_size = 47
    acoustic_size = 74

    # define some model settings and hyper-parameters
    input_sizes = [text_size, visual_size, acoustic_size]
    hidden_sizes = [int(text_size * 1.5), int(visual_size * 1.5), int(acoustic_size * 1.5)]
    fc1_size = sum(hidden_sizes) // 2
    dropout = 0.25
    output_size = 1
    curr_patience = patience = 8
    num_trials = 3
    grad_clip_value = 1.0
    weight_decay = 0.1
    # cache holding (pretrained_emb, word2id), as saved after load_emb; adjust to your own path
    CACHE_PATH = r'D:\Speech\jupyter_notes_zyx\CMU-MultimodalSDK-Tutorials-master\CMU-MultimodalSDK-Tutorials-master\data\embedding_and_mapping.pt'
    if os.path.exists(CACHE_PATH):
        pretrained_emb, word2id = torch.load(CACHE_PATH)
    else:
        pretrained_emb = None

    model = LFLSTM(input_sizes, hidden_sizes, fc1_size, output_size, dropout)
    if pretrained_emb is not None:
        model.embed.weight.data = pretrained_emb
    model.embed.weight.requires_grad = False  # freeze the embedding layer (requires_grad on the module itself has no effect)
    optimizer = Adam([param for param in model.parameters() if param.requires_grad], weight_decay=weight_decay)

    if CUDA:
        model.cuda()
    criterion = nn.L1Loss(reduction='sum')
    criterion_test = nn.L1Loss(reduction='sum')
    best_valid_loss = float('inf')
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)
    lr_scheduler.step()  # for some reason it seems the StepLR needs to be stepped once first
    train_losses = []
    valid_losses = []
    for e in range(MAX_EPOCH):
        model.train()
        train_iter = tqdm_notebook(train_loader)
        train_loss = 0.0
        for batch in train_iter:
            model.zero_grad()
            t, v, a, y, l = batch
            batch_size = t.size(1)  # t is (max_len, batch): pad_sequence defaults to batch_first=False
            if CUDA:
                t = t.cuda()
                v = v.cuda()
                a = a.cuda()
                y = y.cuda()
                l = l.cuda()
            y_tilde = model(t, v, a, l)
            loss = criterion(y_tilde, y)
            loss.backward()
            torch.nn.utils.clip_grad_value_([param for param in model.parameters() if param.requires_grad], grad_clip_value)
            optimizer.step()
            train_iter.set_description(f"Epoch {e}/{MAX_EPOCH}, current batch loss: {round(loss.item() / batch_size, 4)}")
            train_loss += loss.item()
        train_loss = train_loss / len(train)
        train_losses.append(train_loss)
        print(f"Training loss: {round(train_loss, 4)}")

        model.eval()
        with torch.no_grad():
            valid_loss = 0.0
            for batch in dev_loader:
                model.zero_grad()
                t, v, a, y, l = batch
                if CUDA:
                    t = t.cuda()
                    v = v.cuda()
                    a = a.cuda()
                    y = y.cuda()
                    l = l.cuda()
                y_tilde = model(t, v, a, l)
                loss = criterion(y_tilde, y)
                valid_loss += loss.item()

        valid_loss = valid_loss / len(dev)
        valid_losses.append(valid_loss)
        print(f"Validation loss: {round(valid_loss, 4)}")
        print(f"Current patience: {curr_patience}, current trial: {num_trials}.")
        if valid_loss <= best_valid_loss:
            best_valid_loss = valid_loss
            print("Found new best model on dev set!")
            torch.save(model.state_dict(), 'model.std')
            torch.save(optimizer.state_dict(), 'optim.std')
            curr_patience = patience
        else:
            curr_patience -= 1
            if curr_patience <= -1:
                print("Running out of patience, loading previous best model.")
                num_trials -= 1
                curr_patience = patience
                model.load_state_dict(torch.load('model.std'))
                optimizer.load_state_dict(torch.load('optim.std'))
                lr_scheduler.step()
                print(f"Current learning rate: {optimizer.state_dict()['param_groups'][0]['lr']}")

        if num_trials <= 0:
            print("Running out of patience, early stopping.")
            break

    model.load_state_dict(torch.load('model.std'))
    y_true = []
    y_pred = []
    model.eval()
    with torch.no_grad():
        test_loss = 0.0
        for batch in test_loader:
            model.zero_grad()
            t, v, a, y, l = batch
            if CUDA:
                t = t.cuda()
                v = v.cuda()
                a = a.cuda()
                y = y.cuda()
                l = l.cuda()
            y_tilde = model(t, v, a, l)
            loss = criterion_test(y_tilde, y)
            y_true.append(y.detach().cpu().numpy())
            y_pred.append(y_tilde.detach().cpu().numpy())
            test_loss += loss.item()
    print(f"Test set performance: {test_loss / len(test)}")
    y_true = np.concatenate(y_true, axis=0)
    y_pred = np.concatenate(y_pred, axis=0)

    y_true_bin = y_true >= 0
    y_pred_bin = y_pred >= 0
    bin_acc = accuracy_score(y_true_bin, y_pred_bin)
    print(f"Test set accuracy is {bin_acc}")

if __name__=='__main__':
    visual_field1 = 'CMU_MOSI_Visual_Facet_41'
    acoustic_field1 = 'CMU_MOSI_COVAREP'
    text_field1 = 'CMU_MOSI_ModifiedTimestampedWords'
    label_field1 = 'CMU_MOSI_Opinion_Labels'
    features1 = [
        text_field1,
        visual_field1,
        acoustic_field1,
        label_field1
    ]
    DATA_PATH1 = './deployed'  # the folder the aligned dataset was deployed to in section 3
    recipe1 = {feat: os.path.join(DATA_PATH1, feat) + '.csd' for feat in features1}
    dataset1 = mmdatasdk.mmdataset(recipe1)
    print(list(dataset1[text_field1].keys())[55])
    tensors = dataset1.get_tensors(seq_len=25, non_sequences=["Opinion Segment Labels"], direction=False,
                                   folds=[mmdatasdk.cmu_mosi.standard_folds.standard_train_fold,
                                          mmdatasdk.cmu_mosi.standard_folds.standard_valid_fold,
                                          mmdatasdk.cmu_mosi.standard_folds.standard_test_fold])

    fold_names = ["train", "valid", "test"]
    train, dev, test = splittraindevtest(mmdatasdk.cmu_mosi.standard_folds.standard_train_fold, mmdatasdk.cmu_mosi.
                                   standard_folds.standard_valid_fold, mmdatasdk.cmu_mosi.standard_folds.standard_test_fold,
                                         dataset1, features1)

    batch_sz = 56
    train_loader = DataLoader(train, shuffle=True, batch_size=batch_sz, collate_fn=multi_collate)
    dev_loader = DataLoader(dev, shuffle=False, batch_size=batch_sz * 3, collate_fn=multi_collate)
    test_loader = DataLoader(test, shuffle=False, batch_size=batch_sz * 3, collate_fn=multi_collate)
    run(train_loader, dev_loader, test_loader)
