A Comprehensive Overview of NLP Libraries in Python


1. Introduction

Python offers rich support for natural language processing. It starts with text processing (tokenizing a text and determining its lemmas), continues with syntactic analysis (parsing a text and assigning syntactic roles), and extends to semantic processing, such as named entity recognition, sentiment analysis, and document classification. Each of these tasks is provided by at least one library. So, where do you start?

The goal of this article is to provide an overview of relevant Python libraries for each core NLP task. The libraries are explained with a brief description, and a concrete code snippet is given for each NLP task. Continuing my introduction to NLP series, this article covers only libraries for the core NLP tasks of text processing, syntactic and semantic analysis, and document semantics. In addition, in the NLP utilities category, libraries for corpus management and datasets are presented.

The following libraries are covered:

  • NLTK
  • TextBlob
  • Spacy
  • SciKit Learn
  • Gensim

This article originally appeared on the blog admantium.com.

2. Core NLP Tasks

2.1 Text Processing

Tasks: tokenization, lemmatization, stemming

The NLTK library provides a complete toolkit for text processing, including tokenization, stemming, and lemmatization.


from nltk.tokenize import sent_tokenize, word_tokenize

paragraph = '''Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.'''
sentences = []
for sent in sent_tokenize(paragraph):
  sentences.append(word_tokenize(sent))

sentences[0]
# ['Artificial', 'intelligence', 'was', 'founded', 'as', 'an', 'academic', 'discipline'
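
The section's task list also names stemming and lemmatization, which the snippet above does not show. Here is a minimal sketch of both, using NLTK's PorterStemmer and WordNetLemmatizer (the lemmatizer additionally requires the wordnet resource):

from nltk import download as nltk_download
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk_download('wordnet')

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

# stemming truncates by rule, lemmatization maps to a dictionary form
print(stemmer.stem('studies'))
# studi

print(lemmatizer.lemmatize('studies'))
# study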

TextBlob supports the same text processing tasks. It differs from NLTK in its higher-level semantic results and its easy-to-use data structures: parsing a sentence already produces rich semantic information.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)
blob.ngrams()
#[WordList(['Artificial', 'intelligence', 'was']),
# WordList(['intelligence', 'was', 'founded']),
# WordList(['was', 'founded', 'as']),

blob.tokens
# WordList(['Artificial', 'intelligence', 'was', 'founded', 'as', 'an', 'academic', 'discipline', 'in', '1956', ',', 'and', 'in',
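
TextBlob covers lemmatization and stemming as well, via its Word objects; a minimal sketch (the lemmatize example follows the TextBlob documentation, the stem method is assumed to apply the default Porter stemmer):

from textblob import Word

print(Word('octopi').lemmatize())
# octopus

print(Word('running').stem())
# run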

With the modern NLP library Spacy, text processing is just the first step in a rich pipeline of mostly semantic tasks. Unlike the other libraries, it requires loading a model for the target language first. The recent models are not heuristics, but artificial neural networks, especially transformers, which provide richer abstractions and can be combined better with other models.

import spacy
nlp = spacy.load('en_core_web_lg')

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

doc = nlp(text)
tokens = [token for token in doc]

print(tokens)
# [Artificial, intelligence, was, founded, as, an, academic, discipline
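
Once the pipeline has run, each token also carries its lemma, so the lemmatization task from above is included for free. A short sketch continuing with the doc object, where the exact lemmas depend on the loaded model:

# print the lemma of the first non-whitespace tokens
for token in [t for t in doc if not t.is_space][:4]:
    print(f'{token.text:<15}{token.lemma_}')
# Artificial     artificial
# intelligence   intelligence
# was            be
# founded        found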

3. Syntactic Analysis

Tasks: parsing, part-of-speech tagging, noun phrase extraction

Starting with NLTK, all syntactic tasks are supported. Their output is provided as Python-native data structures, and it can always be displayed as simple text output.

from nltk.tokenize import word_tokenize
from nltk import pos_tag, RegexpParser

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

pos_tag(word_tokenize(text))
# [('Artificial', 'JJ'),
#  ('intelligence', 'NN'),
#  ('was', 'VBD'),
#  ('founded', 'VBN'),
#  ('as', 'IN'),
#  ('an', 'DT'),
#  ('academic', 'JJ'),
#  ('discipline', 'NN'),
# noun chunk parser
# source: https://www.nltk.org/book_1ed/ch07.html

grammar = "NP: {<DT>?<JJ>*<NN>}"
parser = RegexpParser(grammar)
parser.parse(pos_tag(word_tokenize(text)))
#(S
#  (NP Artificial/JJ intelligence/NN)
#  was/VBD
#  founded/VBN
#  as/IN
#  (NP an/DT academic/JJ discipline/NN)
#  in/IN
#  1956/CD

TextBlob provides POS tags immediately when processing a text. With another method, a parse tree is created that contains rich syntactic information.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''
blob = TextBlob(text)

blob.tags
#[('Artificial', 'JJ'),
# ('intelligence', 'NN'),
# ('was', 'VBD'),
# ('founded', 'VBN'),

blob.parse()
# Artificial/JJ/B-NP/O
# intelligence/NN/I-NP/O
# was/VBD/B-VP/O
# founded/VBN/I-VP/O
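
For the third task of this section, noun phrase extraction, TextBlob offers the noun_phrases property directly; a minimal sketch reusing the blob object from above (the extracted phrases are lowercased, output abbreviated):

blob.noun_phrases
# WordList(['artificial intelligence', 'academic discipline', ...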

The Spacy library uses transformer neural networks to support its syntactic tasks.

import spacy
nlp = spacy.load('en_core_web_lg')

for token in nlp(text):
    print(f'{token.text:<20}{token.pos_:>5}{token.tag_:>5}')

#Artificial            ADJ   JJ
#intelligence         NOUN   NN
#was                   AUX  VBD
#founded              VERB  VBN
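
Spacy exposes noun phrases as well, through the doc.noun_chunks iterator; a short sketch, where the exact chunk boundaries depend on the loaded model:

doc = nlp(text)
for chunk in list(doc.noun_chunks)[:3]:
    print(chunk.text)
# Artificial intelligence
# an academic discipline
# the years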

4. Semantic Analysis

Tasks: named entity recognition, word sense disambiguation, semantic role labeling

Semantic analysis is the area where NLP approaches start to differ. With NLTK, the generated syntactic information is looked up in dictionaries to identify, for example, named entities. Therefore, when working with newer texts, entities may not be recognized.

from nltk import download as nltk_download
from nltk.tokenize import word_tokenize
from nltk import pos_tag, ne_chunk

nltk_download('maxent_ne_chunker')
nltk_download('words')
text = '''
As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over 187 kilometers (116 mi). There were five other crewed missions using Mercury spacecraft.
'''

print(ne_chunk(pos_tag(word_tokenize(text))))
# (S
#   As/IN
#   of/IN
#   [...]
#   (ORGANIZATION USA/NNP)
#   [...]
#   which/WDT
#   carried/VBD
#   (GPE Soviet/JJ)
#   cosmonaut/NN
#   (PERSON Yuri/NNP Gagarin/NNP)
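
The second task of this section, word sense disambiguation, is covered in NLTK by the classic Lesk algorithm from the nltk.wsd module; a minimal sketch, which additionally requires the wordnet resource (the returned synset depends on the installed WordNet version):

from nltk import download as nltk_download
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

nltk_download('wordnet')

# disambiguate 'earth' against the words of the surrounding sentence
sent = 'The spacecraft completed a full orbit around the earth'
print(lesk(word_tokenize(sent), 'earth'))
# Synset('earth.n.02')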

The transformer models used by the Spacy library contain an implicit "timestamp": the time of their training. This determines which texts the models consumed, and therefore which entities they are able to recognize.

import spacy
nlp = spacy.load('en_core_web_lg')

text = '''
As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over 187 kilometers (116 mi). There were five other crewed missions using Mercury spacecraft.
'''
doc = nlp(text)
for token in doc.ents:
    print(f'{token.text:<25}{token.label_:<15}')
# 2016                   DATE
# only three             CARDINAL
# USSR                   GPE
# Russia                 GPE
# USA                    GPE
# China                  GPE
# first                  ORDINAL
# Vostok 1               PRODUCT
# Soviet                 NORP
# Yuri Gagarin           PERSON

5. Document Semantics

Tasks: text classification, topic modeling, sentiment analysis, toxicity detection

Sentiment analysis is another task where NLP approaches differ: looking up word meanings in a lexicon versus learned word similarities encoded in word or document vectors.

TextBlob has built-in sentiment analysis that returns the polarity (the overall positive or negative connotation) and the subjectivity (the degree of personal opinion) of a text.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)
blob.sentiment
#Sentiment(polarity=0.16180290297937355, subjectivity=0.42155589508530683)
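
The same property is available on each parsed sentence, which helps to locate where the overall polarity comes from; a short sketch reusing the blob object (the concrete values depend on the TextBlob version):

for sentence in blob.sentences:
    print(sentence.sentiment)
# Sentiment(polarity=..., subjectivity=...), one result per sentence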

Spacy does not include a text classifier, but one can be added as a separate pipeline step. The code below is rather long and contains several Spacy-internal objects and data structures; a later article will explain it in more detail.

## train single label categorization from a multi-label dataset
import spacy
from spacy.tokens import DocBin

def convert_single_label(dataset, filename):
    db = DocBin()
    nlp = spacy.load('en_core_web_lg')

    for index, fileid in enumerate(dataset):
        cat_dict = {cat: 0 for cat in dataset.categories()}
        cat_dict[dataset.categories(fileid).pop()] = 1
        # get_text: project-specific helper that returns the raw text for a file id
        doc = nlp(get_text(fileid))
        doc.cats = cat_dict
        db.add(doc)
    db.to_disk(filename)

## load trained model and apply to text
nlp = spacy.load('textcat_multilabel_model/model-best')
text = dataset.raw(42)
doc = nlp(text)
estimated_cats = sorted(doc.cats.items(), key=lambda i:float(i[1]), reverse=True)

print(dataset.categories(42))
# ['orange']

print(estimated_cats)
# [('nzdlr', 0.998894989490509), ('money-supply', 0.9969857335090637), ... ('orange', 0.7344251871109009),

SciKit Learn is a general-purpose machine learning library that provides many clustering and classification algorithms. It works on numeric input only, so the text needs to be vectorized first, for example with Gensim's pre-trained word vectors or with the built-in feature vectorizers. As just one example, here is a snippet that turns documents into vectors (via DictVectorizer, which expects one feature dictionary per document) and then applies the KMeans clustering algorithm to them.

from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import KMeans

# dataset['train'] is assumed to hold one feature dictionary per document
vectorizer = DictVectorizer(sparse=False)
x_train = vectorizer.fit_transform(dataset['train'])
kmeans = KMeans(n_clusters=8, random_state=0, n_init="auto").fit(x_train)

print(kmeans.labels_.shape)
# (8551, )

print(kmeans.labels_)
# [4 4 4 ... 6 6 6]
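
For raw text instead of pre-extracted feature dictionaries, the built-in TfidfVectorizer is the more direct route; a minimal, self-contained sketch on two hypothetical document strings:

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# two hypothetical documents, for illustration only
texts = [
    'Artificial intelligence was founded as an academic discipline in 1956.',
    'Machine learning has dominated the field in the 21st century.',
]

vectorizer = TfidfVectorizer(stop_words='english')
x_train = vectorizer.fit_transform(texts)
kmeans = KMeans(n_clusters=2, random_state=0, n_init='auto').fit(x_train)

print(kmeans.labels_)
# [0 1] or [1 0], depending on the initialization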

Finally, Gensim is a library specialized in topic classification for large-scale corpora. The following snippet loads a built-in dataset, vectorizes the tokens of each document, and runs the LDA clustering algorithm. Running on a CPU only, this can take up to 15 minutes.

# source: https://radimrehurek.com/gensim/auto_examples/tutorials/run_lda.html, https://radimrehurek.com/gensim/auto_examples/howtos/run_downloader_api.html

import logging
import gensim.downloader as api
from gensim.corpora import Dictionary
from gensim.models import LdaModel

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
docs = api.load('text8')
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

_ = dictionary[0]
id2word = dictionary.id2token
# Define and train the model

model = LdaModel(
    corpus=corpus,
    id2word=id2word,
    chunksize=2000,
    alpha='auto',
    eta='auto',
    iterations=400,
    num_topics=10,
    passes=20,
    eval_every=None
)

print(model.num_topics)
# 10

print(model.top_topics(corpus)[6])
#  ([(4.201401e-06, 'done'),
#    (4.1998064e-06, 'zero'),
#    (4.1478743e-06, 'eight'),
#    (4.1257395e-06, 'one'),
#    (4.1166854e-06, 'two'),
#    (4.085097e-06, 'six'),
#    (4.080696e-06, 'language'),
#    (4.050306e-06, 'system'),
#    (4.041121e-06, 'network'),
#    (4.0385708e-06, 'internet'),
#    (4.0379923e-06, 'protocol'),
#    (4.035399e-06, 'open'),
#    (4.033435e-06, 'three'),
#    (4.0334166e-06, 'interface'),
#    (4.030141e-06, 'four'),
#    (4.0283044e-06, 'seven'),
#    (4.0163245e-06, 'no'),
#    (4.0149207e-06, 'i'),
#    (4.0072555e-06, 'object'),
#    (4.007036e-06, 'programming')],
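
A trained LdaModel can also be queried for the topic distribution of a single document by passing its bag-of-words vector; the concrete topic ids and probabilities depend on the training run:

print(model.get_document_topics(corpus[0]))
# a list of (topic id, probability) pairs for this document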

6. Utilities

6.1 Corpus Management

NLTK provides corpus readers for plain text, Markdown, and even Twitter feeds in JSON format. A reader is created by passing a file path; it then provides basic statistics as well as iterators over all the files it found.

from nltk.corpus.reader.plaintext import PlaintextCorpusReader

corpus = PlaintextCorpusReader('wikipedia_articles', r'.*\.txt')

print(corpus.fileids())
# ['AI_alignment.txt', 'AI_safety.txt', 'Artificial_intelligence.txt', 'Machine_learning.txt', ...]

print(len(corpus.sents()))
# 47289

print(len(corpus.words()))
# 1146248

Gensim processes text files to build a bag-of-words representation of each document, which can then be used for its main use case, topic classification. The documents need to be handled by an iterator that wraps the traversal of a directory, and the corpus is then built as a collection of these vectors. However, this corpus representation is hard to externalize and reuse with other libraries. The following snippet is an excerpt from above: it loads the dataset included in Gensim and creates the bag-of-words representation.

import gensim.downloader as api
from gensim.corpora import Dictionary

docs = api.load('text8')
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

print('Number of unique tokens: %d' % len(dictionary))
# Number of unique tokens: 253854

print('Number of documents: %d' % len(corpus))
# Number of documents: 1701

7. Datasets

NLTK provides several ready-to-use datasets, for example Reuters news excerpts, European Parliament proceedings, and open books from the Gutenberg collection. See the complete list of datasets and models.

from nltk.corpus import reuters

print(len(reuters.fileids()))
#10788

print(reuters.categories()[:43])
# ['acq', 'alum', 'barley', 'bop', 'carcass', 'castor-oil', 'cocoa', 'coconut', 'coconut-oil', 'coffee', 'copper', 'copra-cake', 'corn', 'cotton', 'cotton-oil', 'cpi', 'cpu', 'crude', 'dfl', 'dlr', 'dmk', 'earn', 'fuel', 'gas', 'gnp', 'gold', 'grain', 'groundnut', 'groundnut-oil', 'heat', 'hog', 'housing', 'income', 'instal-debt', 'interest', 'ipi', 'iron-steel', 'jet', 'jobs', 'l-cattle', 'lead', 'lei', 'lin-oil']
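
The reader also supports filtering file ids by category and returning the raw text of a document; a short sketch continuing with the reuters corpus:

coffee_files = reuters.fileids('coffee')
print(len(coffee_files))
# the number of documents labeled 'coffee'

print(reuters.raw(coffee_files[0])[:60])
# the first 60 characters of the first 'coffee' document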

SciKit Learn includes datasets from newsgroups, real estate, and even IT intrusion detection; see the complete list. Here is a quick example of the newsgroups dataset.

from sklearn.datasets import fetch_20newsgroups
dataset = fetch_20newsgroups()
dataset.data[1]
# "From: guykuo@carson.u.washington.edu (Guy Kuo)\nSubject: SI Clock Poll - Final Call\nSummary: Final call for SI clock reports\nKeywords: SI,acceleration,clock,upgrade\nArticle-I.D.: shelley.1qvfo9INNc3s\nOrganization: University of Washington\nLines: 11\nNNTP-Posting-Host: carson.u.washington.edu\n\nA fair number of brave souls who upgraded their SI clock oscillator have\nshared their experiences for this poll.

8. Conclusion

For NLP projects in Python, an abundance of library choices exists. To help you get started, this article provided an NLP-task-driven overview with compact library explanations and code snippets. Starting with text processing, you saw how to create tokens and lemmas from a text. Continuing with syntactic analysis, you learned how to generate part-of-speech tags and the grammatical structure of sentences. Arriving at semantics, recognizing named entities in a text, as well as text sentiment, can also be solved in a few lines of code. For the additional tasks of corpus management and accessing pre-structured datasets, you also saw library examples. In summary, this article should give you a good start into your next NLP project when working on core NLP tasks.

The evolution of NLP methods toward the use of neural networks, and especially large language models, has triggered the creation and adaptation of a whole new set of libraries, starting with text vectorization, neural network definition and training, and extending to the application of language generation models. These models cover all of the advanced NLP tasks and will be covered in future articles.

Sebastian
