[Natural Language Processing] Sentiment Analysis (Part 1): A Naive Bayes Implementation with NLTK


A Naive Bayes classifier can be used to estimate the probability that an input text belongs to each of a set of categories, for example, to predict whether a review is positive or negative.

It is "naive" because it assumes that the words in a text are independent of one another (whereas in real natural language, word order conveys contextual information). Despite this assumption, Naive Bayes achieves high accuracy when predicting categories from small training sets.
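
Concretely, for a document with words $w_1, \dots, w_n$, the classifier picks the class $c$ that maximizes

$$P(c \mid w_1, \dots, w_n) \propto P(c) \prod_{i=1}^{n} P(w_i \mid c)$$

where the factorized likelihood is precisely the independence assumption described above.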

Recommended reading: Baines, O., Naive Bayes: Machine Learning and Text Classification Application of Bayes' Theorem.

The code for this post has been uploaded to my GitHub; feel free to download it.

1. Dataset

We use the imdb_reviews dataset provided by tensorflow-datasets. It is a large movie review dataset for binary sentiment classification, containing substantially more data than previous benchmark datasets: 25,000 polar movie reviews for training, 25,000 for testing, and additional unlabeled data.
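
As a quick sanity check, you can inspect the split sizes from the dataset metadata. This is a minimal sketch: it assumes tensorflow-datasets is installed and will trigger the dataset download on first run.

import tensorflow_datasets as tfds

# load the train split together with its metadata
_, ds_info = tfds.load(name="imdb_reviews", split="train", with_info=True)
print(ds_info.splits["train"].num_examples)  # 25000
print(ds_info.splits["test"].num_examples)   # 25000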


2. Environment Setup

Install tensorflow and tensorflow-datasets, and pay attention to version compatibility. I got burned here; it is best not to use versions that are too new, or you will run into many incompatibility issues.

First, create a dedicated virtual environment.

Install tensorflow:

pip install tensorflow==2.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/

Install tensorflow-datasets:

pip install tensorflow-datasets==2.0.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/

Install nltk:

pip install nltk -i https://pypi.tuna.tsinghua.edu.cn/simple/

If importing nltk raises an error that tells you to run nltk.download('omw-1.4'), follow the prompt to download it, or fetch the files manually from the NLTK Corpora website and place them in the corresponding directory.
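
For reference, here is a minimal sketch of the NLTK resources this post relies on (run once in the target environment):

import nltk

nltk.download('punkt')      # tokenizer model used by word_tokenize
nltk.download('stopwords')  # English stop-word list
nltk.download('wordnet')    # WordNet data for WordNetLemmatizer
nltk.download('omw-1.4')    # Open Multilingual WordNet (the resource from the error message)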

The remaining packages are straightforward to install.

Before writing code in a Jupyter notebook, make sure the correct virtual environment is selected. You can verify it as follows.

import sys
sys.executable

The printed path should be the virtual environment we created for this project.
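
If the notebook is not picking up the right environment, one common fix is to register the virtual environment as a Jupyter kernel. This assumes ipykernel is installed, and the kernel name below is just an example:

pip install ipykernel
python -m ipykernel install --user --name nlp-env --display-name "nlp-env"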

3. Importing Packages

import nltk
from nltk.metrics.scores import precision, recall, f_measure
import pandas as pd
import collections

import sys
sys.path.append("..") # Adds higher directory to python modules path.
from NLPmoviereviews.data import load_data_sent
from NLPmoviereviews.utilities import preprocessing

Here, NLPmoviereviews.data wraps the data-downloading functionality of tensorflow-datasets. (NLPmoviereviews is a package I wrote myself.)

import tensorflow_datasets as tfds
from tensorflow.keras.preprocessing.text import text_to_word_sequence

def load_data(percentage_of_sentences=10):
    """
    Load the imdb_reviews dataset for given percentage of the dataset.
    Returns train-test sets
    X--> returned as list of words in lower case
    y--> returned as two classes 0 and 1 for bad and good reviews
    """
    train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"], batch_size=-1, as_supervised=True)

    train_sentences, y_train = tfds.as_numpy(train_data)
    test_sentences, y_test = tfds.as_numpy(test_data)

    # Take only a given percentage of the entire data
    if percentage_of_sentences is not None:
        assert(percentage_of_sentences> 0 and percentage_of_sentences<=100)

        len_train = int(percentage_of_sentences/100*len(train_sentences))
        train_sentences, y_train = train_sentences[:len_train], y_train[:len_train]

        len_test = int(percentage_of_sentences/100*len(test_sentences))
        test_sentences, y_test = test_sentences[:len_test], y_test[:len_test]

    X_train = [text_to_word_sequence(_.decode("utf-8")) for _ in train_sentences]
    X_test = [text_to_word_sequence(_.decode("utf-8")) for _ in test_sentences]

    return X_train, y_train, X_test, y_test

def load_data_sent(percentage_of_sentences=10):
    """
    Load the imdb_reviews dataset for given percentage of the dataset.
    Returns train-test sets
    X--> returned as sentences in lower case
    y--> returned as two classes 0 and 1 for bad and good reviews
    """
    X_train, y_train, X_test, y_test = load_data(percentage_of_sentences)
    X_train = [' '.join(_) for _ in X_train]
    X_test = [' '.join(_) for _ in X_test]
    return X_train, y_train, X_test, y_test

NLPmoviereviews.utilities contains helper functions such as preprocessing and embed_sentence_with_TF.

import string
from nltk.corpus import stopwords
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer

def preprocessing(sentence):
    """
    Use NLTK to clean text: remove numbers, stop words, and lemmatize verbs and nouns
    """
    # Basic cleaning
    sentence = sentence.strip()  # remove whitespaces
    sentence = sentence.lower()  # lowercasing
    sentence = ''.join(char for char in sentence if not char.isdigit())  # removing numbers
    # Advanced cleaning
    for punctuation in string.punctuation:
        sentence = sentence.replace(punctuation, '')  # removing punctuation
    tokenized_sentence = word_tokenize(sentence)  # tokenizing
    stop_words = set(stopwords.words('english'))  # defining stopwords
    tokenized_sentence_cleaned = [w for w in tokenized_sentence
                                  if not w in stop_words]  # remove stopwords
    lemmatizer = WordNetLemmatizer()  # instantiate once instead of per word
    # 1 - Lemmatizing the verbs
    verb_lemmatized = [lemmatizer.lemmatize(word, pos="v")  # v --> verbs
                       for word in tokenized_sentence_cleaned]
    # 2 - Lemmatizing the nouns
    noun_lemmatized = [lemmatizer.lemmatize(word, pos="n")  # n --> nouns
                       for word in verb_lemmatized]
    cleaned_sentence = ' '.join(w for w in noun_lemmatized)
    return cleaned_sentence
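
A quick check of the cleaning pipeline on a made-up sentence (the exact output may vary slightly with your NLTK data versions):

preprocessing("The 2 movies were absolutely wonderful, loved them!")
# -> 'movie absolutely wonderful love'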

4. Loading the Data

# load data
X_train, y_train, X_test, y_test = load_data_sent(percentage_of_sentences=10)
X_train

X_train is a list of review texts, as shown below.

["this is a big step down after the surprisingly enjoyable original this sequel isn't nearly as fun as part one and it instead spends too much time on plot development tim thomerson is still the best thing about this series but his wisecracking is toned down in this entry the performances are all adequate but this time the script lets us down the action is merely routine and the plot is only mildly interesting so i need lots of silly laughs in order to stay entertained during a trancers movie unfortunately the laughs are few and far between and so this film is watchable at best",
 "perhaps because i was so young innocent and brainwashed when i saw it this movie was the cause of many sleepless nights for me i haven't seen it since i was in seventh grade at a presbyterian school so i am not sure what effect it would have on me now however i will say that it left an impression on me and most of my friends it did serve its purpose at least until we were old enough and knowledgeable enough to analyze and create our own opinions i was particularly terrified of what the newly converted post rapture christians had to endure when not receiving the mark of the beast i don't want to spoil the movie for those who haven't seen it so i will not mention details of the scenes but i can still picture them in my head and it's been 19 years",
 ...]

y_train stores the polarity of each review: 0 (negative) or 1 (positive).

y_train


5. Data Preprocessing

The rm_custom_stops function removes custom stop words: the HTML artifact br and the uninformative domain words movie and film.

# remove custom stop-words
def rm_custom_stops(sentence):
    '''
    Custom stop word remover
    Parameters:
        sentence (str): a string of words
    Returns:
        list_of_words (list): cleaned sentence as a list of words
    '''
    words = sentence.split()
    stop_words = {'br', 'movie', 'film'}
    
    return [w for w in words if not w in stop_words]
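
For example, on a hypothetical input:

rm_custom_stops("this movie is great br")
# -> ['this', 'is', 'great']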

The process_df function cleans the data and converts it into a DataFrame.

# perform preprocessing (cleaning) & transform to dataframe
def process_df(X, y):
    '''
    Transform texts and labels into dataframe of 
    cleaned texts (as list of words) and human readable target labels
    
    Parameters:
        X (list): list of strings (reviews)
        y (list): list of target labels (0/1)
    Returns:
        df (dataframe): dataframe of processed reviews (as list of words)
                        and corresponding sentiment label (positive/negative)
    '''
    # create dataframe from data
    d = {'text': X, 'sentiment': y}
    df = pd.DataFrame(d)
    
    # make sentiment human-readable
    df['sentiment'] = df.sentiment.map(lambda x: 'positive' if x==1 else 'negative')

    # clean and split text into list of words
    df['text'] = df.text.apply(preprocessing)
    df['text'] = df.text.apply(rm_custom_stops)

    # Generate the feature sets for the movie review documents one by one
    return df

Now process the data.

# process data
train_df = process_df(X_train, y_train)
test_df = process_df(X_test, y_test)

Inspect the transformed training data train_df:

# inspect dataframe
train_df.head()


6. Getting the Most Common Words

Get the frequency distribution of words in the corpus and select the 2,000 most common words.

# get frequency distribution of words in corpus & select 2000 most common words
def most_common(df, n=2000):
    '''
    Get n most common words from data frame of text reviews
    
    Parameters:
        df (dataframe): dataframe with column of processed text reviews
        n (int): number of most common words to get
    Returns:
        most_common_words (list): list of n most common words
    '''
    # create list of all words in the train data
    complete_corpus = df.text.sum()
    
    # Construct a frequency dict of all words in the overall corpus 
    all_words = nltk.FreqDist(w.lower() for w in complete_corpus)

    # select the 2,000 most frequent words (incl. frequency)
    most_common_words = all_words.most_common(n)
    
    return [item[0] for item in most_common_words]
# get 2000 most common words
most_common_2000 = most_common(train_df)

# inspect first 10 most common words
most_common_2000[0:10]


7. Creating NLTK Feature Sets

For the NLTK Naive Bayes classifier, we tokenize each sentence and determine which words it shares with most_common_words; these presence indicators form the sentence's features. (Note: this is essentially a bag-of-words feature construction.)

# for a given text, create a featureset (dict of features - {'word': True/False})
def review_features(review, most_common_words):
    '''
    Feature extractor that checks whether each of the most
    common words is present in a given review
    
    Parameters:
        review (list): text reviews as list of words
        most_common_words (list): list of n most common words
    Returns:
        features (dict): dict of most common words & corresponding True/False
    '''
    review_words = set(review)
    features = {}
    for word in most_common_words:
        features['contains(%s)' % word] = (word in review_words)
    return features
# create featureset for each text in a given dataframe
def make_set(df, most_common_words):
    '''
    Generates nltk featuresets for each movie review in dataframe.
    Feature sets are composed of a dict describing whether each of the most 
    common words is present in the text review or not

    Parameters:
        df (dataframe): processed dataframe of text reviews
        most_common_words (list): list of most common words
    Returns:
        feature_set (list): list of dicts of most common words & corresponding True/False
    '''
    return [(review_features(df.text[i], most_common_words), df.sentiment[i]) for i in range(len(df.sentiment))]
# make data into featuresets (for nltk naive bayes classifier)
train_set = make_set(train_df, most_common_2000)
test_set = make_set(test_df, most_common_2000)
# inspect first train featureset
train_set[0]
({'contains(one)': True,
  'contains(make)': False,
  'contains(like)': False,
  'contains(see)': False,
  'contains(get)': False,
  'contains(time)': True,
  'contains(good)': False,
  'contains(watch)': False,
  'contains(character)': False,
  'contains(story)': False,
  'contains(go)': False,
  'contains(even)': False,
  'contains(think)': False,
  'contains(really)': False,
  'contains(well)': False,
  'contains(show)': False,
  'contains(would)': False,
  'contains(scene)': False,
  'contains(end)': False,
  'contains(look)': False,
  'contains(much)': True,
  'contains(say)': False,
  'contains(know)': False,
  ...},
 'negative')

8. Training and Evaluating the Model

We use the Naive Bayes classifier provided by nltk (NaiveBayesClassifier).

# Train a naive bayes classifier with train set by nltk
classifier = nltk.NaiveBayesClassifier.train(train_set)
# Get the accuracy of the naive bayes classifier with test set
accuracy = nltk.classify.accuracy(classifier, test_set)
accuracy
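
As a sanity check, it helps to compare this against a majority-class baseline, sketched here with the test_set built above:

# majority-class baseline: accuracy of always predicting the most frequent label
label_counts = collections.Counter(label for _, label in test_set)
max(label_counts.values()) / len(test_set)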


# build reference and test set of observed values (for each label)
refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)
 
for i, (feats, label) in enumerate(test_set):  # evaluate on the held-out test set
    refsets[label].add(i)                  # group sample ids by true label
    observed = classifier.classify(feats)  # predict a label from the features
    testsets[observed].add(i)              # group sample ids by predicted label
# print precision, recall, and f-measure
print('pos precision:', precision(refsets['positive'], testsets['positive']))
print('pos recall:', recall(refsets['positive'], testsets['positive']))
print('pos F-measure:', f_measure(refsets['positive'], testsets['positive']))
print('neg precision:', precision(refsets['negative'], testsets['negative']))
print('neg recall:', recall(refsets['negative'], testsets['negative']))
print('neg F-measure:', f_measure(refsets['negative'], testsets['negative']))
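
For reference, these metrics are defined per label as

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}, \qquad F = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$

where $TP$, $FP$ and $FN$ count the true positives, false positives and false negatives for that label.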

Show the top n most informative features:

# show top n most informative features
classifier.show_most_informative_features(10)


9. Prediction

# predict on new review (from mubi.com)
new_review = "Surprisingly effective and moving, The Balcony Movie takes the Front Up \
            concept of talking to strangers, but here attaches it to a fixed perspective \
            in order to create a strong sense of the stream of life passing us by. \
            It's possible to not only witness the subtle changing of seasons\
            but also the gradual opening of trust and confidence in Lozinski's \
            repeating characters. A Pandemic movie, pre-pandemic. 3.5 stars"
# perform preprocessing (cleaning & featureset transformation)
processed_review = rm_custom_stops(preprocessing(new_review))
processed_review = review_features(processed_review, most_common_2000)
# predict label
classifier.classify(processed_review)
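
Beyond the hard label, NLTK's classifier can also report the probability of each label for this review via prob_classify:

# inspect per-label probabilities for the new review
dist = classifier.prob_classify(processed_review)
for label in dist.samples():
    print(label, round(dist.prob(label), 4))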

Get the probability of each word under each label:

# to get individual probability for each label and word, taken from:
# https://stackoverflow.com/questions/20773200/python-nltk-naive-bayes-probabilities
for label in classifier.labels():
    print(f'\n\n{label}:')
    for (fname, fval) in classifier.most_informative_features(50):
        print(f"   {fname}({fval}): ", end="")
        print("{0:.2f}%".format(100*classifier._feature_probdist[label, fname].prob(fval)))
negative:
   contains(delightful)(True): 0.12%
   contains(absurd)(True): 2.51%
   contains(beautifully)(True): 0.28%
   contains(noir)(True): 0.20%
   contains(unfunny)(True): 2.03%
   contains(magnificent)(True): 0.20%
   contains(poorly)(True): 4.49%
   contains(dreadful)(True): 1.71%
   contains(worst)(True): 15.63%
   contains(waste)(True): 12.29%
   contains(turkey)(True): 1.47%
   contains(vietnam)(True): 1.47%
   contains(restore)(True): 0.20%
   contains(lame)(True): 4.73%
   contains(brilliantly)(True): 0.28%
   contains(awful)(True): 8.15%
   contains(garbage)(True): 3.14%
   contains(worse)(True): 8.39%
   contains(intense)(True): 0.44%
   contains(wonderfully)(True): 0.36%
   contains(laughable)(True): 2.59%
   contains(unbelievable)(True): 2.90%
   contains(finest)(True): 0.36%
   contains(pointless)(True): 3.30%
   contains(crap)(True): 5.85%
   contains(trial)(True): 0.28%
   contains(disappointment)(True): 3.62%
   contains(warm)(True): 0.36%
   contains(unconvincing)(True): 1.47%
   contains(lincoln)(True): 0.12%
   contains(underrate)(True): 0.36%
   contains(pathetic)(True): 2.98%
   contains(unfold)(True): 0.36%
   contains(zero)(True): 2.11%
   contains(existent)(True): 1.71%
   contains(shallow)(True): 1.71%
   contains(dull)(True): 5.37%
   contains(cheap)(True): 4.18%
   contains(mess)(True): 4.89%
   contains(perfectly)(True): 0.91%
   contains(ridiculous)(True): 5.85%
   contains(excuse)(True): 3.70%
   contains(che)(True): 0.12%
   contains(gritty)(True): 0.36%
   contains(pleasant)(True): 0.36%
   contains(mediocre)(True): 2.59%
   contains(rubbish)(True): 1.55%
   contains(insult)(True): 2.90%
   contains(porn)(True): 1.87%
   contains(douglas)(True): 0.36%


positive:
   contains(delightful)(True): 1.97%
   contains(absurd)(True): 0.20%
   contains(beautifully)(True): 3.33%
   contains(noir)(True): 2.37%
   contains(unfunny)(True): 0.20%
   contains(magnificent)(True): 1.73%
   contains(poorly)(True): 0.52%
   contains(dreadful)(True): 0.20%
   contains(worst)(True): 1.89%
   contains(waste)(True): 1.65%
   contains(turkey)(True): 0.20%
   contains(vietnam)(True): 0.20%
   contains(restore)(True): 1.33%
   contains(lame)(True): 0.76%
   contains(brilliantly)(True): 1.73%
   contains(awful)(True): 1.33%
   contains(garbage)(True): 0.52%
   contains(worse)(True): 1.41%
   contains(intense)(True): 2.61%
   contains(wonderfully)(True): 2.13%
   contains(laughable)(True): 0.44%
   contains(unbelievable)(True): 0.52%
   contains(finest)(True): 1.97%
   contains(pointless)(True): 0.60%
   contains(crap)(True): 1.08%
   contains(trial)(True): 1.49%
   contains(disappointment)(True): 0.68%
   contains(warm)(True): 1.89%
   contains(unconvincing)(True): 0.28%
   contains(lincoln)(True): 0.60%
   contains(underrate)(True): 1.81%
   contains(pathetic)(True): 0.60%
   contains(unfold)(True): 1.73%
   contains(zero)(True): 0.44%
   contains(existent)(True): 0.36%
   contains(shallow)(True): 0.36%
   contains(dull)(True): 1.16%
   contains(cheap)(True): 0.92%
   contains(mess)(True): 1.08%
   contains(perfectly)(True): 4.06%
   contains(ridiculous)(True): 1.33%
   contains(excuse)(True): 0.84%
   contains(che)(True): 0.52%
   contains(gritty)(True): 1.57%
   contains(pleasant)(True): 1.57%
   contains(mediocre)(True): 0.60%
   contains(rubbish)(True): 0.36%
   contains(insult)(True): 0.68%
   contains(porn)(True): 0.44%
   contains(douglas)(True): 1.49%

For example, delightful has probability 0.12% under negative and 1.97% under positive, i.e. 1.97% : 0.12% ≈ 16.5 : 1.0.
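
This per-label word probability is exactly what underlies the ratios reported by show_most_informative_features:

$$\text{ratio}(w) = \frac{P(\text{contains}(w) = \text{True} \mid \text{positive})}{P(\text{contains}(w) = \text{True} \mid \text{negative})}$$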
