An LSTM in PyTorch (Part 1): Univariate Single-Step Prediction (1)


1. Project Description

Use the time series in the Close column of data to make the forecast: the previous lookback-1 days of Close values are used to predict the Close price one day ahead.
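In plain terms: with a window of length lookback = 3, the Close prices of two consecutive days form one input sample and the following day's Close is its label. A minimal sketch of this sliding-window idea (the prices below are made up for illustration):

close = [186.9, 185.9, 184.9, 185.7, 187.7]   # hypothetical Close prices
lookback = 3
for i in range(len(close) - lookback):
    window = close[i:i + lookback]
    x, y = window[:-1], window[-1]   # first lookback-1 days -> next day's Close
    print(x, '->', y)
# [186.9, 185.9] -> 184.9
# [185.9, 184.9] -> 185.7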

2. Dataset

Date,Open,High,Low,Close,Adj Close,Volume
2018-05-23,182.500000,186.910004,182.179993,186.899994,186.899994,16628100
2018-05-24,185.880005,186.800003,185.029999,185.929993,185.929993,12354700
2018-05-25,186.020004,186.330002,184.449997,184.919998,184.919998,10965100
2018-05-29,184.339996,186.809998,183.710007,185.740005,185.740005,16398900
2018-05-30,186.539993,188.000000,185.250000,187.669998,187.669998,13736900
2018-05-31,187.869995,192.720001,187.479996,191.779999,191.779999,30782600
2018-06-01,193.070007,194.550003,192.070007,193.990005,193.990005,17307200
2018-06-04,191.839996,193.979996,191.470001,193.279999,193.279999,18939800
2018-06-05,194.300003,195.000000,192.619995,192.940002,192.940002,15544300
2018-06-06,191.029999,192.529999,189.110001,191.339996,191.339996,22558900
2018-06-07,190.750000,190.970001,186.770004,188.179993,188.179993,21503200
2018-06-08,187.529999,189.479996,186.429993,189.100006,189.100006,12677100
2018-06-11,188.809998,192.600006,188.800003,191.539993,191.539993,12928900
2018-06-12,192.169998,193.279999,191.559998,192.399994,192.399994,11562700
2018-06-13,192.740005,194.500000,191.910004,192.410004,192.410004,15853800
2018-06-14,193.100006,197.279999,192.910004,196.809998,196.809998,19120900
2018-06-15,195.789993,197.070007,194.639999,195.850006,195.850006,21860900
2018-06-18,194.800003,199.580002,194.130005,198.309998,198.309998,16826000
2018-06-19,196.240005,197.960007,193.789993,197.490005,197.490005,19994000
2018-06-20,199.100006,203.550003,198.809998,202.000000,202.000000,28230900
2018-06-21,202.759995,203.389999,200.089996,201.500000,201.500000,19045700
2018-06-22,201.160004,202.240005,199.309998,201.740005,201.740005,17420200
2018-06-25,200.000000,200.000000,193.110001,196.350006,196.350006,25275100
2018-06-26,197.600006,199.100006,196.229996,199.000000,199.000000,17897600
2018-06-27,199.179993,200.750000,195.800003,195.839996,195.839996,18734400
2018-06-28,195.179993,197.339996,193.259995,196.229996,196.229996,18172400
2018-06-29,197.320007,197.600006,193.960007,194.320007,194.320007,15811600
2018-07-02,193.369995,197.449997,192.220001,197.360001,197.360001,13961600
2018-07-03,194.550003,195.399994,192.520004,192.729996,192.729996,13489500
2018-07-05,194.740005,198.649994,194.029999,198.449997,198.449997,19684200
2018-07-06,198.449997,203.639999,197.699997,203.229996,203.229996,19740100
2018-07-09,204.929993,205.800003,202.119995,204.740005,204.740005,18149400
2018-07-10,204.500000,204.910004,202.259995,203.539993,203.539993,13190100
2018-07-11,202.220001,204.500000,201.750000,202.539993,202.539993,12927400
2018-07-12,203.429993,207.080002,203.190002,206.919998,206.919998,15454700
2018-07-13,207.809998,208.429993,206.449997,207.320007,207.320007,11486800
2018-07-16,207.500000,208.720001,206.839996,207.229996,207.229996,11078200
2018-07-17,204.899994,210.460007,204.839996,209.990005,209.990005,15349900
2018-07-18,209.820007,210.990005,208.440002,209.360001,209.360001,15334900
2018-07-19,208.770004,209.990005,207.759995,208.089996,208.089996,11350400
2018-07-20,208.850006,211.500000,208.500000,209.940002,209.940002,16163900
2018-07-23,210.580002,211.619995,208.800003,210.910004,210.910004,16732000

3. Data Preprocessing

3.1 Load the Data

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv('./data/rlData.csv')
data = data.sort_values('Date')
data.head()
data.shape

# Plot the Close price against the Date axis
sns.set_style("darkgrid")
plt.figure(figsize = (15,9))
plt.plot(data[['Close']])  # note the dtype of the y data passed in here
plt.xticks(range(0,data.shape[0],20), data['Date'].loc[::20], rotation=45)
plt.title("****** Stock Price",fontsize=18, fontweight='bold')
plt.xlabel('Date',fontsize=18)
plt.ylabel('Close Price (USD)',fontsize=18)
plt.show()

3.2 Select the Close Column as the Feature

price = data[['Close']]
price.info()
print(price.shape)  # (252, 1)

3.3 Normalize the Data

scaler = MinMaxScaler(feature_range=(-1, 1))
price['Close'] = scaler.fit_transform(price['Close'].values.reshape(-1,1))
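As a quick sanity check (the numbers below are illustrative, not taken from the dataset), MinMaxScaler maps the values into [-1, 1] and inverse_transform restores the original scale, which is exactly what the evaluation step relies on later:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

demo = np.array([[186.9], [185.9], [209.9]])      # hypothetical Close prices
demo_scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = demo_scaler.fit_transform(demo)          # every value now lies in [-1, 1]
restored = demo_scaler.inverse_transform(scaled)  # back to the original price scale
print(np.allclose(demo, restored))                # True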

3.4 Build the Dataset: [batch_size, time_steps, input_size]

def split_data(stock, lookback):
    data_raw = stock.to_numpy() 
    data = []  
    # lookback (the window / sequence length) can be tuned freely
    for index in range(len(data_raw) - lookback): 
        
        # data_raw has shape (252, 1); each iteration slices out one window of length lookback
        """
        input = [[1], [2], ..., [252]], so input.shape = (252, 1).
        Each iteration takes input[index: index + lookback], e.g. [[1], [2], [3]] with shape (lookback, 1),
        and appends it to the data list; after the loop, data has shape (len(data_raw) - lookback, lookback, 1).
        """
        data.append(data_raw[index: index + lookback])

    data = np.array(data)
    # print(data_raw.shape) #(252, 1)
    # print(data.shape)  #(249, 3, 1)
    
    
    test_set_size = int(np.round(0.2 * data.shape[0]))
    train_set_size = data.shape[0] - (test_set_size)
    
    x_train = data[:train_set_size,:-1,:]  # with lookback = n, the first n-1 steps of each window are the inputs, so time_steps = n-1
    y_train = data[:train_set_size,-1,:]   # the last step of each window is the target; shape (train_set_size, 1)
    
    x_test = data[train_set_size:,:-1,:]
    y_test = data[train_set_size:,-1,:]
    
    return [torch.Tensor(x_train), torch.Tensor(y_train), torch.Tensor(x_test),torch.Tensor(y_test)]

lookback = 3
x_train, y_train, x_test, y_test = split_data(price, lookback)
print('x_train.shape = ',x_train.shape) #(199, 2, 1)
print('y_train.shape = ',y_train.shape)
print('x_test.shape = ',x_test.shape) #x_test.shape =  (50, 2, 1)
print('y_test.shape = ',y_test.shape)
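To see the windowing and the 80/20 split in isolation, split_data can be run on a tiny synthetic series (purely hypothetical values); the resulting shapes follow the [samples, time_steps, input_size] layout described above:

import torch
import pandas as pd

toy = pd.DataFrame({'Close': [float(i) for i in range(10)]})
tx, ty, vx, vy = split_data(toy, lookback=3)
print(tx.shape, ty.shape)  # torch.Size([6, 2, 1]) torch.Size([6, 1])
print(vx.shape, vy.shape)  # torch.Size([1, 2, 1]) torch.Size([1, 1])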

4. The LSTM Model

class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(LSTM, self).__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        """
        batch_first=True:则输入为x.shape = (batch_size,time_steps,input_size)
                                out.shape = (batch_size,time-steps,ouput_size)
        
        """
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
        
        # x.shape = (batch_size,time_steps,input_size)
        # out.shape = (batch_size, time_steps, hidden_size)
        out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))
        
        # keep only the last time step and project it: out.shape = (batch_size, output_dim)
        out = self.fc(out[:, -1, :]) 
        return out
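A quick shape check with a random dummy batch (illustrative only) confirms the contract described in the comments: the model maps (batch_size, time_steps, input_size) to (batch_size, output_dim):

import torch

m = LSTM(input_dim=1, hidden_dim=32, num_layers=2, output_dim=1)
dummy = torch.randn(8, 2, 1)   # batch_size=8, time_steps=2, input_size=1
print(m(dummy).shape)          # torch.Size([8, 1])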

5. Training the Model

input_dim = 1
hidden_dim = 32
num_layers = 2
output_dim = 1
num_epochs = 100

model = LSTM(input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, num_layers=num_layers)
criterion = torch.nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)


hist = np.zeros(num_epochs)
lstm = []

import time

start_time = time.time()

for t in range(num_epochs):

    # x_train.shape = torch.Size([199, 2, 1])    [batch_size, time_steps, input_size]
    # y_train_pred.shape = torch.Size([199, 1])  [batch_size, output_size]

    y_train_pred = model(x_train)

    loss = criterion(y_train_pred, y_train)
    # print("Epoch ", t, "MSE: ", loss.item())
    hist[t] = loss.item()

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

training_time = time.time() - start_time
print("Training time: {:.2f}s".format(training_time))

6. Plot Predictions vs. Ground Truth, and the Training Loss

# Plot the training predictions y_train_pred against the ground truth y_train
predict = pd.DataFrame(scaler.inverse_transform(y_train_pred.detach().numpy()))
original = pd.DataFrame(scaler.inverse_transform(y_train.detach().numpy()))


sns.set_style("darkgrid")    
fig = plt.figure()
fig.subplots_adjust(hspace=0.2, wspace=0.2)

plt.subplot(1, 2, 1)
ax = sns.lineplot(x = original.index, y = original[0], label="Data", color='royalblue')
ax = sns.lineplot(x = predict.index, y = predict[0], label="Training Prediction (LSTM)", color='tomato')
ax.set_title('Stock price', size = 14, fontweight='bold')
ax.set_xlabel("Days", size = 14)
ax.set_ylabel("Cost (USD)", size = 14)
ax.set_xticklabels('', size=10)

plt.subplot(1, 2, 2)
ax = sns.lineplot(data=hist, color='royalblue')
ax.set_xlabel("Epoch", size = 14)
ax.set_ylabel("Loss", size = 14)
ax.set_title("Training Loss", size = 14, fontweight='bold')
fig.set_figheight(6)
fig.set_figwidth(16)

7. Evaluation on the Test Set

model.eval()

# make predictions
y_test_pred = model(x_test)

# invert predictions
y_train_pred = scaler.inverse_transform(y_train_pred.detach().numpy())
y_train = scaler.inverse_transform(y_train.detach().numpy())
y_test_pred = scaler.inverse_transform(y_test_pred.detach().numpy())
y_test = scaler.inverse_transform(y_test.detach().numpy())

# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(y_train[:,0], y_train_pred[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(y_test[:,0], y_test_pred[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
lstm.append(trainScore)
lstm.append(testScore)
lstm.append(training_time)
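As an optional extra (not part of the original listing), the test-set predictions can be plotted against the actual prices in the same style as the training plot:

plt.figure(figsize=(12, 6))
plt.plot(y_test[:, 0], label='Actual Close', color='royalblue')
plt.plot(y_test_pred[:, 0], label='Test Prediction (LSTM)', color='tomato')
plt.xlabel('Days (test set)')
plt.ylabel('Close Price (USD)')
plt.legend()
plt.show()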

Complete Code

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import torch
import torch.nn as nn
import math
import time

data = pd.read_csv('./data/rlData.csv')
data = data.sort_values('Date')
data.head()
data.shape

# Plot the Close price against the Date axis
sns.set_style("darkgrid")
plt.figure(figsize = (15,9))
plt.plot(data[['Close']])  # note the dtype of the y data passed in here
plt.xticks(range(0,data.shape[0],20), data['Date'].loc[::20], rotation=45)
plt.title("****** Stock Price",fontsize=18, fontweight='bold')
plt.xlabel('Date',fontsize=18)
plt.ylabel('Close Price (USD)',fontsize=18)
plt.show()


# 1. Select the Close column as the feature
price = data[['Close']]
price.info()
print(price.shape) #(252, 1)

# 2. Normalize the data
scaler = MinMaxScaler(feature_range=(-1, 1))
price['Close'] = scaler.fit_transform(price['Close'].values.reshape(-1,1))

# 3. Build the dataset
def split_data(stock, lookback):
    data_raw = stock.to_numpy() 
    data = []  
    # lookback (the window / sequence length) can be tuned freely
    for index in range(len(data_raw) - lookback): 
        
        # data_raw has shape (252, 1); each iteration slices out one window of length lookback
        """
        input = [[1], [2], ..., [252]], so input.shape = (252, 1).
        Each iteration takes input[index: index + lookback], e.g. [[1], [2], [3]] with shape (lookback, 1),
        and appends it to the data list; after the loop, data has shape (len(data_raw) - lookback, lookback, 1).
        """
        data.append(data_raw[index: index + lookback])

    data = np.array(data)
    # print(data_raw.shape) #(252, 1)
    # print(data.shape)  #(249, 3, 1)
    
    
    test_set_size = int(np.round(0.2 * data.shape[0]))
    train_set_size = data.shape[0] - (test_set_size)
    
    x_train = data[:train_set_size,:-1,:]  # with lookback = n, the first n-1 steps of each window are the inputs, so time_steps = n-1
    y_train = data[:train_set_size,-1,:]   # the last step of each window is the target; shape (train_set_size, 1)
    
    x_test = data[train_set_size:,:-1,:]
    y_test = data[train_set_size:,-1,:]
    
    return [torch.Tensor(x_train), torch.Tensor(y_train), torch.Tensor(x_test),torch.Tensor(y_test)]

lookback = 3
x_train, y_train, x_test, y_test = split_data(price, lookback)
print('x_train.shape = ',x_train.shape) #(199, 2, 1)
print('y_train.shape = ',y_train.shape)
print('x_test.shape = ',x_test.shape) #x_test.shape =  (50, 2, 1)
print('y_test.shape = ',y_test.shape)

class LSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(LSTM, self).__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        """
        batch_first=True:则输入为x.shape = (batch_size,time_steps,input_size)
                                out.shape = (batch_size,time-steps,ouput_size)
        
        """
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
        
        # x.shape = (batch_size,time_steps,input_size)
        # out.shape = (batch_size, time_steps, hidden_size)
        out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))
        
        # keep only the last time step and project it: out.shape = (batch_size, output_dim)
        out = self.fc(out[:, -1, :]) 
        return out
     
input_dim = 1
hidden_dim = 32
num_layers = 2
output_dim = 1
num_epochs = 100

model = LSTM(input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, num_layers=num_layers)
criterion = torch.nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)


hist = np.zeros(num_epochs)
lstm = []

start_time = time.time()

for t in range(num_epochs):

    # x_train.shape = torch.Size([199, 2, 1])    [batch_size, time_steps, input_size]
    # y_train_pred.shape = torch.Size([199, 1])  [batch_size, output_size]

    y_train_pred = model(x_train)

    loss = criterion(y_train_pred, y_train)
    # print("Epoch ", t, "MSE: ", loss.item())
    hist[t] = loss.item()

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

training_time = time.time() - start_time
print("Training time: {:.2f}s".format(training_time))
    
# Plot the training predictions y_train_pred against the ground truth y_train
predict = pd.DataFrame(scaler.inverse_transform(y_train_pred.detach().numpy()))
original = pd.DataFrame(scaler.inverse_transform(y_train.detach().numpy()))


sns.set_style("darkgrid")    
fig = plt.figure()
fig.subplots_adjust(hspace=0.2, wspace=0.2)

plt.subplot(1, 2, 1)
ax = sns.lineplot(x = original.index, y = original[0], label="Data", color='royalblue')
ax = sns.lineplot(x = predict.index, y = predict[0], label="Training Prediction (LSTM)", color='tomato')
ax.set_title('Stock price', size = 14, fontweight='bold')
ax.set_xlabel("Days", size = 14)
ax.set_ylabel("Cost (USD)", size = 14)
ax.set_xticklabels('', size=10)

plt.subplot(1, 2, 2)
ax = sns.lineplot(data=hist, color='royalblue')
ax.set_xlabel("Epoch", size = 14)
ax.set_ylabel("Loss", size = 14)
ax.set_title("Training Loss", size = 14, fontweight='bold')
fig.set_figheight(6)
fig.set_figwidth(16)


# switch the model to eval mode
model.eval()

# make predictions
y_test_pred = model(x_test)

# invert predictions
y_train_pred = scaler.inverse_transform(y_train_pred.detach().numpy())
y_train = scaler.inverse_transform(y_train.detach().numpy())
y_test_pred = scaler.inverse_transform(y_test_pred.detach().numpy())
y_test = scaler.inverse_transform(y_test.detach().numpy())

# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(y_train[:,0], y_train_pred[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(y_test[:,0], y_test_pred[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
lstm.append(trainScore)
lstm.append(testScore)
lstm.append(training_time)

Reference: https://gitee.com/qiangchen_sh/stock-prediction/blob/master/%E4%BB%A3%E7%A0%81/LSTM%E4%BB%8E%E7%90%86%E8%AE%BA%E5%9F%BA%E7%A1%80%E5%88%B0%E4%BB%A3%E7%A0%81%E5%AE%9E%E6%88%98%203%20%E8%82%A1%E7%A5%A8%E4%BB%B7%E6%A0%BC%E9%A2%84%E6%B5%8B_Pytorch.ipynb#
