Python Code Walkthrough for Problem C of the 2023 CUMCM (Higher Education Press Cup National College Mathematical Contest in Modeling)


Contents

    • Problem 1
      • 1.1 Different vegetable categories and individual items may be correlated; analyze the distribution patterns of, and relationships among, the sales volumes of the vegetable categories and individual items.
        • Data preprocessing
          • Merging the data
          • Extracting year, month, and day
          • Monthly mean sales volume per vegetable category
        • Seasonal time-series decomposition
          • STL decomposition
          • Additive decomposition
          • Multiplicative decomposition
        • ARIMA
        • LSTM

import pandas as pd

path = '/home/shiyu/Desktop/path_acdemic/ant/数模/历年题目/2023/'
# Contest attachments 1-4: item/category info, sales records, wholesale prices, loss rates
d1 = pd.read_excel(path + '附件1.xlsx')
d2 = pd.read_excel(path + '附件2.xlsx')
d3 = pd.read_excel(path + '附件3.xlsx')
d4 = pd.read_excel(path + '附件4.xlsx', sheet_name='Sheet1')
print(d1.shape)
print(d2.shape)
print(d3.shape)
print(d4.shape)
(251, 4)
(878503, 7)
(55982, 3)
(251, 3)

Problem 1

1.1 Different vegetable categories and individual items may be correlated; analyze the distribution patterns of, and relationships among, the sales volumes of the vegetable categories and individual items.


Data preprocessing
Merging the data
d1['分类名称'].value_counts()
分类名称
花叶类      100
食用菌       72
辣椒类       45
水生根茎类     19
茄类        10
花菜类        5
Name: count, dtype: int64
d12 = pd.merge(d2, d1, on='单品编码')

# Rename d3's date column to match d2 so the price table joins on item code + date
d3.columns = ['销售日期'] + list(d3.columns[1:3])
d123 = pd.merge(d12, d3, on=['单品编码','销售日期'])
d1234 = pd.merge(d123, d4, on=['单品编码','单品名称'])

d1234.shape
(878503, 12)
Extracting year, month, and day
d1234['月份'] = d1234['销售日期'].dt.month
d1234['月份'] = d1234['月份'].astype(str).str.zfill(2)  # zero-pad so '2020-7' becomes '2020-07'
d1234['年份'] = d1234['销售日期'].dt.year
d1234['年月'] = d1234['年份'].astype(str) + '-' + d1234['月份']
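If 销售日期 were read in as plain strings, the .dt accessor above would fail; converting first and using to_period builds the same key in one step (an equivalent sketch):

# Ensure datetime dtype, then build the year-month key directly (sketch)
d1234['销售日期'] = pd.to_datetime(d1234['销售日期'])
d1234['年月'] = d1234['销售日期'].dt.to_period('M').astype(str)  # e.g. '2020-07'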
Monthly mean sales volume per vegetable category
def my_group(category):
    d_sub = d1234[d1234['分类名称'] == category]
    # monthly mean sales volume for this category
    sale_by_month = pd.DataFrame(d_sub.groupby(['年月'])['销量(千克)'].mean())
    # prefix the column with the category name, e.g. '花叶类销量(千克)'
    sale_by_month.columns = category + sale_by_month.columns
    sale_by_month['年月'] = sale_by_month.index
    return sale_by_month
sale_by_month_leaves = my_group('花叶类')
sale_by_month_mushroom = my_group('食用菌')
sale_by_month_pepper = my_group('辣椒类')
sale_by_month_water = my_group('水生根茎类')
sale_by_month_eggplant = my_group('茄类')
sale_by_month_cauliflower = my_group('花菜类')
from functools import reduce
dfs = [sale_by_month_leaves, sale_by_month_mushroom, sale_by_month_pepper,
       sale_by_month_water, sale_by_month_eggplant, sale_by_month_cauliflower]
sale_by_month_all = reduce(lambda left, right: pd.merge(left, right, on='年月'), dfs)
sale_by_month_all.head()
   花叶类销量(千克)       年月  食用菌销量(千克)  辣椒类销量(千克)  水生根茎类销量(千克)  茄类销量(千克)  花菜类销量(千克)
0   0.464680  2020-07   0.308806   0.280185    0.418734  0.580838   0.473726
1   0.483167  2020-08   0.334804   0.309298    0.533321  0.549105   0.455973
2   0.500742  2020-09   0.351644   0.301242    0.557913  0.543880   0.464073
3   0.529107  2020-10   0.458446   0.292424    0.651536  0.536834   0.510383
4   0.625763  2020-11   0.553853   0.322914    0.643466  0.484198   0.535812
# Stack the six category columns into long format for plotting
df = pd.DataFrame(None, columns=['年月', '销量', '蔬菜品类'], index=range(sale_by_month_all.shape[0]*6))
df['销量'] = (list(sale_by_month_all.iloc[:, 0]) + list(sale_by_month_all.iloc[:, 2])
            + list(sale_by_month_all.iloc[:, 3]) + list(sale_by_month_all.iloc[:, 4])
            + list(sale_by_month_all.iloc[:, 5]) + list(sale_by_month_all.iloc[:, 6]))
df['年月'] = list(sale_by_month_all.iloc[:, 1]) * 6
names = [sale_by_month_all.columns[0]] + list(sale_by_month_all.columns)[2:7]
df['蔬菜品类'] = [x for x in names for i in range(sale_by_month_all.shape[0])]
df.head(3)
        年月        销量       蔬菜品类
0  2020-07  0.464680  花叶类销量(千克)
1  2020-08  0.483167  花叶类销量(千克)
2  2020-09  0.500742  花叶类销量(千克)
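The manual column-stacking above is just a wide-to-long reshape, which pandas.melt does in one call; a sketch that should produce the same rows up to ordering (value_cols and df_melt are hypothetical names):

# Equivalent wide-to-long reshape with pandas.melt (sketch)
value_cols = [c for c in sale_by_month_all.columns if c != '年月']
df_melt = sale_by_month_all.melt(id_vars='年月', value_vars=value_cols,
                                 var_name='蔬菜品类', value_name='销量')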
import plotly.express as px

fig = px.line(df, x="年月", y="销量", color='蔬菜品类', title='各蔬菜品类月销量随时间变化')
# center title
fig.update_layout(title_x=0.5)
# remove background color
fig.update_layout({
'plot_bgcolor': 'rgba(0, 0, 0, 0)',
'paper_bgcolor': 'rgba(0, 0, 0, 0)',
})
fig.show()

[Figure: line chart of monthly sales for each vegetable category over time]

Seasonal time-series decomposition
sale_by_month_all.head(3)
   花叶类销量(千克)       年月  食用菌销量(千克)  辣椒类销量(千克)  水生根茎类销量(千克)  茄类销量(千克)  花菜类销量(千克)
0   0.464680  2020-07   0.308806   0.280185    0.418734  0.580838   0.473726
1   0.483167  2020-08   0.334804   0.309298    0.533321  0.549105   0.455973
2   0.500742  2020-09   0.351644   0.301242    0.557913  0.543880   0.464073

The decompositions below use the 水生根茎类 (aquatic root vegetables) series, i.e. column index 4 of sale_by_month_all.

STL decomposition

https://www.geo.fu-berlin.de/en/v/soga-py/Advanced-statistics/time-series-analysis/Seasonal-decompositon/STL-decomposition/index.html

import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose, STL
import matplotlib.pyplot as plt

# Monthly data: one seasonal cycle spans 12 observations
stl = STL(sale_by_month_all.iloc[:,4], period=12)
res = stl.fit()

data = {'trend': res.trend,
        'seasonality': res.seasonal,
        'residuals': res.resid}

res_stl = pd.DataFrame(data)
res_stl.head()
      trend  seasonality  residuals
0  0.644373    -0.247768   0.022129
1  0.642466    -0.132287   0.023142
2  0.640681    -0.059934  -0.022835
3  0.639011     0.018400  -0.005875
4  0.637452     0.007495  -0.001481
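As a quick sanity check: STL is an additive decomposition, so the three components should sum back to the original series (a sketch):

import numpy as np
# trend + seasonal + residual should reproduce the input series exactly
recon = res_stl['trend'] + res_stl['seasonality'] + res_stl['residuals']
assert np.allclose(recon, sale_by_month_all.iloc[:, 4])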
# Render Chinese characters on Linux (requires the WenQuanYi Micro Hei font) *** important
plt.rcParams['font.family'] = 'WenQuanYi Micro Hei'
plt.rcParams['figure.dpi'] = 300
plt.rcParams['savefig.dpi'] = 300

fig = res.plot()

[Figure: STL decomposition of the series: observed, trend, seasonal, and residual components]

import scipy.stats as stats

plt.figure()
stats.probplot(res.resid, dist="norm", plot=plt)
plt.title("QQ-Plot")
plt.show()

[Figure: QQ-plot of the STL residuals]
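A Shapiro-Wilk test gives a numeric complement to the visual QQ-plot check; a sketch using scipy.stats:

# Shapiro-Wilk normality test on the STL residuals (sketch)
stat, p = stats.shapiro(res.resid)
print('Shapiro-Wilk: W=%.3f, p=%.3f' % (stat, p))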

# histogram plot
plt.figure(figsize=(9, 3))

plt.hist(res.resid)
plt.title("Residuals")
plt.show()

[Figure: histogram of the STL residuals]

Additive decomposition
add = seasonal_decompose(
    sale_by_month_all.iloc[:,4], period=12,
    model="additive"
)

data = {'trend': add.trend,
        'seasonality': add.seasonal,
        'residuals': add.resid}

res_add = pd.DataFrame(data)
# seasonal_decompose uses a centered moving average, so with period=12 the trend
# and residuals are NaN for the first and last 6 months; inspect from row 6 on
res_add.iloc[6:10,:]
      trend  seasonality  residuals
6  0.619041     0.168495  -0.011543
7  0.622029     0.143684   0.092367
8  0.627388     0.047414   0.054634
9  0.632759    -0.025056   0.054265
add.plot()

[Figure: additive decomposition components]

Multiplicative decomposition
multi = seasonal_decompose(
    sale_by_month_all.iloc[:,4], period=12,
    model="multiplicative"
)

data = {'trend': multi.trend,
        'seasonality': multi.seasonal,
        'residuals': multi.resid}

res_multi = pd.DataFrame(data)
res_multi.iloc[6:10,:]
      trend  seasonality  residuals
6  0.619041     1.259177   0.995523
7  0.622029     1.221443   1.129390
8  0.627388     1.072259   1.084304
9  0.632759     0.963760   1.085500
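The multiplicative counterpart of the earlier additive check: here the residual is defined as observed/(trend*seasonal), so the product of the three components should reproduce the input wherever the trend is defined (a sketch):

import numpy as np
# trend * seasonal * residual reproduces the input (NaN edges dropped)
recon = (res_multi['trend'] * res_multi['seasonality'] * res_multi['residuals']).dropna()
assert np.allclose(recon, sale_by_month_all.iloc[:, 4].loc[recon.index])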
multi.plot()

[Figure: multiplicative decomposition components]

ARIMA

https://machinelearningmastery.com/arima-for-time-series-forecasting-with-python/

Fit and forecast the trend component obtained from the STL decomposition with an ARIMA model.

res_stl.head()
      trend  seasonality  residuals
0  0.644373    -0.247768   0.022129
1  0.642466    -0.132287   0.023142
2  0.640681    -0.059934  -0.022835
3  0.639011     0.018400  -0.005875
4  0.637452     0.007495  -0.001481
from matplotlib import pyplot

series = res_stl['trend']
series.plot()

[Figure: STL trend component of the series]

from pandas.plotting import autocorrelation_plot
autocorrelation_plot(series)

[Figure: autocorrelation plot of the trend series]
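The d=1 differencing used in the ARIMA orders below can be sanity-checked with an augmented Dickey-Fuller test on the differenced trend; a sketch using statsmodels:

from statsmodels.tsa.stattools import adfuller
# A small p-value suggests the once-differenced trend is stationary
adf_stat, pvalue = adfuller(series.diff().dropna())[:2]
print('ADF statistic: %.3f, p-value: %.3f' % (adf_stat, pvalue))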

from statsmodels.tsa.arima.model import ARIMA
model = ARIMA(series, order=(10,1,0))
model_fit = model.fit()
print(model_fit.summary())
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                  trend   No. Observations:                   36
Model:                ARIMA(10, 1, 0)   Log Likelihood                 222.352
Date:                Thu, 01 Aug 2024   AIC                           -422.704
Time:                        20:01:12   BIC                           -405.596
Sample:                             0   HQIC                          -416.798
                                 - 36                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1          1.4304      0.096     14.970      0.000       1.243       1.618
ar.L2         -0.2795      0.077     -3.638      0.000      -0.430      -0.129
ar.L3          0.0147      0.078      0.187      0.851      -0.139       0.169
ar.L4         -0.0643      0.045     -1.417      0.156      -0.153       0.025
ar.L5         -0.1505      0.046     -3.291      0.001      -0.240      -0.061
ar.L6         -0.0301      0.071     -0.422      0.673      -0.170       0.110
ar.L7         -0.0039      0.066     -0.059      0.953      -0.132       0.125
ar.L8          0.0468      0.039      1.198      0.231      -0.030       0.123
ar.L9          0.0183      0.034      0.537      0.591      -0.048       0.085
ar.L10         0.0070      0.104      0.067      0.946      -0.198       0.212
sigma2      1.491e-07   1.57e-08      9.498      0.000    1.18e-07     1.8e-07
===================================================================================
Ljung-Box (L1) (Q):                   0.08   Jarque-Bera (JB):               449.77
Prob(Q):                              0.78   Prob(JB):                         0.00
Heteroskedasticity (H):               0.09   Skew:                             3.52
Prob(H) (two-sided):                  0.00   Kurtosis:                        19.09
===================================================================================
from pandas import DataFrame
# line plot of residuals
residuals = DataFrame(model_fit.resid)
residuals.plot()
pyplot.show()

[Figure: line plot of the ARIMA residuals]

# summary stats of residuals
print(residuals.describe())
               0
count  36.000000
mean    0.017927
std     0.107392
min    -0.001907
25%    -0.000042
50%     0.000025
75%     0.000113
max     0.644373
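A Ljung-Box test quantifies whether autocorrelation remains in the residuals (the summary table above already reports it at lag 1); a sketch using statsmodels:

from statsmodels.stats.diagnostic import acorr_ljungbox
# Large p-values indicate no significant remaining autocorrelation
print(acorr_ljungbox(model_fit.resid, lags=[10]))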
import warnings
warnings.filterwarnings("ignore")
from numpy import sqrt
from sklearn.metrics import mean_squared_error

X = series.values
train, test = X, X  # evaluate one-step-ahead predictions in-sample
history = [x for x in train]
predictions = list()

# walk-forward validation
for t in range(len(test)):
    model = ARIMA(history, order=(5,1,0))
    model_fit = model.fit()
    output = model_fit.forecast()
    yhat = output[0]
    predictions.append(yhat)
    obs = test[t]
    history.append(obs)
    print('predicted=%f, expected=%f' % (yhat, obs))

# evaluate forecasts
rmse = sqrt(mean_squared_error(test, predictions))
print('Test RMSE: %.3f' % rmse)
predicted=0.746828, expected=0.644373
predicted=0.639646, expected=0.642466
predicted=0.637131, expected=0.640681
predicted=0.636508, expected=0.639011
predicted=0.638260, expected=0.637452
predicted=0.636941, expected=0.636011
predicted=0.635828, expected=0.634695
predicted=0.634524, expected=0.633510
predicted=0.633352, expected=0.632476
predicted=0.632332, expected=0.631610
predicted=0.631483, expected=0.630988
predicted=0.630880, expected=0.630979
predicted=0.630907, expected=0.633298
predicted=0.633330, expected=0.636138
predicted=0.636276, expected=0.639619
predicted=0.639865, expected=0.643978
predicted=0.644346, expected=0.649266
predicted=0.649768, expected=0.655252
predicted=0.655890, expected=0.661718
predicted=0.662507, expected=0.668408
predicted=0.669351, expected=0.675055
predicted=0.676136, expected=0.681361
predicted=0.682541, expected=0.687128
predicted=0.688354, expected=0.692332
predicted=0.693554, expected=0.697262
predicted=0.698456, expected=0.701650
predicted=0.702786, expected=0.705999
predicted=0.707089, expected=0.710305
predicted=0.711366, expected=0.714555
predicted=0.715603, expected=0.718749
predicted=0.719792, expected=0.722890
predicted=0.723909, expected=0.726981
predicted=0.728039, expected=0.731031
predicted=0.732095, expected=0.735043
predicted=0.736114, expected=0.739024
predicted=0.740101, expected=0.742980
Test RMSE: 0.017
# plot forecasts against actual outcomes
pyplot.plot(test)
pyplot.plot(predictions, color='red')
pyplot.show()

[Figure: actual trend with walk-forward predictions overlaid in red]
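Beyond the in-sample walk-forward evaluation, the model can produce genuine out-of-sample forecasts of the trend; a sketch (the 12-month horizon and the final_fit name are assumptions):

# Refit on the full trend series and forecast the next 12 months (sketch)
final_fit = ARIMA(series, order=(5, 1, 0)).fit()
print(final_fit.forecast(steps=12))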

LSTM

https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
# fix random seed for reproducibility
tf.random.set_seed(7)
# Univariate series: per-transaction sales volume (column 3 = 销量(千克))
df = d1234.iloc[:,3]
df = np.array(df).reshape(-1,1)

# normalize the dataset to [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
df_norm = scaler.fit_transform(df)

# split into train and test sets
train_size = int(len(df_norm) * 0.7)
test_size = len(df_norm) - train_size
train, test = df_norm[0:train_size,:], df_norm[train_size:len(df_norm),:]
print(len(train), len(test))
614952 263551
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    # return outside the loop so every window is collected
    return np.array(dataX), np.array(dataY)
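A toy check of create_dataset on hypothetical data: with look_back=1, each value should be paired with its successor:

# windows [0],[1],[2] map to targets 1,2,3
demo = np.arange(5, dtype=float).reshape(-1, 1)
X_demo, y_demo = create_dataset(demo, look_back=1)
print(X_demo.ravel())  # [0. 1. 2.]
print(y_demo)          # [1. 2. 3.]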
# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# reshape input to be [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))

# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
Epoch 1/100
1/1 - 1s - loss: 0.0026 - 611ms/epoch - 611ms/step
Epoch 2/100
1/1 - 0s - loss: 0.0024 - 2ms/epoch - 2ms/step
Epoch 3/100
1/1 - 0s - loss: 0.0023 - 2ms/epoch - 2ms/step
...
Epoch 99/100
1/1 - 0s - loss: 1.4884e-07 - 1ms/epoch - 1ms/step
Epoch 100/100
1/1 - 0s - loss: 1.3510e-07 - 1ms/epoch - 1ms/step

<keras.callbacks.History at 0x7f1d50766be0>
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
# calculate root mean squared error
trainScore = np.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = np.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
1/1 [==============================] - 0s 192ms/step
1/1 [==============================] - 0s 10ms/step
Train Score: 0.06 RMSE
Test Score: 0.19 RMSE
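For context, a persistence baseline (predicting each value by the previous observation, which with look_back=1 is exactly what the inputs hold) gives a floor to compare the LSTM against; a sketch:

# Persistence baseline: the previous observation as the prediction (sketch)
naivePredict = scaler.inverse_transform(testX[:, 0, :])
naiveScore = np.sqrt(mean_squared_error(testY[0], naivePredict[:, 0]))
print('Naive Score: %.2f RMSE' % naiveScore)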
# shift train predictions for plotting
trainPredictPlot = np.empty_like(df_norm)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict

# shift test predictions for plotting
testPredictPlot = np.empty_like(df_norm)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(df_norm)-1, :] = testPredict

# plot baseline and predictions
plt.plot(scaler.inverse_transform(df_norm))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()

[Figure: original series with train and test predictions overlaid]
