🎯 Key points
- Comparing three ways of computing the Jacobian
- Reading 2D and 3D triangle, quadrilateral, and hexahedral meshes
- Handling kinematic singularities
- Radiomics analysis of medical images
- Feature-sensitivity enhancement
- Robot-arm path planning and arm-space operation transforms
- The intermediate axis theorem and the physical stability of the iPhone
Jacobian matrices in Python
The Jacobian matrix of a multivariate vector-valued function generalizes the gradient of a multivariate scalar-valued function, which in turn generalizes the derivative of a single-variable scalar-valued function. In other words, the Jacobian matrix of a multivariate scalar-valued function is (the transpose of) its gradient, and the gradient of a single-variable scalar-valued function is its derivative.
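For instance (a quick illustrative check, not from the original), for the scalar-valued function $g(x, y) = x^2 y$ the $1 \times 2$ Jacobian is exactly the gradient $\nabla g = (2xy,\ x^2)$ written as a row:

import sympy as sp

x, y = sp.symbols('x y')
g = sp.Matrix([x**2 * y])        # a scalar-valued function, written as a 1-vector
print(g.jacobian([x, y]))        # Matrix([[2*x*y, x**2]]) -- the gradient as a row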
At every point where a function is differentiable, its Jacobian matrix can be thought of as describing how much the function locally "stretches", "rotates", or "transforms" near that point. For example, if an image is smoothly transformed by $(x', y') = f(x, y)$, the Jacobian matrix $J_f(x, y)$ describes how the image in a neighborhood of $(x, y)$ is transformed. If a function is differentiable at a point, its differential is given in coordinates by the Jacobian matrix. However, a function need not be differentiable for its Jacobian matrix to be defined, since only its first-order partial derivatives are required to exist.
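To see this local-linearization view numerically, the following minimal sketch (the map $f(x, y) = (x^2 - y,\ xy)$, the point, and the displacement are illustrative choices, not from the original) compares an actual change in $f$ with the change predicted by the Jacobian:

import numpy as np

def f(p):
    # A smooth map f: R^2 -> R^2
    x, y = p
    return np.array([x**2 - y, x * y])

def J(p):
    # Its analytic Jacobian
    x, y = p
    return np.array([[2 * x, -1.0],
                     [y, x]])

p = np.array([1.0, 2.0])
h = np.array([1e-4, -2e-4])  # a small displacement

# Near p, f(p + h) - f(p) is approximated to first order by J(p) @ h
print(f(p + h) - f(p))
print(J(p) @ h)  # the two printouts agree to about 1e-8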
Consider the following vector-valued function, which takes an $n$-dimensional vector $x \in \mathbb{R}^n$ as input and maps it to an $m$-dimensional vector:
$$f(x)=\left[\begin{array}{c} f_1\left(x_1, x_2, x_3, \ldots, x_n\right) \\ f_2\left(x_1, x_2, x_3, \ldots, x_n\right) \\ \vdots \\ f_m\left(x_1, x_2, x_3, \ldots, x_n\right) \end{array}\right]$$
where the vector $x$ is defined as
$$x=\left[\begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array}\right]$$
The nonlinear vector function $f$ produces the $m$-dimensional vector
$$\left[\begin{array}{c} f_1\left(x_1, x_2, x_3, \ldots, x_n\right) \\ f_2\left(x_1, x_2, x_3, \ldots, x_n\right) \\ \vdots \\ f_m\left(x_1, x_2, x_3, \ldots, x_n\right) \end{array}\right]$$
whose entries are the $m$ functions $f_i$, $i=1,2,\ldots,m$, each mapping the entries of the vector $x$ to a scalar.
The Jacobian matrix of the function $f(\cdot)$ is the $m \times n$ matrix of partial derivatives defined as
$$\frac{\partial f}{\partial x}=\left[\begin{array}{cccc} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \cdots & \frac{\partial f_m}{\partial x_n} \end{array}\right]$$
The first row of this matrix consists of the partial derivatives of $f_1(\cdot)$ with respect to $x_1, x_2, \ldots, x_n$, respectively. Similarly, the second row consists of the partial derivatives of $f_2(\cdot)$ with respect to $x_1, x_2, \ldots, x_n$. The remaining rows of the Jacobian matrix are constructed in the same way.
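To make this row-by-row construction concrete, here is a minimal SymPy sketch (the two-component function $f(x_1, x_2, x_3) = (x_1 x_2 + x_3,\ \sin(x_1)\,x_3)$ is an illustrative choice, not from the original) that builds the Jacobian one row at a time and checks it against SymPy's built-in jacobian:

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Matrix([x1 * x2 + x3, sp.sin(x1) * x3])  # f: R^3 -> R^2
variables = [x1, x2, x3]

# Row i holds the partial derivatives of f_i w.r.t. x1, ..., xn
rows = [[sp.diff(fi, v) for v in variables] for fi in f]
J_manual = sp.Matrix(rows)

# Same matrix as SymPy's built-in construction
assert J_manual == f.jacobian(variables)
print(J_manual)  # Matrix([[x2, x1, 1], [x3*cos(x1), 0, sin(x1)]])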
Here we present a Python script that computes the Jacobian matrix symbolically and creates a Python function that returns the numerical value of the Jacobian matrix for a given input vector $x$. To verify the Python implementation, let us consider the following test-case function
$$f=\left[\begin{array}{c} x_1 x_2 \\ \sin\left(x_1\right) \\ \cos\left(x_3\right) \\ x_3 e^{x_4} \end{array}\right]$$
where $x$ is
$$x=\left[\begin{array}{l} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array}\right]$$
and the component functions are
$$\begin{aligned} f_1\left(x_1, x_2, x_3, x_4\right) &= x_1 x_2 \\ f_2\left(x_1, x_2, x_3, x_4\right) &= \sin\left(x_1\right) \\ f_3\left(x_1, x_2, x_3, x_4\right) &= \cos\left(x_3\right) \\ f_4\left(x_1, x_2, x_3, x_4\right) &= x_3 e^{x_4} \end{aligned}$$
The Jacobian matrix of this function is
$$\frac{\partial f}{\partial x}=\left[\begin{array}{llll} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \frac{\partial f_1}{\partial x_3} & \frac{\partial f_1}{\partial x_4} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \frac{\partial f_2}{\partial x_3} & \frac{\partial f_2}{\partial x_4} \\ \frac{\partial f_3}{\partial x_1} & \frac{\partial f_3}{\partial x_2} & \frac{\partial f_3}{\partial x_3} & \frac{\partial f_3}{\partial x_4} \\ \frac{\partial f_4}{\partial x_1} & \frac{\partial f_4}{\partial x_2} & \frac{\partial f_4}{\partial x_3} & \frac{\partial f_4}{\partial x_4} \end{array}\right]$$
Evaluating these partial derivatives, we obtain
$$\frac{\partial f}{\partial x}=\left[\begin{array}{cccc} x_2 & x_1 & 0 & 0 \\ \cos\left(x_1\right) & 0 & 0 & 0 \\ 0 & 0 & -\sin\left(x_3\right) & 0 \\ 0 & 0 & e^{x_4} & x_3 e^{x_4} \end{array}\right]$$
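At the test point $x = (1, 1, 1, 1)^{T}$ used below, the analytic Jacobian therefore evaluates to
$$\left[\begin{array}{cccc} 1 & 1 & 0 & 0 \\ \cos(1) & 0 & 0 & 0 \\ 0 & 0 & -\sin(1) & 0 \\ 0 & 0 & e & e \end{array}\right] \approx \left[\begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0.5403 & 0 & 0 & 0 \\ 0 & 0 & -0.8415 & 0 \\ 0 & 0 & 2.7183 & 2.7183 \end{array}\right],$$
which is the value the script below should reproduce.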
import numpy as np
from sympy import *

init_printing()

# Symbolic 4x1 input vector
x = MatrixSymbol('x', 4, 1)

# Nonlinear vector function f(x)
f = Matrix([[x[0] * x[1]],
            [sin(x[0])],
            [cos(x[2])],
            [x[2] * E**(x[3])]])

# Symbolic Jacobian and a callable numerical version of it
JacobianSymbolic = f.jacobian(x)
JacobianFunction = lambdify(x, JacobianSymbolic)

# Evaluate the Jacobian at the test point (1, 1, 1, 1)
testCaseVector = np.array([[1], [1], [1], [1]])
JacobianNumerical = JacobianFunction(testCaseVector)
The symbolic vector "x" is defined as
x = MatrixSymbol('x', 4, 1)
The nonlinear vector function "f" is defined as
f = Matrix([[x[0] * x[1]],
            [sin(x[0])],
            [cos(x[2])],
            [x[2] * E**(x[3])]])
Its symbolic Jacobian is computed and converted into a callable numerical function with
JacobianSymbolic = f.jacobian(x)
JacobianFunction = lambdify(x, JacobianSymbolic)
Finally, the Jacobian is evaluated at the test vector:
testCaseVector = np.array([[1], [1], [1], [1]])
JacobianNumerical = JacobianFunction(testCaseVector)
The result stored in "JacobianNumerical" is a numerical NumPy array (matrix) that can be used in further computations.
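As a quick sanity check (a minimal sketch; this comparison step is not part of the original script), the numerical result can be compared entry by entry with the hand-computed Jacobian at $x = (1, 1, 1, 1)^{T}$:

# Hand-computed Jacobian at the test point, from the analytic result above
expected = np.array([[1, 1, 0, 0],
                     [np.cos(1), 0, 0, 0],
                     [0, 0, -np.sin(1), 0],
                     [0, 0, np.e, np.e]])
print(np.allclose(JacobianNumerical, expected))  # True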
Example: Jacobian matrix with TensorFlow
%tensorflow_version 1.x
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from sklearn.model_selection import train_test_split

# Synthetic regression data: five standard-normal predictors plus an intercept column
np.random.seed(245)
nobs = 10000
x1 = np.random.normal(size=nobs, scale=1)
x2 = np.random.normal(size=nobs, scale=1)
x3 = np.random.normal(size=nobs, scale=1)
x4 = np.random.normal(size=nobs, scale=1)
x5 = np.random.normal(size=nobs, scale=1)
X = np.c_[np.ones((nobs, 1)), x1, x2, x3, x4, x5]
y = np.cos(x1) + np.sin(x2) + 2*x3 + x4 + 0.01*x5 + np.random.normal(size=nobs, scale=0.01)

# Network hyperparameters
LR = 0.05
Neuron_Out = 1
Neuron_Hidden1 = 64
Neuron_Hidden2 = 32
Activate_output = 'linear'
Activate_hidden = 'relu'
Optimizer = SGD(lr=LR)
loss = 'mean_squared_error'

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=77)

tf.set_random_seed(245)
# Register an interactive session so Keras and tf.gradients share it
sess = tf.InteractiveSession()

# Two ReLU hidden layers (64 and 32 units) and one linear output
model_ANN = Sequential()
model_ANN.add(Dense(Neuron_Hidden1, activation=Activate_hidden, input_shape=(6,), use_bias=True))
model_ANN.add(Dense(Neuron_Hidden2, activation=Activate_hidden, use_bias=True))
model_ANN.add(Dense(Neuron_Out, activation=Activate_output, use_bias=True))
model_ANN.summary()

model_ANN.compile(loss=loss, optimizer=Optimizer, metrics=['accuracy'])
history_ANN = model_ANN.fit(x_train, y_train, epochs=125)

def jacobian_tensorflow(x):
    # Build the Jacobian row by row: one gradient pass per output neuron
    jacobian_matrix = []
    for m in range(Neuron_Out):
        grad_func = tf.gradients(model_ANN.output[:, m], model_ANN.input)
        gradients = sess.run(grad_func, feed_dict={model_ANN.input: x})
        # Keep only the gradient at the first sample of x
        jacobian_matrix.append(gradients[0][0, :])
    return np.array(jacobian_matrix)

# Neuron_Out x 6 Jacobian of the network output w.r.t. its inputs,
# evaluated at the first sample of x_train
jacobian_tensorflow(x_train)
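Since the hidden layers are ReLU, the network is piecewise linear, and one way to gain confidence in this result (a hedged sketch, not part of the original post; jacobian_finite_difference is a hypothetical helper) is to cross-check the analytic gradient against a central finite-difference approximation at the same sample:

def jacobian_finite_difference(x_row, eps=1e-4):
    # Central differences of the scalar network output w.r.t. each input
    grads = np.zeros(x_row.shape[0])
    for j in range(x_row.shape[0]):
        xp, xm = x_row.copy(), x_row.copy()
        xp[j] += eps
        xm[j] -= eps
        fp = model_ANN.predict(xp.reshape(1, -1))[0, 0]
        fm = model_ANN.predict(xm.reshape(1, -1))[0, 0]
        grads[j] = (fp - fm) / (2 * eps)
    return grads

# The two rows should match closely (up to kinks in the ReLU activations)
print(jacobian_tensorflow(x_train)[0])
print(jacobian_finite_difference(x_train[0]))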