[Finite Element Method] Newton-Raphson Method


Linear vs Nonlinear Analysis:

  • Up to this point, we have been able to conduct a linear analysis without difficulty:

$$\int \sum_{i,j=1}^3 \sigma_{ij}\,\varepsilon_{ij}^*\,dv = \int t_n \cdot u^*\,ds + \int \rho b \cdot u^*\,dv \;\Rightarrow\; \underbrace{\int_e [B]^T[C][B]\,dx}_{k_e}\,u_e = \int_{\partial e} [N]^T t_n\,ds + \int_e [N]^T \rho b\,dv$$

  • Two Assumptions:

(1) Small Deformations
(2) Linear Elastic Materials

  • This allowed us to solve the problem as a system of linear equations:

$$[K][u] = [F]$$

  • Unfortunately, the world is a cruel place and the majority of realistic scenarios require a nonlinear analysis:

Stress-Strain Relationship:
$$\int \sum_{i,j=1}^3 \sigma_{ij}\,\varepsilon_{ij}^*\,dv$$

$t_n$ and $\rho b$ are solution dependent:

$$\int t_n \cdot u^*\,ds + \int \rho b \cdot u^*\,dv$$

Types of Nonlinearities

There are many types of nonlinearities:

1. Geometric Nonlinearity

  • Buckling
  • Large deformation of the material

2. Material Nonlinearity

  • Plasticity
  • Damage


3. Kinematic Nonlinearity

  • Contact


Newton-Raphson Method (Single Variable)

To solve nonlinear equations we typically employ the Newton-Raphson method (also referred to as Newton’s method). To see how this method works, let’s examine a single variable scenario:

If we have a continuous function $f$, we can perform a Taylor series expansion:

$$f(x^{(i+1)}) \approx f(x^{(i)}) + \frac{\partial f(x^{(i)})}{\partial x}\left(x^{(i+1)} - x^{(i)}\right) + \frac{1}{2!}\frac{\partial^2 f(x^{(i)})}{\partial x^2}\left(x^{(i+1)} - x^{(i)}\right)^2 + \dots$$

Neglecting higher-order terms:

$$f(x^{(i+1)}) \approx f(x^{(i)}) + \frac{\partial f(x^{(i)})}{\partial x}\left(x^{(i+1)} - x^{(i)}\right)$$

Rearranging the equation:

$$\begin{aligned} \left(x^{(i+1)} - x^{(i)}\right) &= \frac{f(x^{(i+1)}) - f(x^{(i)})}{f'(x^{(i)})} \\ \Delta x &= x^{(i+1)} - x^{(i)} \\ K_t &= f'(x^{(i)}) \end{aligned}$$

Simplifying, and writing the desired value $f(x^{(i+1)})$ as $F$:

$$\Delta x = \frac{F - f(x^{(i)})}{K_t} = K_t^{-1}\left(F - f(x^{(i)})\right)$$

Now let’s examine the parameters of the equation:

$F =$ desired value
$x^{(i)} =$ initial value of $x$
$f(x^{(i)}) =$ initial value of the function
$K_t =$ slope of the tangent line
$\Delta x =$ increment in $x$: the quantity we solve for


In FEA:

  • $F - f(x^{(i)}) =$ difference between the applied forces $(F)$ and the internal forces $(f(x^{(i)}))$
  • $\Delta x =$ displacement increment
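As a sketch of the update rule above (Python; the function names, tolerance, and iteration cap are illustrative assumptions, not part of the original notes):

```python
import math

def newton_raphson(f, df, F, x0, tol=1e-8, max_iter=50):
    """Solve f(x) = F via Newton-Raphson: x <- x + Kt^{-1} (F - f(x))."""
    x = x0
    for _ in range(max_iter):
        residual = F - f(x)        # difference between applied and internal "forces"
        if abs(residual) <= tol:
            break
        Kt = df(x)                 # slope of the tangent line at the current iterate
        x += residual / Kt         # increment: dx = Kt^{-1} (F - f(x))
    return x

# Example 1 below: f(x) = sqrt(x) with target F = 3; the iterates approach x = 9
root = newton_raphson(math.sqrt, lambda x: 1.0 / (2.0 * math.sqrt(x)), F=3.0, x0=1.0)
```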

Example 1 - Single Variable Newton-Raphson Method

If $f(x) = \sqrt{x}$, find $x$ such that $f(x) = 3$

Function Information:

$$f(x) = \sqrt{x} \;\Rightarrow\; K_t = \frac{\partial f}{\partial x} = \frac{1}{2\sqrt{x}}$$

$$\Delta x = K_t^{-1}\left(F - f(x^{(i)})\right)$$

Sample calculation ($1^{st}$ iteration):

$$\Delta x = K_t^{-1}\left(F - f(x^{(i)})\right) = (0.5)^{-1}(3 - 1) = 4$$

Iteration history:

| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
| --- | --- | --- | --- | --- |
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.22 | 3.42 | 8.42 |
| 8.42 | 2.90 | 0.17 | 0.57 | 8.99 |

Newton-Raphson Method (Multi-Variable)

Unfortunately, most scenarios (especially in finite element analysis) involve systems of multi-variable equations:

$$\begin{aligned} f_1(x_1, x_2, \dots, x_n) &= 0 \\ f_2(x_1, x_2, \dots, x_n) &= 0 \\ &\;\,\vdots \\ f_n(x_1, x_2, \dots, x_n) &= 0 \end{aligned}$$

We can perform the Taylor series expansion for each equation:

$$\begin{aligned} f_1(x_1^{(i+1)}, x_2^{(i+1)}, \dots, x_n^{(i+1)}) \approx{}& f_1(x_1^{(i)}, x_2^{(i)}, \dots, x_n^{(i)}) \\ &+ \frac{\partial f_1}{\partial x_1}\Big|_{x^{(i)}}\left(x_1^{(i+1)} - x_1^{(i)}\right) + \frac{\partial f_1}{\partial x_2}\Big|_{x^{(i)}}\left(x_2^{(i+1)} - x_2^{(i)}\right) + \cdots + \frac{\partial f_1}{\partial x_n}\Big|_{x^{(i)}}\left(x_n^{(i+1)} - x_n^{(i)}\right) \end{aligned}$$

Writing all the equations in matrix form:

$$\begin{bmatrix} f_1(x^{(i+1)}) \\ f_2(x^{(i+1)}) \\ \vdots \\ f_n(x^{(i+1)}) \end{bmatrix} = \begin{bmatrix} f_1(x^{(i)}) \\ f_2(x^{(i)}) \\ \vdots \\ f_n(x^{(i)}) \end{bmatrix} + \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix} \begin{bmatrix} x_1^{(i+1)} - x_1^{(i)} \\ x_2^{(i+1)} - x_2^{(i)} \\ \vdots \\ x_n^{(i+1)} - x_n^{(i)} \end{bmatrix}$$

We can move the vector containing the value of the functions at the initial guess to the other side to obtain the following:

$$\begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix} \begin{bmatrix} x_1^{(i+1)} - x_1^{(i)} \\ x_2^{(i+1)} - x_2^{(i)} \\ \vdots \\ x_n^{(i+1)} - x_n^{(i)} \end{bmatrix} = \begin{bmatrix} f_1(x^{(i+1)}) \\ f_2(x^{(i+1)}) \\ \vdots \\ f_n(x^{(i+1)}) \end{bmatrix} - \begin{bmatrix} f_1(x^{(i)}) \\ f_2(x^{(i)}) \\ \vdots \\ f_n(x^{(i)}) \end{bmatrix}$$

or, in simpler form:

$$[K_t][\Delta x] = [F] - [f(x^{(i)})]$$

Moving $[K_t]$ to the other side:

$$[\Delta x] = [K_t]^{-1}\left([F] - [f(x^{(i)})]\right)$$

This is the same as the single-variable case, but now with vectors and matrices.

Convergence Checks:

With multiple variables it is difficult to determine whether the method has converged. To make this easier, the norms of the relevant vectors are used:

1. Force (and Moment) Convergence:

  • Force convergence is achieved when the norm of the vector $f(x^{(i)}) - F$ divided by the norm of the vector $F$ is less than the specified tolerance $e_R$:

$$\frac{\left\|f(x^{(i)}) - F\right\|}{\|F\|} \leq e_R$$

2. Displacement Convergence:

  • Displacement convergence is achieved when the norm of the vector $\Delta x$ divided by the norm of the vector $x^{(i)}$ is less than the specified tolerance $e_u$:

$$\frac{\|\Delta x\|}{\left\|x^{(i)}\right\|} \leq e_u$$
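The multi-variable iteration with both convergence checks can be sketched as follows (Python with NumPy; the tolerances, iteration cap, and function names are assumptions for illustration):

```python
import numpy as np

def newton_raphson_system(f, jac, F, x0, e_R=1e-8, e_u=1e-8, max_iter=50):
    """Solve f(x) = F, where jac(x) returns the tangent matrix [Kt]."""
    x = np.asarray(x0, dtype=float)
    for i in range(1, max_iter + 1):
        residual = F - f(x)                    # [F] - [f(x^(i))]
        Kt = jac(x)                            # tangent matrix
        dx = np.linalg.solve(Kt, residual)     # [dx] = [Kt]^{-1} ([F] - [f])
        x = x + dx
        # Force and displacement convergence checks (relative norms)
        force_ok = np.linalg.norm(residual) / np.linalg.norm(F) <= e_R
        disp_ok = np.linalg.norm(dx) / np.linalg.norm(x) <= e_u
        if force_ok and disp_ok:
            return x, i
    raise RuntimeError("maximum number of iterations reached")
```

Note that the residual is evaluated at the current iterate before the update, matching the $e_R$ check above.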

Example 2 - Multi-Variable Newton-Raphson Method:

If $f(x_1, x_2) = [20x_1^4 x_2 + 3x_2^3,\; 20x_1^2 x_2^3]$, find $\{x_1, x_2\}$ such that $f(x_1, x_2) = [20, 1]$

Function Information:

$$\begin{aligned} f_1(x_1, x_2) &= 20x_1^4 x_2 + 3x_2^3 \\ f_2(x_1, x_2) &= 20x_1^2 x_2^3 \\ \Rightarrow [K_t] &= \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} 80x_1^3 x_2 & 20x_1^4 + 9x_2^2 \\ 40x_1 x_2^3 & 60x_1^2 x_2^2 \end{bmatrix} \end{aligned}$$

Sample Calculations ($1^{st}$ Iteration):

$$\begin{aligned} f_1(1, 1) &= 20(1)^4(1) + 3(1)^3 = 23 \\ f_2(1, 1) &= 20(1)^2(1)^3 = 20 \\ [K_t] &= \begin{bmatrix} 80(1)^3(1) & 20(1)^4 + 9(1)^2 \\ 40(1)(1)^3 & 60(1)^2(1)^2 \end{bmatrix} = \begin{bmatrix} 80 & 29 \\ 40 & 60 \end{bmatrix} \\ [\Delta x] &= [K_t]^{-1}([F] - [f]) = \begin{bmatrix} 80 & 29 \\ 40 & 60 \end{bmatrix}^{-1}\left( \begin{bmatrix} 20 \\ 1 \end{bmatrix} - \begin{bmatrix} 23 \\ 20 \end{bmatrix}\right) = \begin{bmatrix} 0.102 \\ -0.385 \end{bmatrix} \end{aligned}$$

| $x_1^{(i)}$ | $x_2^{(i)}$ | $f_1(x^{(i)})$ | $f_2(x^{(i)})$ | $\Delta x_1$ | $\Delta x_2$ | $e_R$ (%) | $e_u$ (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 23 | 20 | 0.102 | -0.385 | 96.1 | 28.1 |
| 1.102 | 0.615 | 18.838 | 5.650 | 0.125 | -0.215 | 23.9 | 19.7 |
| 1.227 | 0.400 | 18.325 | 1.927 | 0.096 | -0.085 | 9.6 | 9.9 |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| 1.348 | 0.302 | 19.979 | 1.001 | 0.001 | -0.0001 | 0.1 | 0.04 |

Converged in 6 iterations.
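The first row of the table can be reproduced directly (a NumPy check; the variable names are mine):

```python
import numpy as np

def f(x1, x2):
    return np.array([20 * x1**4 * x2 + 3 * x2**3, 20 * x1**2 * x2**3])

def Kt(x1, x2):
    return np.array([[80 * x1**3 * x2, 20 * x1**4 + 9 * x2**2],
                     [40 * x1 * x2**3, 60 * x1**2 * x2**2]])

F = np.array([20.0, 1.0])

# First increment from the initial guess (1, 1)
dx = np.linalg.solve(Kt(1.0, 1.0), F - f(1.0, 1.0))
# dx is approximately [0.102, -0.385], matching the first row of the table

# Relative force residual at the initial guess, as a percentage
e_R = np.linalg.norm(f(1.0, 1.0) - F) / np.linalg.norm(F) * 100  # about 96.1 %
```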

Other Methods:

Over the years the Newton-Raphson method has been modified to help improve stability and reduce computation time:

1. Modified Newton-Raphson Method

  • $[K_t]$ is recalculated only at selected iterations (say, every $5^{th}$ iteration) or kept constant during the entire analysis
  • Requires more iterations to converge, but each iteration is cheaper, which can reduce the overall computation cost

2. Secant Method (Quasi-Newton)

  • Approximates $[K_t]$ after the first iteration:

$$[K_t]_i = \frac{f(x^{(i)}) - f(x^{(i-1)})}{x^{(i)} - x^{(i-1)}}$$

  • Don’t need to calculate derivatives!
  • Provides a good convergence rate while reducing the computational cost
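A single-variable sketch of the quasi-Newton (secant) iteration (Python; the two starting points, tolerance, and iteration cap are assumptions):

```python
import math

def secant_solve(f, F, x0, x1, tol=1e-8, max_iter=50):
    """Solve f(x) = F, approximating Kt by a finite difference of f."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        Kt = (f1 - f0) / (x1 - x0)      # secant slope: no derivative required
        x0, f0 = x1, f1
        x1 = x1 + (F - f1) / Kt         # dx = Kt^{-1} (F - f(x))
        f1 = f(x1)
        if abs(F - f1) <= tol:
            break
    return x1

# Same problem as Example 1: f(x) = sqrt(x), target F = 3; tends to x = 9
root = secant_solve(math.sqrt, F=3.0, x0=1.0, x1=2.0)
```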

Example 3 - Single Variable Modified Newton-Raphson Method:

If $f(x) = \sqrt{x}$, find $x$ such that $f(x) = 3$

Function Information:

$$f(x) = \sqrt{x} \;\Rightarrow\; K_t = \frac{\partial f}{\partial x} = \frac{1}{2\sqrt{x}}$$

$$\Delta x = K_t^{-1}\left(F - f(x^{(i)})\right)$$

Iteration history ($K_t$ held at its initial value of 0.5 for every iteration):

| $x^{(i)}$ | $f(x^{(i)})$ | $K_t$ | $\Delta x$ | $x^{(i+1)}$ |
| --- | --- | --- | --- | --- |
| 1 | 1 | 0.5 | 4 | 5 |
| 5 | 2.24 | 0.5 | 1.52 | 6.52 |
| 6.52 | 2.55 | 0.5 | 0.89 | 7.41 |
| 7.41 | 2.72 | 0.5 | 0.55 | 7.97 |

Slower Convergence!!
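The trade-off is easy to see numerically. A sketch comparing the two schemes on this example (Python; tolerance and iteration cap are assumptions):

```python
import math

def iterations(f, F, x0, Kt_fn, tol=1e-6, max_iter=500):
    """Count iterations of x <- x + Kt^{-1}(F - f(x)) for a given tangent rule."""
    x = x0
    for i in range(1, max_iter + 1):
        residual = F - f(x)
        if abs(residual) <= tol:
            return i
        x += residual / Kt_fn(x)
    return max_iter

f = math.sqrt
F, x0 = 3.0, 1.0
n_full = iterations(f, F, x0, lambda x: 1 / (2 * math.sqrt(x)))   # fresh Kt each step
n_modified = iterations(f, F, x0, lambda x: 0.5)                  # Kt frozen at x0
# n_modified is several times larger than n_full: slower convergence,
# but each iteration skips recomputing the tangent (stiffness)
```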

Material Nonlinearity Example


Wolfram Mathematica

(* Material Nonlinearity Example *)
Clear["Global`*"]

(* Parameters *)
L = 1.0;
A = 1;
p = X1^2;

(* Newton-Raphson Parameters *)
Xi = 1;
ErrorLimit = 0.5;
maxIter = 100;   (* maximum number of Newton-Raphson iterations *)

(* Stress-Strain Relationship *)
s = 10*u'[X1] + 100000*u'[X1]^3;

(* Exact Solution *)
DE = D[s*A, X1] + p;
sol = NDSolve[{DE == 0, u[0] == 0, u[L] == 0}, u[X1], X1];   (* solve the BVP numerically *)
uExact = u[X1] /. sol[[1]];

Plot[uExact, {X1, 0, L}]


(* Rayleigh-Ritz Approximation *)

(* Approximation Function *)
uApprox = a0 + a1*X1 + a2*X1^2;

(* Essential Boundary Conditions *)
BC1 = uApprox /. {X1 -> 0};
BC2 = uApprox /. {X1 -> L};
sol1 = Solve[{BC1 == 0, BC2 == 0}, {a0, a1}];
{a0, a1} = {a0, a1} /. sol1[[1]];

(* Total Internal Strain Energy *)
S = 10*ee + 100000*ee^3;             (* stress in terms of the strain variable ee *)
eps = D[uApprox, X1];                (* strain of the approximation *)
SED = Integrate[S, {ee, 0, eps}];    (* strain energy density: 5 eps^2 + 25000 eps^4 *)
TSE = Integrate[SED, {X1, 0, L}];    (* total strain energy: 1.66667 a2^2 + 5000 a2^4 *)

(* Work *)
W = Integrate[p*uApprox, {X1, 0, L}];   (* -0.05 a2 *)

(* Potential Energy *)
PE = TSE - W;   (* 0.05 a2 + 1.66667 a2^2 + 5000 a2^4 *)

(* Newton-Raphson Method: solve D[PE, a2] == 0 for a2 *)
Func = D[PE, a2];   (* f(x) *)
Kt = D[Func, a2];   (* f'(x) *)


For[i = 1, i <= maxIter, i++,

 (* Calculate Function and Kt at Current Iteration *)
 FuncIter = Func /. {a2 -> Xi};
 KtIter = Kt /. {a2 -> Xi};

 (* Solve for Delta X *)
 Delta = -FuncIter/KtIter;

 (* Find Xi for the Next Iteration *)
 XInit = Xi;
 Xi = Xi + Delta;

 (* Break the Loop Once Convergence Occurs *)
 Error = (Delta/XInit)*100;
 If[Abs[Error] <= ErrorLimit,
  Print["Analysis Converged!"];
  Print["  Coefficient a2 = ", Xi];
  Print["  Error = ", Error, " (%)"];
  a2 = Xi;
  Conv = 1;
  Break[]];

 (* Track Results *)
 Print["Iteration: ", i];
 Print["  Coefficient a2 = ", Xi];
 Print["  Error = ", Error, " (%)"];

 (* Error Message if the Max Number of Iterations Is Reached *)
 If[i == maxIter,
  Print["Max Number of Iterations Reached"]];
 ]

Plot[{uExact, uApprox}, {X1, 0, L}]
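For readers without Mathematica, the Newton-Raphson loop on the Rayleigh-Ritz coefficient can be reproduced in a few lines of Python (the closed-form potential energy $\Pi(a_2) = 0.05a_2 + 1.66667a_2^2 + 5000a_2^4$ comes from the symbolic integrals in the script above; the rest is an illustrative sketch):

```python
# With L = 1, the boundary conditions give uApprox = a2*(X1^2 - X1),
# and the potential energy reduces to PE(a2) = 0.05*a2 + (5/3)*a2^2 + 5000*a2^4.

def func(a2):                    # f(x): d(PE)/d(a2), driven to zero
    return 0.05 + (10.0 / 3.0) * a2 + 20000.0 * a2**3

def kt(a2):                      # f'(x): the tangent
    return 10.0 / 3.0 + 60000.0 * a2**2

a2 = 1.0                         # initial guess Xi = 1, as in the script
for i in range(1, 101):
    delta = -func(a2) / kt(a2)   # Delta = -FuncIter / KtIter
    a2 += delta
    if abs(delta / a2) * 100 <= 0.5:   # same 0.5 % error limit
        break
# a2 lands near -0.0096, so uApprox(X1) = a2*(X1**2 - X1)
```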

