Title: Viscoplastic Models for Polymeric Composite
1. Viscoplastic Models for Polymeric Composite
- Mentee: Chris Rogan
  Department of Physics, Princeton University, Princeton, NJ 08544
- Mentors: Marwan Al-Haik, M.Y. Hussaini
  School of Computational Science, Florida State University, Tallahassee, FL 32306
2. Part 1: Explicit Model
- Micromechanical Viscoplastic Model
3. Explicit Model
- Viscoplastic Model Proposed by Gates and Sun
$\varepsilon_t = \varepsilon_e + \varepsilon_p$
$\varepsilon_p = A\sigma^n$
$\varepsilon_e = \sigma/E$
The plastic portion of the strain is represented by the non-linear equation above, where A and n are material constants found from experimental data. The elastic portion of the strain is determined by Hooke's law, where E is Young's modulus.
Gates, T.S., Sun, C.T., 1991. An elastic/viscoplastic constitutive model for fiber reinforced thermoplastic composites. AIAA Journal 29 (3), 457–463.
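As a minimal illustration of the decomposition above, the sketch below evaluates the total strain $\varepsilon = \sigma/E + A\sigma^n$; the constants E, A and n here are hypothetical placeholders, not the fitted values from this study.

```python
import numpy as np

def total_strain(sigma, E, A, n):
    """Gates-Sun decomposition: elastic part sigma/E plus plastic part A*sigma**n."""
    return sigma / E + A * sigma**n

# Hypothetical material constants for illustration only (not the fitted values).
E, A, n = 9.0e9, 1.0e-20, 2.2
sigma = np.linspace(0.0, 50.0e6, 6)   # applied stress, Pa
print(total_strain(sigma, E, A, n))
```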
4. Explicit Model
The total strain rate is composed of elastic and plastic components:

$\dot{\varepsilon}_t = \dot{\varepsilon}_e + \dot{\varepsilon}_{vp}$, with $\dot{\varepsilon}_e = \dot{\sigma}/E$

$\dot{\varepsilon}_{vp} = \dot{\varepsilon}_{vp_1} + \dot{\varepsilon}_{vp_2}$

The elastic portion of the strain rate is the elastic component of the strain differentiated with respect to time. The viscoplastic component of the strain rate is further divided into two terms.
5. Explicit Model
The first component of the plastic strain rate is the plastic strain differentiated with respect to time:

$\dot{\varepsilon}_{vp_1} = A\,n\,\sigma^{n-1}\,\dot{\sigma}$

The second component utilizes the concept of overstress, $\sigma - \bar{\sigma}$, where $\sigma$ is the dynamic stress and $\bar{\sigma}$ is the quasistatic stress. K and m are material constants found from experimental data:

$\dot{\varepsilon}_{vp_2} = \left((\sigma - \bar{\sigma})/K\right)^{1/m}$
6. Tensile Tests
Figure 1
7. Methodology
- First, the tensile test data (above) was used to determine the material constants A, n and E for each temperature. E was calculated first, by fitting the linear portion of the tensile test curve to reflect the elastic component of the equation, as shown in Figure 2. Next, the constants A and n were calculated by plotting $\log(\varepsilon - \sigma/E)$ vs. $\log\sigma$ and extracting n and A from a linear fit as the slope and y-intercept respectively (a sketch of this fit follows the equation on the next slide). Figure 3 displays the resulting model's fit to the experimental data.

$\varepsilon = \sigma/E + A\sigma^n$
8. Determining A and n

$\log(\varepsilon - \sigma/E) = n\log\sigma + \log A$
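A possible sketch of this two-step fit, using a synthetic tensile curve in place of the experimental data; the "true" constants and the 10 MPa elastic cutoff are assumptions for illustration only.

```python
import numpy as np

# Synthetic tensile curve standing in for the experimental data.
E_true, A_true, n_true = 9.0e9, 1.0e-20, 2.2
sigma = np.linspace(1.0e6, 60.0e6, 50)
eps = sigma / E_true + A_true * sigma**n_true

# Step 1: E from a linear fit to the initial (elastic) portion of the curve.
elastic = sigma < 10.0e6
E = np.polyfit(eps[elastic], sigma[elastic], 1)[0]   # slope = Young's modulus

# Step 2: log(eps - sigma/E) = n*log(sigma) + log(A) -> slope n, intercept log A.
eps_p = eps - sigma / E
mask = eps_p > 0                     # keep points with measurable plastic strain
n, logA = np.polyfit(np.log10(sigma[mask]), np.log10(eps_p[mask]), 1)
A = 10.0**logA
print(E, A, n)
```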
Figure 2
Figure 3
Figure 4
9. Table 1
10. Load Relaxation Tests
The data from the load relaxation tests was used
to determine the temperature-dependent material
constants K and m. For each temperature, the load
relaxation test was conducted at 6 different
stress levels, as shown in Figure 4.
11. Curve Fitting of Load Relaxation
Figure 5
First, the data for each strain level at each temperature were isolated. The noise in the data was eliminated to ensure that the stress is monotonically decreasing, as dictated by the physical model (Figure 5). The data were then fit with two different trend functions, an exponential and a ninth-order polynomial (Figures 6 and 7); a sketch of this cleaning and fitting step follows.
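One way this step could look. The slide does not specify the noise-removal method, so a running minimum is assumed here to enforce the monotonic decrease, and a synthetic relaxation trace stands in for the experimental data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic noisy relaxation trace for illustration.
t = np.linspace(0.0, 1000.0, 200)
rng = np.random.default_rng(0)
stress = 40.0e6 + 10.0e6 * np.exp(-t / 300.0) + rng.normal(0.0, 2.0e5, t.size)

# Enforce the monotonic decrease required by the model: running minimum.
stress_mono = np.minimum.accumulate(stress)

# Exponential trend sigma(t) = c + b*exp(-t/tau); the polynomial is the alternative.
def exp_trend(t, c, b, tau):
    return c + b * np.exp(-t / tau)

p_exp, _ = curve_fit(exp_trend, t, stress_mono,
                     p0=(stress_mono[-1], stress_mono[0] - stress_mono[-1], 100.0))
p_poly = np.polyfit(t, stress_mono, 9)   # ninth-order polynomial, as in the slides
print(p_exp)
```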
12. Figure 7
$0 = \dot{\varepsilon}_t = \dot{\sigma}/E + \left((\sigma - \bar{\sigma})/K\right)^{1/m}$

$\Rightarrow\ \log\!\left(-\dot{\sigma}/E\right) = \frac{1}{m}\log(\sigma - \bar{\sigma}) - \frac{1}{m}\log K$
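A sketch of recovering K and m from this log-log relation. The trace below is manufactured by integrating the model forward with Euler steps; E, $\bar{\sigma}$ and the "true" K, m are hypothetical values for illustration.

```python
import numpy as np

# Hypothetical modulus, quasistatic stress, and "true" K, m for the synthetic trace.
E, sigma_bar = 9.0e9, 38.0e6
K_true, m_true = 2.0e8, 0.3
t = np.linspace(0.0, 1000.0, 400)
sigma = np.empty_like(t)
sigma[0] = 45.0e6
for i in range(1, t.size):                 # forward-Euler integration of the model
    rate = -E * ((sigma[i-1] - sigma_bar) / K_true) ** (1.0 / m_true)
    sigma[i] = sigma[i-1] + rate * (t[1] - t[0])

# log(-(dsigma/dt)/E) = (1/m)*log(sigma - sigma_bar) - (1/m)*log(K)
dsdt = np.gradient(sigma, t)
mask = (sigma - sigma_bar > 0) & (dsdt < 0)
slope, intercept = np.polyfit(np.log10(sigma[mask] - sigma_bar),
                              np.log10(-dsdt[mask] / E), 1)
m = 1.0 / slope
K = 10.0 ** (-intercept * m)
print(K, m)
```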
13. From the exponential fits, the constants K and m were calculated by plotting $\log(-\dot{\sigma}/E)$ vs. $\log(\sigma - \bar{\sigma})$ and computing the linear fit, as shown in Figures 8 and 9. The tabulated material constants for each temperature are listed in Table 2.
Table 2
14. Figure 9
Figure 8
$\varepsilon = \bar{\sigma}/E + A\bar{\sigma}^n$
For each temperature and strain level, the quasistatic stress was found by solving the above non-linear equation using Newton's method; a sketch follows Table 1. The quasistatic stress values are displayed in Table 1.
Table 1
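A minimal sketch of this Newton iteration, solving $\varepsilon = \bar{\sigma}/E + A\bar{\sigma}^n$ for $\bar{\sigma}$ at a given strain level; the constants and the strain value are hypothetical.

```python
def quasistatic_stress(eps, E, A, n, s0=1.0e6, tol=1.0e-9, max_iter=100):
    """Solve eps = s/E + A*s**n for the quasistatic stress s by Newton's method."""
    s = s0
    for _ in range(max_iter):
        f = s / E + A * s**n - eps
        fp = 1.0 / E + A * n * s**(n - 1.0)   # derivative df/ds
        step = f / fp
        s -= step
        if abs(step) < tol * max(abs(s), 1.0):
            return s
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical constants and strain level for illustration.
print(quasistatic_stress(eps=5.0e-3, E=9.0e9, A=1.0e-20, n=2.2))
```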
15. Simulation of Explicit Model
$-\dot{\sigma}/E = \left((\sigma - \bar{\sigma})/K\right)^{1/m}$
The total strain rate is zero during the load relaxation test, leading to the differential equation above. The explicit model solution was generated by solving this differential equation with the fourth-order Runge-Kutta method; a sketch follows Figure 10. Several step sizes were tried, and an example solution is shown in Figure 10.
Figure 10
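A sketch of the classical fourth-order Runge-Kutta integration of $\dot{\sigma} = -E\left((\sigma - \bar{\sigma})/K\right)^{1/m}$; all parameter values are hypothetical placeholders.

```python
import numpy as np

def relaxation_rk4(sigma0, sigma_bar, E, K, m, dt, n_steps):
    """Integrate dsigma/dt = -E*((sigma - sigma_bar)/K)**(1/m) with classical RK4."""
    def f(s):
        over = max(s - sigma_bar, 0.0)        # overstress cannot go negative
        return -E * (over / K) ** (1.0 / m)
    sigma = np.empty(n_steps + 1)
    sigma[0] = sigma0
    for i in range(n_steps):
        s = sigma[i]
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        sigma[i + 1] = s + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return sigma

# Hypothetical parameter values for illustration.
trace = relaxation_rk4(sigma0=45.0e6, sigma_bar=38.0e6, E=9.0e9,
                       K=2.0e8, m=0.3, dt=1.0, n_steps=1000)
print(trace[::200])
```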
16. Part 2: Implicit Model
- Generalizing an Implicit Stress Function Using
Neural Networks
17. Neural Networks (NN)
The Implicit Model consists of creating an implicit, generalized stress function, dependent on vectors of temperature, strain level and time data. A generalized neural network and one specific to this model are shown in Figure 11. A neural network consists of nodes connected by links. Each node is a processing element which takes weighted inputs from other nodes, sums them, and then maps this sum through an activation function, the result of which becomes the neuron's output. This output is then propagated along all the links exiting the neuron to subsequent neurons. Each link has a weight value by which outputs traveling along it are multiplied. A sketch of this forward propagation follows.
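A minimal sketch of the forward pass, using the 3-10-31-1 shape adopted later in the study. The slides do not state the activation functions, so tanh hidden units and a linear output are assumed here.

```python
import numpy as np

def forward(x, weights, biases):
    """Propagate an input vector through a feed-forward network: each node sums
    its weighted inputs, adds a bias, and applies its activation function."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)           # hidden layers: tanh activation (assumed)
    return weights[-1] @ a + biases[-1]  # linear output layer for the stress value

# A 3-10-31-1 network matching the inputs used here:
# (temperature, strain level, time) -> stress.
rng = np.random.default_rng(0)
sizes = [3, 10, 31, 1]
weights = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(0.0, 0.5, m) for m in sizes[1:]]
print(forward(np.array([0.2, -0.5, 0.8]), weights, biases))
```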
18. Procedures for NN
- Based on the three phases of neural network functionality (training, validation and testing), the data sets from the load relaxation tests were split into three parts. The data sets for three temperatures were set aside for testing. The other five temperatures were used for training, excluding five specific combinations of temperature and strain levels used for validation.
19. Pre-processing
Before training, the data vectors were put into random order and normalized, as sketched below.
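A sketch of this pre-processing step. The slide's normalization equation is not reproduced in this transcript, so min-max scaling to [-1, 1] is assumed here as one common choice; the data array is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder rows of (temperature, strain level, time, stress) standing in
# for the real load-relaxation vectors.
data = rng.uniform(0.0, 1.0, size=(100, 4))

rng.shuffle(data)                        # put the vectors in random order

# Assumed min-max normalization: x' = 2*(x - min)/(max - min) - 1.
lo, hi = data.min(axis=0), data.max(axis=0)
data_norm = 2.0 * (data - lo) / (hi - lo) - 1.0
```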
20. Training the NN
- Training a feed-forward backpropagating neural network consists of giving the network a vectorized training data set each epoch. Each individual vector's inputs (temperature, strain level, time) are propagated through the network, and the output is combined with the vector's experimental output in the error function (mean squared error). Training the network consists of minimizing this error function in weight space, adjusting the network's weights using unconstrained local optimization methods. An example of a training session's graph is shown in Figure 12, in this case using a gradient descent method with variable learning rate and momentum terms to minimize the error function; a sketch follows Figure 12.
Figure 12
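A compact sketch of such a training loop, for a single hidden layer (the study's networks use two) with momentum and a simple variable-learning-rate rule; the data, architecture and hyperparameters are placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for normalized (temperature, strain, time) -> stress vectors.
X = rng.uniform(-1.0, 1.0, (200, 3))
y = np.sin(X.sum(axis=1, keepdims=True))       # placeholder target function

W1, b1 = rng.normal(0.0, 0.5, (10, 3)), np.zeros((10, 1))
W2, b2 = rng.normal(0.0, 0.5, (1, 10)), np.zeros((1, 1))
lr, mu, prev_mse = 0.05, 0.9, np.inf           # learning rate, momentum
v = [np.zeros_like(p) for p in (W1, b1, W2, b2)]

for epoch in range(1000):
    A1 = np.tanh(W1 @ X.T + b1)                # forward pass: hidden activations
    out = W2 @ A1 + b2                         # linear output
    err = out - y.T
    mse = np.mean(err**2)                      # error function being minimized
    d_out = 2.0 * err / err.size               # backpropagate the squared error
    gW2, gb2 = d_out @ A1.T, d_out.sum(axis=1, keepdims=True)
    d_hid = (W2.T @ d_out) * (1.0 - A1**2)     # tanh derivative
    gW1, gb1 = d_hid @ X, d_hid.sum(axis=1, keepdims=True)
    for p, g, vi in zip((W1, b1, W2, b2), (gW1, gb1, gW2, gb2), v):
        vi *= mu                               # momentum term
        vi -= lr * g
        p += vi                                # step in weight space
    lr *= 0.7 if mse > prev_mse else 1.02      # variable learning rate
    prev_mse = mse
print(mse)
```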
21. Two-Hidden-Layer NN
The architecture of the neural network is difficult to decide. Research by Hornik et al. (1989) suggests that a network with two hidden layers can approximate any function, although there is no indication as to how many neurons to put in each of the hidden layers. Too many neurons causes overfitting: the network essentially memorizes the training data and becomes a look-up table, causing it to perform poorly on the validation and testing data it has not seen before. Too few neurons leads to poor performance on all of the data.
Hornik, K., Stinchcombe, M., White, H., 1989. Multilayer feedforward networks are universal approximators. Neural Networks 2 (5), 359–366.
22. Error Surface
- Figure 13 shows the resulting mean squared
error performance values for neural networks with
different numbers of neurons in each hidden layer
after 1000 epochs of training.
Figure 13
23. Figure 14
Figure 15
- Figures 14 and 15 display similar data, except that only random data points are used in the neuron space and a cubic interpolation is employed in order to distinguish trends in the neuron space. As Figure 15 shows, there appears to be a minimum in the area of about 10 neurons in the first hidden layer and 30 in the second. A minimum did in fact occur with a 10-31-1 network.
24. Genetic Algorithm (GA) Pruning
- A genetic algorithm was used to try to determine an optimal network architecture. Based on the results of earlier exhaustive methods, a domain from 1 to 15 and 1 to 35 was used for the number of neurons in the first and second hidden layers respectively.
- A population of random networks in this domain was generated, each network encoded as a binary chromosome. The probability of a particular network's survival is a linear function of its rank in the population.
- Stochastic remainder selection without replacement was used in population selection. For crossovers, a two-point crossover of chromosomes with reduced surrogates was used, as shown in Figure 16; a simplified sketch follows the figure.
Figure 16
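A simplified sketch of the chromosome encoding and two-point crossover. For brevity it uses plain rank-proportional sampling rather than stochastic remainder selection without replacement, omits the reduced-surrogate restriction, and substitutes a toy fitness function for actually training each decoded network.

```python
import numpy as np

rng = np.random.default_rng(0)
BITS1, BITS2 = 4, 6          # 4 bits span 1-15 neurons, 6 bits span 1-35 (clipped)

def decode(chrom):
    """Binary chromosome -> (neurons in hidden layer 1, neurons in hidden layer 2)."""
    h1 = int("".join(map(str, chrom[:BITS1])), 2)
    h2 = int("".join(map(str, chrom[BITS1:])), 2)
    return max(1, min(h1, 15)), max(1, min(h2, 35))

def two_point_crossover(a, b):
    """Swap the segment between two random cut points."""
    i, j = sorted(rng.choice(len(a), size=2, replace=False))
    c1, c2 = a.copy(), b.copy()
    c1[i:j], c2[i:j] = b[i:j], a[i:j]
    return c1, c2

# Toy fitness standing in for training each decoded network and scoring its error.
def fitness(chrom):
    h1, h2 = decode(chrom)
    return -((h1 - 10)**2 + (h2 - 31)**2)

pop = [rng.integers(0, 2, BITS1 + BITS2) for _ in range(20)]

# Survival probability linear in population rank (best rank -> highest weight).
ranked = sorted(pop, key=fitness, reverse=True)
probs = np.arange(len(ranked), 0, -1, dtype=float)
probs /= probs.sum()
i, j = rng.choice(len(ranked), size=2, p=probs)
child1, child2 = two_point_crossover(ranked[i], ranked[j])
print(decode(child1), decode(child2))
```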
25. GA Pruning
- This method allows pruning of not only neurons but links, as each layer of neurons is not necessarily completely connected to the next, and connections between non-adjacent layers are permitted. The genetic algorithm was run with varying parameter values and two different objective functions: one seeking to minimize only the training performance error of the networks, and another minimizing both the performance error and the number of neurons and links. Figure 17 displays an optimal network when only the performance error is considered. Figure 18 shows an optimal network when the number of neurons and links was taken into account.
Figure 17
Figure 18
26. GA Performance
- Figure 19 shows the results of an exhaustive architecture search in a smaller domain than earlier, with the first arrow pointing to a minimum that coincides with the network architecture displayed in Figure 17.
Figure 19
27. Results of NN Implicit Model
- A network architecture of 10-31-1 was used for the training and testing of the neural networks. Several different minimization algorithms were tested and compared for the training of the network and are listed in Figures 20 and 21. These two figures display the training performance error and gradient over 1000 epochs.
Figure 21
Figure 20
28. Training, Validation and Testing of Final NN Structure
- Figure 22 shows the testing, validation and training performance for the Gradient Descent algorithm, while Figure 23 shows the plot of a linear least-squares regression between the experimental data and network outputs for the Polak-Ribière Conjugate Gradient method.
Figure 23
Figure 22
29. Comparing Explicit and Implicit Models
- Figure 24 displays the final performance of both models compared to the experimental data. The Quasi-Newton BFGS algorithm was used for the Implicit model, as it performed the best. The Implicit model ultimately outperformed the Explicit, and required only the load relaxation data to generate the solution.
Figure 24
30. Conclusion
- The Implicit model (NN/GA) ultimately outperformed the Explicit model (Gates-Sun), and required only the load relaxation data to generate the solution.