The Chemical Engineering Journal 62 (1996) 207-214
Application of neural networks to lysine production
Y.-H. Zhu, T. Rajalahti, S. Linko *
Laboratory of Biotechnology and Food Engineering, Department of Chemical Engineering, Helsinki University of Technology, FIN-02150 Espoo, Finland
Received 1 September 1995; accepted 9 January 1996
* Corresponding author.
Abstract
Lysine is an essential amino acid in human nutrition and is also widely used in animal feed formulations. It is produced on a large scale by fermentation in stirred tank bioreactors. In the present work lysine was produced by fed-batch fermentation with an industrial Brevibacterium flavum strain grown in a 115 m³ fermentor on a beet molasses based medium. The difficulties in on-line monitoring of substrate consumption and of product formation complicate real-time process control. We demonstrate that well-trained backpropagation multilayer neural networks can be employed to overcome such problems without detailed prior knowledge of the relationships of the process variables under investigation. Neural network models programmed in MS Visual C++ for Windows and implemented on a personal computer were constructed and applied to state estimation and multi-step-ahead prediction of consumed sugar and produced lysine on the basis of on-line measurable variables for process control purposes.
Keywords: Neural networks; Lysine; Amino acid
1. Introduction
Complex biological systems are often difficult to model and control with conventional techniques, and in practice subjective human expert knowledge in the form of "rules of thumb" is still widely applied [1,2]. In such situations artificial neural networks offer a simple and straightforward approach to identification problems, inasmuch as they do not require any prior knowledge of the relationships of the process variables in question [3,4]. Further, neural networks are characterized by their ability to learn from exemplar input-output vector pairs through iterative training and are capable of dealing with highly non-linear problems [5,6]. The basics of neurocomputing have been discussed for example by Hertz et al. [7] and neural network control applications by Miller et al. [8]. Collins [9] has reviewed the commercially available tools for neural network applications.
After the foundations of neuroengineering were established, neural networks quickly became one of the most studied research areas within artificial intelligence [10,11]. Nevertheless, neural network programming has only recently been studied in the context of bioprocess and food applications, although artificial neural networks exhibit an especially great potential as "software" sensors in complex bioprocess control applications [10,12,13]. The use of neural networks
has been reported in estimation and multi-step-ahead prediction of a number of fermentation processes such as baker's yeast [14], glucoamylase [10,15] and penicillin [16] production and beer brewing [12], and in the advance prediction of the end point of β-galactosidase production [17]. Recently, the ability of neural networks to learn from prior examples has been exploited in hybrid systems [13,14,18,19]. A neural network estimator may, for example, be employed in connection with a fuzzy expert controller [20]. Alternatively, it is possible to carry out direct control actions by a neural controller. Clearly, a well-trained neural network model offers a novel way for on-line state estimation and prediction on the basis of on-line measurable parameters, but very little information is available on the optimization of neural network architecture, transfer functions, running parameters, etc., for specific bioprocess applications [21-23]. In the present paper neural network assisted state estimation and prediction of lysine fermentation in the context of process control is demonstrated.
2. Programming environment
The neural network program was written in Microsoft Visual C++ for Windows, basically as described previously [24,25]. It should be noted here that at the time the research program on the bioprocess applications of neural networks was started in the late 1980s, the availability of standard program packages at a reasonable cost was very limited. Our aim was to construct a program which would be compatible with existing control systems running in the Windows environment, such as are today available to monitor and control multiple bioreactors, which could read C code, and which would allow for the convenient development of hybrid system applications, including fuzzy logic capability. A further aim was a user-friendly graphical interface for easy application at the plant level. Finally, the system was to be flexible and able to operate on line, in real time. As a programming language, Visual C++ for Windows implemented in the PC environment appeared to fulfil such requirements. Meanwhile, the developments in both software and hardware have been phenomenal, and in just a few years, for example, the teaching time for a neural network of a certain complexity has decreased from days to hours, or even minutes [26]. Both for the convenience of the development work and for subsequent applications we have chosen for our most recent work with neural networks a fast personal computer with sufficient memory and speed, equipped for example with an Intel 486 DX2 66 MHz processor or, more recently, a 75-100 MHz Pentium, 20 MB of memory on the main board, a 540 MB hard disk, and a NEC MultiSync 5FG display with an Orchid Super IDE/VLB controller using VESA local bus technology, or equivalent.

Fig. 1. A dialogue for the topology definition of a neural network.

Fig. 2. A dialogue for editing the running parameters and definition of a neural network.
3. Supervised neural networks
Feedforward multilayer neural networks together with the backpropagation [27,28] learning principle with a momentum term were applied in the present work. The input and output data vectors employed during training were scaled to coded (normalized) values within the range [0.01, 0.99]. Theoretically, the variable coding range should be close to [0.0, 1.0]. In practice, the range [0.1, 0.9] is frequently used in order to avoid problems with the digital computation environment involving non-linear transfer functions, such as the sigmoid function used in the present work. We chose the range [0.01, 0.99] to gain a higher accuracy while still avoiding computational problems. Weights between the neuronal connections were randomly initialized, usually within the range [-0.6, +0.6]; however, the program allows the selection of the range at will for special purposes. A logistic sigmoid transfer function of the form f(x) = 1/(1 + e^{-x}) was used in each hidden and output neuron. It is important to select the optimal learning rate coefficient for a given neural network topology and number of patterns in each epoch [26]. The maximum possible values for the learning coefficients were determined by trial and error and chosen to speed up the learning process as much as possible without causing oscillation [6,29]. Fig. 1 shows an example dialogue for the design of the neural network topology. Fig. 2 gives an example of the definition of the running parameters, transfer function, training mode, base value of the momentum term, training rate, the initialization of all weight matrices with small random values, etc.
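To make the training scheme concrete, the following C++ sketch (a minimal illustration in the spirit of the Visual C++ implementation, not the actual program) shows the three ingredients described above: scaling of a raw variable into the coded range [0.01, 0.99], the logistic sigmoid transfer function, and a backpropagation weight update with a momentum term. Layer sizes, the learning rate eta and the momentum coefficient alpha are hypothetical placeholders, and bias terms are omitted for brevity.

#include <cmath>
#include <cstdlib>
#include <vector>

// Scale a raw measurement into the coded range [0.01, 0.99] used for
// all input and output variables during training (xmin, xmax from data).
double encode(double x, double xmin, double xmax) {
    return 0.01 + 0.98 * (x - xmin) / (xmax - xmin);
}

// Logistic sigmoid transfer function f(x) = 1/(1 + e^-x),
// applied in every hidden and output neuron.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// One fully connected layer with a backpropagation update and momentum.
struct Layer {
    int nIn, nOut;
    std::vector<double> w, dwPrev, out;   // weights, previous updates, outputs

    Layer(int in, int outN)
        : nIn(in), nOut(outN), w(in * outN), dwPrev(in * outN, 0.0), out(outN) {
        // Random initialization of the weights within [-0.6, +0.6].
        for (double& wi : w)
            wi = -0.6 + 1.2 * std::rand() / double(RAND_MAX);
    }

    // Forward pass: weighted sum followed by the sigmoid.
    const std::vector<double>& forward(const std::vector<double>& in) {
        for (int j = 0; j < nOut; ++j) {
            double sum = 0.0;
            for (int i = 0; i < nIn; ++i) sum += w[j * nIn + i] * in[i];
            out[j] = sigmoid(sum);
        }
        return out;
    }

    // Weight update with momentum: dw = -eta * delta_j * in_i + alpha * dwPrev,
    // where delta_j is the backpropagated error term of output neuron j
    // (computed elsewhere in a full implementation).
    void update(const std::vector<double>& in, const std::vector<double>& delta,
                double eta, double alpha) {
        for (int j = 0; j < nOut; ++j)
            for (int i = 0; i < nIn; ++i) {
                int k = j * nIn + i;
                double dw = -eta * delta[j] * in[i] + alpha * dwPrev[k];
                w[k] += dw;
                dwPrev[k] = dw;
            }
    }
};

In standard backpropagation this update is applied layer by layer after the error terms have been propagated back from the output layer; that bookkeeping is left out here to keep the sketch short.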
Fig. 3 illustrates a number of running child windows during a training process. The information frame shows dynamically the messages of the opened files representing the data used, the neural network topology, running information and statistical analysis. The output, error history and statistical data could be monitored dynamically and graphically at each iteration cycle during the training process. This offered convenient monitoring and control of the performance, including the progress of the error function during training.
A number of neural network topologies were created for the problem at hand along the lines described by Zhu et al. [22]. The number of processing units in the input and output layers corresponds to the desired model inputs and outputs.
Fig. 3. Running windows of a neural network during learning. The window at the upper-left corner shows an information frame, the upper-right corner an error history and the right middle the topology, and the windows at the bottom show the output variables.
The first task in designing neural networks for bioprocess control applications was the selection of the most informative process variables to be used in the input vector (see, for example, Ref. [11]). The number of hidden neurons was determined by trial and error on the basis of the learning performance (see, for example, Ref. [30]). An increase in the number of hidden neurons usually results in a better learning performance, although there is a practical upper limit, as too many hidden neurons may result in problems such as learning of the process noise [31]. Fig. 4 illustrates an example case in which the effect of the number of hidden neurons on the goodness of fit was investigated for each output variable using multivariable regression. In the case of approximately equal fit the smallest number of hidden neurons was chosen in order to minimize the calculation time. Consequently, the selected topology in this example case was 5-5-3. Table 1 describes the network topologies most successfully used in the present work in state estimation and prediction. In all cases, only one hidden layer was used in the topology. The goodness of fit both in learning and testing is given by the coefficient of determination R², which describes the extent of variance in the modelled variable that can be explained by the model. If the value of R² is unity, the model predicts exactly every experimental point. Only topologies resulting in high R² values were included in the table.
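The paper does not give the formula explicitly; the interpretation above agrees with the standard definition of the coefficient of determination,

R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}

where y_i are the measured values, \hat{y}_i the corresponding network outputs and \bar{y} the mean of the measured values; R² = 1 then corresponds to a model that reproduces every experimental point exactly.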
4. Lysine fermentation
Fed-batch lysine fermentations were carried out with an industrial Brevibacterium flavum strain grown in a 115 m³ fermentor on a beet molasses based industrial medium. The temperature, pH, air flow, dissolved oxygen, exhaust gas oxygen and carbon dioxide, base and antifoam consumption, and liquid volume were continuously monitored, and the oxygen uptake rate (OUR), carbon dioxide evolution rate (CER) and respiratory quotient (RQ) were calculated on line. The biomass, sugar and lysine concentrations were determined off line, and the sugar feed was controlled manually for practical reasons, with about 1 h delay. The sampling time t was 1 h in all experiments. A well-trained neural network was employed for the non-linear estimation of missing values as described by Linko et al. [25]. The coefficient of determination R² > 0.997 in neural network estimation was higher than the R² > 0.976 obtained with a conventional second-order polynomial method.
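The derived gas quantities mentioned above are conventionally obtained from a gas balance over the fermentor. The short C++ sketch below shows one common form of this calculation, with the inlet flow known and the outlet flow obtained from an inert-gas balance; the function, variable names and example numbers are illustrative assumptions and not a description of the plant's actual on-line computation.

#include <iostream>

// Gas-balance estimates of oxygen uptake rate (OUR), carbon dioxide
// evolution rate (CER) and respiratory quotient (RQ) from the inlet gas
// flow and the exhaust gas analysis. Mole fractions are dimensionless;
// a flow in mol/h and a volume in m^3 give rates in mol m^-3 h^-1.
struct GasRates { double our, cer, rq; };

GasRates gasBalance(double nIn,        // inlet gas flow, mol/h
                    double V,          // broth volume, m^3
                    double yO2in, double yCO2in,   // inlet mole fractions
                    double yO2out, double yCO2out) // exhaust mole fractions
{
    // Outlet molar flow from an inert (N2) balance, a common assumption
    // when only the inlet flow is measured.
    double yInertIn  = 1.0 - yO2in  - yCO2in;
    double yInertOut = 1.0 - yO2out - yCO2out;
    double nOut = nIn * yInertIn / yInertOut;

    GasRates r;
    r.our = (nIn * yO2in  - nOut * yO2out)  / V;   // O2 consumed
    r.cer = (nOut * yCO2out - nIn * yCO2in) / V;   // CO2 produced
    r.rq  = r.cer / r.our;                         // respiratory quotient
    return r;
}

int main() {
    // Purely illustrative numbers, not plant data.
    GasRates r = gasBalance(4.0e6, 115.0, 0.2095, 0.0004, 0.185, 0.025);
    std::cout << "OUR " << r.our << "  CER " << r.cer << "  RQ " << r.rq << "\n";
}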
5. State estimation
The neural networks of varying topology used for the estimation of consumed sugar and produced lysine were usually trained through about 2000 iteration cycles using data from one or more fermentations. Different fermentations were used for testing the performance of the well-trained neural networks. Only example results are presented here in greater detail owing to space constraints. Fig. 5(a) shows a neural network of 5-5-1 topology for estimating consumed sugar at time t, with produced carbon dioxide, RQ and three output time delays of consumed sugar forming the input vector. Fig. 5(b) shows the results, with a satisfactory fit of the estimated values to the real measured data as indicated by an R² of 0.990 (Table 1, no. 1). This offers an interesting possibility to control the sugar feed automatically on line.
Fig. 4. Effect of the number of hidden neurons on the coefficient of determination R² between the neural estimate and the corresponding result from multivariable regression using a second-order polynomial (separate curves for times t, t+1 and t+2).
Table 1
Neural network topologies used successfully in estimation and prediction (t = time)

No. | Input vector | Hidden neurons | Output vector | R² training | R² testing
1 | I[ΣCO2(t), RQ(t), DS(t-1), DS(t-2), DS(t-3)] | 5 | O[S(t)] | S: 0.998 | S: 0.990
2 | I[ΣO2(t), ΣCO2(t), RQ(t), S(t)] | 8 | O[Lys(t)] | Lys: 0.996 | Lys: 0.974
3 | I[ΣO2(t), ΣCO2(t), RQ(t), DLys(t-1), DLys(t-2), DLys(t-3)] | 10 | O[Lys(t)] | Lys: 0.998 | Lys: 0.977
4 | I[ΣO2(t), ΣCO2(t)] | 6 | O[Lys(t)] | Lys: 0.996 | Lys: 0.976
5 | I[ΣCO2(t), RQ(t), S(t-1), Lys(t-1)] | 4 | O[S(t), Lys(t)] | S: 0.997; Lys: 0.998 | S: 0.985; Lys: 0.987
6 | I[ΣCO2(t), ΣCO2(t-1), ΣCO2(t-2), DS(t+1), DS(t), DS(t-1)] | 6 | O[S(t), S(t+1), S(t+2)] | S: 0.997, 0.998, 0.997 | S: 0.989, 0.989, 0.988
7 | I[ΣCO2(t), RQ(t), S(t), S(t-1), S(t-2)] | 5 | O[Lys(t), Lys(t+1), Lys(t+2)] | Lys: 0.998, 0.997, 0.997 | Lys: 0.981, 0.997, 0.997
8 | I[ΣCO2(t), RQ(t), S(t-1), S(t-2)] | 6 | O[S(t), Lys(t), S(t+1), Lys(t+1), S(t+2), Lys(t+2)] | S: 0.999, 0.998, 0.997; Lys: 0.996, 0.993, 0.991 | S: 0.998, 0.997, 0.994; Lys: 0.959, 0.984, 0.985
9 | Cascade model, primary: I[ΣCO2(t), RQ(t)] | 4 | O[Sp(t)] | S: 0.997 | S: 0.989
  | Cascade model, secondary: I[ΣCO2(t), RQ(t), Sp(t), Sp(t-1), Sp(t-2)] | 7 | O[Lys(t), Lys(t+1), Lys(t+2)] | Lys: 0.998, 0.998, 0.996 | Lys: 0.967, 0.969, 0.965

I, input vector of the neural network; O, output vector of the neural network; ΣO2, accumulated O2; ΣCO2, accumulated CO2; RQ, respiratory quotient; D, delayed variable from the output; Lys, produced lysine; S, consumed sugar; Sp, consumed sugar estimated from the primary model.
For the estimation of biomass in industrial fed-batch penicillin fermentation by Penicillium chrysogenum the use of CER and substrate feed as inputs has proven successful [11]. A more complex neural network was needed for the estimation of biomass in fed-batch Saccharomyces cerevisiae baker's yeast production, with the input vector consisting of on-line measured exit gas carbon dioxide, RQ, exit gas ethanol concentration and substrate feed rate at times t, t-1 and t-2, and two or three output time delays [32]. In that case an R² of 0.991 or higher (an average error of less than 1.2%) was obtained for training and an R² of 0.935 (an average error of less than 2.5%) for testing.
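As an illustration of how such an estimator can be run on line, the sketch below shows one sampling step of the 5-5-1 sugar estimator of Fig. 5(a) (Table 1, no. 1), in which the three delayed-sugar inputs are the network's own previous outputs. The Net interface, the coded variables and the delay-buffer handling are illustrative assumptions; the trained network itself is treated as a black box.

#include <deque>
#include <functional>
#include <vector>

// Recursive on-line use of a trained 5-5-1 sugar estimator in the spirit
// of Fig. 5(a): the input vector is accumulated CO2(t), RQ(t) and the
// three most recent network outputs DS(t-1)..DS(t-3), all in the coded
// [0.01, 0.99] range. The trained network is passed in as a black-box
// function (a hypothetical interface, not the paper's program).
using Net = std::function<std::vector<double>(const std::vector<double>&)>;

double estimateSugarCoded(const Net& net,
                          double co2Coded, double rqCoded,
                          std::deque<double>& sugarDelays) // S(t-1..t-3), coded
{
    std::vector<double> in = { co2Coded, rqCoded,
                               sugarDelays[0], sugarDelays[1], sugarDelays[2] };
    double sCoded = net(in)[0];

    // The new estimate becomes S(t-1) at the next 1 h sampling instant.
    sugarDelays.push_front(sCoded);
    sugarDelays.pop_back();
    return sCoded;   // decode back to engineering units with the inverse scaling
}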
Fig. 5. (a) The topology of a neural network for (b) the estimation of consumed sugar (solid line) in an example lysine fermentation (■, consumed sugar determined off line).
Fig. 6. (a) Different topologies of neural networks for (b) the estimation of produced lysine (□, produced lysine determined off line).
Fig. 6 shows three neural networks of different topologies for the estimation of produced lysine and the corresponding results obtained (Table 1, nos. 2-4). A number of observations can be made on the basis of the results. In all cases the goodness of fit over the total period of fermentation was high, with coefficients of determination of the same order of magnitude (0.974 < R² < 0.977). Interestingly, both the consumed sugar (Table 1, no. 2) and the output time delays of produced lysine (Table 1, no. 3) could be omitted from the input vector (Table 1, no. 4) without an adverse effect on the result. An excellent fit was obtained with the simple neural network of 2-6-1 topology (Table 1, no. 4), with the input vector consisting only of the on-line measurable consumed oxygen and produced carbon dioxide.
Fig. 7. (a) Topology of a neural network for (b) the simultaneous estimation (solid lines) of consumed sugar (■, values determined off line) and produced lysine (□, values determined off line) in an example lysine fermentation.
Fig. 8. (a), (c) Architectures of neural networks (6-6-3 topology) for (b), (d) multi-step-ahead prediction (solid lines) of (a), (b) consumed sugar (■, values determined off line) and (c), (d) produced lysine (□, values determined off line).
Fig. 7(a) shows the topology (4-4-2) of a neural network used for the simultaneous estimation of both consumed sugar and produced lysine at time t, and Fig. 7(b) shows the example results. In this case the input vector consisted of produced carbon dioxide and RQ together with the consumed sugar and produced lysine at time t-1, the latter two determined off line. Again, the estimates were quite satisfactory from the point of view of industrial fermentations, with an R² of 0.985 for consumed sugar and an R² of 0.987 for produced lysine (Table 1, no. 5).
6. Multi-step-ahead prediction
For efficient control of fed-batch processes it is of utmost importance to be able to monitor the substrate concentration during the fermentation. In the present work the consumed sugar was predicted by a multi-step-ahead neural network on the basis of data measured on line only. The neural network topology was 6-6-3, and the produced carbon dioxide, its two time delays, and three output delays formed the input vector (Fig. 8(a)). A good fit of the model was obtained, with R² values of 0.989, 0.989 and 0.988 (for times t, t+1 and t+2 respectively) (Table 1, no. 6). Similar results were obtained for the multi-step-ahead prediction of produced lysine on the basis of produced carbon dioxide, RQ, and past consumed sugar determined off line (neural network topology 5-5-3; Fig. 8(c)), with R² values of 0.981, 0.997 and 0.997 (for times t, t+1 and t+2 respectively) (Table 1, no. 7). In the case of both the consumed sugar and the produced lysine an increase in the number of hidden neurons from 5 or 6 to 9 did not further improve the coefficient of determination and, therefore, the performance of the neural network.
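Schematically, one sampling step of the 6-6-3 sugar predictor of Fig. 8(a) (Table 1, no. 6) can be written as below. The composition of the delay terms (the previous step's three outputs reused as inputs) is our reading of Table 1, and the interface and names are again illustrative assumptions rather than the authors' implementation.

#include <array>
#include <functional>
#include <vector>

// One sampling step of a 6-6-3 multi-step-ahead predictor in the spirit
// of Fig. 8(a): accumulated CO2 at t, t-1 and t-2 plus the three outputs
// of the previous step form the input, and the network returns predictions
// for t, t+1 and t+2 in a single forward pass. All values are coded.
using Net = std::function<std::vector<double>(const std::vector<double>&)>;

std::array<double, 3> predictSugar(const Net& net,
                                   const std::array<double, 3>& co2Window, // ΣCO2 at t, t-1, t-2
                                   std::array<double, 3>& prevOutputs)     // S(t-1), S(t), S(t+1) from the last step
{
    // Input order assumed as in Table 1, no. 6: ΣCO2(t), ΣCO2(t-1),
    // ΣCO2(t-2), DS(t+1), DS(t), DS(t-1).
    std::vector<double> in = { co2Window[0], co2Window[1], co2Window[2],
                               prevOutputs[2], prevOutputs[1], prevOutputs[0] };
    std::vector<double> out = net(in);            // S(t), S(t+1), S(t+2)

    // These predictions become the delay terms at the next sampling hour.
    prevOutputs = { out[0], out[1], out[2] };
    return { out[0], out[1], out[2] };
}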
In the present work, a number of neural networks of different topologies were employed for the estimation and multi-step-ahead prediction of consumed sugar and produced lysine simultaneously. The best topology among those tested used an input vector consisting of accumulated carbon dioxide and RQ measured on line together with consumed sugar at times t-1 and t-2 based on off-line analyses (Fig. 9(a)). Typical results when testing with data from another fed-batch lysine fermentation are shown in Fig. 9(b). Again, the results were quite satisfactory, with R² values of 0.998, 0.997 and 0.994 (for times t, t+1 and t+2 respectively) for sugar and R² values of 0.959, 0.984 and 0.985 (for times t, t+1 and t+2 respectively) for lysine (Table 1, no. 8). Multi-step-ahead prediction of key process variables simultaneously on the basis of on-line measurements has been demonstrated previously, for example in the prediction of enzyme activity and glucose concentrations in fed-batch Aspergillus niger glucoamylase fermentation [15,24]. Promising results were obtained using a neural network of 7-10-3 topology with CER, OUR, accumulated carbon dioxide, consumed ammonia, and output time delays forming the input vector. These results suggest that satisfactory results may be obtained with quite different neural network architectures for a given problem, although it should be emphasized that each application should be carefully investigated in the context of the given problem, and the most suitable network architecture should be defined and tuned case by case.
Fig. 9. (a) Topology of a neural network for (b) the simultaneous multi-step-ahead prediction by the neural network of consumed sugar (■, values determined off line) and produced lysine (□, values determined off line).
Fig. 10. (a) A hierarchical neural network system for estimation (2-4-1 topology) of (b) consumed sugar at time t (■, values determined off line) and, subsequently, (c) multi-step-ahead prediction (5-7-3 topology) of produced lysine (□, values determined off line).
Finally, for the prediction of the produced lysine on the basis of on-line measurements only, a sequential system consisting of both a neural estimator (for consumed sugar) and a neural predictor (for produced lysine) was constructed (Fig. 10(a)). First, the consumed sugar was estimated by a relatively simple neural network of 2-4-1 topology with the produced carbon dioxide and RQ forming the input vector. The obtained estimate and its two time delays were then included in the input vector of the 5-7-3 neural predictor for produced lysine, as shown in Fig. 10(c). The fit of the estimated sugar with the real data analysed off line was very good, with an R² of 0.990 and a maximum error of less than ±7% over the experimental time frame (Fig. 10(b)). The results of the prediction of the produced lysine were also very satisfactory: R² values of 0.967, 0.969 and 0.965 (for times t, t+1 and t+2 respectively) (Table 1, no. 9) were obtained, in comparison with the R² values of 0.981, 0.997 and 0.997 respectively obtained when off-line sugar analyses were used as input (Figs. 8(c) and 8(d)). The corresponding maximum errors were of the order of ±10% (Fig. 10(c)) and about ±6% (Fig. 8(d)).
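The chaining of the two networks in Fig. 10(a) can be sketched as follows; as before, the Net interface, the coded variables and the delay buffer are illustrative assumptions rather than a description of the authors' program.

#include <deque>
#include <functional>
#include <vector>

// Cascade of Fig. 10(a): a primary 2-4-1 network estimates consumed sugar
// from accumulated CO2 and RQ only, and its estimate, with two time delays,
// feeds a secondary 5-7-3 network that predicts produced lysine at t, t+1
// and t+2. All variables are in the coded range.
using Net = std::function<std::vector<double>(const std::vector<double>&)>;

std::vector<double> cascadePredictLysine(const Net& sugarEstimator,   // 2-4-1
                                         const Net& lysinePredictor,  // 5-7-3
                                         double co2Coded, double rqCoded,
                                         std::deque<double>& spDelays) // Sp(t-1), Sp(t-2)
{
    // Primary model: on-line sugar estimate Sp(t).
    double sp = sugarEstimator({ co2Coded, rqCoded })[0];

    // Secondary model: lysine prediction from on-line data only
    // (input order as in Table 1, cascade model, secondary).
    std::vector<double> lys =
        lysinePredictor({ co2Coded, rqCoded, sp, spDelays[0], spDelays[1] });

    // Update the sugar-estimate delay buffer for the next sampling hour.
    spDelays.push_front(sp);
    spDelays.pop_back();
    return lys;   // Lys(t), Lys(t+1), Lys(t+2), coded
}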
The results obtained on the application of neural networks as "soft" sensors in monitoring a complex biological process were very encouraging. The success depends largely on good, representative data for training and statistical validation. Although there is still room for further improvement owing to the relatively wide variations in the industrial fermentation data available for training and testing of the neural networks, the results clearly showed that neural network computation is well suited for the estimation and prediction of key process variables for monitoring and control purposes in industrial-scale lysine fermentation. We are currently expanding our work to provide on-line estimation and prediction functions based on neural networks and fuzzy logic for an existing multibioreactor control system in Windows NT.
Acknowledgements
The authors are grateful to the Academy of Finland for
financial support.
References
[1] P. Linko, Uncertainties, fuzzy reasoning, and expert systems in bioengineering, Ann. NY Acad. Sci., 542 (1988) 83-101.
[2] T. Eerikäinen, Y.-H. Zhu and P. Linko, An expert system with fuzzy variables and neural network estimation, in Proc. EUFIT '93, Aachen, September 7-10, 1993, Vol. 1, ELITE Foundation, Aachen, 1993, pp. 202-208.
[3] P. Linko, T. Eerikäinen, S. Linko and Y.-H. Zhu, Artificial intelligence for the food industry, in Proc. AIFA Conf. 93 (Artificial Intelligence for Agriculture and Food - Equipment and Process Control), Nîmes, October 27-29, 1993, EC2, Paris, 1993, pp. 187-200.
[4] S. Linko, Y.-H. Zhu, T. Eerikäinen, T. Siimes and P. Linko, Artificial intelligence in bioprocess modelling, estimation and control, Dev. Food Sci., 36 (1994) 143-158.
[5] J.C. Hoskins and D.M. Himmelblau, Artificial neural network models of knowledge representation in chemical engineering, Comput. Chem. Eng., 12 (1988) 881-890.
[6] R.P. Lippmann, An introduction to computing with neural nets, IEEE ASSP Mag., 4 (2) (1987) 4-22.
[7] J. Hertz, A. Krogh and R.G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, Reading, MA, 1991.
[8] W.T. Miller, R.S. Sutton and P.J. Werbos, Neural Networks for Control, MIT Press, Cambridge, MA, 1990.
[9] M. Collins, Empiricism strikes back: neural networks in biotechnology, Bio/Technology, 11 (1993) 163-166.
[10] P. Linko and Y.-H. Zhu, Neural networks in bioengineering, Kemia-Kemi, 19 (1992) 215-220.
[11] M.J. Willis, G.A. Montague, C. Di Massimo, M.T. Tham and A.J. Morris, Artificial neural networks in process estimation and control, Automatica, 28 (1992) 1181-1187.
[12] R. Simutis, I. Havlik and A. Lübbert, Fuzzy-aided neural network for real-time state estimation and process prediction in the alcohol formation step of production-scale beer brewing, J. Biotechnol., 27 (1993) 203-215.
[13] P. Linko, A. Kosola, T. Siimes, Y.-H. Zhu and T. Eerikäinen, Hybrid fuzzy neural bioprocess control, in EUFIT '94 (Second European Congress on Fuzzy and Intelligent Technologies), Aachen, September 20-23, 1994, ELITE Foundation, Aachen, pp. 84-90.
[14] P. Linko, T. Siimes, A. Kosola, Y.-H. Zhu and T. Eerikäinen, Hybrid fuzzy knowledge-based and neural systems for fed-batch baker's yeast bioprocess control, in Proc. 1st Asian Control Conf., Tokyo, July 27-30, 1994, Vol. 1, pp. 491-494.
[15] P. Linko and Y.-H. Zhu, Neural network modeling for real-time variable estimation and prediction in the control of glucoamylase fermentation, Process Biochem., 27 (1992) 257-283.
[16] G.A. Di Massimo, M.J. Willis, M.T. Tham and A.J. Morris, Towards improved penicillin fermentation via artificial neural network, Comput. Chem. Eng., 16 (4) (1992) 283-291.
[17] S. Linko, S. Envald, M. Vahvaselkä and A. Mäyrä-Mäkinen, Optimization of the production of β-galactosidase by an autolytic strain of Streptococcus salivarius subsp. thermophilus, Ann. NY Acad. Sci., 672 (1992) 588-594.
[18] M. Caudill, Expert networks, Byte, 16 (10) (1991) 108-116.
[19] J.J. Barron, Putting fuzzy logic into focus, Byte, 18 (4) (1993) 111-118.
[20] T. Eerikäinen, Y.-H. Zhu and P. Linko, Neural networks in extrusion process identification and control, Food Control, 5 (1994) 111-119.
[21] Y.W. Huang, R. Mithani, K. Takahashi, L.T. Fan and P.A. Seib, Modular neural networks for identification of starches in manufacturing food products, Biotechnol. Prog., 9 (1993) 401-410.
[22] Y.-H. Zhu, T. Nagamune, I. Endo and P. Linko, Neural network model in state variable prediction during start-up of chemostat culture for ethanol production by Saccharomyces cerevisiae yeast, Trans. Inst. Chem. Eng. C, 72 (1994) 135-142.
[23] Y. Horimoto, T. Durance, S. Nakai and O.M. Lukow, Neural networks vs principal component regression for prediction of wheat flour loaf volume in baking tests, J. Food Sci., 60 (1995) 429-433.
[24] Y.-H. Zhu, S. Linko and P. Linko, Neural networks in enzymology, Adv. Mol. Cell Biol., (1996) to be published.
[25] S. Linko, Y.-H. Zhu and P. Linko, Neural networks in lysine fermentation, in A. Munack and K. Schügerl (eds.), Preprints, Computer Applications in Biotechnology, Garmisch-Partenkirchen, May 14-17, 1995, IFAC, 1995, pp. 336-339.
[26] Y.-H. Zhu, Neural network applications in bioprocesses, Tech. Biochem. Rep. 1/1995, Helsinki University of Technology, 1995.
[27] P.J. Werbos, Beyond regression: new tools for prediction and analysis in the behavioral sciences, Ph.D. Thesis, Harvard University, Cambridge, MA, 1974.
[28] D.E. Rumelhart, G.E. Hinton and R.J. Williams, Learning representations by back-propagating errors, Nature (London), 323 (1986) 533-536.
[29] A. Adams, Momentum in a back-propagation artificial neural network, in C.P. Tsang (ed.), Proc. 4th Australian Joint Conf. on Artificial Intelligence, Perth, November 21-23, 1990, pp. 191-200.
[30] Y. Hirose, K. Yamashita and S. Hijiya, Back-propagation algorithm which varies the number of hidden units, Neural Networks, 4 (1991) 61-66.
[31] Using NWorks, NeuralWare Inc., Pittsburgh, PA, 1990.
[32] A. Kosola and P. Linko, Neural control of fed-batch baker's yeast fermentation, Dev. Food Sci., 36 (1994) 321-328.
