Textile Research Journal Article

Use of Artificial Neural Networks for Determining the Leveling Action Point at the Auto-leveling Draw Frame

Assad Farooq and Chokri Cherif
Institute of Textile and Clothing Technology, Technische Universität Dresden, Dresden, Germany

Abstract
Artificial neural networks, with their ability to learn from data, have been successfully applied in the textile industry. The leveling action point is one of the important auto-leveling parameters of the draw frame and strongly influences the quality of the manufactured yarn. This paper reports a method of predicting the leveling action point using artificial neural networks. Various variables affecting the leveling action point were selected as inputs for training the artificial neural networks, with the aim of optimizing the auto-leveling by limiting the leveling action point search range. The
Levenberg-Marquardt algorithm is incorporated into back-propagation to accelerate the training, and Bayesian regularization is applied to improve the generalization of the networks. The results obtained are quite promising.

Key words: artificial neural networks, auto-leveling, draw frame, leveling action point

The evenness of the yarn plays an increasingly significant role in the textile industry, and sliver evenness is one of the critical factors in producing quality yarn. Sliver evenness is also the major criterion for assessing the operation of the draw frame. In principle, there are two approaches to reducing sliver irregularities. One is to study the drafting mechanism and recognize the causes of irregularities, so that means may be found to reduce them. The other, more valuable, approach is to use auto-levelers [1], since in most cases doubling is inadequate to correct the variations in the sliver. The control of sliver irregularities can lower the dependence on card sliver uniformity, ambient conditions, and frame parameters.

At the auto-leveler draw frame (RSB-D40), the thickness variations in the fed sliver are continually monitored by a mechanical device (a tongue-and-groove roll) and subsequently converted into electrical signals. The measured values are transmitted to an electronic memory with a variable, time-delayed response. The time delay allows the draft between the mid-roll and the delivery roll of the draw frame to be adjusted at exactly the moment when the defective sliver piece, which had been measured by a pair of scanning rollers, finds itself at the point of draft. At this point, a servo motor operates depending upon the amount of variation detected in the sliver piece. The distance that separates the scanning roller pair and the point of draft is called the zero point of regulation, or the leveling action point (LAP), as shown in Figure 1. This leads to the calculated correction on the corresponding defective material [2, 3].

In auto-leveling draw frames, especially in the case of a change of fiber material or batches, the machine settings and process-controlling parameters must be optimized. The LAP is the most important auto-leveling parameter and is influenced by various parameters such as feeding speed, material, break draft gauge, main draft gauge, feeding tension, break draft, and the setting of the sliver guiding rollers, etc.

Figure 1 Schematic diagram of an auto-leveler drawing frame.

Previously, the sliver samples had to be produced with different settings, taken to the laboratory, and examined on the evenness
tester until the optimum LAP was found (manual search). The auto-leveler draw frame RSB-D40 implements an automatic search function for the optimum determination of the LAP. During this function, the sliver is automatically scanned by temporarily adjusting the different LAPs, and the resulting values are
recorded. During this process, the quality parameters are constantly monitored, and an algorithm automatically calculates the optimum LAP by selecting the point with the minimum sliver CV%. At present, a search range of 120 mm is scanned, i.e. 21 points are examined using 100 m of sliver in each case;
therefore 2100 m of sliver is necessary to carry out the search function. This is a very time-consuming method, accompanied by material and production losses, and hence directly affects the cost parameters. In this work, we have tried to find out the possibility of predicting the LAP using artificial neural networks, in order to limit the automatic search span and reduce the above-mentioned disadvantages.

Artificial Neural Networks

The motivation for using artificial neural networks lies in their flexibility and power of information processing, which conventional computing methods do not have. The neural network system can solve a problem “by experience and learning” from the input-output patterns provided by the user. In the field of textiles, artificial neural networks (mostly using back-propagation) have been extensively studied during the last two decades [4-6]. In the field of spinning, previous
research has concentrated on predicting the yarn properties and the spinning process performance using the fiber properties, or a combination of fiber properties and machine settings, as the input of the neural networks [7-12].

Back-propagation is a supervised learning technique most frequently used for artificial neural network training. The back-propagation algorithm is based on the Widrow-Hoff delta learning rule, in which the weight adjustment is carried out through the mean square error of the output response to the sample input [13]. The set of these sample patterns is repeatedly presented to the network until the error value is minimized. The back-propagation algorithm uses the steepest descent method, which is essentially a first-order method, to determine a suitable direction of gradient movement.

Overfitting

The goal of neural network training is to produce a network which produces small errors on the training set, and which also responds properly to novel inputs. When a network performs as well on novel inputs as on training set inputs, the network is said to generalize well. The generalization capacity of the network is largely governed by the network architecture (the number of hidden neurons), and this plays a vital role during the training. A network which is not complex enough to learn all the information in the data is said to be underfitted, while a network that is so complex that it also fits the “noise” in the data is overfitted. “Noise” means variation in the target values
that are unpredictable from the inputs of a specific network. All standard neural network architectures, such as the fully connected multi-layer perceptron, are prone to overfitting. Moreover, it is very difficult to acquire noise-free data from the spinning industry, owing to the dependence of the end products on inherent material variations, environmental conditions, etc. Early stopping is the most commonly used technique to tackle this problem. It involves dividing the data into three sets, i.e. a training set, a validation set and a test set, with the drawback that a large
part of the data (the validation set) can never be part of the training.

Regularization

The other solution to overfitting is regularization, which is a method of improving generalization by constraining the size of the network weights. MacKay [14] discussed a practical Bayesian framework for back-propagation networks, which consistently produced networks with good generalization. The initial objective of the training process is to minimize the sum of squared errors:

E_d = \sum_{i=1}^{n} (t_i - a_i)^2    (1)

where t_i are the targets and a_i are the neural network responses to the respective targets. Typically, training aims to reduce the sum of squared errors, F = E_d. However, regularization adds an additional term, giving the objective function

F = \beta E_d + \alpha E_w    (2)

In equation (2), E_w is the sum of squares of the network weights, and \alpha and \beta are objective function parameters. The relative size of the objective function parameters dictates the emphasis for training. If \alpha << \beta, training will drive the network errors smaller; if \alpha >> \beta, training will emphasize weight size reduction at the expense of network errors, thus producing a smoother network response [15].

The Bayesian school of statistics is based on a different view of what it means to learn from data, in which probability is used to represent the uncertainty about the relationship being learned. Before seeing any data, the prior opinions about what the true relationship might be can be expressed in a probability distribution over the network weights that define this relationship. After the data are observed, the revised opinions are captured by a posterior distribution over the network weights. Network weights that seemed plausible before, but which do not match the data very well, will now be seen as much less likely, while the probability for values of the weights that do fit the data well will have increased [16]. In the Bayesian framework the
weights of the network are considered random variables. After the data are taken, the posterior probability distribution for the weights can be updated according to Bayes' rule:

P(w | D, \alpha, \beta, M) = \frac{P(D | w, \beta, M) \, P(w | \alpha, M)}{P(D | \alpha, \beta, M)}    (3)

In equation (3), D represents the data set, M is the particular neural network model used, and w is the vector of network weights. P(w | \alpha, M) is the prior probability, which represents our knowledge of the weights before any data are collected; P(D | w, \beta, M) is the likelihood function, which is the probability of the data occurring given the weights w; and P(D | \alpha, \beta, M) is a normalization factor, which guarantees that the total probability is 1 [15]. In this study, we employed the MATLAB Neural Networks Toolbox function “trainbr”, which incorporates the Levenberg-Marquardt algorithm and Bayesian regularization (or Bayesian learning) into back-propagation training.
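The effect of the regularized objective in equation (2) can be illustrated with a small numerical sketch. The Python snippet below is our own illustration, not the authors' code: it fits a one-weight linear model by plain gradient descent on F = \beta E_d + \alpha E_w, whereas “trainbr” additionally re-estimates \alpha and \beta within the Bayesian framework during training. All names and data values are invented for the example.

```python
# Sketch of equation (2): F = beta * E_d + alpha * E_w, minimized by
# gradient descent for a one-weight linear model a = w * x.
# Illustration only -- MATLAB's trainbr also adapts alpha and beta.

def train_linear(xs, ts, alpha, beta, lr=0.01, epochs=2000):
    """Minimize F = beta * sum((t - w*x)^2) + alpha * w^2 over w."""
    w = 0.0
    for _ in range(epochs):
        # dF/dw = -2*beta*sum((t - w*x) * x) + 2*alpha*w
        grad = sum(-2.0 * beta * (t - w * x) * x for x, t in zip(xs, ts))
        grad += 2.0 * alpha * w
        w -= lr * grad
    return w

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ts = [0.1, 2.1, 3.9, 6.2, 7.9]   # roughly t = 2x plus "noise"

w_plain = train_linear(xs, ts, alpha=0.0, beta=1.0)   # no weight penalty
w_reg   = train_linear(xs, ts, alpha=60.0, beta=1.0)  # alpha >> beta

print(f"w_plain = {w_plain:.3f}")  # near 2: minimizes E_d alone
print(f"w_reg   = {w_reg:.3f}")    # pulled toward 0: smoother response
```

With \alpha = 0 the weight settles at the least-squares solution; with \alpha >> \beta the same data yield a much smaller weight, which is exactly the "weight size reduction at the expense of network errors" described after equation (2).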