automatica
M. Hadjiski. Integration of Decision Making and Control Systems

Key Words: Artificial intelligence; control systems; data-driven process; decision making; human-machine interaction; optimization.

Abstract. The work examines the main aspects of the integration of decision making (DM) processes and control systems (CS) in complex modern plants, from the perspective of the effective application of artificial intelligence methods using data flows. The DM procedure is treated in the light of modern settings and requirements for successful functioning under hard-to-overcome conditions such as great uncertainty, scarcity and low quality of data, multiple criteria, stochasticity, significant constraints, and variability of structures and parameters. Particular attention is paid to the need to increasingly involve machine methods in hybrid decision making. The solution of the DM task is treated as an optimization problem that can be solved by different formal, heuristic and hybrid methods depending on the context, existing experience and knowledge, and available resources – cost, time, qualification. Special attention is paid to DM and CS approaches in centralized, decentralized and distributed plant structures. The wide range of possibilities for the application of technologies based on artificial intelligence and the prospects for their application are indicated.
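
Since the abstract treats the DM task as an optimization problem, the following minimal Python sketch illustrates the simplest formal variant, a weighted-sum multi-criteria ranking; the criteria, weights, and alternatives are hypothetical and not taken from the paper.

import numpy as np

# Weighted-sum multi-criteria decision sketch (illustrative only): each
# alternative is scored on several normalized criteria and ranked by the
# weighted aggregate. Criteria, weights, and alternatives are hypothetical.
criteria_weights = np.array([0.5, 0.3, 0.2])   # e.g. cost, time, qualification
alternatives = np.array([
    [0.8, 0.4, 0.9],   # decision A
    [0.6, 0.9, 0.5],   # decision B
    [0.9, 0.2, 0.7],   # decision C
])
scores = alternatives @ criteria_weights       # aggregate score per decision
print("scores:", scores, "-> best decision index:", int(np.argmax(scores)))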

automatica
N. Deliiski, D. Angelski, P. Vitchev, N. Tumbarkova. Computation of Energy Consumption and Efficiency of Autoclaves during Steaming of Wooden Prisms by Regimes with Dispatching Intervention in Them

Key Words: Non-frozen wooden prisms; unsteady models; autoclave steaming; energy consumption; energy efficiency; dispatching interference; veneer production.

Abstract. An approach is presented for computing the energy consumption and energy efficiency of autoclaves for intensive steaming of non-frozen wooden prisms at limited heat power of the steam generator, in cases of dispatching intervention in the temperature-time parameters of the steaming regimes aimed at obtaining a duration suitable for the subsequent cutting of veneer from the heated, plasticized prisms. The approach is based on the use of two of the authors' own mathematical models: a 2D non-linear model of the unsteady temperature distribution in the cross-section of non-frozen prismatic wood materials subjected to heating at conductive boundary conditions and subsequent conditioning in an air environment, and a model of the non-stationary thermal balance of autoclaves for steaming wood materials. For the numerical solution of the models and the practical application of the suggested approach, a software package was prepared in the calculation environment of Visual FORTRAN Professional, developed by Microsoft, operating under Windows. Using this package, computations and research of the change in the energy consumption and energy efficiency of steaming autoclaves for plasticizing non-frozen beech prisms before cutting them into veneer have been carried out. The simulations used the following values: an initial wood temperature of 0 °C, moisture content of 0.6 kg·kg−1, and cross-section dimensions of the beech prisms of 0.4 × 0.4 m; a steaming autoclave with an inner diameter of 2.4 m, length of its cylindrical part of 9.0 m, and a loading level with prisms of 50%; a limited heat power of 500 kW of the generator, which feeds the autoclave with saturated water steam. The calculations were carried out for the cases in which the dispatcher reduces the maximum temperature of the basic steaming regime from 130 °C to 120, 110, and 100 °C in the 3rd, 7th, and 11th hour of this regime. The suggested approach can be applied to the creation of software for systems used for computing and model-based automatic realization of energy-efficient regimes for autoclave steaming of non-frozen wood materials with a desired duration set by the dispatcher.
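
As a structural illustration of the kind of 2D unsteady model the approach builds on, here is a minimal explicit finite-difference sketch in Python; it assumes constant thermal diffusivity and a fixed surface temperature, whereas the paper's model is non-linear with conductive boundary conditions, and all numeric values are illustrative.

import numpy as np

# 2D explicit finite-difference heating of a wooden prism cross-section.
# Constant diffusivity and a fixed surface temperature are assumed; the
# paper's model is non-linear, so this is only a structural illustration.
a = 2.0e-7            # thermal diffusivity of wood, m^2/s (illustrative)
L = 0.4               # cross-section side, m
n = 41                # grid nodes per side
dx = L / (n - 1)
dt = 0.2 * dx**2 / a  # stable step for the explicit scheme
T = np.full((n, n), 0.0)      # initial wood temperature, deg C
T_surface = 130.0             # steaming medium temperature, deg C

steps = int(5 * 3600 / dt)    # simulate 5 hours
for _ in range(steps):
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_surface
    T[1:-1, 1:-1] += a * dt / dx**2 * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1])
print(f"centre temperature after 5 h: {T[n // 2, n // 2]:.1f} deg C")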

informatics
I. Velkova. Unstructured Data Processing and Analysis Using Artificial Intelligence

Key Words: IDOL; Hadoop; artificial intelligence; big data; analysis; processing; unstructured data.

Abstract. This paper presents the challenges and prospects of handling and analyzing unstructured big data with artificial intelligence. A huge amount of data such as video, text, images, and social content from various applications is created, collected, and accumulated every day in all areas of human activity. This unstructured data cannot be effectively stored, processed, and analyzed using traditional tools and databases. This paper addresses this issue and proposes an approach to help process and subsequently extract knowledge from unstructured data collected from various sources using artificial intelligence. Some of the few existing technological tools for processing unstructured data in a Hadoop environment are presented. As a result of the study, an architecture for such processing and analysis is proposed. It includes a set of technologies that provide a possible solution to the problem.
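
A minimal sketch of the kind of processing such an architecture enables: deriving simple structure (word frequencies) from unstructured text stored in a Hadoop environment, here via PySpark; the HDFS path and application name are hypothetical, and the paper's IDOL-based components are not reproduced.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode, lower, split

# Derive simple structure (word frequencies) from unstructured text on HDFS.
# The path is a hypothetical placeholder.
spark = SparkSession.builder.appName("unstructured-demo").getOrCreate()
lines = spark.read.text("hdfs:///data/raw_documents/*.txt")
words = lines.select(explode(split(lower(col("value")), r"\W+")).alias("word"))
counts = (words.filter(col("word") != "")
               .groupBy("word").count()
               .orderBy(col("count").desc()))
counts.show(20)
spark.stop()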

informatics
M. Kovacheva. Evaluation of NoSQL Databases for Digital Financial Services Implementation

Key Words: NoSQL; digitalization; Big Data; financial services.

Abstract. Digital transformation covers different processes, and one of the most important industries that has been developing and becoming increasingly digitized in recent years is banking and financial services. For banks to meet their customers’ demands, needs and expectations, it is important to make changes and adapt quickly to developing technologies. In addition, banks need to manage large volumes of data while making changes to their systems without causing data losses and problems for their customers. Financial services provided by banks, credit unions, accounting companies, insurance companies, investment funds, stock brokerages, etc. need databases that can adapt to the increasing need for automation, for which NoSQL databases can be very useful. Non-relational databases handle large amounts of data, are scalable, and deliver better performance when processing data. This study presents specifications for NoSQL databases and an evaluation approach for non-relational databases for financial services.
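
As one hedged illustration of why schema-flexible NoSQL stores suit financial data, the sketch below stores a nested transaction document and aggregates it in MongoDB via pymongo; the connection string, database name, and document fields are hypothetical.

from pymongo import MongoClient

# Storing flexible-schema financial transactions in MongoDB (illustrative;
# connection string, database, and fields are hypothetical placeholders).
client = MongoClient("mongodb://localhost:27017")
transactions = client["bank_demo"]["transactions"]

transactions.insert_one({
    "account": "BG80BNBG96611020345678",
    "type": "card_payment",
    "amount": 42.50,
    "currency": "EUR",
    "merchant": {"name": "Grocery Ltd", "category": "food"},  # nested, schema-free
})
# Aggregate spending per merchant category.
for row in transactions.aggregate([
    {"$group": {"_id": "$merchant.category", "total": {"$sum": "$amount"}}}
]):
    print(row)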

100 years PID controller
M. Hadjiski, S. Koynov. 100 Years of Proportional-Integral-Derivative (PID) Controller

Key Words: Basic control; industrial automation; operating technology; parametric tuning; PID controller.

Abstract. The paper is dedicated to a remarkable date in the development of industrial automation – 2022 marks the 100th anniversary of the publication of Nicholas Minorsky’s paper “Directional Stability of Automatically Steered Bodies”, which is unanimously considered the first theoretical study and justification of the modern Proportional-Integral-Derivative (PID) controller. This paper marks the beginning of a century of practical implementation, theoretical research and engineering innovations that have led to a remarkable achievement – currently over 95% of the basic control loops in industrial automation use PID controllers in autonomous, centralized or decentralized implementation. The historical development of the PID controller is a particularly telling example of the effectiveness of the cooperation between researchers, engineers and industrial managers, as a kind of synthesis between scientific and technical feasibility, operational simplicity and economic efficiency. There is every reason to believe that PID controllers will remain sustainable in the coming decades, dominated by big data and artificial intelligence technologies. Many of the modern extensions of the PID controller aimed at overcoming its limitations and shortcomings are still in the research stage, but given the century-old tradition of ensuring high value for money, there is no doubt that it will continue to be the main building block at the basic level of industrial automation in the construction of intelligent control systems of arbitrary complexity.

100 years PID controller
K. Boshnakov, M. Hadjiski. What Makes the PID Controller So Successful Already for 100 Years

Key Words: Efficiency/price; life cycle; industrial automation; PID controller; design.

Abstract. A scientific and engineering achievement has a chance to become a sustainable influencer of industrial innovation only if it is accepted by business and has the resources to respond adequately to the dramatically changing driving forces of technological development. Otherwise, it gradually turns into a useful, respectable, often widespread, but conservative, even retro achievement, a product of the past. Fortunately, the modern Proportional-Integral-Derivative (PID) controller, with its already 100-year history, belongs to the category of sustainably developing achievements. With its more than 95% share of the operating basic control systems, it continues to have a significant impact on industrial automation. The present study substantiates the main arguments forming the positive answer to the question of why the PID controller is still so successful after 100 years. The comparative analysis of the results of research on the quality of control of dynamic systems under different objects, criteria, constraints and uncertainty shows that control systems with a PID controller are good enough in most cases of process control and beyond, even where a more complex solution would seem to be required. The reasons for the wide acceptance of the PID controller by business, management and operational personnel are analyzed, notably the fact that the PID controller is simple, intuitively clear and easy to understand. It is justified why the performance/cost ratio in PID controller systems is high, with particular attention paid to the design process. It is shown that one of the main reasons for the long-term success of the PID controller is its continuous functional and technological development using modern scientific achievements from various fields (artificial intelligence, data science, large systems), while remaining faithful to the fundamental principles of classic PID control.

100 years PID controller
M. Hadjiski. 100 Years of Evolution and Prospects for the Development of the PID Controller in the Era of Artificial Intelligence

Key Words: Artificial Intelligence; Big Data; controller synthesis; controller tuning; industrial automation; PID control.

Abstract. The centenary of the PID controller is significant not only for the pioneering article by N. Minorsky, which scientifically substantiated its algorithm, structure, components and dynamic behavior. It is an important reminder of a phenomenon in the development of automation – that for an entire century, and especially in its last three-quarters, the PID controller has invariably been the dominant device in industrial automation – a fact that continues to be relevant even now. The present research traces the continuous upward development of the PID controller, which has always been associated with the constant use of new technologies, methods and technical solutions drawn from the most modern achievements of their time in various fields. The achievements in the development of the classical PID controller are tracked, many of which continue to be relevant in a number of cases even now. Particular attention is paid to the problem of tuning the PID controller, which already has an 80-year history since the works of Ziegler and Nichols in 1942 and continues to attract the attention of many researchers applying state-of-the-art methods from the theory of optimization, metaheuristic approaches, and machine learning. The main approaches for the analytical synthesis of PID and PID-like controllers using classical methods of state-space control theory, including robust synthesis, are reviewed. Special emphasis is placed on the use of methods based on big data and technologies from the field of artificial intelligence for the implementation of nested intelligent structures for hierarchical management of the executive level, at which PID controllers are applied. The trend towards the control of complex compound systems with significantly increased requirements for accuracy and robustness, to overcome the challenges of uncertainty and stochasticity in a multi-criteria setting, is traced. It is shown that the accelerated expansion of the functionality of the classical PID controller, especially characteristic of the last two decades, has led to a new view of the capabilities of PID control technology, including promising directions such as anticipation and modeling of the surrounding environment, PID controllers in open systems, adaptability through switching, diagnostics and fault tolerance, and cyber security. Some of the main engineering and business problems that need to be solved in order for the PID controller to continue to enjoy its popularity are indicated: preserving the spirit of simplicity and intuitive clarity, ensuring convenience and efficiency in design and operation, affordable prices, and opportunities for seamless integration into the emerging computer and communication systems of the era of big data and artificial intelligence. Due to the small historical distance, the existing modern results rather outline certain trends and mark promising directions for reaching achievements significant for the practice of industrial automation in the near future.
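
Since the abstract highlights the Ziegler-Nichols tuning tradition, here is a minimal Python sketch of a discrete PID controller tuned with the classic 1942 closed-loop rules; the ultimate gain and period values are hypothetical stand-ins for a real oscillation test.

# Minimal discrete PID controller with classic Ziegler-Nichols tuning from
# the ultimate gain Ku and ultimate period Tu (values here are illustrative).
Ku, Tu = 8.0, 12.0                          # hypothetical oscillation-test results
Kp, Ti, Td = 0.6 * Ku, Tu / 2.0, Tu / 8.0   # Z-N closed-loop PID rules (1942)

def make_pid(kp, ti, td, dt):
    integral, prev_err = 0.0, 0.0
    def pid(setpoint, measurement):
        nonlocal integral, prev_err
        err = setpoint - measurement
        integral += err * dt                # integral term accumulation
        deriv = (err - prev_err) / dt       # backward-difference derivative
        prev_err = err
        return kp * (err + integral / ti + td * deriv)
    return pid

controller = make_pid(Kp, Ti, Td, dt=0.1)
print(f"control output: {controller(setpoint=50.0, measurement=47.3):.2f}")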

automatica
V. Ruykova. Generalized Predictive Control with Disturbance Compensation for Time-delay Processes

Key Words: Generalized predictive controller; disturbance compensation; time-delay; Diophantine equation.

Abstract. The aim of this paper is to design a predictive algorithm for the control of time-delayed systems with the possibility of measurable disturbance compensation. A new approach to extended Generalized Predictive Control (GPC) is proposed. Three Diophantine equations are solved non-recursively. The cost function and the control law of the GPC are derived. Simulation examples are presented to show the results of the design. Two cases are considered: the case in which the delay between the input and the output of the process is less than the delay between the disturbance and the output, and the case in which it is greater. In both cases, comparisons were made between a standard GPC, a GPC compensator with future disturbance values included, a GPC compensator without future disturbance values included, and, for one case, a classic compensator. The results show that the GPC cannot completely eliminate the disturbances, even when the future disturbance values are included in the control. Complete elimination of the disturbances is possible only when the weighting factor that distributes the energy of the controller is zero.
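
For orientation, a minimal unconstrained GPC sketch follows: the control increments solve Delta_u = (G'G + lambda*I)^-1 G'(w - f), where G is the step-response (dynamic) matrix and f the free response. The step-response coefficients below describe a hypothetical delayed plant; the paper's Diophantine-equation derivation and disturbance compensator are not reproduced.

import numpy as np

# Unconstrained GPC sketch built from plant step-response coefficients.
N, Nu, lam = 10, 3, 0.1            # prediction/control horizons, control weight
g = np.array([0.0, 0.0, 0.18, 0.33, 0.45, 0.55, 0.63, 0.70, 0.75, 0.80])
                                   # hypothetical step response with dead time
G = np.zeros((N, Nu))              # dynamic matrix
for i in range(N):
    for j in range(Nu):
        if i - j >= 0:
            G[i, j] = g[i - j]

w = np.ones(N)                     # future set-point trajectory
f = np.zeros(N)                    # free response (plant assumed at rest here)
K = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T)
du = K @ (w - f)
print(f"first control increment: {du[0]:.3f}")  # only du[0] is applied (receding horizon)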

automatica
M. Petrov. Multiple Objective Optimisation for the Ethanol Production from Strain Saccharomyces cerevisiae

Key Words: Multiple objective optimisation; ethanol production; Saccharomyces cerevisiae; general multiple objective optimisation; fuzzy optimisation; fuzzy multiple-objective optimisation.

Abstract. A fuzzy procedure is applied in order to find the optimal feed policy of a fed-batch fermentation process for ethanol production using a Saccharomyces cerevisiae strain. The policy consists of the feed flow rate, feed concentration, and fermentation time. In this study the biotechnological process is formulated as a general multiple objective optimisation problem. By using an assigned membership function for each of the objectives, the general multiple objective optimisation problem can be converted into a maximizing decision problem. In order to obtain a global solution, a method from fuzzy sets theory is introduced to solve the maximizing decision problem. After this multiple objective optimisation, the useful product quality is raised and the residual substrate concentration at the end of the process is decreased. Thus, the process productiveness is increased.
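
The max-min fuzzy decision idea can be sketched briefly: each objective is mapped by a membership function to a satisfaction degree, and the smallest degree is maximized. The Python sketch below uses hypothetical algebraic objectives and bounds as stand-ins for the paper's fermentation model.

import numpy as np
from scipy.optimize import differential_evolution

# Max-min fuzzy decision sketch: linear membership functions map each
# objective to a degree in [0, 1]; the weakest degree is maximized globally.
def mu_linear(value, worst, best):
    return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))

def neg_min_membership(x):          # x = (feed flow rate, feed concentration)
    product = 3.0 * x[0] + 1.5 * x[1] - x[0] * x[1]      # want high (hypothetical)
    residual = 2.0 - 0.8 * x[0] - 0.5 * x[1]             # want low (hypothetical)
    mu_p = mu_linear(product, worst=0.0, best=4.0)
    mu_r = mu_linear(-residual, worst=-2.0, best=0.0)
    return -min(mu_p, mu_r)         # negate: optimizer minimizes

res = differential_evolution(neg_min_membership,
                             bounds=[(0.0, 1.0), (0.0, 2.0)], seed=1)
print(f"optimal policy: {res.x}, overall satisfaction: {-res.fun:.2f}")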

automatica
S. Miteva, D. Popov. Integration of Cryobattery into Conventional Energy Conversion System

Key Words: Liquid air; energy storage; regenerative steam cycle; feed water; waste heat.

Abstract. The aim of the present study is to evaluate an option for the integration of liquid air energy storage (LAES) into a regenerative steam turbine cycle. The waste heat from the LAES discharge is used for feed water preheating. Thermodynamic analysis shows that the newly proposed hybrid LAES system has a round-trip efficiency 16% higher than that of the standalone LAES.
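
The round-trip efficiency comparison rests on a simple ratio; the sketch below only shows the definition, with illustrative numbers rather than the paper's results.

# Round-trip efficiency of an energy storage system:
# RTE = electricity recovered on discharge / electricity spent on charge.
# All numbers below are illustrative, not the paper's figures.
w_charge = 100.0      # MWh consumed to liquefy air
w_discharge = 55.0    # MWh recovered, incl. waste-heat use for feed water
print(f"round-trip efficiency: {w_discharge / w_charge:.0%}")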

informatics
Zh. Zhejnov, J. Urumov. Modeling the Influence of Deformation on Bragg Fiber Losses

Key Words: Bragg fiber; bending; losses; Photonic Crystal Fiber; model of deformation.

Abstract. The article analyzes the losses in one type of Photonic Crystal Fiber – the Bragg fiber. It is a one-dimensional fiber, made as coaxial cylindrical layers. The fiber cladding is a dielectric mirror realized as a multilayer dielectric coating. The paper proposes a method based on geometric optics for the analysis of light propagation losses, caused by tunneling, in a multilayer microstructured fiber with an air core. A model of an M-layer fiber with set refractive indices of the dielectric layers is proposed. The angles of incidence and reflection at the boundaries of each two layers of the fiber cladding are calculated according to Snell’s law. The transverse reflection coefficients of each boundary between two layers for TM and TE polarization are calculated. The electric fields of the reflected and incident beams are calculated. The magnitudes of these vectors are recursively related at each boundary of the previous and next fiber layer. The input power of the light in the first fiber layer for TE and TM polarization is calculated. The losses from light reflection as it passes through all layers of the fiber are calculated as a function of the reflection coefficients. The normalized attenuation is calculated. The characteristic equation for the optical waveguide is solved. The angles of reflection in different modes are calculated. The distance between two consecutive reflections of the beam is calculated as a function of the reflection coefficients for the different polarizations and the reflection angles of the layers for TE and TM polarizations. These determine the reflection coefficient and the phase change when light passes through the whole fiber. The delays of the rays with TE and TM polarization are obtained. The expressions for the chromatic dispersion of the fiber for TE and TM polarization are then derived. A mathematical model of the deformation is proposed, represented as a change in the geometry of the fiber in which the deformed section forms part of a circle. For this section of the deformed fiber, the connection between the angles of incidence and reflection in the straight and in the bent section is derived by geometric transformations. The distance between two consecutive reflections of the ray trajectory from the fiber boundary surface is determined by the number of consecutive reflections in the core of the fiber. The reflection losses in TE and TM polarization, which are proportional to the number of beam reflections, are then determined. The phase delays for beams with TE and TM polarization for a fiber of a certain length are determined as the sum of the individual delays at the reflections of the beams. Using the proposed mathematical model and algorithm for calculating the attenuation of the fiber for different modes of TE and TM polarization, an example M-layer PCF fiber with an air core and alternating dielectric layers with two alternating refractive indices and thicknesses is solved, chosen so as to create a phase shift of 90° at the average wavelength of the light. A MATLAB program has been written. It simulates the attenuation and dispersion of a fiber with set optical parameters. After solving the characteristic equation for this fiber with the introduced parameters, the normalized attenuations for TE and TM polarization are calculated for a fiber with and without deformation for different propagating modes. Calculations have been made for fibers with a core radius of 20-200 μm at a light wavelength of 1559 nm.
The attenuation and chromatic dispersion graphs for TE and TM polarization of several straight and deformed fibers with the same length and different numbers of layers were plotted. A conclusion is made about the influence of the core diameter, the bending radius and the number of layers of the fiber cladding on the losses of the different propagated modes. A comparison of the losses of PCF and ordinary quartz single-mode fibers was made. In conclusion, the disadvantage of the proposed method is that only the meridional rays of the propagating light are analyzed and the dielectric losses in the fiber cladding are not taken into account. The advantage of the proposed method is its small computational complexity and the qualitatively correct result of the analysis of a PCF with given parameters. Guidelines for future development are proposed – analysis of the losses of the fiber taking into account the non-meridional rays. The possibilities for using PCF for information transmission and as dispersion compensators in telecommunications are pointed out. The possibility of using the method for the optimization of certain parameters of PCF of this kind is proposed.
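
To make the geometric-optics ingredients concrete, the Python sketch below computes Snell's law and the single-boundary Fresnel amplitude reflection coefficients for TE and TM polarization, naively chained through a layer stack; the refractive indices and incidence angle are hypothetical, and the paper's recursive field relations, characteristic equation, and dispersion analysis are not reproduced.

import numpy as np

# Snell's law and Fresnel amplitude reflection coefficients for TE (s) and
# TM (p) polarization at one dielectric boundary; indices are hypothetical.
def snell(n1, n2, theta1):
    return np.arcsin(np.clip(n1 * np.sin(theta1) / n2, -1.0, 1.0))

def fresnel(n1, n2, theta1):
    theta2 = snell(n1, n2, theta1)
    c1, c2 = np.cos(theta1), np.cos(theta2)
    r_te = (n1 * c1 - n2 * c2) / (n1 * c1 + n2 * c2)
    r_tm = (n2 * c1 - n1 * c2) / (n2 * c1 + n1 * c2)
    return r_te, r_tm

# Naive per-boundary power reflectance product through the cladding stack
# (a simplification of the paper's recursive relation between field vectors).
layers = [1.0, 1.45, 2.1, 1.45, 2.1, 1.45]   # air core + alternating cladding
theta = np.radians(85.0)                      # grazing incidence in the core
R_te = R_tm = 1.0
for n1, n2 in zip(layers, layers[1:]):
    r_te, r_tm = fresnel(n1, n2, theta)
    theta = snell(n1, n2, theta)
    R_te *= r_te**2
    R_tm *= r_tm**2
print(f"stack power reflectances: TE = {R_te:.3f}, TM = {R_tm:.3f}")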

intelligent systems
G. Gergov, J. Cruz, E. Kirilova. A Comparative Evaluation of the Predictive Ability of PLS and RBF ANN Calibration Techniques Applied to SW-NIR Meat Data

Key Words: NIR spectroscopy; Partial Least Squares method; Radial Basis Functions Neural Networks; moisture content; fat content; pork meat samples.

Abstract. In this study the performance of linear and nonlinear chemometric methods has been investigated and compared. The transmittance spectra of pork meat samples were collected by a SW-NIR (short wave near infrared) analyser in the spectral range of 850 nm to 1,050 nm. The Partial Least Squares (PLS1) method and Radial Basis Functions Neural Networks (RBF NN) were chosen for the chemometric analysis of these samples for the determination of moisture and fat content. The reason for using an RBF ANN is the significant nonlinearity exhibited between the spectra and the fat and moisture content. PLS1 and RBF NN with different architectures have been combined with different pre-processing techniques such as first derivative (D1), standard normal variate (SNV), multiplicative signal correction (MSC) and the combinations of MSC and SNV with the first derivative. It was found that the optimal pre-processing was MSC for moisture, and the combination of D1 and SNV for fat. When PLS1 was used, the results showed a reduction of RMSEP and REP using MSC by 15 and 13% for moisture determination. In the case of PLS1 fat determination, a considerable reduction of RMSEP and REP by 48 and 47% was observed using a combination of D1 and SNV. Compared to PLS1 regression with suitable preprocessing, the RBF ANN showed better results: a reduction of RMSEP and REP using a combination of D1 and SNV by 48% for moisture, and by 59% for fat determination. These improvements, together with the capacity of SW-NIR technology to be implemented in process engineering, make it ideal for the meat industry.
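
A brief sketch of the linear-vs-nonlinear comparison follows, with SNV as one of the pre-processing steps; synthetic spectra stand in for the SW-NIR pork data, and an RBF-kernel ridge regressor stands in for the paper's RBF neural network.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic "spectra" with a nonlinear target stand in for the SW-NIR data.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))               # 120 samples, 200 wavelengths
y = X[:, 40] ** 2 + 0.5 * X[:, 90] + 0.1 * rng.normal(size=120)

def snv(spectra):                             # standard normal variate
    return ((spectra - spectra.mean(axis=1, keepdims=True))
            / spectra.std(axis=1, keepdims=True))

X_tr, X_te, y_tr, y_te = train_test_split(snv(X), y, random_state=0)
for name, model in [("PLS", PLSRegression(n_components=10)),
                    ("RBF kernel", KernelRidge(kernel="rbf", alpha=0.1, gamma=1e-3))]:
    model.fit(X_tr, y_tr)
    rmsep = mean_squared_error(y_te, model.predict(X_te).ravel()) ** 0.5
    print(f"{name}: RMSEP = {rmsep:.3f}")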

intelligent systems
G. Kolev, E. Koleva. Integrated Smart Home System

Key Words: Internet of Things; Wireless Sensor Network; smart home; integration; voice assistant.

Abstract. An integrated Smart Home system for monitoring and management of the elements of the working or home environment has been developed in the Home Assistant platform and integrated with a voice assistant (Google Assistant). The structure of the Smart Home (SH) system is based on the concept of the Internet of Things (IoT), which includes connectivity of devices and actuators, as well as the presence of a Wireless Sensor Network (WSN). The main functions in each SH depend strongly on the requirements, way of life, special health issues, presence of pets, appliances, etc. of the household members, as well as on the home/house architecture, location, etc. The developed integrated system structure consists of the following main modules: connected devices, a measuring (sensor) module, a processing module, a visualization module (interface), and a communication module (voice communication). The system allows monitoring and control of various parameters of the environment, determination of geolocation, tracking of the state of the connected devices, provides ascertainment of conditions or constraints during the implementation of logical algorithms or actions, etc. The developed integrated system solves the problem of using various interface applications, communication protocols and standards by integrating all its elements in one Application Programming Interface (API), and simultaneously the system expands its scope through its integration with a voice assistant (Google Assistant). In the developed integrated system, solutions with pre-set functions, default functions and user selection functions are implemented. Also applied are IoT boards specially designed, made and tested by the author (G. Kolev) for stepper motor control and RGB LED strip control, as well as an IoT board for the control of small relays for concealed mounting. The system operation rules can be set by the user (directly or in time) or depending on the values obtained from the sensors (Sensor-based Linked Open Rules). The developed SH system also gives the possibility of building actions according to the geolocation of each of the devices (users) via the GPS system of the phone. Elements connected with the efficiency of energy utilization (consumption) – electricity, water and heat consumption – are also considered. The optimization of the consumption is directly connected with cost savings, which adds an additional benefit to the undeniable advantages of Smart Home system development. The process of development of integrated remote-control Smart Home systems can continue without limitation in time, as far as the imagination of the designer and/or the users reaches.
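
Since the system integrates its elements through one API, here is a minimal sketch of feeding a sensor value into a Home Assistant instance via its REST API; the host, access token, and entity name are hypothetical placeholders, not the author's actual configuration or board firmware.

import requests

# Push a sensor reading into Home Assistant via its REST states endpoint.
# Host, token, and entity id are hypothetical placeholders.
HA_URL = "http://homeassistant.local:8123/api/states/sensor.livingroom_temperature"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"   # created in the Home Assistant UI

response = requests.post(
    HA_URL,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    json={"state": "22.5",
          "attributes": {"unit_of_measurement": "°C",
                         "friendly_name": "Living room temperature"}},
    timeout=5,
)
response.raise_for_status()
print(response.json())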

automatica
M. Hadjiski. Mutual Penetration and Enrichment as a Bilateral Accelerating Factor for the Development of Machine Learning and Automation

Key Words: Artificial intelligence; control theory; deep learning; industrial automation; machine learning; reinforcement learning.

Abstract. The study shows that the existence of fundamental similarities between control theory and machine learning is a real basis for productive interpenetration and enrichment with new concepts, methods and tools, which are mutually beneficial for overcoming a number of serious modern challenges such as the control of complex and autonomous systems, cybersecurity, intelligent robotics, bioautomatics. The continuous development of the control theory as a result of its own progress and under the influence of the ideas of artificial intelligence will not allow the transformation of automation into a routine engineering discipline. In turn, the systems based on artificial intelligence and machine learning are enriched with well-developed methods and procedures from the control theory in order to improve and create new algorithms and to ensure a higher speed, robust stability and optimality of the learning process. Industrial automation systems will absorb the innovative results of the interpenetrating development of artificial intelligence and control theory improving both the quality and scope, safety and security of operations.

automatica
M. Petrov. Optimisation of Biotechnological Processes through Combined Algorithm

Key Words: Fuzzy sets theory; random search with back step; combined algorithm; initial condition.

Abstract. A combined algorithm for the optimisation of biotechnological processes has been developed. The algorithm includes a random search method for an optimal choice of an initial point and a method based on fuzzy sets theory. Combining both methods overcomes a major disadvantage of the fuzzy optimisation method, connected with the treatment of large-scale problems. The combined algorithm has been successfully applied for the optimisation of the initial condition and optimal control of the biotransformation process of whey fermentation by a Kluyveromyces lactis MC5 strain.
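
A minimal sketch of the combined idea follows: a random search with back step picks a promising starting point, which then seeds a local optimizer. The objective function is a hypothetical stand-in for the whey fermentation model.

import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth objective standing in for the fermentation model.
def objective(x):
    return (x[0] - 1.2) ** 2 + 3.0 * (x[1] + 0.4) ** 2 + np.sin(5 * x[0])

rng = np.random.default_rng(1)
x, fx, step = np.zeros(2), objective(np.zeros(2)), 1.0
for _ in range(200):                       # random search with back step
    trial = x + step * rng.normal(size=2)
    f_trial = objective(trial)
    if f_trial < fx:                       # success: accept, grow the step
        x, fx, step = trial, f_trial, step * 1.2
    else:                                  # failure: back step, shrink
        step *= 0.7

result = minimize(objective, x0=x, method="Nelder-Mead")   # local refinement
print(f"random-search start: {x}, refined optimum: {result.x}")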

informatics
R. Ivanov. Automatic Beacon Deployment for Indoor Localization

Key Words: Beacon deployment; indoor localization; Indoor Positioning Systems.

Abstract. Localization of visitors in public buildings is a key technology for creating accessible environments using location-based services. The usability of these services and the satisfaction of their clients depend on the accurate calculation of the position of visitors. This is most often realized using wireless sensor networks. The localization accuracy depends on a number of factors, one of which is the deployment of the sensor nodes. This is a complex task, which is most often carried out by experts. In this article, we offer a complete solution for creating systems for indoor localization, which includes: (1) an algorithm for fast partitioning of the building components into non-overlapping rectangles; (2) an algorithm for sensor node deployment based on the geometry of the rectangles obtained; and (3) an algorithm for optimizing the sensor node placement taking into account the connectivity between the building components (rooms and corridors), as well as the static objects, which are obstacles for radio signals. Numerous experiments have been carried out in simulated and real environments, which prove the applicability of the proposed solution.
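
To illustrate the flavor of step (2), the sketch below places beacons on a grid inside each non-overlapping rectangle so that every point lies within a given radio range; the room rectangles and range are hypothetical, and the connectivity and obstacle optimization of step (3) is not reproduced.

import math

# Grid placement of beacons inside a rectangle: a circle of radius r covers
# a square of side r*sqrt(2), so a grid with that spacing covers the room.
def deploy(rect, radio_range):
    x0, y0, x1, y1 = rect
    spacing = radio_range * math.sqrt(2)
    nx = max(1, math.ceil((x1 - x0) / spacing))
    ny = max(1, math.ceil((y1 - y0) / spacing))
    return [(x0 + (i + 0.5) * (x1 - x0) / nx,
             y0 + (j + 0.5) * (y1 - y0) / ny)
            for i in range(nx) for j in range(ny)]

rooms = [(0, 0, 12, 8), (12, 0, 20, 4)]   # non-overlapping rectangles, metres
for room in rooms:
    print(room, "->", deploy(room, radio_range=6.0))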

informatics
P. Vasilev, V. Sindrakovska. Relations between Standards IEC/EN 62264 and RAMI 4.0

Key Words: RAMI 4.0; IEC/EN 62264; asset; communication and presentation capabilities; activity; scheduling; model; web services; B2MML.

Abstract. The Reference Architectural Model for Industry (RAMI) 4.0 tends to unify the efforts for the rapid development of new technologies in the information and communication technology sector by stating definitions of an asset and its role in the cyber-physical world, with the use of presentation and communication functions applied to the asset. Furthermore, the standard describes how an asset should interact with the cyber-physical world. In order to introduce the asset as a conceptual fundament, the standard RAMI 4.0 extends the Equipment hierarchy model of another standard – IEC/EN 62264, which is well known and applied as an architectural framework within software integration projects of various information systems. The difference between them is that in IEC/EN 62264 a physical asset model corresponds to the equipment model, whereas RAMI 4.0 states that an asset may represent even a Product or the Connected World (Cyber-Physical World). The introduced extension is necessary because the perspective of the two standards is different. IEC/EN 62264 defines objects and models for integration, in which a Product, as an abstraction of “what should be made?”, is a part of the Product Definition Model (Operations Definition Model), and when it is physically performed, “what is actually made?”, it is a part of the Material Model. As such, the objects “carrying” the integration information are different. The Connected World is also an asset in RAMI 4.0, because it has value for a company, while the whole subject of the IEC/EN 62264 standard is to integrate information systems, thus achieving a “Connected World”. This is why the standard IEC/EN 62264 should be considered as a “roadmap” for the cyber-physical transformation of the RAMI 4.0 standard. The aim of the article is to discuss the importance and the assumption of non-implicit relations between the two standards, as well as the concept for the development of a RAMI 4.0 administration shell based on the IEC/EN 62264 architecture.

informatics
R. Kaltenborn. Integration in Educational Systems Extended with Artificial Intelligence-Based Technologies

Key Words: Artificial intelligence; intelligent learning system; integration; learning technologies; optimization; system of systems.

Abstract. The main problems related to the integration of the diverse functional elements of advanced intelligent learning systems are considered. It is shown that the integration of the elements in the learning process is a complex multilayered process due to the great variety and complexity of the ongoing basic processes – cognitive, pedagogical, technological, social and interpersonal. It is emphasized that the integration process needs to be solved as a data-driven multifactor optimization problem, using modern techniques from the field of artificial intelligence – machine learning, pattern recognition, natural language processing, and network management.

automatica
N. Deliiski, D. Angelski. Computation of the Processing Medium Temperature during Autoclave Steaming of Non-frozen Wooden Prisms

Key Words: Wooden prisms; veneer production; autoclave steaming; 2D model; processing medium temperature; energy consumption; heat balance; loading level of autoclave; moisture content of prisms.

Abstract. A methodology has been suggested for the computation of the processing medium temperature and the associated energy consumption and heat fluxes during autoclave steaming of non-frozen wooden prisms for veneer production at limited heat power of the steam generator, depending on the dimensions of the prisms’ cross section, the wood moisture content, and the loading level of the autoclave.
The methodology is based on the use of two of the authors’ own mathematical models: a 2D non-linear model of the temperature distribution in non-frozen prismatic wood materials subjected to steaming, and a model of the non-stationary heat balance of autoclaves for steaming wood materials. The heat balance of the autoclaves consists of the following components: energy used for heating the wood materials subjected to steaming, energy used for heating the body of the autoclave and the metal trolleys situated in it for positioning the wood materials, energy used for heating the insulating layer of the autoclave, energy used for covering the heat emission from the autoclave to the surrounding air, energy used for filling the free (unoccupied by wood materials) part of the working volume of the autoclave with steam, and energy accumulated in the condensate water gathered in the lower part of the autoclave. For the numerical solution of the models and the practical application of the suggested methodology, a software package was prepared in the calculation environment of Visual FORTRAN Professional, developed by Microsoft. Using this package, computations and research of the non-stationary change of the processing medium temperature have been carried out for an autoclave with a diameter of 2.4 m, a length of its cylindrical part of 9.0 m, and loading levels of 40, 50, and 60% at a limited heat power of the steam generator equal to 500 kW, during the initial part of the steaming process of beech prisms with cross-section dimensions of 0.3 × 0.3 m, 0.4 × 0.4 m, and 0.5 × 0.5 m, an initial temperature of 0 °C, a basic density of 560 kg·m-3, and moisture content of 0.4, 0.6, and 0.8 kg·kg-1. Simultaneously with this, the heat fluxes and energy consumption required for the heating of the prisms and for the whole steaming process in the autoclave have been calculated. The suggested methodology can be used for the computation and model-based automatic realization of energy-efficient optimized regimes for autoclave steaming of different wood materials.
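
The heat balance structure lends itself to a tiny illustration: the total energy demand is the sum of the components listed above. All numbers in the sketch are illustrative placeholders, not the paper's computed results.

# Heat balance sketch for a steaming autoclave: total energy demand is the
# sum of the balance components named in the abstract (values illustrative).
balance_kwh = {
    "heating the wood materials":           950.0,
    "heating the autoclave body/trolleys":  420.0,
    "heating the insulating layer":         110.0,
    "heat emission to the surrounding air":  60.0,
    "steam filling the free volume":         90.0,
    "accumulated in the condensate water":   70.0,
}
total = sum(balance_kwh.values())
for name, q in balance_kwh.items():
    print(f"{name:38s} {q:7.1f} kWh ({q / total:5.1%})")
print(f"{'total energy consumption':38s} {total:7.1f} kWh")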

informatics
P. Petrov, G. Kostadinov, P. Zhivkov, V. Velichkova, T. Balabanov. Approximated Sequences Reconstruction with Genetic Algorithms

Key Words: Reconstruction; sequencing; genetic algorithms.

Abstract. Sequence reconstruction is a problem often met in many research areas. The core of this problem is how to recreate a full sequence of ordered information when only chunks of it are known.
Sequence reconstruction is applied in fields like genetics, cryptography and encoding. In this research, an approximate reconstruction of sequences is proposed, which is applied in the gambling industry for reverse engineering of slot machine virtual reels. The importance of this problem is related to the control of gambling game parameters by law regulators. The problem in this research has a combinatorial nature. The analyzed sequences are taken from commercial slot machine games. When the gambler spins the virtual reels, he/she does not know how the symbols are distributed on the reels. The gambler sees only small parts of the reels when they are stopped. By observing the game for a long enough time, the gambler can record these small parts of the reels observed on the screen.
By collecting enough information, the gambler can try to reconstruct all positions of the symbols on the virtual reels. Such reconstruction is not trivial, because there is too much unknown information. If there are absolutely identical repeating patterns on the virtual reels, the gambler cannot find this out just by observing small chunks.
Reverse engineering is commonly used during the analysis of slot machine gambling games. There are many approaches for carrying out such reverse reconstruction. In most cases, the goal of the engineers is to reconstruct the virtual reels completely. The reconstruction is similar to puzzle solving, attaching each observed piece to a properly corresponding one. Such reconstruction is not always possible, because in some cases there is more than one way of connecting the observed pieces. The contribution of this study is related to the approximate reconstruction of the sequences. It does not give a direct solution to the reconstruction problem; instead, it gives an approximate solution. The goal is shifted slightly, from the exact reconstruction of the virtual reels to the construction of reels that, if loaded into the machine instead of the original reels, will give behavior close to that of the original slot machine game. The innovative approach is that the search for a sub-optimal solution is done in the space of the chunks instead of in the space of the sequences. The proposed approximate reconstruction is validated by experiments with real industrial data.
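
The genetic-algorithm idea can be sketched compactly: evolve a candidate circular reel so that as many observed chunks as possible appear on it. The alphabet, reel length, and chunks below are toy stand-ins for real slot machine observations, and the fitness is a simplified version of the chunk-space search described above.

import random

# GA sketch: evolve a circular reel that contains as many observed chunks
# as possible. All data here are toy stand-ins, not industrial observations.
random.seed(3)
SYMBOLS, REEL_LEN = "ABCDEFG", 30
chunks = ["ABC", "CDE", "EFG", "GAB", "BCD", "FGA"]   # observed reel windows

def fitness(reel):
    circ = reel + reel[:2]                 # wrap around for circular matching
    return sum(chunk in circ for chunk in chunks)

def mutate(reel):
    i = random.randrange(REEL_LEN)
    return reel[:i] + random.choice(SYMBOLS) + reel[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, REEL_LEN)
    return a[:cut] + b[cut:]

pop = ["".join(random.choice(SYMBOLS) for _ in range(REEL_LEN)) for _ in range(60)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]                       # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(40)]
best = max(pop, key=fitness)
print(f"best reel: {best}, matched chunks: {fitness(best)}/{len(chunks)}")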
