Design of control applications on WSAN with mesh architecture

Diego Martínez, Francisco Blanes, Patricia Balbastre, José Simo and Alfons Crespo

Abstract— The current trend in the development of industrial applications is the use of wireless networks to interconnect the system nodes, mainly to increase the flexibility and reliability of these applications and to reduce their implementation cost. In control applications, however, the latency and jitter generated by the network mean that the experimental results do not always match the desired performance. This happens because imprecise models are used to analyze and design these systems, and because the validation methods and platforms employed do not support the models used. This paper therefore presents a method to obtain an optimal system configuration that fulfills the desired performance of the control applications while providing significant energy savings.

I. INTRODUCTION

As a consequence of the increasing complexity of control systems, most activities have been distributed over different nodes, with the control loops closed through a communication network. These systems are called Networked Control Systems (NCS). Implementing an NCS also reduces the impact of failures in a system component and facilitates the diagnosis, maintenance and traceability processes. The MAC (Medium Access Control) algorithm used by the network characterizes the latency and jitter of the transmission period, so experimental and simulation results do not always match. This is because imprecise models are used to analyze and design these systems, and because the validation methods and platforms employed do not support the models used.
The use of wireless networks to interconnect the system nodes enables the development of new applications on wireless sensor and actuator networks (WSAN), which increases application flexibility and reliability while significantly reducing implementation cost. Given the limited resources of the nodes and the constraints imposed by the applications, it is necessary to develop methods for cooperation between the different levels of the system in order to achieve optimal solutions. One of the most important challenges in developing these applications is that minimizing some factors may cause others to increase, so a compromise between them must be reached. This paper presents a design procedure that helps in this regard, covering the different levels of the node architecture, with the aim of finding solutions that ensure the fulfillment of time constraints with minimal power consumption.

The paper is organized as follows. Section 2 presents the related work. Section 3 proposes an architecture for the nodes and a method for feasibility testing of real-time constraints in WSAN. Section 4 presents the proposed design procedure. Section 5 presents simulation results of a case study. Finally, section 6 presents conclusions and future work.

(Manuscript received July 28. This work was supported in part by the CYTED Project D2ARS. UNESCO codes: 330417; 120314. D. M. is with the Departamento de Automática y Electrónica, Universidad Autónoma de Occidente, Cali, Colombia. F. B., P. B., J. S. and A. C. are with the Departamento de Informática de Sistemas y Computadores, Universidad Politécnica de Valencia, Valencia, Spain.)

II. RELATED WORK

This area integrates several disciplines; an analysis of the related work in each one follows.

A. Network Protocols

In [1] and [2] the use of Bluetooth and IEEE 802.11b networks for control applications is analyzed, and a four-layer architecture is given to achieve predictable behaviour over IEEE 802.11b. [3] examines several methods for reducing power consumption at different levels of the communication stack in wireless sensor networks; one of them is the TDMA MAC algorithm, which helps to guarantee transmission time bounds, to minimize collisions, and to support power awareness. The results presented in [4] and [5] allow one to conclude that the IEEE 802.11e EDCA mode is a good alternative for fulfilling the requirements of real-time industrial applications. A conclusion drawn from the scenarios presented is that the EDCA default values can guarantee the requirements for delivering information in industrial applications with fewer than 10 stations and message stream periods above 10 ms. There are currently several commercial devices that use ZigBee [6] and WirelessHART [7], which have lower power consumption than developments based on IEEE 802.11. ZigBee is built on the IEEE 802.15.4 protocol, which implements two basic types of medium access: without synchronization, and with synchronization through beacons. The advantage of the MAC algorithm without synchronization is that it facilitates the scalability and autoconfiguration of the network, but it does not guarantee transmission time bounds.

In the IEEE 802.15.4 synchronized mode, the maximum time to transmit information can be bounded using guaranteed time slots (GTS) within a superframe, a very important characteristic for the development of control applications [8]. A maximum of seven GTS can be assigned per superframe, with a minimum frame period fixed by the standard, which may be enough in some cases. In [9] a method for the allocation of GTS is presented. However, the use of GTS is restricted to networks with a star topology, which limits the reliability and scalability of the application. Although networks with a star topology allow the use of GTS, the development of control applications in industrial environments requires reliable and secure communication. This requirement is easier to satisfy with mesh topologies, which also benefit application scalability. In order to build mesh network architectures, ZigBee uses a MAC algorithm without synchronization; nevertheless, some proposals have been developed to build synchronized cluster-tree networks, as presented in [10]. WirelessHART is based on the physical layer of the IEEE 802.15.4 protocol, but specifies new data link, network, transport and application layers [11]. WirelessHART uses a TDMA MAC mode with 100 slots per second. Additionally, WirelessHART allows mesh topologies with redundant paths, so messages can be routed through different routes to avoid broken links, interference and physical obstacles.

B. Feasibility Testing for Messages and Real-Time Tasks

Because of the effect of delays on the performance of NCS, these applications have end-to-end real-time constraints. In its general formulation, the problem of assessing the feasibility of a distributed real-time system is NP-hard. In order to overcome this inherent difficulty, problem restrictions and heuristics must be used. A common approach is to statically allocate the application tasks to the system nodes and locally use a well-known scheduling algorithm such as Rate Monotonic (RM) or Earliest Deadline First (EDF) [12]. Distributed applications are characterized by precedence relationships between their tasks. If the tasks are statically allocated to single processors, end-to-end timing constraints can be analyzed with a theory that accounts for release jitter [13]. Several papers analyze end-to-end schedulability using task scheduling algorithms such as RM and EDF and MAC protocols based on TDMA, tokens and priorities [12], [14], [15], [16]. These works consider the use of buffers to store messages in the network nodes, employ a scheduling method to extract the messages from the buffers, and are based on finding the maximum response time of all messages.

Regarding task schedulers, the operating system most commonly used in wireless sensor networks is TinyOS [17]. It was designed for highly constrained computer systems, such as 8-bit microcontrollers with small memories. It is supported by a component-based, event-driven programming model, with event handlers having higher priority than tasks, which are executed under an FCFS scheduling policy. However, such schedulers are not appropriate for real-time systems. ZigBee products from Chipcon and Texas Instruments use a scheduler based on static priorities. Although fixed-priority scheduling is the most popular on-line scheduling policy in real-time systems, the EDF policy is starting to get more attention in industrial environments because of its benefits in terms of increased resource usage. EDF is currently available in real-time languages such as Ada 2005 and RTSJ, and in real-time operating systems such as S.Ha.R.K. and Erika.

C. Dynamic Voltage Scaling

Several papers address DVS (Dynamic Voltage Scaling) while ensuring the fulfillment of real-time constraints. In [18] a heuristic-based methodology for DVS is presented, which requires a low computation time. [19] shows a method to find the optimal operating frequency for minimum power consumption; however, this method is very complex, so its on-line use is not appropriate. [20] proposes a method combining DVS with a task scheduler based on an elastic task model, which adjusts the periods of the system tasks according to various performance targets. [21] uses feedback control scheduling for the DVS processor and an EDF scheduler for the tasks. In [22] the performance of several algorithms, including static voltage scaling, was examined; static voltage scaling selects the lowest operation frequency that fulfills the real-time constraints. With static voltage scaling the operation frequency is assigned statically and is not modified unless the task set changes. The advantages of this method are its easy implementation and the very low load it imposes on the system. However, since the analysis is performed using worst-case execution times, it is very restrictive, so the highest energy savings are not obtained.

D. Networked Control Systems

Several authors have analyzed the performance and stability of NCS for network protocols with constant and variable delays, and some proposals modify the control algorithms in order to compensate the delay effects [23], [24], [25], [26], [27], [28], [29], [30], [31]. However, there is no general method to analyze the stability and performance of an arbitrary NCS: each method is limited to a network configuration, a communication protocol, and assumptions about the system and the control algorithm. Furthermore, the design methodologies follow different routes: designing the control algorithm with traditional methods and transferring the problem to the design of the

computer system; considering the characteristics of the computer system and designing a control algorithm to compensate its effects; or using an approach where the computer system and the control algorithm are co-designed. In most cases only the stability of the closed-loop system is analyzed. There is currently great interest in using event-based control algorithms for NCS, which help to reduce the use of the processors and the network. Despite their good performance in systems with limited computing capabilities, the analysis and design of these systems require further development. In [32] a review of event-based control systems is presented.

III. NODE ARCHITECTURE AND SCHEDULING ANALYSIS

Since the design of these systems is driven by the application constraints, an architecture for the system nodes is proposed based on the study presented in [33] (figure 1). This architecture enables the fulfillment of real-time constraints and facilitates the implementation of energy-saving strategies.

Fig. 1. Logical architecture for nodes

The roles of the different levels of the architecture are:
- Tasks: perform the activities related to the application.
- Middleware: receives requests from tasks to execute a defined scenario, understood as a set of tasks with known computation requirements and constraints. Based on a static voltage scaling algorithm, it sets the proper frequency and voltage values for the processor and loads the tasks into the operating system.
- Kernel: executes the tasks under an EDF scheduling policy.
- Main processor: executes the software and allows the use of DVS strategies to save energy.
- Co-processor: implements the physical, data link and network layers that communicate the nodes, so that these activities do not affect signal-processing performance while ensuring the synchronization of the network nodes. TDMA is used as the MAC algorithm.

In this proposal, a mesh network architecture in which all nodes are able to forward messages is assumed.
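The forwarding assumption above — every node buffers the messages that cross it and, as detailed in the schedulability analysis below, releases the most urgent one in each allocated TDMA slot — can be sketched with a small priority queue. This is an illustrative sketch with assumed names, not the paper's implementation:

```python
import heapq

class RouterQueue:
    """EDF-ordered outgoing buffer of a forwarding node (illustrative sketch).
    One message leaves per guaranteed time slot, i.e. every T_TDMAi."""
    def __init__(self):
        self._heap = []

    def store(self, abs_deadline, msg):
        # Buffer a message together with its absolute per-hop deadline.
        heapq.heappush(self._heap, (abs_deadline, msg))

    def on_slot(self):
        # Called once per allocated slot: forward the earliest-deadline message.
        return heapq.heappop(self._heap)[1] if self._heap else None

q = RouterQueue()
q.store(0.120, "loop2-sample")
q.store(0.040, "loop1-sample")
print(q.on_slot())   # -> loop1-sample (earliest deadline first)
```

Because only one message leaves per slot, the later-deadline message waits for the node's next guaranteed slot, which is exactly the per-hop blocking accounted for in the analysis below.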
Because the size of the messages in industrial applications is small compared to the amount of data supported by each message in current standard protocols (maximum physical-layer PDU payload of 127 bytes for WirelessHART and ZigBee), it is assumed that each message is sent within the slot reserved for its node in the TDMA network.

A. System Model and Notation

To link the nodes, a mesh network architecture with a global TDMA schedule and guaranteed time slots for each node was considered. The routes for sending messages between nodes are fixed and stored in the memory of each node. The following definitions are given:
- T_TDMAi is the period with which the guaranteed time slot of node i repeats in the TDMA-based network.
- D_CGRj is the end-to-end deadline of control loop j according to the control performance goals. This time is measured from the moment the measurement is taken until the actuator applies the control action to the system, and its magnitude depends on the worst-case response times of the control tasks and on the time required to send the information over the network.
- T_Sj is the sampling period used by the sensor task of control loop j, defined according to the system dynamics and satisfying D_Mj + D_Cj + D_Aj ≤ D_CGRj ≤ T_Sj, where D_Mj, D_Cj and D_Aj are the deadlines of the Sensor, Controller and Actuator tasks of control loop j, respectively.
- τ = {Task_1, Task_2, ..., Task_n} is a task set feasible under the EDF policy, with Task_i = (WCET_i, D_i, T_i), where WCET_i, D_i and T_i are the worst-case execution time, deadline and period of Task_i.
- WCRT_i is the worst-case response time of Task_i.

B. Schedulability Analysis

According to the EDF schedulability analysis in [34],

    H_τ(t) = Σ_{i=1}^{n} ⌊(t + T_i − D_i) / T_i⌋ · C_i    (with C_i = WCET_i)

is the amount of computation time that must have been served by the processor up to time t for all task deadlines to be met.
The initial critical interval (ICI) is the time interval [0, R) between time zero and the first instant at which no outstanding computation exists. The schedulability test for τ consists in verifying that

    H_τ(t) ≤ t, ∀t ∈ [0, R)    (1)

To analyse whether the system fulfills its end-to-end deadlines, and for the task implementation, each of D_Mj, D_Cj and D_Aj is assigned the corresponding value D_xj^min, the minimum value of D_xj that preserves local schedulability, obtained with the Deadlinemin algorithm presented in [35]. The considerations made in calculating the maximum time a message spends in a router node until it is received by the next node in the information flow are:
- The transmission order of the messages stored in a router node follows an EDF-based policy.
- The message deadline for each hop of control loop j is

      D_MLoopj = (D_CGRj − (D_Mj^min + D_Cj^min + D_Aj^min)) / N

  where N is the number of hops in the information flow between sensor and controller and between controller and actuator.
- During the slot allocated to node i, every T_TDMAi, the node sends only one of the messages it has stored; the maximum blocking time for the highest-priority message is therefore T_TDMAi.
- For control loop j, a message is generated by the sensor task every T_Sj.

According to the above, the following schedulability test for the messages sent by each node is proposed:

    G_τN(t) = Σ_{i=1}^{m} ⌈t / T_Si⌉ · (T_TDMAk + slot)

is the amount of time required to send the m messages stored in node k. The network initial critical interval (ICI_N) is the time interval [0, R_N) between time zero and the first instant at which no stored messages remain; it is calculated as follows: K_N0 = 1; K_N(i+1) = G_τN(K_Ni); if K_N(i+1) = K_Ni then R_N = K_Ni.

    H_τN(t) = Σ_{i=1}^{m} ⌊(t + T_Si − D_MLoopi) / T_Si⌋ · (T_TDMAk + slot)

is the amount of network time that must have been served to node k up to time t for all message deadlines to be met.
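The processor-level test — H_τ(t) ≤ t checked at every absolute deadline inside the initial critical interval, with the interval endpoint R obtained by fixed-point iteration on the workload — can be sketched as follows; the network test follows the same pattern with G_τN and H_τN. This is a minimal illustrative sketch, not the paper's implementation:

```python
import math

def demand(tasks, t):
    # H_tau(t) = sum_i floor((t + T_i - D_i)/T_i) * C_i, clamped at 0
    return sum(max(0, math.floor((t + T - D) / T)) * C for (C, D, T) in tasks)

def initial_critical_interval(tasks):
    # Fixed point of the workload W(t) = sum_i ceil(t/T_i) * C_i:
    # R is the first instant at which no outstanding computation exists.
    t = sum(C for (C, _, _) in tasks)
    while True:
        nxt = sum(math.ceil(t / T) * C for (C, _, T) in tasks)
        if nxt == t:
            return t
        t = nxt

def edf_feasible(tasks):
    """tasks: list of (C, D, T) = (WCET, deadline, period), integer time units."""
    if sum(C / T for (C, _, T) in tasks) > 1:
        return False                      # overloaded: the iteration would diverge
    R = initial_critical_interval(tasks)
    deadlines = {d for (_, D, T) in tasks for d in range(D, R, T)}
    return all(demand(tasks, d) <= d for d in deadlines)
```

For example, `edf_feasible([(2, 3, 6), (2, 4, 8)])` holds, while `edf_feasible([(2, 2, 10), (2, 3, 10)])` fails: at t = 3 both jobs must have finished, demanding 4 time units in only 3.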
The end-to-end schedulability test consists in verifying the fulfillment of the message deadlines, for each hop of every control loop, by

    H_τN(t) ≤ t, ∀t ∈ [0, R_N)    (2)

The local and end-to-end schedulability tests are applied independently to the system scenarios, and the processor frequency and voltage values for each scenario are stored in the local memory of the nodes.

IV. DESIGN PROCEDURE FOR WSAN

To present the proposed design procedure, a case study with five nodes, on which two NCS are implemented, is considered (figure 2).

Fig. 2. Case study considered

A. General Assumptions

To simplify the solution space of the problem, homogeneous nodes with two processor operation frequencies, low and high, are assumed. The task WCET for the low frequency is considered given, and the high-frequency WCET is estimated from it assuming a scaling factor λ of 10; therefore f_high = 10·f_low and WCET_high = WCET_low / 10. To calculate the WCET of the sensor tasks at the high frequency, a value representing the ADC response time, which does not scale, is added. As a consequence of the long time required to evaluate schedulability and configure the system, these tests are performed off-line; all system scenarios and their configuration parameters are therefore known and stored in the memory of the nodes, so that changes can be applied on-line. New scenarios are generated by changes in the task modes, creation and destruction of tasks, changes in the number and location of nodes, and broken links or interference. The context switch time is included in the task execution time. Choosing the best system configuration for any scenario requires an optimization process. So that the tasks of each scenario can be started on the appropriate node, each node is considered to hold a replica of every task. The network parameters are:
- Frame size of 144 bits

- Data rate of 250 kbps
- Guaranteed time slots of 1 ms in a TDMA-based network
- The time slots are repeated in the TDMA network every 8 ms

B. NCS Assumptions

Most published results on NCS analyze the system performance mainly on the basis of closed-loop stability. Although this is a basic requirement, in many practical cases the desired performance is specified in terms of the transient response of the control system. For this reason, this paper uses a linear model to analyze the transient response of the closed-loop system for a constant delay. The assumptions about the system are:
- Single input, single output.
- The Sensor task is time-driven, with sampling period T_S, while the Controller and Actuator tasks are event-driven; the system is therefore synchronized and it is possible to consider a single delay located in the forward path (figure 3).
- The delay from the start of the Sensor task until the Actuator task finishes is less than or equal to T_S.

Fig. 3. Control system representation

The network delay can be modelled as e^(−τ_r·s), with τ_r = (1 − m)·T_S, 0 ≤ m ≤ 1, where τ_r represents the latency from the start of the Sensor task until the Actuator task finishes. Then, using the modified Z-transform, it is possible to find a transfer function to analyze the system performance for a specific delay:

    H(Z, m) = Gc(Z)·Gp'(Z, m) / (1 + Gc(Z)·Gp'(Z, m))    (3)

For the case study Gp(s) = 0.3/(0.86·s + 1), so that, with a zero-order hold,

    Gp'(Z, m) = 0.3 · [(1 − e^(−m·T_S/0.86))·z + (e^(−m·T_S/0.86) − e^(−T_S/0.86))] / [z·(z − e^(−T_S/0.86))]

In order to reduce the settling time the following PI control algorithm was used:

    Gc(z) = [kp·z + (ki·T_S − kp)] / (z − 1)

It is assumed that for each control loop the accepted performance is the one shown in figure 4, generated with T_S = 40 ms and τ_r = 40 ms, so D_CGR = 40 ms. The approach is as follows:
- Each control loop requires three tasks: Sensor, Controller and Actuator.
- Tasks Sensor_1, Sensor_2, Actuator_1 and Actuator_2 are preassigned to nodes 1, 2, 4 and 5 respectively, while Controller_1 and Controller_2 can be executed on any of the 5 nodes.
- The WCETs are equal for the tasks of both control loops; they are presented in Table 1.

Fig. 4. Control system response to a square input signal, with T_S = 40 ms and τ_r = 40 ms.

C. Cost Functions

The objectives of the optimization procedure are related to the design quality of the computer system and to the quality of the response generated by the control algorithms. Their individual weights depend on the peculiarities of the context of a specific application. To analyze each of the targets, two cost functions are used: F(Co)_S_Comp, to assess the cost of a configuration taking into account parameters related to the computer system, and F(Co)_Control, to evaluate the control quality parameters that affect the dynamic characteristics of the control system. Each of them has the form

    F(Co) = Σ_i [ K_i · C_i(Co) / C_i⁰ + K_Ci · F_C(C_i⁰, C_i(Co)) ]

where K_i is the weight assigned to parameter i, C_i(Co) is the value obtained by a given configuration Co, C_i⁰ is the design constraint applied to the i-th parameter and is used as a normalization factor, K_Ci is the weight assigned to the correction factor, and F_C is a correction function. The correction function does not contribute to the cost function when the solution is within the allowed search space. The expression of the correction function is

    F_C(C_i⁰, C_i(Co)) = r[C_i⁰, C_i(Co)]²

where

    r[C_i⁰, C_i(Co)] = max(0, (C_i(Co) − C_i⁰) / C_i⁰)

TABLE 1
Task           WCET (ms), low frequency    WCET (ms), high frequency
Sensor_x
Controller_x
Actuator_x
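To see how the transfer function (3) behaves, a short discrete-time simulation of the closed loop can be sketched. The paper does not give the kp and ki values, so the gains below are illustrative assumptions (chosen so the PI zero cancels the discrete plant pole); everything else follows Gp(s), T_S and the delay model above:

```python
import math

# Step response of the case-study loop: plant G(s) = 0.3/(0.86 s + 1), sampled
# at Ts = 40 ms, with the full one-sample network delay (m = 0, tau_r = Ts)
# folded in via the modified Z-transform. kp, ki are NOT given in the paper;
# the values below are illustrative assumptions.
K, tau, Ts, m = 0.3, 0.86, 0.040, 0.0
a = 1.0 / tau
p = math.exp(-a * Ts)                      # discrete plant pole
b0 = K * (1.0 - math.exp(-a * m * Ts))     # = 0 for m = 0 (full-sample delay)
b1 = K * (math.exp(-a * m * Ts) - p)

kp = 0.25 / b1                             # places both closed-loop poles at z = 0.5
ki = kp * (1.0 - p) / Ts                   # PI zero cancels the plant pole
q = ki * Ts - kp                           # Gc(z) = (kp z + q)/(z - 1)

y, u, e_prev, r = [0.0, 0.0], [0.0, 0.0], 0.0, 1.0
for _ in range(200):                       # 200 samples = 8 s
    yk = p * y[-1] + b0 * u[-1] + b1 * u[-2]   # plant difference equation
    ek = r - yk
    uk = u[-1] + kp * ek + q * e_prev          # PI difference equation
    y.append(yk); u.append(uk); e_prev = ek

print(round(y[-1], 3))                     # -> 1.0 : tracks the step despite the delay
```

With these gains the closed loop settles within a few samples; larger kp or ki can destabilize the loop, which is why the delay τ_r must be accounted for when tuning.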

The target for the computer system is to generate an optimal solution that increases the lifetime of the WSAN. In the approach used in this work, since the task WCETs and the node operation frequencies are known and the power consumption is directly related to these parameters, an indirect analysis of the power consumption of the different system configurations is performed, using the following indexes:
- fopMj, the average of the node operation frequencies for configuration j of the WSAN.
- ΔfopDEj, the standard deviation of the node operation frequencies.
- ΔUPEDEj, the standard deviation of the index that relates the utilization of the main processor to the energy of each node. For node j, which executes n tasks, this index is

      UPE = (Σ_{i=1}^{n} WCET_i / T_i) / E_j

  where E_j is the energy of node j.
- ΔUCEDEj, the standard deviation of the index that relates the utilization of the co-processor to the energy of each node. For node j, through which n messages circulate, this index is

      UNE = (Σ_{i=1}^{n} slot / T_Si) / E_j

Based on the evaluation of the power consumed in the different operation modes of the RFM TR1000, Chipcon CC1000 and Chipcon CC2420 transceivers presented in [36], an equal power consumption for receiving and transmitting messages is considered.

Regarding the performance of the control system, two parameters were considered:
- rM, the average of the delays τ_r of each control loop, measured from the instant the Sensor task is activated until the Actuator task is completed.
- ΔrDE, the standard deviation of the delays τ_r of each control loop; ideally it should tend to zero so that its value can be easily compensated by the control algorithm.
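The cost-function form given above — a weighted, normalized sum per parameter plus a quadratic correction that is active only outside the constraint — can be sketched as follows (illustrative names; the parameter/weight values are examples, not the paper's):

```python
def config_cost(params, weights, corr_weights):
    """Cost of one configuration Co.
    params maps index name -> (value Ci(Co), constraint Ci0);
    F = sum_i [ K_i * Ci(Co)/Ci0 + Kc_i * max(0, (Ci(Co)-Ci0)/Ci0)**2 ],
    so the correction term is zero while a value stays inside its constraint."""
    total = 0.0
    for name, (value, limit) in params.items():
        r = max(0.0, (value - limit) / limit)     # normalized constraint violation
        total += weights[name] * value / limit + corr_weights[name] * r ** 2
    return total

# In-budget configuration: only the weighted, normalized term contributes.
ok = config_cost({"fopM": (0.5, 1.0)}, {"fopM": 0.4}, {"fopM": 150})
# Out-of-budget: the large correction weight dominates, steering the optimizer away.
bad = config_cost({"fopM": (1.2, 1.0)}, {"fopM": 0.4}, {"fopM": 150})
print(round(ok, 2), round(bad, 2))   # -> 0.2 6.48
```

The large correction weight (150 in the paper) makes constraint violations overwhelm the nominal terms, so infeasible regions of the search space are effectively penalized rather than hard-excluded.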
Maximum operation values are selected according to the architecture of the selected components, the application constraints and the parameters of each scenario. These values restrict the choice of the best system configuration for any scenario.

D. Design Procedure

In the proposed case there is only one scenario in which the control applications are running. The procedure consists of two phases:
- Phase 1: verify local and end-to-end schedulability, equations (1) and (2), for every possible system configuration.
- Phase 2: for each scenario, choose the configuration that minimizes the cost function, considering only the configurations that fulfill local and end-to-end schedulability.

To obtain the temporal parameters of every system configuration, needed to apply the cost functions, the case was simulated using TrueTime [37]. The values assigned to the parameters were:
- Computer system: K_fopMj = 0.4, K_ΔfopDEj = 0.2, K_ΔUPEDEj = 0.2, K_ΔUCEDEj = 0.2, K_ci = 150. This gives more importance to fopMj (solutions with lower frequencies).
- Control system: K_rM1,2 = 0.35, K_ΔrDE1,2 = …, K_c = 150. This gives more importance to rM, because this parameter has the greatest effect on the dynamic response of the system.

864 configurations were obtained, of which 272 fulfill the end-to-end deadlines. The optimization of both criteria is a multiobjective problem; to find the optimal solution considering both criteria, a linear combination of weights was used:

    F(Co) = K_S_Comp · F(Co)_S_Comp + K_Control · F(Co)_Control

where K_S_Comp and K_Control are the weights assigned to each kind of parameter and depend on the application. The coefficient values used were K_S_Comp = 0.4 and K_Control = 0.6. The optimal configuration is:
- All nodes, except node 1, use the high processor operation frequency.
- Controller_1 runs on node 3 and Controller_2 on node 4.
- The message routes for each control loop are: Loop 1: …; Loop 2: …
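The two-phase procedure — discard unschedulable configurations, then minimize the weighted combination of both cost functions — can be sketched as an exhaustive search (illustrative names and example numbers, not the paper's 864 configurations):

```python
def best_configuration(configs, k_comp=0.4, k_control=0.6):
    # Phase 1: keep only configurations passing the schedulability tests (1)-(2).
    feasible = [c for c in configs if c["schedulable"]]
    # Phase 2: minimize the weighted combination of both cost functions.
    return min(feasible, key=lambda c: k_comp * c["f_comp"] + k_control * c["f_control"])

configs = [
    {"name": "A", "schedulable": True,  "f_comp": 0.5, "f_control": 0.7},  # 0.62
    {"name": "B", "schedulable": True,  "f_comp": 0.9, "f_control": 0.2},  # 0.48
    {"name": "C", "schedulable": False, "f_comp": 0.1, "f_control": 0.1},  # discarded
]
print(best_configuration(configs)["name"])   # -> B
```

Note that C, despite its lowest raw costs, is never considered: schedulability acts as a hard filter before the weighted comparison, exactly as in the two phases above.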
V. SIMULATION RESULTS

[38] presents the following power consumption model for XScale processors:

    P = P_static + P_dynamic  =>  P = V·I_leakage + C_L·V_dd²·f

The model used for the simulation, considering an equal scaling factor for frequency and voltage, is: during the execution time of each task, P = (f_operation / f_low)³, where f_operation is the frequency used by the node at that moment; once the ICI interval finishes, the processor switches to a low-power state. The simulation results match the analysis performed. Figure 5 presents the energy consumption of the nodes that execute the control tasks when the optimal configuration is used.

Fig. 5a. Energy in node 4, optimal configuration. Fig. 5b. Energy in node 3, optimal configuration. Fig. 5. Energy consumption of the nodes that execute the control tasks when using the optimal configuration.

As an additional strategy to increase the energy savings, the performance of the control loops was analyzed with loop 1 time-triggered and loop 2 event-based, following the proposal in [39]. The response of the control loops is shown in figure 6. Both strategies fulfill the specifications, and the lifetime in control loop 2 increased from 4.5 s (figure 5a) to 9 s (figure 6b), as a result of the significant reduction in the use of the processors and the network achieved with the event-based control strategy. However, as a result of the event-based implementation there is a small overshoot in the response of control loop 2 relative to control loop 1.

Fig. 6a. Response of control loop 1. Fig. 6b. Response of control loop 2. Fig. 6. Simulation results: control loop 1 time-triggered and control loop 2 event-based.

For the time-triggered control loop T_S = 40 ms was considered, while in control loop 2 an event was triggered when

    |e_k − e_s| ≥ 0.1   or   t_k − t_s ≥ 120 ms

where e_k and e_s represent the current error and the error when the last control signal was computed, and t_k and t_s represent the current time and the time when the last control signal was computed, respectively.
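The trigger condition of the event-based loop can be sketched directly (illustrative function name; thresholds taken from the case study):

```python
def event_triggered(e_k, e_s, t_k, t_s, delta=0.1, t_max=0.120):
    """Event trigger of control loop 2: compute a new control update when the
    error has drifted by delta since the last update, or after t_max at the
    latest. The sensor still samples every 40 ms."""
    return abs(e_k - e_s) >= delta or (t_k - t_s) >= t_max

# Transient: the error moves quickly, so each 40 ms sample fires an event.
print(event_triggered(e_k=0.45, e_s=0.30, t_k=0.04, t_s=0.00))   # -> True
# Steady state: the error barely moves; nothing is sent until 120 ms elapse.
print(event_triggered(e_k=0.31, e_s=0.30, t_k=0.08, t_s=0.00))   # -> False
print(event_triggered(e_k=0.31, e_s=0.30, t_k=0.12, t_s=0.00))   # -> True
```

The t_max bound guarantees a minimum update rate in steady state, while the error band delta is what suppresses redundant transmissions and yields the processor and network savings reported above.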
The maximum difference allowed between these values depends on the system bandwidth and equals the maximum period at which the signal must be sampled. The minimum sampling period is 40 ms. In this way the event-based control loop acts on the system every 40 ms during transient states and waits up to 120 ms in stationary states, thus achieving a significant reduction in the use of the processors and the network.

VI. CONCLUSION

This paper introduced a node architecture and a method for analyzing end-to-end schedulability on WSAN, using a TDMA-based network with GTS and an EDF policy both for scheduling tasks and for sending the messages of each node. Additionally, static frequency and voltage scaling is applied for each scenario. A method to obtain the optimal configuration of the system, supported by cost functions related to power consumption and control-loop delays, was presented. The parameters used in the cost functions depend on the application characteristics. The results show the importance of balancing these parameters during design in order to save energy and minimize the delays in the system. In addition, with the proposed design method the control specifications were fulfilled. At the same time, it was seen that event-based control strategies help to reduce the use of the processors and the network, and therefore increase the application lifetime.

Future work will focus on the following points. Design tools and simulation environments offer an alternative for observing system performance, but because of the large number of nodes in the network and their complex architectures it is extremely easy for a human designer to miss important interaction patterns when designing such a system, leading to gaps or malfunctions in the design; it is therefore important to propose formal models that facilitate the analysis and design of these applications. The consideration of all possible configurations for every

system scenarios in the optimization process makes it possible to obtain the best configuration, but it requires a long time; it is therefore important to automate the analysis process and integrate heuristics to reduce the number of options examined.

REFERENCES

[1] Hristu-Varsakelis D., Levine W. S. (Eds.): Handbook of Networked and Embedded Control Systems. Birkhäuser.
[2] Lee S., Park J. H., Ha K. N., Lee K. C. Wireless Networked Control System Using NDIS-based Four-Layer Architecture for IEEE 802.11b. IEEE International Workshop on Factory Communication Systems, May.
[3] Pantazis N. A., Vergados D. D. A survey on power control issues in wireless sensor networks. IEEE Communications Surveys & Tutorials, vol. 9, no. 4, 4th Quarter.
[4] Moraes R., Portugal P., Vasques F., Fonseca J. A. Limitations of the IEEE 802.11e EDCA Protocol when Supporting Real-Time Communication. IEEE International Workshop on Factory Communication Systems, May.
[5] Cena G., Bertolotti I. C., Zunino C., Valenzano A. Industrial Applications of IEEE 802.11e WLANs. IEEE International Workshop on Factory Communication Systems, May.
[6] ZigBee Specification.
[7] WirelessHART datasheet.
[8] Martínez D., Blanes F., Simo J., Crespo A. Evaluación del Comportamiento Temporal de Sistemas Distribuidos de Control Sobre IEEE 802.15.4 y CAN. 21st Symposium on Integrated Circuits and Systems Design, Workshop on Sensor Networks and Applications. Gramado, Brazil, September.
[9] Koubâa A., Alves M., Tovar E. GTS Allocation Analysis in IEEE 802.15.4 for Real-Time Wireless Sensor Networks. 14th International Workshop on Parallel and Distributed Real-Time Systems (WPDRTS 2006). Rhodes Island, Greece: IEEE.
[10] Koubâa A., Cunha A., Alves M. A Time Division Beacon Scheduling Mechanism for IEEE 802.15.4/Zigbee Cluster-Tree Wireless Sensor Networks. 19th Euromicro Conference on Real-Time Systems (ECRTS '07). IEEE Computer Society.
[11] Lennvall T., Svensson S., Hekland F. A Comparison of WirelessHART and Zigbee for Industrial Applications. IEEE International Workshop on Factory Communication Systems, May.
[12] Spuri M. Holistic Analysis for Deadline Scheduled Real-Time Distributed Systems. Tech. Rep. RR-2873, INRIA, France, April.
[13] Audsley N., Burns A., Richardson M., Tindell K., Wellings A. J. Applying New Scheduling Theory to Static Priority Pre-emptive Scheduling. Software Engineering Journal, September.
[14] Tindell K., Clark J. Holistic Schedulability Analysis for Distributed Hard Real-Time Systems. Microprocessors and Microprogramming 40.
[15] Tindell K., Burns A., Wellings A. J. Analysis of Hard Real-Time Communications. The Journal of Real-Time Systems 9.
[16] Pereira N., Tovar E., Andersson B. Exact Analysis of TDMA with Slot Skipping. 13th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2007). IEEE Computer Society.
[17] TinyOS.
[18] Mejia-Alvarez P., Levner E., Mosse D. Power-Optimized Scheduling Server for Real-Time Tasks. Proc. IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 02), p. 239.
[19] Saewong S., Rajkumar R. Practical Voltage-Scaling for Fixed-Priority RT-Systems. Proc. 9th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 03), 2003.
[20] Marinoni M., Buttazzo G. Elastic DVS Management in Processors With Discrete Voltage/Frequency Modes. IEEE Transactions on Industrial Informatics, vol. 3, no. 1.
[21] Zhu Y., Mueller F. Feedback EDF Scheduling Exploiting Dynamic Voltage Scaling. Proc. 10th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 04), 2004.
[22] Pillai P., Shin K. G. Real-Time Dynamic Voltage Scaling for Low-Power Embedded Operating Systems. Proc. ACM Symposium on Operating Systems Principles.
[23] Walsh G. C., Beldiman O., Bushnell L. Asymptotic Behavior of Networked Control Systems. International Conference on Control Applications. Hawaii, USA, August.
[24] Walsh G. C., Ye H., Bushnell L. Stability analysis of networked control systems. IEEE Transactions on Control Systems Technology, vol. 10, no. 5.
[25] Yang T. C. Networked control system: a brief survey. IEE Proc. Control Theory Appl., vol. 153, no. 4, July.
[26] Hespanha J. P., Xu Y. A Survey of Recent Results in Networked Control Systems. Proceedings of the IEEE, vol. 95, no. 1, January.
[27] Tabbara M., Nesic D., Teel A. R. Stability of Wireless and Wireline Networked Control Systems. IEEE Transactions on Automatic Control, vol. 52, September.
[28] Hu S., Yan W. Stability of Networked Control Systems Under a Multiple-Packet Transmission Policy. IEEE Transactions on Automatic Control, vol. 53, August.
[29] Huang D., Nguang S. K. State Feedback Control of Uncertain Networked Control Systems With Random Time Delays. IEEE Transactions on Automatic Control, vol. 53, April.
[30] Zhang Y., Sun J., Feng G. Impulsive Control of Discrete Systems With Time Delay. IEEE Transactions on Automatic Control, vol. 54, April.
[31] Xiong J., Lam J. Stabilization of Networked Control Systems With a Logic ZOH. IEEE Transactions on Automatic Control, vol. 54, February.
[32] Dormido S., Sánchez J., Kofman E. Muestreo, control y comunicación basados en eventos. Revista Iberoamericana de Automática e Informática Industrial, vol. 5, no. 1, pp. 5-26.
[33] Martínez D., Blanes F., Simo J., Crespo A. Wireless sensor and actuator networks: Characterization and case study for confined spaces healthcare applications. Proceedings of the International Multiconference on Computer Science and Information Technology.
[34] Ripoll I., Crespo A., Mok A. Improvement in feasibility testing for real-time tasks. Journal of Real-Time Systems, 11:19-40.
[35] Balbastre P., Ripoll I., Crespo A. Minimum Deadline Calculation for Periodic Real-Time Tasks in Dynamic Priority Systems. IEEE Transactions on Computers, vol. 57, no. 1, January.
[36] Li Y., Thai M. T., Wu W. Wireless Sensor Networks and Applications. Springer.
[37] Cervin A., Henriksson D., Lincoln B., Eker J., Årzén K. How Does Control Timing Affect Performance? Analysis and Simulation of Timing Using Jitterbug and TrueTime. IEEE Control Systems Magazine, June.
[38] Varma A., Debes E., Kozintsev I., Jacob B. Instruction-Level Power Dissipation in the Intel XScale Embedded Microprocessor. Proceedings of the SPIE, 17th Annual Symposium on Electronic Imaging Science & Technology, vol. 5683.
[39] Årzén K.-E. A simple event-based PID controller. Proceedings of the 14th IFAC World Congress, vol. 18. Beijing, China.

Wireless Sensors Network for the Cerro Corona Tailings Pipeline System

Rocío Beatriz Ayala Meza¹ - Diego Alberto Aracena Pizarro²

Abstract An Industrial Wireless Architecture, based on a low-power self-organizing sensor network (IEEE 802.15.4), to transport data and video for the Cerro Corona Tailings Pipeline System in Cajamarca, Perú, was designed and simulated. In order to have a preliminary view for decision making, modeling and simulation of the DCS (Distributed Control System) architecture was considered. For this purpose, the commercial simulator QUALNET from Scalable Network Technologies was used, since it easily embeds and aggregates protocols in order to obtain results on sensor data communication and on operational reliability for monitoring and control in an industrial environment. This paper shows the applicability and reliability of wireless networks in this environment and, according to the results, cost and time savings for the mining company: the amount of cable and the manpower required are reduced, and a faster startup is attained.

Key Words: IEEE 802.15.4, Simulation, Pipeline, DCS

I. INTRODUCTION

Wireless sensor networks consist of a large number of small autonomous devices physically distributed in an environment and installed around a phenomenon to be monitored, with the ability to store and communicate data in a distributed network. The most important characteristics of these distributed sensing devices are easy integration with other technologies, the ability of applications to interact with humans and the environment, low resource requirements, etc. Reported applications are in the fields of agriculture, smart homes, military technology, construction, industry, emergency offices (ONEMI - National Bureau of Emergency - Ministry of Interior of Chile), fire brigades, etc., and their potential is broad because they can collect information in inaccessible and hostile environments.
The collected data is used to control and detect failures in processes so that a thorough control of the system can be done. Some examples of existing applications involve sensing materials for critical infrastructures such as tunnels, mines, pipelines and underground structures. Mohanty [1] presents an application to track miners' activities and monitor the actual conditions of mines. Chehri et al. [2] present a study on the need to monitor the physical conditions of mines. Stoianov [3] reports a way to monitor the physical condition of pipes by monitoring their vibrations and water level to identify potential gaps and leaks. In addition, organizations such as the Commonwealth Scientific and Industrial Research Organisation (CSIRO [4]) and the Center for Sensed Critical Infrastructure Research (CenSCIR [5]) are developing industrial applications to monitor machinery and/or the environment where it operates.

The paper begins with an overview of the tailings pipeline system, followed by a description of Zigbee's advantages over other protocols, continues with the design and modeling characteristics of the simulation, and concludes with a cost estimation and relevant ideas for future research.

Manuscript received August 4. This work was supported in part by PSI Perú SAC, which was in charge of the Cerro Corona Tailings Pipeline System engineering design, by providing documentation; and by the mining company GoldFields La Cima S.A., which allowed access to its installations. ¹ is with the Electrical-Electronic Engineering School, Universidad de Tarapacá, P.O. Box 6-D, Arica, Chile; on leave from Deus Corporation, Urb. Sta. Beatriz A-3 Yanahuara, Arequipa, Perú. ² is with the Industrial, Informatics and Systems Engineering School, Universidad de Tarapacá, P.O. Box 6-D, Arica, Chile.

II. TAILINGS PIPELINE SYSTEM OVERVIEW

GoldFields developed an open-pit gold-copper mine located in Hualgayoc province in Cajamarca, to the north of Lima, Perú.
For the Cerro Corona Project, the use of a Tailings Disposal System for water reclaim and recirculation processes was considered. The original design of the communications system is based on a traditional DCS (Distributed Control System) architecture with three levels: plant network, control network and field network. In the field network, which is the case under study, kilometers of different types of cables were used, and all the cabling meant a high investment in money and time. In money, because of the large quantity of cables, conduits, electric trays and labor needed; and in time, because too much is spent on engineering drafts and on the arrival and installation of cables. Furthermore, there must always be good documentation and maintenance schedules to prevent possible data loss. Another problem was that no redundancy for field instruments was designed, so communication problems might have arisen. A wireless design for the field network had always been unacceptable, until now. Currently there are many wireless technologies [1][2] and protocols; choosing the right one determines the advantages over wired networks, which is demonstrated in this paper using the Qualnet simulator.

The Cerro Corona Tailings Pipeline System has two systems, which are monitored and controlled from the control room (see Fig. 1).

A. Rougher-Scavenger Tailings (RST) and Cleaner-Scavenger Tailings (CST) Disposal System - Areas 1100 and 1150

In the RST subsystem the slurry is filtered to produce a thickened discharge and recover water. The thickened slurry then flows through the RST disposal line to the impounding dam, and the recovered water is recycled back to the concentrator. In the CST subsystem the slurry flows directly to the impounding dam. It is deposited below the water line to prevent CST solids from reacting with air and producing acid. The flow in both lines goes through the piping system, which follows the contour of the terrain. There are concrete boxes and drop pipes along the pipeline to minimize its weakening by reducing the slurry speed and pressure (see Fig. 2 for the modeling in QUALNET).

B. Water Reclaim System

The principal objectives are to provide a continuous and sufficient supply of water to the concentrator, and to preserve the environment by capturing seepage water. It has two subsystems: Areas 1500, 1550 and 1600, which include the pumps and reservoirs connecting the impounding dam with the process water pond; and Area 1700, which connects the seepage ponds with the impounding dam. These ponds capture water that seeps through the rock of the impounding dam (see Fig. 3 for latency and jitter values in QUALNET, and Fig. 4 for percentages of time and battery charge values in QUALNET).

Fig. 2. The subnet covers an approximate area of m². Its configuration is a hybrid star-mesh topology. Seven routers were used to link the sensors with the coordinator, which is next to the Control Room.

C. Control and Supervision of Field Instruments

For all areas a DeltaV system was employed, and therefore many kilometers of various types of cables for the several industrial network protocols (Profibus DP, Foundation Fieldbus, DeviceNet, etc.) had been used.

Fig. 1. Cerro Corona Tailings Pipeline System

Fig. 3. Latency values are in blue and jitter values in purple. The three areas show reliability for monitoring applications, and also for control and alarming, which are the most difficult and important in an automation system.

Fig. 4. Percentages of time in TX (blue), RX (green), Idle (black) and Sleep (orange), and battery charge values (red).

III. WIRELESS COMMUNICATION SYSTEMS FOR THE FIELD NETWORK

In the process industry there are many applications with different requirements to consider in their design, such as security, reliability, scalability, power management, data movement, cost [6] and standardization; there are also various wireless technologies to choose from [7][8][9][10][11]. [7] and [8] belong to Emerson and Honeywell, which have been pioneers in automation and control for the process industries for more than 30 years, so their technical solutions will definitely be reflected in the market. [11] was the first protocol to be standardized, and that is why it is also the most used in many industries. [10] and [9] are quite different from the former three: [10] is the only one in which every node uses a battery, and [9] is the only one in which every node will use IP addresses. So, for the wireless simulation of the Cerro Corona Tailings Pipeline System, the Zigbee [11] protocol was chosen (see Table 1). In Security, besides encryption, it uses key management [12]; both are very important characteristics, taking

into account that the Project is surrounded by communities, that not every worker should have access to instrument configuration, and that new people, for example vendors, constantly arrive at the mine. It offers Reliability even though the whole network depends only on the Coordinators: to avoid environmental interference the entire network can change its working frequency (the current QUALNET version does not support this characteristic yet), and the routing it employs guarantees control applications. Its Scalability, in number of nodes, is the best among the other technologies; this allows the use of more sensors (a higher level of supervision) and future expansions. In Data Movement, the problem arises if too much route discovery is needed, because this means heavy traffic; however, once the routes are defined the latency becomes deterministic and the throughput improves. Regarding Power Management, the disadvantage is that only End Devices can use batteries. Zigbee is used in numerous markets, and that is why its chipset has many vendors, which reduces its Cost.

IV. DESIGN AND MODELING CHARACTERISTICS

Since no wiring is employed in the wireless design, the use of antennas and radios stands out. All the antennas are omnidirectional and the radios work on the 2.4 GHz band. The location of the field instruments (sensors and transducers) stays the same as in the original design. Corrosion sensors, vibration sensors and cameras have also been added. For the Full Function Devices, the maximum range (100 meters) supported by Zigbee is used. All field instruments use the MicaZ [13] energy model, which is one of the most widely used.
- Corrosion sensors. To foresee spillage, pipe thickness is monitored. Stoianov in [3] puts forward another way to monitor the physical state of pipes, by controlling their vibrations and the water level to determine possible gaps and leaks.
- Vibration sensors. They provide real-time information about the operating conditions of any equipment. One has been used in every pump.
- Cameras.
For monitoring and tracking inside the plant. They have a programmed time of operation to save battery energy. Yan et al. in [14] and Barton-Sweeney et al. in [15] present a distributed network of low-cost intelligent cameras [16].

All sensors are powered by batteries, because weather conditions make the use of solar or thermal power sources infeasible. The battery model is linear: the battery charge is obtained by subtracting the number of dissipated coulombs from the initial charge. To access the sensors for configuration and maintenance, laptops are used. These become part of the network, but must use a Zigbee card (which is cheaper than buying a PDA, Personal Digital Assistant). The routing protocol used is AODV (Ad Hoc On-demand Distance Vector). This is a reactive protocol, because routes are established on demand, an important characteristic for saving energy. To simulate traffic from sensors, routers or switches (client nodes) to the coordinator (server node), a CBR (Constant Bit Rate) generator is used. The traffic rate is 100 packets of 512 bytes (4 Kb) each, sent every second from the start to the end of the simulation (30 seconds).

Table 1. WPAN (Wireless Personal Area Network) protocols evaluation. *Unknown parameters, because the protocols are still in development.
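The linear battery model just described (remaining charge equals the initial charge minus the dissipated coulombs) can be sketched as follows. This is an illustrative fragment, not QualNet's implementation; the function name and the numbers in the example are our own:

```python
def remaining_charge(initial_coulombs, current_amps, seconds):
    """Linear battery model: subtract dissipated coulombs from the initial charge."""
    dissipated = current_amps * seconds  # 1 A flowing for 1 s dissipates 1 coulomb
    return max(initial_coulombs - dissipated, 0.0)

# Example: a node drawing 20 mA for one hour from a 7200 C (2000 mAh) battery
print(remaining_charge(7200.0, 0.020, 3600))  # 7128.0
```

Monitoring this value every 10 seconds, as done in the simulation, yields a straight line whose slope is the node's average current draw.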

A. Evaluation Parameters

To test the performance of the network, the following are measured for each area (see Table 2, Table 3, Table 4):
- Percentage of time in the TX, RX, Idle and Sleep states. In Idle, the node is waiting for any message to transmit (TX) or receive (RX). Nodes must remain in Sleep most of the time, compared to the other states, to save energy.
- Battery charge. It is monitored every 10 seconds.
- Throughput. As TX = 4 Kbps, an equivalent throughput proves that the networks are reliable.
- Latency. For control and monitoring, it must be low or nonexistent. Most control loops last 1 second or more, which is why a latency of less than 500 ms is acceptable [7]. For monitoring, it is not a condition that determines the reliability of the network, because those applications tolerate more than 30 seconds.
- Jitter. To check latency changes.

The single-mode fiber optic link (1 Gbps full-duplex) from the original design is kept as the backbone.

Table 2. System Performance - Areas 1100 and 1150
Table 3. System Performance - Areas 1500, 1550 and 1600. *The addition of each area gives the final throughput value for the control room switch.
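The state-time percentages above translate directly into an average current draw, which is what drives the battery charge down. The sketch below is our illustration; the per-state currents are rough MicaZ-class figures assumed by us, not values taken from the paper or from the QualNet model:

```python
# Assumed per-state current draw in mA (rough MicaZ-class figures, illustrative only)
CURRENTS_MA = {"TX": 17.4, "RX": 19.7, "Idle": 0.02, "Sleep": 0.001}

def avg_current_ma(pct_time):
    """Average current given the percentage of time spent in each state."""
    assert abs(sum(pct_time.values()) - 100.0) < 1e-6, "percentages must sum to 100"
    return sum(CURRENTS_MA[state] * pct / 100.0 for state, pct in pct_time.items())

# A node that sleeps 90% of the time draws well under 1 mA on average
print(round(avg_current_ma({"TX": 2, "RX": 3, "Idle": 5, "Sleep": 90}), 3))  # 0.941
```

This is why the evaluation insists that nodes remain in Sleep most of the time: the TX and RX states dominate the average even at small time shares.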

Table 4. System Performance - Area 1700

B. Cost Estimation

When comparing both tables (see Table 5 and Table 6), the difference in total price stands out (labor was not considered). Although routers and coordinators need a continuous power source, there is still a money saving, because 5 VDC is needed rather than the typical 120 VAC.

V. DISCUSSION AND COMMENTS

For the simulation, battery use was considered in routers and coordinators; however, these actually require a continuous power source, which is why they showed the highest values in battery consumption and null percentages in the Sleep state. In any case, power supplies for these were considered in the price estimation. The throughput values are the same in all areas because the transmission rate is 100 packets of 512 bytes each, every second, until the end of the simulation. When using this heavy rate for an industrial network, the electromagnetic interference and noise usually found in these environments were also considered. The normal rate is 100 packets of 128 bytes each, every second. Latency, in all areas, is well below the acceptable limit [7]. A reason could be that no control loops were considered in the simulation. The highest latency occurs in the area where many sensors' traffic is centralized in a Coordinator and no Routers were used (point-to-point communication); areas 1700, 1550 and 1600 follow. With that arrangement there are more collision probabilities under heavy traffic. For areas 1550 and 1600 the circumstance is the same, but these are closer to the control room and have fewer sensors. For area 1700, the distance to the control room is the farthest, even though its latency is not the highest; this demonstrates that the more routers used as backbone, the more reliable the network. Another example is area 1100, which also uses routers as backbone but is closer to the control room, which explains its low latency and nonexistent jitter.
Regarding savings in the system startup: they arise because much time is spent on engineering design (cable type, quantity, routing and circuit interconnection), besides the laying of electric trays and conduits.

Table 5. Cable price in dollars for the wired network (Cable Type / Length (m) / Price ($)):
- BELDEN 3079A, purple PVC jacket
- BELDEN 3076F, orange PVC jacket
- 5/c 14 AWG 600 VAC XLPE (5x14AWGMULTI)
- 3/c 14 AWG 600 VAC XLPE (3x14AWGMULTI)
- BELDEN 3082A, single-mode fibres
- BELDEN 8760, gray PVC jacket
TOTAL: $13,080

Table 6. Materials price in dollars for the wireless network (Materials / Quantity / Price ($)):
- Zigbee Chipset, 99 u.
- Zigbee Routers, 20 u.
- Zigbee Coordinators, 5 u., 900
- USB Zigbee for laptops to access the network, 7 u., 280
- Power Supply 5V-7V, 25 u.
- 3/c 14 AWG 600 VAC XLPE (3x14AWGMULTI), m
TOTAL ($): 9300

VI. CONCLUSION AND FUTURE WORK

The WPAN Zigbee protocol was shown to be an alternative for implementing industrial wireless sensor networks in the Cerro Corona Project; it would have led to the use of fewer resources (cables, conduits, workforce, etc.) and to a fast startup of operations, while also demonstrating that a wireless network offers reliability for monitoring, alarming and control in an industrial environment.

For a higher level of redundancy, two Coordinators could have been used for each area, so that if either of them fails, the secondary one takes control and the network continues working. As backbone network, IEEE 802.11s could have been used instead of fiber optics; 802.11s is another choice besides the Apprion or Honeywell infrastructure networks. WPAN, WLAN and WWAN technologies converge on both of these for the industrial environment.

Nowadays, simulators are not employed in the design of wired industrial field networks, because their traffic is not as heavy as in administrative networks. However, it would be important to consider the use of simulators to study the behavior of WPAN together with WLAN, WMAN and WWAN technologies. There are many protocols for wireless industrial networks, and whenever Zigbee is chosen it must be able to coexist with the others. There are also industrial Wi-Fi sensors with batteries lasting from 5 to 10 years. These could delay the standardization of WPAN protocols, because the ideal is to take IP to field network instruments with a worldwide known technology that avoids the use of gateways.
It is important to continue research on wireless industrial networks, taking into account control loops, security, and other routing protocols, to improve performance, production and security in industrial plants.

VII. REFERENCES
[1] P. Mohanty. Application of Wireless Sensor Network Technology for Miner Tracking and Monitoring Hazardous Conditions in Underground Mines. A FI Response (MSHA RIN 1219-AB44).
[2] A. Chehri, P. Fortier, P. Tardiff. An Investigation of UWB-Based Wireless Networks in Industrial Automation. IJCSNS International Journal of Computer Science and Network Security, vol. 8, no. 2, February.
[3] I. Stoianov, L. Nachman, S. Madden, T. Tokmouline. PIPENET: A Wireless Sensor Network for Pipeline Monitoring. IPSN 07.
[4] Commonwealth Scientific and Industrial Research Organisation (CSIRO).
[5] Center for Sensed Critical Infrastructure Research (CenSCIR).
[6] J. Young, Digi International. What a Mesh! Part 1: The Ins and Outs of Mesh Networking. Last updated: November. Accessed: February. Available at:
[7] HART Communication Foundation. Control with WirelessHART. Accessed: January. Available at: 7/lit_127.html
[8] International Society of Automation. ISA100, Wireless Systems for Automation. Accessed: February. Available at: 891&CFID=103605&CFTOKEN=
[9] J. Hui, D. Culler, S. Chakrabarti. 6LoWPAN: Incorporating IEEE 802.15.4 into the IP architecture. IPSO Alliance White Paper. Last updated: January. Accessed: February. Available at:
[10] Digi International. The DigiMesh Networking Protocol. Accessed: February. Available at:
[11] Zigbee Alliance. Zigbee Specification. Last updated: January. Accessed: February. Available at: Default.aspx
[12] L. Piètre-Cambacédès, P. Sitbon. Cryptographic Key Management for SCADA Systems: Issues and Perspectives. International Conference on Information Security and Assurance (ISA 2008).
[13] Scalable Network Technologies.
Qualnet Wireless - Model Library, July 2008.
[14] T. Yan, D. Ganesan, R. Manmatha. Distributed Image Search in Camera Sensor Networks. SenSys 08, November 5-7, 2008, Raleigh, North Carolina, USA.
[15] A. Barton-Sweeney, D. Lymberopoulos, A. Savvides. Sensor Localization and Camera Calibration in Distributed Camera Sensor Networks. Accessed: November. Available at: pdf
[16] A. Barton-Sweeney, D. Lymberopoulos, A. Savvides. Sensor Localization and Camera Calibration using Low Power Cameras. Accessed: November. Available at:

A New Methodology for Massive Alarm Management System in Electrical Power Administration

Omar Aizpurúa, Rony Caballero - Department of Electrical Engineering, Technological University of Panamá, Panamá, Panamá
Ramón Galán, Agustín Jiménez - School of Industrial Engineers, Polytechnic University of Madrid, Madrid, Spain

Abstract This paper presents a methodology that integrates several available techniques to manage the massive amount of alarm signals in electrical power dispatch control centers, as well as the contribution of each entity involved in the system. Artificial intelligence techniques that can be used to solve this problem are reviewed herein. The final objective of this methodology is to find the root cause of avalanches of alarms and their effects, and to reduce their number through grouping or clustering techniques, complying with the EEMUA 191 standard. Even though other contributions on this topic have been made before, the alarm management problem remains practically unsolved for many applications in industry. Here, the integration is developed using the ontology of the alarms. Additionally, in this methodology, a rule-based expert system is used to find the Alarm Root Cause, and a clustering (data segmentation) approach is used to treat the historical database of alarms.

I. INTRODUCTION

Based on the EEMUA 191 Standard [1], an operator in a control room has the capacity to effectively handle an average of one alarm every 10 minutes. However, there are control rooms that register about two thousand alarms per day (as in electrical companies in Europe, Latin America, the U.S.A. and other regions). In critical cases, as in electrical power dispatch control centers, that amount can reach more than forty thousand alarms per day, much more than one operator can manage effectively and efficiently.
Although there are several commercial products designed to address the alarm administration problem, the technical literature does not present a methodology that integrates all the entities and their domains involved in the alarm administration process. Work has been conducted to attempt to solve the alarm administration problem; however, it has failed because of the necessity of integrating the methodologies with the entities and their domains [2]. Different approaches have looked into Alarm Management [3], and publications have presented a general compendium of Artificial Intelligence and Neural Network Applications in Power Systems [4], Parallel Computing [5] and Dynamics Simulation [6]. Other special applications with Expert Systems and Knowledge-Based Systems are presented in [7] and [8]. Other applications using Artificial Intelligence Techniques for Complex Problem Solving are shown in [9].

This paper has five sections: the first presents the introduction, and the second shows the integration methodology, including the integration architecture, the expert system and the historical database treatment. The third, fourth and fifth parts are the results, conclusions and future work, respectively. Finally, the references are presented.

II. INTEGRATION METHODOLOGY

A. The Integration Schema

The Integration Schema is shown in Fig. 1. The schema is divided into two sections: the upper section presents the analytic integration, while the lower section presents the historical database integration. The first step in the process is to determine in which group the alarm avalanche is located. There are three possibilities: Auxiliary Equipment (Aux), Sub-Station (SE-230) and Generation (Gen). The selection criterion relies on both the order of the alarms and the repetitiveness times. To determine the location of the alarm avalanche region, two assumptions were made.
First, the alarms' occurrence times and the region repetitiveness were determined. This information comes from the SCADA (Supervisory Control And Data Acquisition) system. In this way, a weighted value can be calculated as follows:

W_i = 1/p_i    (1)

where p is the priority and i represents the order. The repetitiveness time can then be calculated as the sum of the weighted values, as follows:

Sum(Gen, Sub, Aux) = Σ_{i=1..n} 1/p_i    (2)

The alarm localization probability is defined by the following expressions:

AlarmsGen = SumGen / SumTotal    (3)
AlarmsSE = SumSE / SumTotal    (4)
AlarmsAux = SumAux / SumTotal    (5)

where SumGen, SumSE and SumAux are the sums of the weighted values for each region, and SumTotal is the sum of SumGen, SumSE and SumAux.

Once the selection of the alarm avalanche region is done, the alarm avalanche is treated as a single cluster and, after that, it is reduced under the assumption that the first alarms of the avalanche contain the Alarm Root Cause (ARC). This group of alarms, or Alarm-Avalanche-Cluster, conforms an Avalanche-Mask which, together with the Alarm-Effect Matrix, produces the Avalanche-Effect Matrix (AEM), as shown in Fig. 1. The knowledge within the AEM is used to build the Expert System with the help of JESS, an Expert System Shell. JESS contains an inference machine that has been developed on a Java platform. From the Expert System, both the Probable Effect/Element and the ARC are obtained.

B. The Expert System

The Expert System (ES) provides the most probable root alarm by means of experience and accumulated knowledge. In this case, JESS was used as the inference engine and Eclipse as the Java editor. After interviewing seven experts in six sessions of approximately 35 hours in total, and with the knowledge contained in the Alarm Ontology, 256 rules were generated from 156 digital alarms and 12 analog alarms. In order to build the Expert System, the schema shown in Table 1 was implemented. The Alarm List column represents the whole set of SE-230 alarms; there are 168 alarms for the substation (n = 168). The Associated Effects column defines the different associated effects that can take place when an alarm appears. The Effect/Element Avalanche Matrix is composed of ones and nulls.
If the Effect/Element is related to an alarm, then that element within the matrix must be filled with a one; otherwise, with a null. For example, the following situation could take place:

Alarm (A1): 87T50/51L2(230-8) PTT/T2

The associated effects or elements are:
E1 = Overcurrent on Line (Line 2)
E2 = Phase difference among line parameters
E3 = Differential Relay (87)
E4 = Temporized Overcurrent Relay (51)
E5 = Instantaneous Overcurrent Relay (50)

For this example, Row 1 (A1) of the matrix depicted in Table 1 shows the effect elements of the list. As can be seen in the Integration Methodology presented in Fig. 1, when an alarm avalanche appears, the block of alarms can be treated as a Single-Cluster. This is so because the time among alarms is very short and the alarms in an avalanche are part of the system response to the same event, unless two or more simultaneous independent events occur. On the other hand, if the first 10 alarms of the avalanche define the type or class of the event, these alarms can be treated as a mask or trace sign. Now, each alarm from this Avalanche-Mask (or Cluster) must be fed into the Effect-Avalanche-Matrix, i.e., each alarm from the avalanche has a specific Effect/Element assigned. With the schema depicted in Table 1 and the group of alarms defined by the Avalanche-Mask, a set of rules can be developed. The rule-based expert system uses IF-THEN rules; an example of the rules is shown in Fig. 2. These rules are implemented by the Expert System. The task of the expert system is to find which alarm has been set off, and the Alarm Root Cause (ARC), to help the users make decisions.

TABLE I. ASSOCIATED EFFECTS OF THE ALARMS (EFFECT/ELEMENT AVALANCHE MATRIX). Rows: alarms A1, A2, ..., An. Columns: associated effects/elements E1, E2, E3, ..., Em.

Figure 1. Integration Methodology

Finally, the ES gives the Probable Effect/Element from the Analytic Integration (AI), as can be seen in Fig. 3.
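The region-selection step of equations (1)-(5) can be sketched as follows. This is our illustrative reading of the method, not the authors' code; the alarm priorities in the example are hypothetical:

```python
def region_probabilities(alarms):
    """Alarm avalanche localization: each alarm is a (region, priority) pair.
    The weight W_i = 1/p_i (eq. 1) is summed per region (eq. 2) and then
    normalized by SumTotal, giving the localization probabilities (eqs. 3-5)."""
    sums = {"Gen": 0.0, "SE": 0.0, "Aux": 0.0}
    for region, priority in alarms:
        sums[region] += 1.0 / priority
    total = sum(sums.values())  # SumTotal
    return {region: s / total for region, s in sums.items()}

# Hypothetical avalanche: two high-priority SE-230 alarms dominate the weighting
print(region_probabilities([("SE", 1), ("SE", 2), ("Gen", 4), ("Aux", 4)]))
# {'Gen': 0.125, 'SE': 0.75, 'Aux': 0.125}
```

The region with the largest probability is then taken as the location of the avalanche, in line with the selection criterion described above.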

RULE #148: IF A1, A2, ..., An AND E1 = a, E2 = b, ..., Em = c AND a > b > ... > c THEN E1 is the Probable Effect or Failed Element AND the Alarm-Root-Cause is Ax.

Figure 2. Example of a rule (associated failure: Overcurrent on the Line (Line 2))

Figure 3. Definition of two rules in the JESS platform and their execution

C. Historical Database Treatment

With respect to the historical database treatment, a distance-based cluster analysis has been implemented. In machine learning, clustering is an example of unsupervised learning. Unlike classification, clustering and unsupervised learning do not rely on predefined classes and class-labeled training examples [10]. According to [10], the following are typical requirements of clustering in data mining:
- Scalability
- Ability to deal with different types of attributes
- Discovery of clusters with arbitrary shape
- Minimal requirements for domain knowledge to determine input parameters
- Ability to deal with noisy data
- Incremental clustering and insensitivity to the order of input records
- High dimensionality
- Constraint-based clustering
- Interpretability and usability

1) Clustering Algorithm

Since the only significant parameter that can be obtained from the SCADA system is the alarm time, the distance can be calculated by means of the normalized Mahalanobis distance, shown in equation (6):

d²_M(i,j) = (x_i1 - x_j1)² / σ²    (6)

where x_i1 is the occurrence time of Alarm 1, x_j1 is the occurrence time of Alarm 2, and σ² is the variance. The Mahalanobis distance becomes a one-dimensional problem (the same form as the normalized Euclidean distance). The value of σ² has been calculated from a sample of n signals. The size of the sample was determined using the following expression:

n = N Z²_{α/2} P(1-P) / [ (N-1) e² + Z²_{α/2} P(1-P) ]    (7)

where N is the population size, Z_{α/2} = 1.96 for the selected 0.95 confidence interval, P is the variable category proportion = 0.5, and e is the maximal expected error = 10%. With these values, the sample size results in n = 96 alarms. The final value of the variance, after the previous analysis was conducted, was σ² = 3600.

2) Cluster Calculation

The decision criterion for whether an alarm belongs to a specific cluster is defined by the following expression:

d²_M < γ    (8)

The parameter γ is the acceptance or rejection threshold and is specified by the experience of the expert. This value is validated with known clusters. For the alarms historical database, the variance is σ² = 3600 and the threshold criterion is 4 (obtained from experimental tests). In Fig. 4, the Y-axis represents the regions of failure; each number represents a specific region. The X-axis represents the time in seconds corresponding to one day; in this case the selected day is February 15, 2003. The regions that contain the clusters can be seen. The accumulation of alarms in a defined region allows the segmentation. This segmentation is calculated automatically by means of the MATLAB program, which uses the MD algorithm. In Fig. 5, the list of the alarms belonging to a specific cluster is shown: the first 13 alarms belong to the cluster identified with the number 1, the following 6 alarms belong to cluster 2, and so on.
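The distance test of equations (6)-(8) lends itself to a simple greedy segmentation of the alarm time line. The sketch below is our illustration (the paper uses a MATLAB program); σ² = 3600 and γ = 4 are the values stated in the text, and the example timestamps are hypothetical:

```python
SIGMA_SQ = 3600.0  # variance of alarm occurrence times, in seconds^2 (from the paper)
GAMMA = 4.0        # acceptance/rejection threshold, from experimental tests

def distance_sq(t_i, t_j, sigma_sq=SIGMA_SQ):
    """Normalized one-dimensional Mahalanobis distance, equation (6)."""
    return (t_i - t_j) ** 2 / sigma_sq

def cluster_alarms(times, gamma=GAMMA):
    """An alarm joins the current cluster when d^2 to the cluster's most
    recent alarm is below gamma (equation 8); otherwise a new cluster starts."""
    clusters = []
    for t in sorted(times):
        if clusters and distance_sq(t, clusters[-1][-1]) < gamma:
            clusters[-1].append(t)
        else:
            clusters.append([t])
    return clusters

# Alarms seconds apart stay together; a 300 s gap (d^2 = 25 >= 4) splits them
print(cluster_alarms([0, 10, 20, 320, 330]))  # [[0, 10, 20], [320, 330]]
```

With γ = 4 and σ² = 3600, two alarms remain in the same cluster while they occur less than 120 seconds apart, which matches the idea of an avalanche as a short burst of related alarms.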

the failed element is Generator #3. Thus, the features of the failure associated with the avalanche are known. From the historical database treatment, the Probable Effect/Element from the Historical Integration (HI) is obtained.

Figure 4. View of the clusters in the MATLAB platform.
Figure 5. View of the clusters in the MATLAB Command Window for the selected day.

III. RESULTS

A. Database
To determine a cause it is necessary to have an alarm database and a database containing the clusters from the historical database, together with the failure type and failed element associated with each cluster; i.e., each cluster can be defined or matched to a specific failure determined by an expert. This assignment is done in situ, with contributions from operators and users and with information from historical files. The alarm list shown in Fig. 5 presents a portion of a historical database. From the knowledge stored in the cluster information, the associated failure type is a Low Frequency type, and

B. Comparison of the Most Probable Effect/Element
In order to choose the Most Probable Effect/Element, i.e., the Failed Effect/Element and the ARC, a match between the Avalanche-Mask and the Clusters-Mask from the historical database must take place. When the match takes place, the Effect/Element obtained from the expert system should automatically agree with the Effect/Element from the historical clusters shown in Fig. 1. If the match is successful, the most probable effect/element is obtained. In this research, out of 100 tests, 95% of the tested alarm avalanches were successful, i.e., both alarm masks (avalanche and historical) matched. If the match is not successful, the methodology presents two probable effects/elements: one from the historical database integration (HI) and the other from the analytic integration (AI).

IV. CONCLUSIONS

This research effort helps users and operators to make decisions.
When an alarm avalanche appears, this instrument helps determine, with a very high probability of success (around 95%), which of these alarms is the Alarm-Root-Cause, together with its effects and failed elements. The methodology is a combination of data mining techniques, artificial intelligence and ontological methods. This combination of techniques makes the approach a breakthrough in this matter, since it attempts to integrate all the domains of power alarm management.

V. FUTURE WORK

First of all, it should be noted that the methodology looks for the Alarm-Root-Cause, not for the Event-Cause. This means that the most important region to analyze is SE-230, because it is associated with the highest probability of occurrence. Naturally, most of these alarms could be a consequence of a failure in the Gen or Aux region; however, the target is to know which alarm is the root cause in the SE-230 region. As future work, the other regions (Aux and Gen) could be studied. In addition, a dynamic cycle can be implemented when the final match is not validated, and the historical database can be enlarged.

Figure 6. A part of a historical database.

REFERENCES
[1] EEMUA 191, Alarm Systems - A Guide to Design, Management and Procurement. ISBN
[2] P. Andow, Alarms and Abnormal Situation Management, Experience from the Process Industries.
[3] N. Broadhead and J. K. Earnshaw, Optimization of the Sizewell B alarm list displays, IEE.
[4] K. Wong, Artificial Intelligence and Neural Network Applications in Power Systems, IEE 2nd International Conference on Advances in Power System Control, Hong Kong.
[5] Q. Huang, K. Qin, and W. Wang, IEEE Proceedings of the International Symposium on Parallel Computing in Electrical Engineering (PARELEC'06), A Software Architecture based on

Multi Agent and Grid Computing for Electric Power Systems Applications.
[6] A. Manzoni, A. S. Silva, and I. C. Decker, Power Systems Dynamics Simulation Using Object-Oriented Programming, IEEE Transactions on Power Systems, vol. 14, no. 1.
[7] C. León, J. B. Casado, F. Gonzalo, and J. Luque, Sistema Experto para la Gestión de Fallos para una Red de Radioenlaces (Expert System for Fault Management of a Radio-Link Network), Doctoral Thesis, Departamento de Tecnología Electrónica, Facultad de Informática y Estadística, Universidad de Sevilla, Spain.
[8] E. Vásquez, O. L. Chacón, and H. J. Altuve, An On-line Knowledge-Based System for Fault Section Diagnosis in Control Centers, IEEE, Nuevo León, México.
[9] G. F. Luger and W. A. Stubblefield, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th ed., Addison-Wesley.
[10] J. Han and M. Kamber, Data Mining: Concepts and Techniques, 2nd ed., Morgan Kaufmann, San Francisco, CA, 2006.

Frog Species Classification using LPC and Classification Algorithms on a Wireless Sensor Network Platform

Juan Manuel Mayor T., Michael Fernando Martinez R., MSc. Luis E. Tobon Ll.
Computer Science Department, Javeriana University, Cali, Valle del Cauca, Colombia.
October 5, 2009

Abstract
This paper presents the evaluation of Linear Prediction Coding (LPC) implemented on a Wireless Sensor Network (WSN) platform, with particular interest in the application of these techniques to the classification of frog phonemes. LPC gives the sound, as a discrete signal, an important role in species phoneme characterization and in the subsequent classification processes. We therefore studied the possibility of species classification and recognition according to the structure of the phoneme, the statistical variance of its sound characteristics and the signal probability densities. To evaluate this option we must consider the WSN limitations and contrast them against LPC's high temporal and spatial complexity. Autocorrelation and PCA are complementary techniques to LPC whose features are also studied in this article. Besides these techniques, we propose Modified K-Means (MKM) as a classifier, a post-processing stage outside the WSN platform.

Introduction
Biologists demand distributed data collection in order to obtain information with high biological meaning that is easy to access. Wireless Sensor Networks (WSN) have been used to provide these features [1, 2]. Capabilities such as computing resources, data storage and sensing have been applied to important environmental topics such as habitat monitoring, ecological analysis and species preservation [3]. Accordingly, the impact of a WSN is measured by its ability to support high-complexity algorithms and to collect relevant data: for example, measuring animal vocalization signals to quantify a species' population or, in our case, recognizing frog species through their singing.
A complex and critical aspect of the recognition process is characterization. The LPC method [4] is a signal characterization process based on a particular sound model that includes a discrete-time representation made of a series of coefficients. The disadvantages of this method are its sensitivity to the dataset size and its computational complexity. These disadvantages are analyzed to obtain a basis for a mathematical solution allowing an efficient classification with less information, lower computational complexity and fewer hardware requirements on a WSN platform.

Figure 1: LPC model representation.

Previous Works
There are multiple ways to recognize frog singing. One of the previous works uses a dedicated machine with a non-distributed acquisition tool and without a DSP characterization algorithm such as the LPC technique. In the past, and still today, biologists work on obsolete software platforms and in some cases with analogue tape recording systems. This suggests that the justification of this paper resides in the following hypothesis: biologists and ecological researchers are still searching for better ways to take high-quality environmental samples.

Previous WSN Impact
The first work we analyze is Quinlan's machine [3], a single device without a sensor network distribution,

and the only DSP tool in the device was spectrogram analysis, while the biologist played the role of the recognition technique. This situation causes low productivity, low accuracy, high prediction error and limitations in the time-frequency analysis. WSN advantages such as spatial distribution and area coverage can reduce data collection time; hence, WSN becomes a practical solution for complex problems such as environmental sensing. On the other hand, the work closest to our hypothesis describes a hybrid WSN [5] in an environmental context similar to ours, since the authors found a new Australian cane toad and characterized the toad's singing with a group of sensor motes (MicaZ MPR2400, made by Crossbow Inc.). The cane toad singing and its spectrogram are shown in Figure 2 and Figure 3. The difference is that in our case we characterize frog singing by acquiring the signal and executing the LPC algorithm on the WSN system, in order to recognize this signal within a finite number of clusters (species) and to represent the individual characterization of frog species in a similar environment with less computational complexity and memory space.

Figure 2: Cyclorana Cultriples sound samples (cane toad).
Figure 3: Spectrogram of Cyclorana Cryptotis singing.

Previous LPC Developments and DSP Background
While the LPC algorithm has been analyzed and supported with different perceptual theories, the main perceptual basis is the human ear's signal acquisition. This term refers to how many channels in the human ear are dedicated to the sound acquisition process. If we think of the ear as a digital recording machine, it would have a finite number n of acquisition channels with increasing bandwidth. The basis of the algorithm is perceptual theory representations such as the Bark scale. For instance, the Bark scale describes 24 specific acquisition channels, modeled in equation (1) [6]:

Bark = 13 arctan(0.00076 f) + 3.5 arctan( (f / 7500)² )    (1)
In equation (1) and figure 4, frequency is given in Hertz with its corresponding equivalence in Barks. The Bark scale ranges between 0 and 23; for every level there is a corresponding amplitude value in the bandwidth of each channel. The Mel scale describes the same bandwidth as the Bark scale with a finite number of M channels. The M channels that describe the signal are the representation of the signal recognized by the ear; equation (2) describes the relation between frequency in Hz and frequency in Mels:

Φ = 2595 log10(1 + f / 700)    (2)

VAD (Voice Activity Detection) is another example of perceptual theory [7, 8, 9, 10]. VAD uses high-complexity methods such as Bayesian theory, noise analysis and hypothesis testing. This makes the process longer but more precise; the technique is used for recognition of complex phonemes such as long sentences and words, which is not the case in our work.

LPC Model Description
The LPC (Linear Prediction Coding) parameters can follow the probability distribution of the digitalization process (Gaussian). LPC requires a large signal-to-noise ratio (SNR) and low harmonic distortion to obtain a good sample representation. LPC is a linear characterization method with a solid basis in digital signal processing theory. The basic idea of the LPC model is that a speech sample at time n, s(n), can be approximated as a linear combination of the past p speech samples [4], as in equation (4). The expression is a linear algebraic system in which the error between the original signal and the LPC model is the least possible. All the
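As a quick numeric check of equations (1) and (2), a minimal sketch (using the standard Zwicker-style Bark approximation and the common Mel mapping to which the formulas above correspond):

```python
import math

def hz_to_bark(f):
    """Bark-scale approximation of equation (1)."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def hz_to_mel(f):
    """Mel-scale mapping of equation (2)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

# A 1 kHz tone sits near 8.5 Bark and, by construction, near 1000 Mel.
for f in (100, 1000, 8000):
    print(f"{f:5d} Hz -> {hz_to_bark(f):5.2f} Bark, {hz_to_mel(f):7.1f} Mel")
```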

expression characterization process establishes its basis around this parameter. LPC is a measurement of an audio pattern against a discrete-time model of p coefficients with a low capacity of representation. To obtain a higher capacity of representation, techniques such as autocorrelation, framing and windowing give more importance to the time variation of the same signal. To increase the significance of the signal representation, we consider the autocorrelation method. Autocorrelation is a mathematical relation between a sampled signal and segments of itself, developing a phase variation between them; equation (3) expresses m samples of phase variation between two segments of the signal:

r_l(m) = Σ_{n=0}^{N−1−m} x_l(n) x_l(n+m)    (3)

With these elements we proceed to define an LPC-Autocorrelated algorithm with additional variations, aiming at a new optimization process applied to information management.

Algorithm Description and Model Analysis
We begin the LPC description with the basic linear model of Figure 1. The generalized expression, equation (4), has two principal components: one is the sum of z-coefficients, a discrete-time signal representation, and the other is the gain term, i.e., the error term, which will be associated with a new squared error term:

s(n) = Σ_{k=1}^{p} a_k s(n−k) + G u(n)    (4)

Figure 4: Relationship between frequency (Hz) and its Bark equivalence (psychoacoustic representation of the Bark scale).

The error represents a critical variable in the algorithm development, because the main argument of the algorithm is the error optimization using the derivative criterion. Following this, we can express the squared-error equation and optimize it as a function of a_k, as follows.
∂E_n / ∂a_k = 0    (5)

E_n = Σ_m [ s(m) − Σ_{k=1}^{p} a_k s(m−k) ]²    (6)

Equations (5) and (6) define the procedure that Rabiner's book [4] develops with the aim of minimizing the expression; equation (6) is the squared-error expression and equation (5) sets its derivative to zero. When the error expression is differentiated, we obtain the following expression and a new definition of the signal variance behavior in the algorithm context:

Σ_m s(m−i) s(m) = Σ_{k=1}^{p} â_k Σ_m s(m−i) s(m−k)    (7)

φ(i,0) = Σ_{k=1}^{p} â_k φ(i,k)    (8)

The symbol φ denotes a covariance value with respect to another audio sample of the digitalized signal. In the derivation, some sum expressions can be fixed using equation (3) and the assumption that G u(n) is zero. With these parameters we arrive at the real autocorrelation concept: these φ values form a Toeplitz matrix that contains the correlation (or, alternatively, the covariance) between a small fragment (the k-th frame) of the original signal and similar frames displaced m samples from the end of the k-th frame. The signal can thus be compared with different segments of itself; this is a true autocorrelation process. We then take the â_k LPC coefficients after the framing, windowing (Hamming window) and band-pass filtering processes (the framing process is detailed in Figure 5), using the Levinson-Durbin strategy on the basic LPC process; the route through the Toeplitz matrix gives the algorithm a recursive character, as in equation (10). In our implementation of the strategy we note that the resulting numbers can reach a large numerical representation, on the order of 10^9, which may cost some spatial complexity in the numerical (binary) representation.

E^(0) = r(0)    (9)

Figure 5: LPC's signal blocking (framing) process.

k_i = [ r(i) − Σ_{j=1}^{i−1} a_j^{(i−1)} r(|i−j|) ] / E^{(i−1)}    (10)

a_i^{(i)} = k_i    (11)

a_j^{(i)} = a_j^{(i−1)} − k_i a_{i−j}^{(i−1)}    (12)

E^{(i)} = (1 − k_i²) E^{(i−1)}    (13)

We emphasize the number of samples and the number of LPC coefficients; these parameters are critical to the temporal complexity of the algorithm, although other parameters such as the clock frequency, mean number of cycles per instruction, data transfer rate and data bandwidth (mote hardware infrastructure) [11, 12] are critical too; those aspects are not treated here.

c_0 = ln σ²    (14)

c_m = a_m + Σ_{k=1}^{m−1} (k/m) c_k a_{m−k},  1 ≤ m ≤ p    (15)

c_m = Σ_{k=1}^{m−1} (k/m) c_k a_{m−k},  m > p    (16)

With the a_k LPC values calculated, we can find a new frequency representation as cepstral values, representing a log scale of that frequency representation; there should be Q = 3p/2 of them [4], where p is the number of LPC coefficients and Q the number of cepstral coefficients, as represented by equations (14), (15) and (16). In the end, these equations represent a different kind of discrete Fourier transform with fewer iteration instructions. With these two characteristics we explore the LPC frame representation and the cepstral variation as a first approximation of the classifier pair (XY cluster parameters). The new problem is now the huge quantity of information, for a simple reason: LPC's Levinson-Durbin process produces p coefficients per frame, so if the signal yields L frames, the resulting information will be pLk bits, where k is the number of bits per coefficient; the cepstral coefficients follow the same pattern, with the only difference that the final expression is (3/2)pLk.
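Equations (9)–(16) can be sketched directly in Python. This is a minimal illustration, not the authors' WSN code; the AR(1) toy autocorrelation r(m) = 0.9^m used below is our own test input, chosen because its LPC solution is known in closed form:

```python
import numpy as np

def levinson_durbin(r, p):
    """LPC coefficients a_1..a_p from autocorrelations r(0)..r(p),
    following the recursion of equations (9)-(13)."""
    a = np.zeros(p + 1)
    E = r[0]                                            # equation (9)
    for i in range(1, p + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / E  # equation (10)
        a_prev = a.copy()
        a[i] = k                                        # equation (11)
        a[1:i] = a_prev[1:i] - k * a_prev[i - 1:0:-1]   # equation (12)
        E = (1.0 - k * k) * E                           # equation (13)
    return a[1:], E

def lpc_to_cepstrum(a, q):
    """Cepstral coefficients c_1..c_q from LPC coefficients, following
    equations (15)-(16); the gain term c_0 = ln(sigma^2) of (14) is omitted."""
    p = len(a)
    c = np.zeros(q + 1)
    for m in range(1, q + 1):
        acc = sum((k / m) * c[k] * a[m - k - 1]
                  for k in range(1, m) if m - k <= p)
        c[m] = (a[m - 1] if m <= p else 0.0) + acc
    return c[1:]

# AR(1) toy process with coefficient 0.9: r(m) = 0.9**m.
r = np.array([0.9 ** m for m in range(3)])
a, E = levinson_durbin(r, p=2)    # recovers a = [0.9, 0.0]
c = lpc_to_cepstrum(a, q=3)       # c_m = 0.9**m / m for this model
```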
This amounts to (5/2)pLk bits per signal to classify, an excessive expenditure of memory and data-acquisition resources for a WSN; we therefore propose a minimization method similar to PCA [13], over a defined sampling range and without critical variation.

Figure 6: Audio sample of Dendrobates Oophaga Histrionica.

Mode and Norm Minimization Process
When the LPC-Autocorrelated process has finished, the system delivers L vectors of p elements each, with an implicit Gaussian distribution as commented previously. To support these affirmations we consider the following hypotheses: all random samples in the signal follow a normal probability distribution, and since the LPC algorithm works with linear and basic operations (sums, subtractions, multiplications and similar operations such as exponentiation), the resulting LPC data will belong to a similar normal probability function, with different skewness, mean and standard deviation. The hypotheses are correct, but our LPC algorithm delivers only eight representative data per frame for each frog audio sample (all the considerations have been taken from Rabiner [4]); this is too little for the requirements of a normal probability distribution [14]. For this reason we take more data from the source LPC vectors: we take the norm of each of the L vectors (assuming that all L vectors begin from the origin), obtaining L new data that respond to a similar distribution with enough data, since L ≫ p.

Ω_l = sqrt( Σ_{k=1}^{p} [ â_k^{(l)} ]² ),  0 ≤ l ≤ L    (17)

Figure 7: Scatter-histogram of the LPC and cepstral distributions per frame.
Figure 8: Audio sample of Dendrobates Phyllobates Terribilis.

Results

α_l = sqrt( Σ_{k=1}^{p} [ ĉ_k^{(l)} ]² ),  0 ≤ l ≤ L    (18)

The general expression of this new procedure is equation (17), where Ω is a complete representative LPC norm vector for each audio sample; a similar procedure is implemented for the cepstral coefficient distribution α. Following Rabiner and equation (14), the cepstral vectors should follow a lognormal distribution. With these parameters we can infer that the mean or the mode is a representative value; meanwhile, the framing rate of the initial LPC phase does not change the nature of the probability distribution, so when we refer to a mode it is in fact a real central-tendency measure of the distribution.

λ_(x,y) = [ mode(Ω_l), mode(α_l) ]    (19)

Finally, the scatterplot of figure 7 (normal and lognormal distributions, from the LPC and cepstral frames of the same species) and equation (19) show the minimization process, which keeps only one value, represented by an ordered pair (LPC, Cepstrum), in a classification cluster space. These values can faithfully represent a given frog-species candidate, as we show below. Note that both coordinates become lognormal due to the mathematical nature of the squaring and square-root operations of the norm process.
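The mode-and-norm reduction of equations (17)–(19) can be sketched as follows. The histogram-based mode estimator is our own assumption (the paper does not say how the mode of the continuous norm distribution is estimated), and the lognormal test data are synthetic:

```python
import numpy as np

def mode_of_norms(coeff_frames, bins=20):
    """Equations (17)/(18): one Euclidean norm per frame vector; then
    equation (19): collapse the L norms to a single (histogram) mode.
    The binned mode estimate is an assumption, not the paper's method."""
    norms = np.linalg.norm(coeff_frames, axis=1)
    hist, edges = np.histogram(norms, bins=bins)
    j = np.argmax(hist)
    return 0.5 * (edges[j] + edges[j + 1])   # bin-center mode estimate

# Synthetic stand-ins for real coefficients: L = 200 frames,
# p = 8 LPC coefficients and Q = 3p/2 = 12 cepstral coefficients.
rng = np.random.default_rng(0)
lpc_frames = rng.lognormal(0.0, 0.25, size=(200, 8))
cep_frames = rng.lognormal(0.0, 0.25, size=(200, 12))
feature = (mode_of_norms(lpc_frames), mode_of_norms(cep_frames))  # eq. (19)
```

The whole signal is thus reduced to one (LPC, Cepstrum) pair per audio sample, which is the point placed in the XY cluster space.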
In figures 8 and 6 we show two audio samples of Dendrobates Phyllobates Terribilis and Dendrobates Oophaga Histrionica, frog species from Colombia's Pacific Coast. Both choruses were processed with the LPC-Autocorrelated and Mode-and-Norm minimization processes, using the Matlab PRTools toolbox. We implemented three classification processes, one of them K-means, with the particularity of an irregular cluster partition shape and the implementation of different ways of computing the distortion distance; figure 9 shows the resulting clusters. The spectrogram of these species shows a frequency behavior around 900 to 2000 Hz [15], the same spectral content that we found in the digitalized audio samples. Despite the fundamental frequency variation, the important pattern survives the characterization and post-classification process, namely the LPC and cepstral probability distributions and their central-tendency measures over a framing range; this is noteworthy because our range of samples per frame is considerable taking into account the WSN and LPC algorithm constraints. The theory of the mode-norm minimization process is observed in the clustering process, where the modes from the LPC and cepstral analyses are drawn in the XY cluster together with the n species. The main species in our analysis are Dendrobates Phyllobates Terribilis (7 audio samples) and Dendrobates Oophaga Histrionica (3 audio samples), but we also include an independent cane-toad audio sample; figure 9 shows that the method works successfully. Another variation parameter, the skewness of the probability distribution, could also be important in the classification process, but it

Figure 9: Resulting clusters of the K-means, LogLc and PerLc classification procedures over the Dendrobates Phyllobates Terribilis, Dendrobates Oophaga Histrionica and cane toad datasets (axes: LPC mode-norm coefficient vs. cepstral mode-norm coefficient).
Figure 10: Lognormal probability distributions of the LPC vector norms (fixed histograms, modes and fitted lognormal curves for samples S1-S3).

implies a greater demand for computational resources than the plain mode-and-norm process. A fundamental aspect of the LPC algorithm results is the numerical representation; the LPC algorithm has further mathematical particularities. When a frame contains a substantial portion of silence or noise samples, the resulting error can be large and the LPC and cepstral coefficients can be huge, around 10^9 in order of magnitude. The second characteristic concerns the error: when it is less than 20 percent of the signal, the LPC coefficients, and consequently the cepstral ones, can be accepted as a valid group of coefficients in the probability distribution; that is, the coefficients remain within the peak of the probability distribution and are representative of the particular phoneme. All these results use double (64-bit) or float (32-bit) Matlab representations. In figure 10 (note that a very thin lognormal approaches an exponential distribution) we show the lognormal distributions of the three Dendrobates Oophaga Histrionica audio samples. It shows the close relationship between the modes of each distribution, which is definitely a strong result for the performance of the minimization algorithm.

Future Works
The information problem is a constant across most DSP algorithms, especially the LPC characterization method.
Generally, the method can be seen as plain LPC, without the autocorrelation process, or as LPC with the autocorrelation process. The first has a lower temporal complexity than the second, because the Levinson-Durbin process cannot be used in that context (the signal is not fragmented) and the method reduces to the same LPC iteration shown previously; over all the samples of the signal, the complexity of the LPC iteration stage would be p². The problem of the plain algorithm is the poor accuracy and significance of the information, which makes it unsuitable for the classification process. On the other hand, LPC with the autocorrelation process adds a little time to the process, but the difference lies in the use of the Toeplitz matrix and the division of the complete signal; this delivers better information about the signal at the price of more complexity (p²N). The problem of this technique is the quantity of information. The formal solution to this problem is to use measures of central tendency such as the mean, median and mode, depending on the data distribution; one such data-reduction procedure is PCA, or Principal Component Analysis.

J_0(x_0) = Σ_{k=1}^{n} || x_0 − x_k ||    (20)

The PCA features [16, 13] are implemented following expression (20). This expression computes the minimum distance between the LPC vector elements; in other words, it selects the element with the least distance between itself and the mean of the vector distribution. This computation requires more instructions and more time than the mode-and-norm minimization process, because it performs a sum over N_s (N_s = N/L) elements per vector and per vector element; this means that the temporal complexity would be kN_sL, and the final result remains larger than the other one, because we would have L fundamental elements per complete signal instead of only one per signal; if we arrived at the same result, we would have employed the same process again.

Method                    | Temp. Complexity [instructions] | Memory Req. [single float] | Accuracy and Significance
LPC Plain Method          | O(n) = p²                       | p + N                      | Too low
LPC Autocorrelation       | O(n) = pL + N/N_s               | pL + p²                    | Good, but too large information
LPC Auto + PCA            | O(n) = L + N/N_s                | KpL + p²                   | High, but highest complexity
LPC Auto + Mode and Norm  | O(n) = N/N_s + 1                | dpL + p²                   | High, similar to PCA, for lognormal distributions

Table 1: Comparison of the different LPC analysis methods.

Table 1 shows the differences in complexity and memory requirements of the possible LPC methods in our case; the LPC + Autocorrelation method is the basis of our previous DSP analysis. Our main process is the LPC + Autocorrelation + Mode-and-Norm minimization algorithm, with lower temporal complexity and memory requirements. This will be the principal consideration for the future design of the WSN platform and the network topology of the implementation, taking into account the small sensor-mote memory and hardware limitations such as clock frequency, processor settings and hardware resources.

Conclusions
In this paper we justified the usage of the LPC algorithm and of other classification techniques, such as K-means, for the evaluation of the clustering process. We reviewed the different developments of DSP algorithms and techniques that involve the LPC and cepstral background. With the inclusion of WSN, this application acquires a distributed computational character. From the simulation data we can infer that LPC implies high performance and high numerical notation, but with the usage of the Levinson-Durbin strategy and the Mode-Norm minimization process the spatial complexity is reduced from 100 (Toeplitz matrix) to 1 (mode representative datum) in order of magnitude. This expresses a robust saving in information quantity at a low computational cost.
The combination of WSN flexibility and LPC-cepstral analysis is promising: the implementation could spread across a complete network and classify the phonemes of diverse species, including human beings. The DSP background and WSN flexibility can deliver substantial results and different possibilities to explore in computational topologies or new hardware platforms.

References
[1] A. Mainwaring, D. Culler, J. Polastre, R. Szewczyk, and J. Anderson, Wireless sensor networks for habitat monitoring, in Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications, ACM, New York, NY, USA, 2002.
[2] J. Polastre, Design and Implementation of Wireless Sensor Networks for Habitat Monitoring, Ph.D. dissertation, Department of Electrical Engineering and Computer Sciences, University of California.
[3] A. Taylor, G. Watson, G. Grigg, and H. McCallum, Monitoring Frog Communities: An Application of Machine Learning, in Proceedings of the National Conference on Artificial Intelligence, 1996.
[4] L. Rabiner and B. Juang, Fundamentals of Speech Recognition.
[5] W. Hu, N. Bulusu, C. Chou, S. Jha, and A. Taylor, Design and evaluation of a hybrid sensor network for cane toad monitoring.
[6] R. Udrea, S. Ciochina, and D. Vizireanu, Multiband Bark scale spectral over-subtraction for colored noise reduction, in ISSCS International Symposium on Signals, Circuits and Systems, vol. 1.
[7] T. Ravichandran and K. Samy, Performance Enhancement on Voice using VAD Algorithm and Cepstral Analysis, Journal of Computer Science, vol. 2, no. 11.
[8] J. Ramirez, J. Segura, C. Benitez, A. de la Torre, and A. Rubio, An effective subband OSF-based VAD with noise reduction for robust speech recognition, IEEE Transactions on Speech and Audio Processing, vol. 13, no. 6.
[9] J. Rosca, R. Balan, N. Fan, C. Beaugeant, and V. Gilg, Multichannel voice detection in adverse environments, in Proceedings of EUSIPCO 2002.
[10] B.
Goode, Voice over Internet Protocol (VoIP), Proceedings of the IEEE, vol. 90, no. 9.
[11] Crossbow, Users Manual.
[12] M. Ilyas and I. Mahgoub, Handbook of Sensor Networks: Compact Wireless and Wired Sensing Systems, CRC Press, Boca Raton, USA.

[13] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer, New York.
[14] W. Navidi, Statistics for Engineers and Scientists, McGraw-Hill, New York.
[15] S. Lotters, F. Glaw, J. Kohler, and F. Castro, On the geographic variation of the advertisement call of Dendrobates histrionicus Berthold, 1845 and related forms from north-western South America, Herpetozoa, vol. 12.
[16] R. Duda, P. Hart, and D. Stork, Pattern Classification, Wiley, New York.

Fast Two-Dimensional Median Filter Implemented on the SIMD Architecture
Ricardo M. Sánchez and Paul A. Rodríguez

Abstract — The median filter is one of the basic operations in image processing; it is used to remove impulsive noise, but its computational cost is high. Traditional solutions consist of reducing the complexity of the median filter algorithm and of vectorizing existing algorithms. This vectorization is performed using the SIMD units of modern processors, which allow the same operation to be applied to a whole set of data. In this paper the median filter is implemented with the vector algorithm proposed by Kolte [1], which exploits the advantages offered by SIMD units. The computational efficiency of the implementation is compared with the O(1) median filter algorithm recently proposed in [2], concluding that our implementation is 75 and 18.5 times faster for 3×3 and 5×5 areas, respectively.

I. INTRODUCTION
The median filter is a nonlinear filter first proposed in 1974 [3] for use on two-dimensional arrays. Its main characteristic is the removal of impulsive noise (commonly called salt-and-pepper noise) [4] without significantly altering the image information. Because of this property, it is used as an image preprocessing step for more complex analyses such as optical character recognition and object and face identification. However, being a nonlinear filter, its computational cost is high. The common approach to improving its performance consists of reducing its initial complexity of O(r² log r) [3], where r is the kernel radius. Improved proposals achieve complexities of O(r) [5], O(r²) [6], O(log² r) [7], O(log r) [8] and even O(1) [2].
Most of the algorithms mentioned above have a purely scalar formulation; only [2] mentions that a vector implementation of the O(1) algorithm is possible, but without describing it explicitly. Current processors include SIMD (Single Instruction, Multiple Data) units [9], which give them the ability to perform vector operations. The scalar algorithms [2], [5], [6], [7], [8] do not exploit the computational capacity that SIMD units offer. Vector algorithms (which use the SIMD unit), such as [10], [1], minimize the number of comparisons and reuse intermediate results to reduce the operations needed to compute the median. In addition, they considerably reduce the processing time of the median filter by making efficient use of the data parallelism that SIMD units provide.

Both authors belong to the Digital Signal and Image Processing Group of the Pontificia Universidad Católica del Perú, Lima, Perú.

Section 2 describes Kolte's vector algorithm [1] for the median filter. Section 3 describes the SIMD capabilities of Intel processors [11] and of the Cell Broadband Engine processor [12], for which the vector algorithm is implemented. Section 4 presents the computational results of the implementations; finally, Section 5 lists the conclusions.

II. VECTOR MEDIAN FILTER ALGORITHM

This algorithm was proposed in 1999 by Kolte et al. [1]. It targets small kernels (3×3 and 5×5) and has a complexity of O(r²), where r is the kernel radius. Its operation is based on sorting networks [13], which are easily vectorizable, leading to an efficient implementation on SIMD units [14].

II-A.
Sorting networks

A sorting network is a scheme describing the routes and comparisons to be performed so that n inputs, x₀, x₁, x₂, …, xₙ₋₁, are permuted into n outputs, y₀, y₁, y₂, …, yₙ₋₁, with yᵢ ≤ yᵢ₊₁. A 2-element network is drawn as a block with two inputs, I₀ and I₁, and two outputs, O₀ = max(I₀, I₁) and O₁ = min(I₀, I₁) (Fig. 1). A sorting network for n elements is obtained by arranging 2-element blocks so that the desired output order is produced [13].

Figure 1. Two ways of representing the 2-element sorting network: inputs I₀, I₁; outputs O₀ = max(I₀, I₁), O₁ = min(I₀, I₁).

One advantage of these networks is that a known, fixed number of comparisons is used to sort a data set. In addition, blocks whose inputs are unrelated can be grouped so that the data are processed in parallel; this property is exploited to obtain an efficient implementation of sorting networks on the SIMD unit. The efficiency of a sorting network depends on the total number of comparisons performed and on the number of levels into which the comparisons can be grouped for parallel execution. A network is faster when it requires fewer levels [14]; for example, Fig. 2 shows two sorting networks for 10 elements, where the second one is faster than the first (it uses two fewer levels) despite performing two additional comparisons.

Figure 2. Two sorting-network diagrams for 10 elements: (a) 29 blocks, 9 levels; (b) 31 blocks, 7 levels.

To obtain the median of a data set with a sorting network, it is not necessary to perform all the comparisons of the network, only those needed to obtain the sought element. Fig. 3 shows graphically which operations must be performed to obtain the central element of the ordering, which equals the median of the data, for networks of 5 and 9 elements.

Figure 3. Median-extraction networks: (a) median of 5 elements; (b) median of 9 elements.

II-B. Scalar operation

For a two-dimensional N×N array, the columns are first sorted in descending order, and then the rows of the same array are sorted in descending order. In this way the largest elements end up in the upper-left corner of the array and the smallest ones in the lower-right corner. Next, the elements of the N − 2k central diagonals of the array with slope k = 1 are sorted, where N is the side of the analysis area. This operation is repeated, incrementing k, until only one element remains at the center of the array, which is the median. The procedure is summarized in Algorithm 1 (see [1] for details).

II-C. Vector operation

The order of the operations is chosen to allow their reuse and to exploit data parallelism. To apply an r×r median filter to a data block Bₙ, the last (r−1)/2 entries of the vectors that make up block Bₙ₋₁ and the first (r−1)/2 entries of block Bₙ₊₁ are needed. A complete vector holding the median values for block Bₙ is thus obtained (Fig. 4).

Figure 4. Operating on block Bₙ requires Bₙ₋₁ and Bₙ₊₁.

At the n-th iteration, the SIMD registers hold the partially sorted data of blocks Bₙ₋₁ and Bₙ, computed during iterations n−2 and n−1, respectively. The iteration starts by loading the data of the new block Bₙ₊₁ and sorting its columns with a vector sorting network. The next step is to sort the rows; this sorting is performed in fresh SIMD registers so as not to overwrite the column-sorted values, which are reused in the next iteration. Moreover, all rows of the block are sorted with a single application of the sorting network; Fig. 5 sketches the register shifts needed to achieve this. Once the rows are sorted, the results are stored and the data are rearranged to operate on the diagonals of the arrays and obtain the median value. Finally, Bₙ₋₁ and Bₙ are updated with the values of Bₙ and Bₙ₊₁, respectively, for the next iteration.

Algorithm 1 — Kolte's median computation [1]
 1: function MEDIAN(A)            ▷ A: N×N array
 2:   M ← (N − 1)/2
 3:   for c = 0 to N − 1 do
 4:     SortColumn(c)             ▷ A(r−1, c) ≤ A(r, c)
 5:   end for
 6:   for r = 0 to N − 1 do
 7:     SortRow(r)                ▷ A(r, c−1) ≤ A(r, c)
 8:   end for
 9:   for k = 1 to M do
10:     for s = k(M + 1) to k(M − 1)(N + 1) do
11:       SortLine(k·r + c = s)
12:     end for
13:   end for
14:   ResultMedian ← A(M, M)
15: end function
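As an illustration of the idea behind Fig. 3(b), the median of 9 values can be computed with a fixed sequence of compare-exchange operations. The sketch below uses the classic 19-operation median-of-9 network, a well-known network of this kind; the exact network used in [1] may differ in detail.

```python
def _cx(p, a, b):
    """Compare-exchange: ensure p[a] <= p[b]."""
    if p[a] > p[b]:
        p[a], p[b] = p[b], p[a]

# The 19 compare-exchange pairs of the classic median-of-9 network:
# only the comparisons needed to route the median to position 4 are kept.
_NET9 = [(1, 2), (4, 5), (7, 8), (0, 1), (3, 4), (6, 7),
         (1, 2), (4, 5), (7, 8), (0, 3), (5, 8), (4, 7),
         (3, 6), (1, 4), (2, 5), (4, 7), (4, 2), (6, 4), (4, 2)]

def median9(values):
    """Median of 9 values via a fixed sorting network (data-independent
    control flow, hence directly vectorizable)."""
    p = list(values)
    for a, b in _NET9:
        _cx(p, a, b)
    return p[4]
```

Because the sequence of comparisons does not depend on the data, the same code can be applied to 16 pixels at once when min/max run elementwise on SIMD registers.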

Figure 5. Register movement to sort by rows, k = 5.

III. THE SIMD UNIT

Processors have traditionally included scalar processing units (Arithmetic Logic Unit, ALU; Floating-Point Unit, FPU). In recent years, new processors implement SIMD units that let them perform vector operations, improving their computational capacity. To operate with this unit, the data must be loaded into its registers, and the data must be aligned in memory [15]. When developing applications that use these units, a few considerations must be kept in mind. One is the limited number of SIMD registers, which restricts how many vectors can be worked on at a time. Another is the amount of data that can be stored in a single register.

III-A. Intel IA-32 and EM64T

Intel implements a SIMD unit with 8 SIMD registers on IA-32 processors and 16 registers on EM64T processors [11]. Each register holds 128 bits, so it can store 16 byte-sized (8-bit) values. The available operations are limited compared with other SIMD implementations. The processor has a cache, so accessing the data in order increases cache hits and thus improves performance [16]. These processors are the de facto standard in personal computers and are therefore widely available.

III-B. The Cell Broadband Engine processor

Designed jointly by Sony Computer Entertainment, Toshiba, and IBM [17], this processor targets high-performance computing. It features a dual-core PowerPC processor plus 8 auxiliary processors called Synergistic Processor Units (SPUs), which can run independent programs and whose operations are SIMD.
Each SPU has 128 SIMD registers, 128 bits each, and no cache. These processors are aimed at high-performance multimedia applications as well as at servers of various kinds, so their availability is limited.

IV. COMPUTATIONAL RESULTS

IV-A. Statistical median

The performance of several sorting algorithms used to obtain the median is compared with that of the vector algorithm. The implementations run on an Intel Pentium Dual-Core T2330 processor at 1.60 GHz with 1024 KB of cache and 1 GB of RAM, under GNU/Linux. For the test, 9 random 8-bit values are generated and their median is computed. This operation is repeated 2000 times, and the results (Table I) show that the vector algorithm needs 4.5 times fewer clock cycles than the fastest scalar algorithm [13]. Note that in this test only one ordering at a time is performed with the vector algorithm, even though the SIMD implementation can sort up to 16 8-bit values per iteration. The maximum values reported in Table I are affected by the other processes running on the system.

Table I. Clock cycles (minimum, average, median, maximum) of several sorting algorithms: Vector, BubbleSort, QuickSort, ShellSort, HeapSort.

IV-B. Two-dimensional median filter on Intel EM64T processors

The vector algorithm is implemented for the T2330 processor described in the previous test. This processor belongs to the EM64T family, with 16 SIMD registers of 128 bits each. The GCC 4.4 compiler is used, and the programming languages are C and assembly. The implementation takes the form of a library, so it can be used from different applications.
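At a high level, the data-parallel strategy can be illustrated with NumPy, where elementwise `np.minimum`/`np.maximum` play the role of the SIMD min/max instructions: the same compare-exchange runs on every pixel position at once. This is only an illustrative sketch (the library described above is written in C and assembly); it applies the classic 19-operation median-of-9 network to nine shifted views of the image.

```python
import numpy as np

# The 19 compare-exchange pairs of the classic median-of-9 network.
_NET9 = [(1, 2), (4, 5), (7, 8), (0, 1), (3, 4), (6, 7),
         (1, 2), (4, 5), (7, 8), (0, 3), (5, 8), (4, 7),
         (3, 6), (1, 4), (2, 5), (4, 7), (4, 2), (6, 4), (4, 2)]

def median3x3(img):
    """3x3 median filter of a 2D array (interior pixels only).

    Each of the nine kernel positions becomes one plane of shape
    (h-2, w-2); elementwise min/max then runs the sorting network
    on all pixels simultaneously, mimicking the SIMD execution."""
    h, w = img.shape
    planes = [img[i:h - 2 + i, j:w - 2 + j].copy()
              for i in range(3) for j in range(3)]
    for a, b in _NET9:
        lo = np.minimum(planes[a], planes[b])
        hi = np.maximum(planes[a], planes[b])
        planes[a], planes[b] = lo, hi
    return planes[4]  # the median plane
```

The control flow never branches on pixel values, which is exactly the property that lets the C/assembly version keep all 16 lanes of a 128-bit register busy.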
The reference algorithm is CTMF [2] (Constant Time Median Filter), which has O(1) complexity. Note that, although the description of the CTMF algorithm makes no explicit mention of SIMD units, the source code of [2] contains SIMD instructions for Intel (MMX and SSE2) and PowerPC (AltiVec) processors. The tests consist of applying the median filter with 3×3 and 5×5 kernels to grayscale images of five sizes, repeating the application of the filter 1000 times in each case. For the 5×5 filter, a buffer is used to store the sorted data from previous iterations, since the SIMD registers of the processor are insufficient to perform all the required operations. Tables II and III show the results obtained: for each image size, the clock cycles needed to compute the median with the vector algorithm and with the reference algorithm are counted, together with the number of clock cycles per pixel in each case.

Table II. Computational results on the EM64T processor, 3×3 kernel: total clock cycles and cycles per pixel for the Vector algorithm and CTMF.

Table III. Computational results on the EM64T processor, 5×5 kernel: total clock cycles and cycles per pixel for the Vector algorithm and CTMF.

On average, 2.02 clock cycles per pixel are used to apply the 3×3 median filter, which is 1.3% of the cycles required by the reference algorithm. For the 5×5 kernel, an average of 8.53 clock cycles per pixel is needed, i.e., 5.6% of those used by the other algorithm. The results obtained by Kolte on a PowerPC processor [1] indicate that his implementation requires, on average, 1.15 clock cycles per pixel for the 3×3 kernel and 6.6 cycles for the 5×5 kernel. The difference arises because a PowerPC processor has 32 SIMD registers [18] and instructions that allow a faster implementation of the algorithm.

IV-C. Two-dimensional median filter on the Cell Broadband Engine processor

The algorithm is implemented in the C programming language, accessing the SIMD operations through intrinsic functions. The GCC compiler [19] is used, with the modifications needed to build programs for the central cores of the processor and for the SPUs. Execution takes place on IBM's Full-System Simulator [20], which simulates the Cell Broadband Engine processor and allows performance measurements of the programs running on the various units of the processor.
It runs a minimal version of the Fedora 9 operating system for PowerPC64. The performance test is similar to the one used for the Intel EM64T processor: the 3×3 and 5×5 median filters are applied to images of different sizes and the clock cycles used are counted. Since the Cell Broadband Engine processor has 128 SIMD registers, the buffer for storing the results of previous iterations is not needed in the 5×5 case.

Table IV. Computational results on the Cell Broadband Engine processor: total clock cycles and cycles per pixel.

The results in Table IV show that the cycles needed per pixel increase markedly with respect to the EM64T implementation. This is mainly due to the implementation approach and the test platform: the Cell version was written in C, while the Intel version was written in assembly, which allows a higher degree of optimization. It is worth noting the ratio between the cycles per pixel needed for the 3×3 and 5×5 kernels: on the Intel EM64T processor, the 5×5 kernel uses 4.22 times more clock cycles than the 3×3 kernel, while on the Cell Broadband Engine the difference is 2.5 times.

V. CONCLUSIONS

It is shown that SIMD units improve the computational performance of algorithms that can exploit data parallelism, as is the case of the two-dimensional median filter. A new angle on the execution-time problem of the median filter is also presented: accelerating the computation with SIMD units by designing an algorithm specifically for them. The impact of the number of registers of the SIMD units can also be appreciated.
When the 5×5 median filter is implemented on Intel EM64T processors with 16 SIMD registers, the cycles needed for the computation increase by more than 4.22 times compared with the 3×3 kernel. This figure drops to 2.5 on the Cell Broadband Engine processor, with 128 SIMD registers. That number of registers would allow the filter to be implemented for larger kernels on the Cell Broadband Engine. Finally, it is concluded that the proposed implementation considerably reduces the execution time of the median filter, for 3×3 and 5×5 kernels, compared with the CTMF algorithm [2]. The proposed implementation of the filter could be used in real-time applications.

REFERENCES
[1] P. Kolte, R. Smith, and W. Su, "A fast median filter using AltiVec," Motorola Inc.
[2] S. Perreault and P. Hébert, "Median filter in constant time," IEEE Trans. on Image Processing, vol. 16, no. 9, Sep.
[3] J. W. Tukey, "Nonlinear (nonsuperimposable) methods for smoothing data," in Conf. Rec. (EASCON).
[4] A. C. Bovik, Handbook of Image and Video Processing. Academic Press, 2000.

[5] T. Huang, G. Yang, and G. Tang, "A fast two-dimensional median filter algorithm," IEEE Trans. Acoust., Speech, Signal Process., vol. 27, no. 2, Feb.
[6] B. Chaudhuri, "An efficient algorithm for running window pel gray level ranking in 2-D images," Pattern Recognition Lett., vol. 11, no. 2.
[7] J. Gil, "Computing 2-D min, median and max filters," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 5, May.
[8] B. Weiss, "Fast median and bilateral filtering," ACM Trans. Graph., vol. 25, no. 3.
[9] M. Flynn, "Some computer organizations and their effectiveness," IEEE Trans. Comput., vol. C-22.
[10] A. S. Glassner, Graphics Gems. AP Professional, 1995, ch. III.4.
[11] Intel 64 and IA-32 Architectures Optimization Reference Manual, Intel Corporation, November.
[12] Cell Broadband Engine Programming Guide, 1st ed., International Business Machines Corporation, Sony Computer Entertainment Inc., Toshiba Corporation, 2009.
[13] D. E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley Professional, May 1998.
[14] B. Parhami, Introduction to Parallel Processing: Algorithms and Architectures. Kluwer Academic Publishers.
[15] L. Null and J. Lobur, The Essentials of Computer Organization and Architecture. Jones and Bartlett Publishers.
[16] M. Kowarschik and C. Weiß, "An overview of cache optimization techniques and cache-aware numerical algorithms," Lecture Notes in Computer Science.
[17] Cell Broadband Engine Architecture, International Business Machines Corporation, Sony Computer Entertainment Inc., Toshiba Corporation, October.
[18] Power ISA Version 2.06, International Business Machines Corporation, January.
[19] R. M. Stallman et al., Using the GNU Compiler Collection. GNU Press.
[20] IBM Full-System Simulator User's Guide, International Business Machines Corporation, May 2009.

Automatic Fabric Defect Detection Based on AM-FM Demodulation

Alvaro E. Ulloa and Paul A. Rodriguez

Abstract — A roll of fabric with defects can suffer a depreciation of 45 to 65% with respect to the original price [1]. Although some commercial solutions exist, automatic fabric defect detection remains an active field of development and research. This article proposes a novel method for automatic fabric defect detection based on two-dimensional AM-FM demodulation [2]. The efficiency of the proposed method (defect detection) is compared with two other methods based on the Fourier Transform [3], [4] and on Local Binary Patterns [5]. It is concluded that the proposed method offers performance competitive with the state of the art.

I. INTRODUCTION

The textile industry is the sector of the economy dedicated to the production of fiber, yarn, fabric, clothing, and related products. One of the fields in which this industry operates is weaving, that is, the conversion of yarn into fabric. Quality control of this product (fabric) is of great importance and economic impact: a roll of fabric with defects can suffer a depreciation of 45 to 65% of the original price [1]. According to [6], some textile companies perform visual inspections of the product after a significant amount of fabric has been produced and stored; as a consequence, only 70% of the defects are detected in a post-manufacturing inspection of this kind [7], even with trained inspectors. It is also common to find companies that use in-line visual inspection (while the fabric is being produced by the loom); nevertheless, operators recognize less than 60% of the defects in a two-meter-wide fabric moving at 0.5 m/s [1].
Both reports confirm what is stated in [8], which describes the limited capacity of human beings in massive identification and classification processes. Although the occurrence of defects cannot be avoided, they can be detected early through visual inspection of the product during manufacturing. This problem has attracted the interest of companies specialized in the subject, as well as a large number of publications on automatic fabric defect detection methods. Integral solutions to the automatic inspection problem for textile industries are currently offered worldwide. Among the most representative are BARCO [9], USTER [10], and EVS [11], which highlight the capabilities of their systems in terms of the width of the inspected fabric and the speed at which it can pass through their sensors. On the other hand, a wide variety of solutions for defect detection based on different concepts has been proposed in the literature, e.g., the Fourier Transform [3], [4], wavelets [6], [12], and Local Binary Patterns [5], among others.

Both authors belong to the Digital Signal and Image Processing Group of the Pontificia Universidad Católica del Perú, Lima, Perú.

This article proposes a novel method for automatic fabric defect detection based on two-dimensional AM-FM demodulation [2, ch. 4.4]. The efficiency of the proposed method, in terms of defect detection rate and computational cost, is compared with two of the methods mentioned above [3], [4], [5]. It is concluded that the proposed method offers performance competitive with the state of the art. The article is organized as follows: Section II details the proposed defect detection method and its application to simulated and real fabrics; Section III describes the methods against which it is compared.
Section IV lists the computational results, and Section V presents the conclusions.

II. DESIGN OF THE DETECTION METHOD

II-A. AM-FM demodulation

The AM-FM (Amplitude Modulated - Frequency Modulated) representation of images models non-stationary elements present in the image in terms of amplitude and phase functions via

  I(ξ) = Σₙ₌₁ᴹ aₙ(ξ) · cos(φₙ(ξ)),   (1)

where I(ξ) : R² → R is the analyzed image, ξ = (ξ₁, ξ₂) ∈ R², M ∈ N, aₙ : R² → [0, ∞), and φₙ : R² → R. The interpretation of (1) suggests that the M AM-FM components aₙ(ξ)·cos(φₙ(ξ)) capture the information needed to model the non-stationary structures in the image: the FM components, cos(φₙ(ξ)), capture the rapid changes in the intensity values of the image, while the AM components, aₙ(ξ), provide information about the energy concentrated in a given region of the image. Given the image I(ξ), computing the AM-FM components involves estimating the instantaneous amplitude function, aₙ(ξ); the instantaneous phase function, φₙ(ξ); and the instantaneous frequency function, ωₙ(ξ) = ∇φₙ(ξ) = (∂φₙ(ξ)/∂ξ₁, ∂φₙ(ξ)/∂ξ₂). In [13] it is described how, starting from the functions aₙ(ξ), φₙ(ξ), and ωₙ(ξ), dominant component analysis (DCA) can be carried out, which consists of filtering the input image through a filter bank and, for each pixel, selecting the estimates coming from the channel with the maximum amplitude (energy); it is important to note that this choice is made adaptively. Figure 1 shows the scheme of dominant component analysis, from which it follows that the input image is modeled as

  I(ξ) = a(ξ) · cos(φ(ξ)),   (2)

where a(ξ) = max over n ∈ [1, M] of aₙ(ξ), κ(ξ) = argmax over n ∈ [1, M] of aₙ(ξ), and φ(ξ) = φ_κ(ξ)(ξ).

Figure 1: AM-FM demodulation for images: dominant component analysis (DCA).

The filter bank used in the DCA analysis is composed of 4 channels resulting from the tensor product of two one-dimensional FIR filters. The normalized frequency responses of the one-dimensional filters have support on [0, 0.25] and [0.25, 0.5], respectively (see [14]). The FIR filters were designed with 51 coefficients via the optimal equiripple method.

II-B. Defect detection procedure

The texture of a given region of an image can be characterized by the histogram of the instantaneous frequency in that region [15]. In particular, fabric sections with defects have a texture different from defect-free ones, and this difference is evident in the histogram of the instantaneous frequencies. This observation is the basis of the proposed method. An essential part of the proposed method is the determination of a threshold value, which serves to classify a region as defect-free or defective. There are several metrics to quantify the distance between histograms (see [16] for a detailed study); in particular, the normalized inner product ([16, eq. (26)]) between the cumulative histograms is used. The threshold corresponds to the largest computed distance (see Figure 2b).
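The per-pixel selection step of DCA reduces to an argmax over channel amplitudes. The NumPy sketch below is schematic: it assumes the filter bank has already been applied and its complex-valued channel responses are given as a stack, which is not how the paper's pipeline is packaged.

```python
import numpy as np

def dca(channel_responses):
    """Dominant component analysis over a stack of complex channel
    responses of shape (M, H, W): per pixel, keep the channel with
    the largest amplitude and return its amplitude, phase, and index."""
    amp = np.abs(channel_responses)      # a_n(xi) for every channel n
    kappa = np.argmax(amp, axis=0)       # dominant channel index per pixel
    a = np.take_along_axis(amp, kappa[None], axis=0)[0]
    phase = np.take_along_axis(np.angle(channel_responses),
                               kappa[None], axis=0)[0]
    return a, phase, kappa
```

The adaptivity noted in the text is visible here: `kappa` varies pixel by pixel, so different image regions are demodulated by different channels.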
The training and defect detection stages based on AM-FM demodulation are detailed next.

II-B1. Training stage: First, ωₙ(ξ) for n = 1, 2 (the instantaneous frequency along the vertical and horizontal axes) is extracted from a defect-free fabric. The demodulation result is then divided into small windows, whose size in pixels is related to the spatial resolution, and the histogram of the instantaneous frequencies, vertical and horizontal, is computed in each window. The average of all the histograms, horizontal and vertical (separately), is then obtained, and these results are taken as the two ideal histograms, i.e., those that define a defect-free fabric section (see Figure 2a).

Figure 2: (a) First phase of the training of the designed method; (b) second phase of the training.

II-B2. Defect detection stage: In this stage the procedure described in II-B1 is carried out, with the difference that the distances are compared with the corresponding threshold, which determines whether the analyzed window corresponds to a defect. It is worth mentioning that it does not matter whether the defect is horizontal or vertical; in either case, the whole window is marked as defective (see Figure 3).

Figure 3: Block diagram of the defect detection procedure of the AM-FM-based method.
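The threshold selection of the second training phase can be sketched as follows. The exact form of the distance in [16, eq. (26)] is assumed here to be one minus the normalized inner product of the cumulative histograms, so 0 means equal (up to scale); the function names are illustrative, not the paper's.

```python
import numpy as np

def cum_distance(h1, h2):
    """Distance between two histograms via the normalized inner
    product of their cumulative versions (0 = proportional)."""
    c1, c2 = np.cumsum(h1), np.cumsum(h2)
    return 1.0 - np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))

def train_threshold(window_hists, ideal_hist):
    """Threshold = largest distance between any defect-free training
    window and the ideal (average) histogram."""
    return max(cum_distance(h, ideal_hist) for h in window_hists)

def is_defective(window_hist, ideal_hist, threshold):
    """Detection stage: a window is defective when its distance to
    the ideal histogram exceeds the trained threshold."""
    return cum_distance(window_hist, ideal_hist) > threshold
```

By construction no defect-free training window exceeds the threshold, so false alarms on the training fabric are zero; the margin for unseen windows depends on how representative the training image is.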

Figure 4: Defects detected with the designed AM-FM-based method: (a) missing horizontal thread, (b) missing vertical thread, (c) foreign object, and (d) stain.

Figure 5: (a) Fabric with a defect, 782x521 pixels, and (b) detection result with the designed AM-FM-based method.

II-C. Validation

II-C1. Simulated fabrics: To create the simulated fabrics described in [4], a pattern image is repeated until a larger test image is generated. For the simulated fabrics, a detection window with an overlap of 20 pixels is used, yielding the results shown in Figure 4. The windows detected as defective are marked with white squares on the original image.

II-C2. Real fabrics: Figure 5a shows the image of a real fabric taken with a professional digital camera, which stores images in CR2 format (decoded via DCRAW [17] and subsampled by a factor of 5, after verifying that more than 98% of the energy is retained). The fabric is jute fiber, and its defect was introduced manually. The detection obtained with the designed method is shown in Figure 5b; an analysis window with an overlap of 15 pixels was used.

III. OTHER IMPLEMENTED METHODS

III-A. Method based on the Fourier Transform

The first method, based on the analysis of the Fourier transform of the image, corresponds to the one proposed in [3] and modified in [4], where properties of the two-dimensional Fourier transform are exploited to establish its feasibility as a texture analysis method for fabrics. However, the training stage proposed in [4] is based on random samples of the fabric and does not completely describe the characteristics of a defect-free fabric.
Because of this deficiency, the authors propose dividing the image into windows to extract their features¹, then computing the average of all the feature vectors, which establishes the ideal vector, i.e., the feature of a fabric without imperfections (see Figure 6). To establish a uniformity threshold (see Figure 7), the ideal vector is compared with the feature vector of every window of a defect-free fabric via the normalized inner product ([16, eq. (26)]). The largest

¹ Based on the fundamental frequency found for the vertical and horizontal orientations of the analyzed window. See [3] for details.

distance defines the threshold.

Figure 6: Block diagram of the first training stage of the Fourier-Transform-based method.

Figure 7: Block diagram of the second training stage of the Fourier-Transform-based method.

III-B. Method based on Local Binary Patterns

The second method, based on the concept of Local Binary Patterns (LBP) [18], corresponds to the one proposed in [5]. It performs a statistical analysis of the neighborhood of each pixel, generating histograms that describe the texture from the behavior of the pixels surrounding a central one. The distance between histograms is computed via the Kullback-Leibler (KL) divergence. This comparison between histograms is also called the G statistic in [19], which is based on the concept of entropy (see [16, eq. (48)]).

Figure 8: Block diagram of the training procedure of the Local-Binary-Patterns-based method proposed in [5].

Figure 9: Block diagram of the training procedure of the modified Local-Binary-Patterns-based method.

In [5] a training stage is proposed as shown in Figure 8, where the LBP operator is applied to each window sectioned from the fabric image. In contrast, the present authors propose applying the LBP operator to the whole image and performing the division into windows afterwards, in order to include the statistics of the borders (see Figure 9). In addition, an improvement in execution time under MATLAB was observed experimentally.

IV.
RESULTS

To measure the efficiency of all the methods, three parameters are evaluated: the order of the number of floating-point arithmetic operations, the execution time of each method implemented in MATLAB, and the detection ratio (proposed in [5]), which measures the effectiveness of the method:

  η = 100 · (N_cc + N_dd) / N_total,   (3)

where N_cc is the number of windows detected as defective that really contain a defect (true positives), N_dd is the number of windows detected as non-defective that contain no defect (true negatives), and N_total is the total number of windows evaluated. The test images were taken from a 5-meter roll of jute fabric, in which defects were introduced manually. In total, 38 images of different sections of the fabric with different defects were taken. In order to evaluate the effectiveness measure (3) correctly, the 38 images were segmented manually, generating another 38 binary images that indicate the exact location of the defects. All the methods (Sections II, III-A, and III-B) were implemented in MATLAB on a PC with an Intel(R) Core(TM)2 Duo E8400 CPU at 3 GHz, 2 GB of RAM, 32 KB of L1 cache, and 6 MB of L2 cache, under the Linux operating system (kernel 2.6). Table I shows the execution time of each method for synthetic images of 480x640 pixels. Table II summarizes the order of the mathematical complexity of the proposed method and of those presented in III-A and III-B for images of M×N pixels.

Table I: Execution time for a 480x640 image. Method based on: Fourier Transform, 0.1986 seconds; Local Binary Patterns, 1.6140 seconds; AM-FM demodulation, 2.2353 seconds.

Figure 10 shows the computational scalability of the proposed method and of those presented in III-A and III-B.
Donde, se observa que todas las implementaciones computacionales tienen un uso adecuado de la memoria caché [20].

39 Tabla II: Orden del número de operaciones aritméticas en punto flotante., para imágenes de M N píxeles con ventanas de análisis de F C (filas columnas). P y L representan el número de canales y orden del filtro paso-banda para la demodulación AM-FM (ver Figura 1). Método basado en: Orden Transformada de Fourier O (M N (log 2 (F ) + log 2 (C))) Patrones Locales Binarios O (M N) Demodulación AM-FM O (M N L P ) Figura 10: Escalabilidad computacional del método propuesto y de los presentados en III-A y III-B. Se muestra el ratio entre el tiempo de ejecución y el orden de la complejidad matemática (ver Tabla II) de cada método para imágenes de [3906, 2602]/(8 k) píxeles, donde k = 2, 3, 4, 5, 6 Por último, de las 38 imágenes se seleccionó a la primera como no defectuosa y con esta imagen fueron entrenados todos los métodos. Luego, se procedió a calcular el ratio de detección (3) que se muestra a modo de histograma para imágenes de [3906, 2602]/5 píxeles (Ver Figura 11), donde, se demuestra que el método propuesto tiene un alto ratio de eficiencia y es comparable con los métodos descritos en III-A y III-B. Figura 11: Se muestra el histograma de la eficacia de detección de defectos de 38 imágenes en formato RAW submuestreadas con factor de 5 para el método basado en la Transformada de Fourier (FFT), Patrones Locales Binarios (PBL) y el propuesto (AM-FM). V. CONCLUSIONES El método propuesto, basado en la demodulación AM-FM, muestra tener un ratio de detección de defectos alrededor del 95 %, muy similar a los otros métodos evaluados, sin embargo, su tiempo de ejecución es el mayor. Actualmente los autores están trabajando en la selección de un banco de filtros (ver Figura 1) que permita reducir el costo computacional del método propuesto, sin que ello tenga un impacto negativo en su ratio de detección. 
Finalmente, se debe recalcar que dados los tiempos de ejecución (Tabla I), sería posible implementar un sistema en tiempo real (incluso con código MATLAB) con una velocidad máxima de la tela bajo la cámara de 3.9 m/s bajo el método basado en Transformada de Fourier, 0.43 m/s bajo el método basado en Patrones Locales Binarios y 0.31 m/s con el nuevo método propuesto. Si dicho método propuesto es implementado en un sistema especializado, se espera una mejora significativa (factor de 10) en el tiempo de ejecución. REFERENCIAS [1] C.-S. Cho, B.-M. Chung, y M.-J. Park, Development of real-time vision-based fabric inspection system, IEEE Transactions on Industrial Electronics, vol. 52, pp , [2] A. Bovik, Handbook of Image and Video Processing. Academic Press, [3] C. ho Chan y G. Pang, Fabric defect detection by fourier analysis, IEEE Trans. on Industry Applications, vol. 36, no. 5, pp , [4] M. Tunák y A. Linka, Simulation and recognition of common fabric defects, in 13. International Conference on Structure and Structural Mechanics of Textiles (STRUTEX 2006), Liberec, , [5] F. Tajeripour, E. Kabir, y A. Sheikhi, Fabric defect detection using modified local binary patterns, EURASIP Journal on Advances in Signal Processing, vol. 2008, p. 12, [6] S. Kim, H. Bae, S.-P. Cheon, y K.-B. Kim, On-line fabric-defects detection based on wavelet analysis, ICCSA, vol. 3483, pp , [7] M. Tunák y A. Linka, Directional defects in fabrics, Reseach Journal of Textile and Apparel, vol. Vol 12 No. 2, pp , [8] R. Sokal, Classification: Purposes, principles, progress, prospects, Science, vol. 185, pp , [9] BARCO, Cyclops automatic on loom inspection, BARCO, Reporte Técnico, [10] USTER, Fabriscan on-loom the on-loom fabric inspection system, USTER, Reporte Técnico, [11] EVS, Iq-tex, Elbit Vision Systems, Reporte Técnico, [Online]. Available: [12] X. Yang, G. Pang, y N. 
Yung, Robust fabric defect detection and classification using multiple adaptive wavelets, IEE Proceedings - Vision, Image & Signal Process, vol. 152, pp , [13] J. P. Havlicek, J. Tang, S. T. Acton, R. Antonucci, y F. N. Quandji, Modulation domain texture retrieval for cbir in digital libraries, in in Proc. 37th IEEE Asilomar Conf Signals, Syst., Comput., [14] P. Rodriguez, Fast and accurate am-fm demodulation of digital images with applications, Tesis Ph.D., The University of New Mexico, [15] C. Christodoulou, C. S. Pattichis, V. Murray, M. S. Pattichis, y A. Nicolaides, Am-fm representations for the characterization of carotid plaque ultrasound images, in 4th European Conference of the International Federation for Medical and Biological Engineering. Springer Berlin Heidelberg, 2008, pp [16] S.-H. Cha, Comprehensive survey on distance/similarity measures between probability density functions, in International Journal of Mathematical Models and Methods in Applied Sciences, vol. 1, no. 1, [17] D. J. Coffin, Decoding digital photos in linux, [18] T. Ojala, M. Pietika, y T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp , [19] R. Sokal y F. Rohlf, Biometry. W.H. Freeman, [20] M. Kowarschik y C. Weiß, An Overview of Cache Optimization Techniques and Cache-Aware Numerical Algorithms, ser. Lecture Notes in Computer Science. Springer, 2002, pp

Advances in Telematics

Advances in Telematics Advances in Telematics Research in Computing Science Series Editorial Board Editors-in-Chief: Grigori Sidorov (Mexico) Gerhard Ritter (USA) Jean Serra (France) Ulises Cortés (Spain) Associate Editors:

Más detalles

Analysis of SoftToken: a Coordinated Medium Access Control (MAC) for IEEE 802.11 based Wireless Mesh Networks

Analysis of SoftToken: a Coordinated Medium Access Control (MAC) for IEEE 802.11 based Wireless Mesh Networks UNIVERSIDAD CARLOS III DE MADRID Escuela Politécnica Superior - Leganés INGENIERÍA DE TELECOMUNICACIÓN PROYECTO FIN DE CARRERA Analysis of SoftToken: a Coordinated Medium Access Control (MAC) for IEEE

Más detalles

XI Jornadas de Tiempo Real

XI Jornadas de Tiempo Real XI Jornadas de Tiempo Real Palma de Mallorca, 7 y 8 de febrero de 2008 XI Jornadas de Tiempo Real Palma de Mallorca, 7 y 8 de febrero de 2008 Editores: Albert Llemosí Julián Proenza Comité Organizador:

Más detalles



Más detalles

Trabajo Fin de Máster

Trabajo Fin de Máster Trabajo Fin de Máster Integración dinámica de entornos de computación heterogéneos para la ejecución de workflows científicos Autor Sergio Hernández de Mesa Director Pedro Álvarez Pérez-Aradros Escuela

Más detalles


VIDEO STREAMING MODELIG OVER OPTICAL NETWORKS VIDEO STREAMING MODELIG OVER OPTICAL NETWORKS Author: Marcos Esgueva Martínez Head of department: Algumantas Kajackas Supervisor: Dr. Sarunas Paulikas Supervisor UC3M: Carmen Vázquez Academic Coordinator

Más detalles

Editorial S. Schnitzer E. Torres Orue S. Pérez Lovelle G. Vargas-Solar N. Das H. Ordoñez

Editorial S. Schnitzer E. Torres Orue S. Pérez Lovelle G. Vargas-Solar N. Das H. Ordoñez Editorial T HIS issue of Polibits includes ten papers by authors from nine different countries: Brazil, Colombia, Cuba, France, Germany, India, Mexico, Portugal, and Spain. The majority of the papers included

Más detalles



Más detalles

A reference book regarding the main legal aspects related to the deployment and use of IPv6

A reference book regarding the main legal aspects related to the deployment and use of IPv6 A reference book regarding the main legal aspects related to the deployment and use of IPv6 Legal Aspects of the New Internet Protocol IPv6 Aspectos Legales del Nuevo Protocolo de Internet Manual de de

Más detalles

WEAP and LEAP Training Modules. Table of Contents:

WEAP and LEAP Training Modules. Table of Contents: Pan-American Advanced Studies Institute (PASI) 2013: Training Institute on Adaptive Water-Energy Management in the Arid Americas La Serena, Chile 23 June - 3 July 2013 WEAP and LEAP Training Modules Table

Más detalles

Optimizing Multi-Core Algorithms for Pattern Search

Optimizing Multi-Core Algorithms for Pattern Search Optimizing Multi-Core Algorithms for Pattern Search Veronica Gil-Costa 1,2, Cesar Ochoa 1 and Marcela Printista 1,2 1 LIDIC, Universidad Nacional de San Luis, Ejercito de los Andes 950, San Luis, Argentina

Más detalles

Multicore and GPU Programming

Multicore and GPU Programming Multicore and GPU Programming EDITORS Miguel A. Vega-Rodríguez Manuel I. Capel-Tuñón Antonio J. Tomeu-Hardasmal Alberto G. Salguero-Hidalgo 2015 Title: Multicore and GPU Programming Editors: Miguel A.

Más detalles

UNIVERSIDAD COMPLUTENSE DE MADRID FACULTAD DE INFORMATICA Departamento de Arquitectura de Computadoras y Automática


Más detalles

Actas del WSRFAI 2013

Actas del WSRFAI 2013 Actas del WSRFAI 2013 Luis Baumela (Editor), Universidad Politécnica de Madrid September 2013 ISBN 978-84-695-8332-6 ii Invited Speakers: Organization Christian Theobalt Max Planck Institut / Saarland

Más detalles



Más detalles



Más detalles



Más detalles



Más detalles



Más detalles

Ingeniería de Telecomunicación

Ingeniería de Telecomunicación Universidad Autónoma de Madrid Escuela politécnica superior Proyecto fin de carrera A NON-INTRUSIVE APPLIANCE LOAD MONITORING SYSTEM FOR IDENTIFYING KITCHEN ACTIVITIES Ingeniería de Telecomunicación María

Más detalles

Convergence Time Reduction in the BGP4 Routing Protocol Using the Ghost-Flushing Technique and Other Proposals. Gonzalo P.

Convergence Time Reduction in the BGP4 Routing Protocol Using the Ghost-Flushing Technique and Other Proposals. Gonzalo P. Convergence Time Reduction in the BGP4 Routing Protocol Using the Ghost-Flushing Technique and Other Proposals Gonzalo P. Rodrigo Álvarez Luleå, July, 2004 Soy un ciudadano del mundo entero. I am a citizen

Más detalles

Methodological guide Emerging Sustainable Cities Initiative

Methodological guide Emerging Sustainable Cities Initiative Methodological guide Emerging Sustainable Cities Initiative First Edition 2012 Methodological guide Emerging Sustainable Cities Initiative First edition June 2012 Inter-American Development Bank Inter-American

Más detalles

DELIVERABLE 6.9. Contract Descriptions at Semantic Level for Dynamic Service Composition

DELIVERABLE 6.9. Contract Descriptions at Semantic Level for Dynamic Service Composition OPAALS PROJECT Contract n IST-034824 WP6: Socio-Economic Constructivism & Language DELIVERABLE 6.9 Contract Descriptions at Semantic Level for Dynamic Service Composition Project funded by the European

Más detalles

Estimación de prestaciones para. Exploración de Diseño en Sistemas. Embebidos Complejos HW/SW

Estimación de prestaciones para. Exploración de Diseño en Sistemas. Embebidos Complejos HW/SW Grupo de Ingeniería Microelectrónica Dpto. de Tecnología Electrónica, Ing. de Sistemas y Automática, E.T.S.I.I.T. Memoria de tesis para la obtención del título de doctor Estimación de prestaciones para

Más detalles

UTRECHT UNIVERSITY. Master Thesis Business Informatics. Gaining insight from professional online social networks to support business decisions

UTRECHT UNIVERSITY. Master Thesis Business Informatics. Gaining insight from professional online social networks to support business decisions i UTRECHT UNIVERSITY Master Thesis Business Informatics Gaining insight from professional online social networks to support business decisions August 29, 2014 Version V1 Author: A. Melendez

Más detalles

Deployment of Computational Electromagnetics Applications On Large Scale Architectures

Deployment of Computational Electromagnetics Applications On Large Scale Architectures ORALES Deployment of Computational Electromagnetics Applications On Large Scale Architectures Carlos-Jaime BARRIOS-HERNÁNDEZ (1,2), Fadi KHALIL (3), Yves DENNEULIN (2), Hervé AUBERT (3), Fabio COCCETTI

Más detalles

WP6: Socio-Economic Constructivism & Language MILESTONE 6.2. December 1 st 2007 - April 30 th 2008

WP6: Socio-Economic Constructivism & Language MILESTONE 6.2. December 1 st 2007 - April 30 th 2008 OPAALS PROJECT Contract n IST-034824 WP6: Socio-Economic Constructivism & Language MILESTONE 6.2 December 1 st 2007 - April 30 th 2008 Contracting and Negotiation Processes in the IST Sector (Region of

Más detalles



Más detalles

A Multivariate Analysis Approach to Forecasts Combination. Application to Foreign Exchange (FX) Markets

A Multivariate Analysis Approach to Forecasts Combination. Application to Foreign Exchange (FX) Markets Revista Colombiana de Estadística Junio 2011, volumen 34, no. 2, pp. 347 a 375 A Multivariate Analysis Approach to Forecasts Combination. Application to Foreign Exchange (FX) Markets Una aproximación a

Más detalles

Development and implementation of a risk model and contingency estimation for the Panama Canal Expansion Program. Final Report CONFIDENCIAL

Development and implementation of a risk model and contingency estimation for the Panama Canal Expansion Program. Final Report CONFIDENCIAL Development and implementation of a risk model and contingency estimation for the Panama Canal Expansion Program Final Report CONFIDENCIAL Prepared by: Angie Hanily (ODP), Patricia Alvarado (FMXR) and

Más detalles