application. In the context of multimedia applications, the CPU scheduler determines the quality of service rendered. The more CPU cycles scheduled to a process, the faster data can be produced, which results in better-quality, more reliable output. Many researchers have tried to apply fuzzy logic to process scheduling. A fuzzy-based CPU scheduling algorithm is proposed by Shata J. Kadhim et al. Round-robin scheduling using a neuro-fuzzy approach is proposed by Jeegar A. Trivedi et al. Soft real-time fuzzy task scheduling for multiprocessor systems is proposed by Mahdi Hamzeh et al. Efficient soft real-time processing is proposed by C. Lin et al. An Improved Fuzzy-based CPU Scheduling (IFCS) algorithm for real-time systems is proposed by H.S. Behera.
In the cloud computing environment, a data-center system is assumed to collect and store information. Collectors in each worker node of the data center are responsible for gathering the static and dynamic information of resources and tasks. Key static information is collected, such as physical memory space, virtual memory space, and disk storage space. Dynamic information, such as the node's load average, the number of running tasks, the number of threads of the currently running tasks, the status of those tasks, and CPU usage, is captured periodically or according to a polling (or other) strategy, and is sent to the Data Receiver of the master node through the communication component. These data are updated frequently and in real time.
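The split between once-off static information and periodically sampled dynamic information described above can be sketched as follows. This is a minimal illustration only: the field names, units, and the stub sampler are assumptions, not the system's actual schema.

```python
import time
from dataclasses import dataclass, field

@dataclass
class StaticInfo:
    # Collected once per worker node (hypothetical field names/units).
    physical_memory_mb: int
    virtual_memory_mb: int
    disk_mb: int

@dataclass
class DynamicInfo:
    # Refreshed periodically or on each poll.
    load_average: float
    running_tasks: int
    cpu_usage: float
    timestamp: float = field(default_factory=time.time)

def poll(static: StaticInfo, sample_dynamic) -> dict:
    """Package one report to send to the master node's Data Receiver."""
    return {"static": static, "dynamic": sample_dynamic()}

# Example: a stub sampler standing in for real OS queries.
node = StaticInfo(physical_memory_mb=16384, virtual_memory_mb=32768, disk_mb=512000)
report = poll(node, lambda: DynamicInfo(load_average=0.42, running_tasks=7, cpu_usage=23.5))
```

In a real deployment the lambda would be replaced by actual OS queries, and the report would be serialized and sent over the communication component.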
The scheduler is the core part of an operating system: it orders the assignment of the CPU and other resources to tasks in a multitasking environment. The function of the scheduling algorithm is to determine, for a given task set, a sequence of task step executions (a schedule). Tasks can be classified according to their arrival style as periodic or aperiodic. A task set can be composed of independent tasks, if their executions are not synchronized, or of dependent tasks, if their executions are synchronized. If a task set can be scheduled to meet given pre-conditions, the task set is termed feasible. A typical pre-condition for hard real-time periodic processes is that they must always meet their deadlines. An optimal scheduler is able to produce a feasible schedule for every feasible task set conforming to a given pre-condition. For a particular task set, an optimal schedule is the best possible schedule according to some pre-defined criteria.
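For the pre-condition that hard real-time periodic tasks always meet their deadlines, one classical check (not specific to this text, but standard) is the utilization bound for EDF on a single processor with deadlines equal to periods: the task set is feasible iff the total utilization does not exceed 1. A minimal sketch:

```python
def edf_feasible(tasks):
    """Utilization-based feasibility test for independent periodic tasks
    under EDF on one processor, with each deadline equal to the period.
    `tasks` is a list of (computation_time, period) pairs; feasible iff
    sum(C_i / T_i) <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

# (C=1, T=4) and (C=2, T=8): utilization 0.25 + 0.25 = 0.5  -> feasible
# (C=3, T=4) and (C=2, T=3): utilization 0.75 + 0.67 > 1    -> infeasible
```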
Abstract: High-performance clusters are being configured specifically to give data centers the extreme performance and processing power they require. When data is accessed across clusters, data latency has a significant impact on performance. The literature indicates that memory and I/O, rather than processing power, have become the new bottleneck in achieving an efficient load balance at high performance for cluster computer systems. Initial job placement and load balancing are the key aspects affecting performance. The proposed technique combines data access patterns, memory and CPU utilization, and locality of memory into a single load metric for load balancing across the cluster. A scheduling algorithm based on this metric is proposed to dynamically balance the load in the cluster. Initial placement of a job in the cluster considers data access patterns, while the load-balancing metric comprises CPU and memory utilization, including locality of memory. Experimental results showed considerable performance improvement with the implementation of the concept, especially when the cost of data access from other clusters is high and proportional to the amount of data.
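A combined load metric of the kind the abstract describes could be sketched as a weighted sum of CPU utilization, memory utilization, and a memory-locality term. The weights and the locality measure below are illustrative assumptions, not taken from the paper:

```python
def node_load(cpu_util, mem_util, remote_access_fraction,
              w_cpu=0.4, w_mem=0.4, w_loc=0.2):
    """Combine CPU utilization, memory utilization and memory locality into
    one load score in [0, 1]. The weights are hypothetical, and
    remote_access_fraction (share of data accesses served from other
    clusters) stands in for the paper's locality-of-memory component."""
    return w_cpu * cpu_util + w_mem * mem_util + w_loc * remote_access_fraction

def least_loaded(nodes):
    """Candidate for placing the next job: the node with the lowest score."""
    return min(nodes, key=lambda n: node_load(*n[1]))

nodes = [("node-a", (0.9, 0.8, 0.1)),   # busy but local data
         ("node-b", (0.3, 0.5, 0.6))]   # idle but many remote accesses
```

Under these weights, node-b scores 0.44 against node-a's 0.70, so the idle node wins despite its poorer locality; changing w_loc shifts that trade-off.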
As the railway optimization problem is complicated and is usually formulated as a mixed-integer linear programming problem, how to find solutions to it has been widely discussed in past research. Caprara (2015) classified railway planning problems into timetabling and assignment problems, modeled them as mixed-integer linear programs, and discussed solution methods and modelling issues. Most of the previous work is concerned with linear programming and other optimization approaches using heuristics and computational intelligence methods. For example, He et al. (2000) developed a fuzzy dispatching model and a genetic algorithm to assist coordination among multi-objective decisions in rail yard dispatching. Vromans and Kroon (2004) described a stochastic timetable optimization model, providing a linear programming model with minimal average delay under certain disruptions. Huisman et al. (2005) gave an overview of operations research models and techniques used in passenger railway transportation, dividing the planning problems into strategic, tactical and operational phases. They pointed out that heuristic approaches are required for short-term railway scheduling and for real-time control of passenger railways. Niu (2011) formulated a nonlinear programming model for the skip-station scheduling problem on a congested transit line. Schindl and Zufferey (2015) considered a refueling problem in a railway network and decomposed it into two optimization levels. They proposed a learning tabu search method to solve this problem, and their results showed good performance.
It is necessary to compare the performance of the proposed slack distribution technique, referred to as modified FCS, against existing slack distribution techniques. For our study, we considered the most commonly used slack management scheme, greedy slack management, with the other components of the algorithm remaining the same; this algorithm is denoted as FCS. The performance metric used for comparison is the percentage of power consumption. For each task set, the number of processors is kept constant and the energy consumption for a minimum of ten DAGs is noted; each point in the graphs below is the average value over those DAGs. This procedure was repeated while varying the number of processors between 2 and 10, and comparisons were made between the existing and proposed algorithms. A few sample results of those comparisons are shown in the graphs below. The results show that the proposed algorithm consumed less power than the existing one, making it more energy efficient.
The objective of the proposed work is to use an optimal scheduling algorithm for real-time applications. A grid is considered to be an infrastructure that bonds and unifies globally remote and diverse resources in order to provide computing support for a wide range of applications. Real-time applications in industrial technological infrastructures such as telecommunication systems, factories, defense systems, aircraft and space stations place relatively rigid requirements on performance. Aircraft scheduling is a prime example of a real-time application. The main focus of this work is to check the time taken for turn-around activities, which comprise taxi-in, baggage loading/unloading, deboarding, water and fuel servicing, cleaning, catering, boarding, de-icing and take-off, thereby yielding the lowest flight delays and shortest waiting times. The optimal scheduling algorithm is used for aircraft take-offs. Penalties are associated with proper scheduling but delayed turn-around activities, with improper scheduling, and with early/late take-offs.
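The turn-around check described above can be illustrated by summing activity durations against the scheduled slot. All durations and the single overlap-saving term below are hypothetical placeholders, not figures from the work:

```python
# Hypothetical durations in minutes for each turn-around activity.
TURNAROUND = {
    "taxi_in": 5, "deboarding": 10, "unload_baggage": 15, "cleaning": 12,
    "catering": 10, "water_fueling": 20, "load_baggage": 15,
    "boarding": 18, "de_icing": 8, "take_off": 5,
}

def takeoff_delay(scheduled_slot_min, overlap_saving_min=0):
    """Total turn-around time (activities that run in parallel are modeled
    crudely as one saving term) minus the scheduled slot; a positive result
    is the delay on which a penalty would be charged."""
    total = sum(TURNAROUND.values()) - overlap_saving_min
    return max(0, total - scheduled_slot_min)
```

With these numbers the serial total is 118 minutes, so a 120-minute slot incurs no delay while a 90-minute slot incurs a 28-minute delay.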
The Round Robin (RR) CPU scheduling algorithm, referred to hereafter as Standard RR, is commonly used in time-sharing and real-time operating systems because it keeps response time low and gives each process a fair share of CPU time. Despite these advantages, it is well known that the Standard RR algorithm suffers from several disadvantages: low throughput, high turnaround time, high waiting time, and a high number of context switches. Other researchers have proposed improved RR algorithms to minimize these shortcomings. These algorithms have been compared to the Standard RR algorithm to show that they produce better results, and occasionally they are compared to one or two other improved RR algorithms. Rarely have larger numbers of improved RR algorithms been compared at the same time to see how each compares to the others.
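The waiting-time and turnaround-time metrics on which these comparisons rest can be computed with a direct simulation of Standard RR. The sketch below assumes, for simplicity, that all processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Standard RR for processes that all arrive at time 0.
    `bursts` holds each process's CPU burst time.
    Returns (average_waiting_time, average_turnaround_time)."""
    ready = deque((pid, burst) for pid, burst in enumerate(bursts))
    time, completion = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            ready.append((pid, remaining - run))  # preempted: back of the queue
        else:
            completion[pid] = time
    n = len(bursts)
    turnaround = [completion[p] for p in range(n)]      # arrival time is 0
    waiting = [turnaround[p] - bursts[p] for p in range(n)]
    return sum(waiting) / n, sum(turnaround) / n

# bursts 5, 3, 1 with quantum 2 -> avg waiting 13/3, avg turnaround 22/3
```

Extending this with arrival times and a context-switch cost would reproduce the other metrics (throughput, number of context switches) used in such comparisons.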
Automatic document processing, which includes the processing of cheques, tax forms, ballot papers, examination answer sheets, newspaper subscription payment forms, postal mail, etc., has gained momentum over the last two decades in various sectors of industry. As these documents are processed in huge quantities daily, automation brings enormous advantages. Sectors such as banking, utility companies, postal services and newspaper companies use automatic form processing to minimize processing cost, increase efficiency, reduce processing time and minimize manual intervention. In this paper, a system is presented that automates the detection of handwritten changes in the address blocks of reply forms, where a reply form refers to a newspaper subscription or utility billing form.
Nature has inspired several evolutionary optimisation algorithms suitable for the global optimisation of even non-linear, high-dimensional, multimodal and discontinuous problems. The original genetic algorithm (GA) was developed by Holland (Holland, 1992) and was based on the process of evolution of biological organisms. More recently, approaches like genetic programming (GP) and the bacterial evolutionary algorithm (BEA) have presented alternatives to the former algorithm. GP optimisation uses the same operators as the GA, though it requires an expression tree for gene representation as a combination of functions. The BEA is a simpler algorithm whose operations were inspired by the phenomenon of microbial evolution. The current paper focuses on a comparison between the applicability of GP for BNN design and of the BEA for the optimisation of the fuzzy rule base.
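The operators shared by the GA and GP (selection, crossover, mutation) can be demonstrated on the textbook one-max problem, maximizing the number of 1-bits in a string. This is a generic illustration of the operator loop, not the paper's GP or BEA, and all parameter values are arbitrary:

```python
import random

def genetic_algorithm(bits=20, pop_size=30, generations=40, seed=1):
    """Minimal GA on one-max: selection, one-point crossover, bit-flip
    mutation, with elitism so the best fitness never decreases."""
    rng = random.Random(seed)
    fitness = sum                      # fitness = number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]             # elitism: carry the two best over
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)   # truncation selection (top 10)
            cut = rng.randrange(1, bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:           # mutation: flip a random bit
                child[rng.randrange(bits)] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(fitness(ind) for ind in pop)
```

GP differs mainly in the genome: an expression tree of functions rather than a fixed-length bit string, with crossover swapping subtrees.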
Abstract—Volunteer grid computing comprises volunteer resources which are unpredictable in nature, and as such the scheduling of jobs among these resources can be very uncertain. It is also difficult to ensure the successful completion of submitted jobs on volunteer resources, as these resources may opt to withdraw from the grid system at any time, or there might be a resource failure, which requires job reassignment. However, careful consideration of future jobs can make job scheduling on volunteer resources more reliable. There are two possibilities: forecast the future jobs, or forecast resource availability by studying historical events. In this paper an attempt has been made to utilize future job forecasting to improve job scheduling on volunteer grid resources. A scheduling approach is proposed that uses container stowage to allocate volunteer grid resources based on the jobs submitted. The proposed scheduling approach optimizes the number of resources actively used, and presents online container-stowage adaptability for scheduling jobs on volunteer grid resources. The performance has been evaluated by comparison with other scheduling algorithms adopted in volunteer grids. The simulation results show that the proposed approach performs better in terms of average turnaround and waiting time than existing scheduling algorithms. The job load forecast also reduced the number of job reassignments.
The proposed system is a real-time system. It continuously takes input images through a web camera until the system is shut down. Each captured image is cropped by the Face Detection module, which saves only the facial information in JPEG format as a 100 x 100 matrix. This is a colour image matrix with three layers, one each for the red, green and blue components. The images are saved in order of their occurrence time: the face detected first is saved first in the database, and the next face is saved in the next position. The file name of each face image is simply its sequence number, generated at capture time, with the extension .jpg. There are two reasons for using numeric file names. First, they clearly indicate the order in which people appeared in front of the camera. Second, at training time the system reads the training dataset of face images sequentially; with numeric names, building the eigenface database is easy because a simple for loop can step through the sequence numbers up to the last file, which would be difficult with arbitrary text names. After creating the database, the system trains itself by computing the face space, using the principal component analysis algorithm followed by the linear discriminant analysis algorithm, both explained above; together they reduce the dimension of the face space. The face space changes after each modification made to the TRAININGDATABASE. Images detected by the web camera are saved in another folder called TESTDATABASE, also in number.jpg format, e.g. 1.jpg, 194.jpg.
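One subtlety of the number.jpg naming scheme is worth making explicit: file listings sort names lexicographically, which puts "10.jpg" before "2.jpg", so the training loop must order files by the numeric part. A small sketch (helper names are our own, not from the system):

```python
def ordered_faces(filenames):
    """Sort number.jpg filenames by capture sequence number.
    A plain lexicographic sort would put '10.jpg' before '2.jpg'."""
    return sorted(filenames, key=lambda name: int(name.split(".")[0]))

def next_sequence_name(filenames):
    """File name for the next captured face, continuing the sequence."""
    last = max((int(n.split(".")[0]) for n in filenames), default=0)
    return f"{last + 1}.jpg"
```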
Due to the complexity of scheduling flexible manufacturing systems, the generation of production schedules requires an intelligent technique. Many artificial intelligence techniques, such as fuzzy logic, genetic algorithms and neural networks, have been successfully applied to the scheduling of advanced manufacturing systems. One such system is the Robotic Flexible Assembly Cell (RFAC). Few studies have addressed the problem of scheduling RFACs, and their major limitation is that they are restricted to the assembly of only one product type. The objective of this study is to propose a new intelligent model for scheduling RFACs in a multi-product assembly environment using fuzzy logic.
Triangular Algorithm: In the triangular algorithm, each face is divided into triangles, forming a triangular mesh (Fig. 1); each interior edge is shared by exactly two triangles. To determine which edges are silhouette edges, one needs to know, for each edge, whether it is shared between a face oriented toward the light source and a face oriented away from it. One way to compute this is CPU-intensive. Figure 1 illustrates how one side of a box is divided into four triangles. To determine the silhouette, we need only the outline of the box.
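The silhouette test described above can be sketched directly: compute each triangle's normal, classify it as facing toward or away from the light, and report the edges whose two adjacent triangles disagree. A minimal CPU-side sketch:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def faces_light(tri, light):
    """True if the triangle's normal points toward the light direction."""
    n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
    return dot(n, light) > 0

def silhouette_edges(triangles, light):
    """An edge is a silhouette edge when its two adjacent triangles face
    opposite ways with respect to the light source."""
    edges = {}
    for tri in triangles:
        lit = faces_light(tri, light)
        for i in range(3):
            edge = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edges.setdefault(edge, []).append(lit)
    return [e for e, flags in edges.items() if len(flags) == 2 and flags[0] != flags[1]]

# Two triangles sharing the edge (0,0,0)-(1,0,0): one lit, one back-facing,
# so only the shared edge is a silhouette edge.
front = ((0, 0, 0), (1, 0, 0), (0, 1, 0))     # normal (0, 0, 1)
back  = ((0, 0, 0), (1, 0, 0), (0.5, -1, 0))  # normal (0, 0, -1)
```

This per-frame pass over every edge is exactly the cost the text alludes to; real-time implementations typically move it to the GPU or cache edge adjacency.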
This paper also combines adaptive and robust approaches in the proposed tracking controller architecture. A fuzzy logic-based function approximator is used to learn the system uncertainties, while the robustifying term of the controller compensates for modeling inaccuracies. We derive a bound on the steady-state tracking error as a function of the controller's gain; specifically, we give an expression for an uncertainty region of the tracking error. The proposed controller guarantees the uniform ultimate boundedness of the tracking error. We use fuzzy logic systems different from those used in earlier work: this form of fuzzy logic system is of the Takagi-Sugeno-Kang (TSK) type, and the consequent part of the fuzzy rules is based on locally linear feedback control theory. In Section 2, the problem formulation is given. In Section 3, the fuzzy logic system used is described. The main result is presented in Section 5. In Section 6, the proposed design steps of the direct adaptive fuzzy controller are used to control an inverted pendulum system. Conclusions are given in the last section.
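In a TSK fuzzy system with locally linear consequents, each rule maps an input region to a linear law, and the output is the firing-strength-weighted average of the rule consequents. The sketch below is a generic first-order TSK inference for one input, not the paper's controller; the membership functions and consequent coefficients are invented for illustration:

```python
def tri(a, b, c):
    """Triangular membership function peaking at b."""
    return lambda x: max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def tsk_output(x, rules):
    """First-order TSK inference: each rule is (membership_fn, (a, b)) with
    linear consequent a*x + b; the crisp output is the weighted average of
    the consequents, weighted by each rule's firing strength."""
    weights = [mf(x) for mf, _ in rules]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * (a * x + b) for w, (_, (a, b)) in zip(weights, rules)) / total

rules = [(tri(-1.0, 0.0, 1.0), (2.0, 0.0)),   # "near zero" -> 2x
         (tri(0.0, 1.0, 2.0), (1.0, 3.0))]    # "positive"  -> x + 3
```

At x = 0.5 both rules fire with strength 0.5, so the output blends the two local linear laws equally; this smooth interpolation between local controllers is what makes the TSK form attractive for feedback design.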
In this paper, a time control scheme for a binary control output, logic 0 or logic 1 held for a specific determined time, is proposed. The output state and its duration are based on the linguistic rules applied in this new system. We propose the membership function for the timing of the output state. The crisp output values determined by the defuzzifier are compared with certain thresholds and converted into binary form, logic 1 or logic 0. These logic levels can be used as control outputs to switch plant components (for example, valves) ON or OFF for a specific time determined by the linguistic values of the inputs and the fuzzy inference system. The new time control fuzzy system is called the fuzzy logic time control system (FLTCS). A time control model is presented for the fuzzy logic system, and implementation techniques are discussed.
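The final stage described above, defuzzify to a crisp value and then threshold it into a logic level, can be sketched as follows. The discrete centroid defuzzifier and the threshold value are generic illustrations, not the FLTCS's actual definitions:

```python
def centroid_defuzzify(universe, memberships):
    """Discrete centroid defuzzifier: membership-weighted average of the
    universe points."""
    total = sum(memberships)
    if total == 0:
        return 0.0
    return sum(u * m for u, m in zip(universe, memberships)) / total

def to_logic_level(crisp, threshold):
    """Compare the crisp value with a threshold and emit logic 1 or logic 0."""
    return 1 if crisp >= threshold else 0

# Example aggregated output fuzzy set over a 5-point universe.
universe = [0, 1, 2, 3, 4]
memberships = [0.0, 0.0, 0.2, 0.8, 1.0]
crisp = centroid_defuzzify(universe, memberships)   # 3.4
```

A second defuzzified channel of the same form would supply the ON/OFF duration for the valve.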
In this paper, we propose a novel spatiotemporal fuzzy-based algorithm for noise filtering of image sequences. Our proposed algorithm uses adaptive weights based on a triangular membership function, which is symmetrical and continuous. In this algorithm, a median filter is used to suppress noise. Experimental results show that when images are corrupted by high-density salt-and-pepper noise, our fuzzy-based algorithm for noise filtering of image sequences is much more effective in suppressing noise and preserving edges than previously reported algorithms such as [1-13]. Indeed, the weights assigned to noisy pixels are highly adaptive, so the algorithm makes good use of the correlation between pixels. Moreover, motion estimation methods are error-prone and, under high-density noise, may degrade filter performance; our proposed fuzzy algorithm therefore does not need any estimation of the motion trajectory. The proposed algorithm removes noise satisfactorily without any knowledge of the salt-and-pepper noise density.
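The median-filter component that suppresses salt-and-pepper impulses can be sketched on its own; this shows only the plain spatial median filter, without the paper's fuzzy weighting or temporal dimension, and the border handling (clamping) is our choice:

```python
from statistics import median

def median_filter(img, radius=1):
    """3x3 (radius=1) median filter with clamped borders. Impulse values
    ("salt" or "pepper") are replaced by the local median, which largely
    preserves edges compared with mean filtering."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-radius, radius + 1)
                      for dx in range(-radius, radius + 1)]
            out[y][x] = median(window)
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # a "salt" impulse
         [10, 10, 10]]
```

The proposed algorithm replaces this uniform window with fuzzy, triangular-membership-based weights so that pixels judged noisy contribute less.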
Advancements in design methodologies and semiconductor process technologies have led to the development of systems with extensive functionality implemented on a single die, called systems-on-chip (SoCs). A set of pre-designed and pre-verified design modules, in the form of hard, soft or firm cores, is integrated into a system using user-defined logic (UDL) and interconnects. Complex systems with digital, analog and mixed-signal components can thus be implemented. The pressing time-to-market requirement poses many challenges for design and test engineers, and the associated test cost has become a major bottleneck in reducing the overall cost of the system. The ITRS semiconductor roadmap indicates that future generations of SoC designs will need hundreds of processors, which will further increase test cost. Testing an SoC is costly due to the large test data volume introduced by increasing integration and interconnection intricacies, large power dissipation during test, expensive test generation procedures, the heterogeneous mix of cores, and long test application times. Many techniques have been proposed to reduce this cost through test scheduling, test data volume reduction and test design optimization. Test generation can be done either off-chip, by running ATPG (automatic test pattern generation) algorithms on expensive automatic test equipment, or on-chip, using built-in hardware called BIST (Built-In Self-Test). BIST is beneficial when on-chip TAM availability is limited. However, BIST-ready cores are not always available; also, the multi
the subsequent one. Many parallel programming languages thus support run-time primitives for rearranging the array distribution of a program. The data redistribution problem has been widely studied in the literature. In general, data redistribution can be classified into two categories: regular data redistribution [1,5,6,7,9,11,13,15,18] and irregular data redistribution [4,8,22-24]. Regular data redistribution distributes equal-sized pieces of data across processors; it comes in three types, called BLOCK, CYCLIC, and BLOCK-CYCLIC(n). Irregular data redistribution employs user-defined functions to specify an uneven data distribution. High Performance Fortran 2 (HPF-2) provides GEN_BLOCK functionality, making it possible for different processors to handle data sizes appropriate to their computation capability. Previous works emphasized minimizing the number of redistribution steps and scheduled the ordering of messages to minimize the total transmission size. For regular array redistribution, an Optimal Processor Mapping (OPM) scheme was proposed to minimize the data transmission cost of general BLOCK-CYCLIC regular data realignment; OPM uses a maximum matching of the realignment logical processors to maximize data hits and thereby reduce the amount of data exchanged. For the irregular array redistribution problem, [22, 23] proposed a greedy algorithm using the divide-and-conquer technique to obtain near-optimal scheduling while attempting to minimize both the total size of communication messages and the number of steps.
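The three regular distribution types are all instances of one index-to-processor mapping: under BLOCK-CYCLIC(n), global index i belongs to processor (i // n) mod P, with CYCLIC being the n = 1 case and BLOCK the n = ceil(N / P) case. A minimal sketch:

```python
import math

def block_cyclic_owner(i, block, procs):
    """Owning processor of global index i under BLOCK-CYCLIC(block)
    over `procs` processors."""
    return (i // block) % procs

def distribute(n, block, procs):
    """Owner of every index 0..n-1 (useful for visualizing a layout)."""
    return [block_cyclic_owner(i, block, procs) for i in range(n)]

def block_owner(i, n, procs):
    """BLOCK is the special case BLOCK-CYCLIC(ceil(n / procs))."""
    return block_cyclic_owner(i, math.ceil(n / procs), procs)
```

Redistributing an array is then a matter of comparing the source and target mappings element-wise; indices whose owner is unchanged are the "data hits" that schemes like OPM try to maximize.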
These planners are of interest only for insignificant disturbances, which is not the case with real problems, which require complex and detailed models. Since the outputs of the system are not fed back to the inputs, if the problem field is input-output or internally unstable, it can never be stabilized by open-loop planning. Open-loop planning absolutely requires exact knowledge of the problem field; since this cannot be obtained, any disturbance may be catastrophic. Open-loop planners do offer advantages due to their simplicity: if the problem field is stable and the disturbances are insignificant, these planners may be useful, and they have the additional advantage of reduced cost, since measurements of the states and outputs of the problem field are not necessary. Closed-loop planning systems are analogous to closed-loop conventional control systems and, most of the time, do not use situation evaluation. Monitoring of the execution as well as re-planning are thus permitted. The planning system examines the difference between the current output situation and the desired goal with a view to executing certain actions. The error is not as easily obtained as in classic control systems, since the distance between fuzzy states is much harder to quantify. Even more difficult is the evaluation of similarities between states, based on given criteria, in the case of our fuzzy expert system. These aspects clearly enter into the design of the IKMS. Designing an IKMS as a planning system is a difficult problem, owing to the variety of models used, which must be satisfactory from a computational perspective while reflecting the complexity of the environment in which they operate (Berglund and Karlun; Chen).