When developing traditional business applications and systems, “devices” are standardized, readily available off-the-shelf computers, making a significant part of their value chain largely irrelevant to the development process. In embedded-systems design and development, however, a comparable level of maturity, or commoditization, of “devices” has not yet been achieved in the industry. As a result, development tool chains, processes, and methodologies are often proprietary and established for a specific goal. The vast majority of the embedded-development industry uses open-source software and custom development tools provided by the hardware vendors (SV, IHV) and open-source communities. Time scales for the development of hardware and software at all levels of the device value chain can therefore be long, as putting solutions together requires proprietary or scarce skills and knowledge. In some cases involving DES and real-time scenarios, especially safety-critical ones, these time scales can easily span a decade. There are interesting offerings from the large operating-system and software-tools vendors such as Microsoft that hold the promise of a more productive and “traditional” development experience for embedded-systems professionals (including academics and hobbyists) in many key solution scenarios. The hope is that tool chains along the entire device value chain will eventually become largely standardized and interoperable, enabling the embedded-systems industry to scale development as seen in traditional software development. Figure 3 shows some aspects of the embedded-systems development life cycle—which is strongly tied to the device value chain—and the Microsoft platform technologies that may be applied throughout the life cycle.
FCs are devices that use electrodes and electrolytic materials to produce electricity electrochemically. They do not store chemical energy but rather convert the chemical energy of a fuel into electricity. Unlike batteries, an FC does not need to be recharged to replace the materials consumed in the electrochemical process, since these materials are supplied continuously. Figure 2-6 shows the basic components of an FC: an anode, a cathode, and an electrolyte. Fuel is supplied to the anode, where it is electrochemically oxidized, while the oxidant is electrochemically reduced at the cathode. Hydrogen, as the fuel, passes through the anode, whereas oxygen passes through the cathode. FC technology is based on an electrochemical process in which hydrogen and oxygen are combined to produce electricity without combustion. The catalyst splits the hydrogen atom into a proton and an electron. The proton passes through the electrolyte, while the electrons form a separate current that can be utilized before they return to the cathode, where they recombine with the protons and oxygen to form water.
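The electrochemistry just described corresponds to the standard hydrogen fuel-cell half-reactions (a well-known result, stated here for reference rather than taken from the figure):

```latex
\text{Anode:}\quad \mathrm{H_2 \;\rightarrow\; 2H^+ + 2e^-}
\qquad
\text{Cathode:}\quad \mathrm{\tfrac{1}{2}O_2 + 2H^+ + 2e^- \;\rightarrow\; H_2O}
\qquad
\text{Overall:}\quad \mathrm{H_2 + \tfrac{1}{2}O_2 \;\rightarrow\; H_2O}
```

The two electrons released at the anode per hydrogen molecule are what constitute the usable external current.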
2.12 In , J. L. Kim and T. Park presented a new, efficient synchronized checkpointing protocol that exploits the dependency relation between processes in distributed systems. In their protocol, a process takes a checkpoint when it knows that all processes on which it computationally depends have taken their checkpoints; hence, the process need not always wait for the decision made by the checkpointing coordinator, as in conventional synchronized protocols. As a result, the checkpointing coordination time is substantially reduced, as is the possibility of a total abort of the checkpointing coordination, and the second phase of the checkpointing coordination may be removed. When multiple checkpointing coordinations overlap, time can also be saved under their protocol if the decision of one checkpointing coordination can be used for another. The checkpointing commitment decision can be made locally, so a total abort of checkpointing is avoided; that is, when a process involved in a checkpointing coordination fails, the processes not affected by the failed one can still make their decisions, whereas protocols following the straightforward two-phase mechanism abort the whole checkpointing activity. Even if checkpointing and rollback coordination overlap, the processes involved in the checkpointing coordination but not in the rollback coordination can successfully make their decisions.
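The local-decision idea above can be sketched as follows. This is a minimal illustration, not Kim and Park's actual protocol: class and function names are invented, and messaging is abstracted into a shared `done` set of committed process ids.

```python
# Sketch (hypothetical names): each process commits its checkpoint as soon
# as every process it computationally depends on has checkpointed, instead
# of waiting for a coordinator's global second-phase decision.

class Process:
    def __init__(self, pid, depends_on):
        self.pid = pid
        self.depends_on = set(depends_on)   # pids this process depends on
        self.checkpointed = False

    def try_checkpoint(self, done):
        """Checkpoint locally once all dependencies appear in `done`."""
        if not self.checkpointed and self.depends_on <= done:
            self.checkpointed = True
            done.add(self.pid)
            return True
        return False

def coordinate(processes):
    """Let processes checkpoint until no progress; returns the commit order."""
    done, order = set(), []
    progress = True
    while progress:
        progress = False
        for p in processes:
            if p.try_checkpoint(done):
                order.append(p.pid)
                progress = True
    return order
```

A process whose dependencies have all committed never waits on the coordinator, which is the source of the reduced coordination time claimed above.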
The leader election algorithm, not only in distributed systems but in any communication network, is an essential matter for discussion. A tremendous amount of work is being done in the research community on this election problem, because many network protocols need a coordinator process for the smooth running of the system. These so-called leader, or coordinator, processes are responsible for the synchronization of the system. Without synchronization, the entire system would become inconsistent, which in turn causes the system to lose its reliability. Since all processes need to interact with the leader process, they all must agree on who the present leader is. Furthermore, if the leader process crashes, a new leader process should take charge as early as possible. The new leader is the currently running process with the highest process id. In this paper we present a modified version of the ring algorithm. Our work involves substantial modifications of the existing ring election algorithm and a comparison of its message complexity with the original algorithm. Simulation results show that our algorithm minimizes the number of messages exchanged in electing the coordinator.
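For context, the classical ring election that the paper modifies can be sketched as below. This is the textbook variant, not the authors' modified algorithm; the function name and the message-counting convention (one count per hop plus one announcement round) are illustrative assumptions.

```python
# Classical ring election sketch: an ELECTION message circulates once
# around the ring collecting live process ids; the highest id becomes
# coordinator, announced in a second pass around the ring.

def ring_election(ring, initiator, crashed=frozenset()):
    """ring: list of pids in ring order; returns (leader, messages_sent)."""
    n = len(ring)
    start = ring.index(initiator)
    ids, messages = [], 0
    i = (start + 1) % n
    while True:
        messages += 1                       # one ELECTION message per hop
        pid = ring[i]
        if pid == initiator:
            break                           # message has gone full circle
        if pid not in crashed:
            ids.append(pid)                 # live process appends its id
        i = (i + 1) % n
    leader = max(ids + [initiator])
    messages += n                           # COORDINATOR announcement round
    return leader, messages
```

With `n` processes, this baseline costs on the order of `2n` messages per election; the modified algorithm described above aims to reduce that count.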
A strategy is a macro program for reaching a single aim; the term originates in war planning. A CRM strategy is defined as a major program for attaining a goal in an organization and then protecting and improving it. Every organization in the business world should have a CRM strategy. The most important factor in the success of different organizations is customer satisfaction. The balanced scorecard (BSC) is one of the tools for analysis according to financial norms that organizations use to study and estimate customer satisfaction. If the necessary information about customers does not exist or is scarce, customers can be divided into two concise groups before executing the CRM strategy. If such a condition exists in an organization, a customer-satisfaction study phase should precede the execution of the CRM strategy.
The implementation of the tasks involved in embedded-system development can be undertaken based on the state space either directly or indirectly. While direct implementation is based on the state-chart components, indirect implementation is based on a prior translation of the state chart into the state space. Two of the tools recommended by David Harel et al. , “Statemate” and “Rhapsody”, can be used to directly implement state charts based on the translation of state-chart components. The direct implementation of state-chart components can be carried out using switch statements, classes (state chart), class hierarchies (state-chart hierarchy), and tables (runtime object structure), as defined in the UML.
Software metrics are increasingly playing a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. The existing literature on software metrics mainly focuses on centralized systems, while work in the area of distributed systems, particularly service-oriented systems, is scarce. Distributed systems with service-oriented components run in even more heterogeneous networking and execution environments. Traditional coupling measures take into account only “static” couplings. They do not account for “dynamic” couplings due to polymorphism, and may significantly underestimate the complexity of the software and misjudge the need for code inspection, testing, and debugging. This is expected to result in poor predictive accuracy of quality models for distributed object-oriented systems that rely on static coupling measurements. To overcome these issues, we propose a hybrid model for measuring coupling dynamically in distributed object-oriented software. The proposed method has three steps: instrumentation, post-processing, and coupling measurement. First, the instrumentation step is performed using an instrumented JVM that has been modified to trace method calls. During this step, three trace files are created, namely .prf, .clp, and .svp. In the second step, the information in these files is merged. At the end of this step, the merged detailed trace of each JVM contains pointers to the merged trace files of the other JVMs, such that the path of every remote call from the client to the server can be uniquely identified. Finally, the coupling metrics are measured dynamically. The implementation results show that the proposed system effectively measures the coupling metrics dynamically.
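The final measurement step can be illustrated with a simplified sketch. The record format below (caller/callee tuples) and the particular metric (number of distinct server classes a class actually calls at run time) are assumptions for illustration; the paper's .prf/.clp/.svp formats are not specified here.

```python
# Hypothetical post-processing sketch: merged trace records are reduced to
# a dynamic import-coupling count per class, i.e. the number of distinct
# classes each class actually invoked during execution.

from collections import defaultdict

def dynamic_coupling(trace):
    """trace: iterable of (caller_class, callee_class) method-call events."""
    coupled = defaultdict(set)
    for caller, callee in trace:
        if caller != callee:                 # self-calls are not coupling
            coupled[caller].add(callee)
    return {cls: len(callees) for cls, callees in coupled.items()}
```

Because the count comes from observed calls rather than declared types, polymorphic targets are attributed to the classes actually exercised, which is precisely what static measures miss.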
Abstract: The main objective of this paper is to develop a distributed model using a grid environment through which on-line monitoring of multi-area power systems can be carried out continuously. Grid computing is a viable solution for exploiting the enormous amount of computing power available across the Internet to solve large interconnected power-system problems. A grid service model is proposed for on-line monitoring of multi-area power systems, which provides solutions at specific intervals of time. The proposed model is designed in such a way that any node in the grid can provide the service, obtaining the power-system data from other client grid nodes and responding with a solution. Hence the proposed model is highly distributed and implicitly possesses the features of scalability and reliability.
Figure 2 – New Control Approach Architecture
Multi-agent systems are suitable for the distributed manufacturing environment, since manufacturing applications present characteristics such as modularity, decentralisation, changeability, ill-structuredness, and complexity, which agents are well suited to handle . Analysing the benefits of multi-agent technology, it is possible to conclude that it fulfils some of the requirements defined in section 2: autonomy (an agent can operate without the direct intervention of external entities and has some kind of control over its behaviour), cooperation (agents interact with other agents in order to achieve a common goal), reactivity and proactivity (agents perceive their environment and respond quickly to changes that occur in it; on the other hand, agents do not simply act in response to their environment, but are able to take the initiative, controlling their behaviour), and adaptation and decentralisation (agents can be organised in a decentralised structure, and can easily be reorganised into different organisational structures).
Middleware
Distributed systems often use middleware platforms to hide the complexity of the interactions among their parts. Some of the most widely used middleware platforms are Web services, the Common Object Request Broker Architecture (CORBA), and Remote Procedure Calls (RPC). These platforms are based on standardized specifications that guarantee interoperability across implementations from different vendors. As middleware mediates all communication among the parts of a system, vulnerabilities in this component effectively compromise the integrity of applications. Therefore, using diverse middleware implementations in the components of a system allows the system to tolerate faults in some of them. A representative example of how this axis of diversity can be easily obtained is the CORBA platform. The Object Management Group (OMG) defines a series of CORBA specifications which are independently implemented by several organizations. Considering only the Java and C++ programming languages, there are at least four high-quality, free implementations of this standard (JacORB, OpenORB, TAO, and MICO).
Nowadays, motor control has become a vast market, and the motor-control industry a strongly competitive sector. To remain competitive, the industry has to develop sophisticated control systems, which are often composed of standard processors (µP, µC, DSP) [1,2] and specific hardware components (ASSP, FPGA, ASIC) [3-6]. The design of these systems is a difficult and laborious task. Traditionally, engineers work on the implementation of new control algorithms directly on an existing control device. Such projects typically require 6 to 12 man-months and consist mainly of: (i) implementation of the developed algorithm on the processor (after coding); (ii) configuration and programming of the processor peripherals for I/O operations; (iii) development, if needed, of other interface circuits; (iv) testing and debugging of the resulting control system (usually using emulators); and finally, (v) validation of the system by experimentation. All of these tasks, needed to adapt the new algorithm to a given control board, are usually done manually. As a result, neither the system performance nor the design time is significantly optimized.
The alternative algorithm is based on the notion of the lowest common ancestor. The lowest common ancestor of two leaves A and B in the hierarchy tree is an internal node C such that the subtree rooted at C contains both A and B and no smaller subtree has this property. The algorithm (LCA in the following) is based on the following idea: given a system of CI automata, we need to decide whether a transition is enabled in the current configuration. In the case of an input, output, or internal transition inherited from a primitive automaton, we simply follow the path between the primitive automaton and the root of the hierarchy tree and check whether the label is allowed in all the composite automata on the path.
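The path check for inherited transitions can be sketched directly. The tree encoding (parent pointers) and the `allowed` map are illustrative assumptions; node names are invented.

```python
# Sketch of the path check described above: for a transition inherited
# from a primitive automaton (a leaf), walk from the leaf up to the root
# and verify that every composite automaton on the path allows the label.

def enabled(leaf, label, parent, allowed):
    """parent: child -> parent node (root maps to None);
    allowed: composite node -> set of labels it permits."""
    node = parent[leaf]                    # first composite above the leaf
    while node is not None:
        if label not in allowed[node]:
            return False                   # some composite forbids the label
        node = parent[node]
    return True
```

The cost is proportional to the depth of the primitive automaton in the hierarchy tree, which is what makes this check cheap compared to inspecting the whole system.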
Abstract—In this paper a novel antenna subset selection algorithm is proposed using a distributed approach. It is assumed that each base station in a group of base stations is linked to an associated terminal as a receiver-transmitter pair. These receiver-transmitter pairs reuse channel resources, so that each mobile terminal represents a source of other-cell interference (also referred to as multi-user interference, or MUI) for mobile terminals in neighboring cells that reuse all or some of the same channel resources. Accordingly, the base stations implement a game-based algorithm to mitigate MUI for the multiple-input multiple-output (MIMO) uplink signals received from their associated mobile terminals. Simulation results show that the proposed algorithm, whose solution concept is based on Nash equilibrium (NE) points, performs well in terms of average error probability.
It has been shown in  that Atomic Broadcast and Consensus are equivalent problems in asynchronous systems prone to process crash (no-recovery) failures. The Consensus problem is defined as follows: each process proposes an initial value to the others and, despite failures, all correct processes have to agree on a common value (called the decision value), which has to be one of the proposed values. Unfortunately, this apparently simple problem has no deterministic solution in asynchronous distributed systems that are subject to even a single process crash failure: this is the so-called Fischer-Lynch-Paterson (FLP) impossibility result . The FLP impossibility result has motivated researchers to find a set of minimal assumptions that, when satisfied by a distributed system, makes Consensus solvable in that system. The concept of the unreliable failure detector introduced by Chandra and Toueg constitutes an answer to this challenge . From a practical point of view, an unreliable failure detector can be seen as a set of oracles: each oracle is attached to a process and provides it with information regarding the status of other processes. An oracle can make mistakes, for instance by not suspecting a failed process or by suspecting a process that has not failed. Although failure detectors were originally defined for asynchronous systems where processes can crash but never recover, the concept has been extended to the crash-recovery model [1, 4, 11, 14]. The reader should be aware that the existing definitions of failure detectors in the latter model have quite significant differences.
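The oracle abstraction can be made concrete with a common timeout-based realization. This sketch is illustrative and not from the cited works; the heartbeat encoding and threshold are assumptions.

```python
# Illustrative sketch of an unreliable failure detector oracle: a process
# suspects any peer whose last heartbeat is older than a timeout. The
# detector can be wrong in both directions: a slow-but-alive peer may be
# suspected, and a freshly crashed peer may not yet be suspected.

def suspects(last_heartbeat, now, timeout):
    """last_heartbeat: pid -> time of last message; returns suspected pids."""
    return {pid for pid, t in last_heartbeat.items() if now - t > timeout}
```

Such mistakes are exactly what the "unreliable" qualifier captures: the detector's output is a hint, and Consensus algorithms built on it must remain safe despite wrong suspicions.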
Coordination between base stations (BSs) is a promising way for cellular wireless systems to mitigate intercell interference, improve system fairness, and increase capacity in the years to come. The aim of this manuscript is to propose a new distributed power allocation scheme for the downlink of distributed precoded multicell MISO-OFDM systems. By treating the multicell system as a superposition of single-cell systems, we define the average virtual bit error rate (BER) of one single-cell system, allowing us to compute the power allocation in a distributed manner at each BS. The precoders are designed in two phases: first, the precoder vectors are computed in a distributed manner at each BS considering two criteria, distributed zero-forcing and virtual signal-to-interference-plus-noise ratio; then the system is optimized through distributed power allocation with a per-BS power constraint. The proposed power allocation scheme minimizes the average virtual BER over all user terminals and the available subcarriers. Both the precoder vectors and the power allocation are computed by assuming that the BSs have knowledge of local channel state information only. The performance of the proposed scheme is compared against other power allocation schemes recently proposed for precoded multicell systems based on LTE.
For actions we consider the output ā, the input a, and the internal computation τ. For processes, we consider the smallest fragment of CCS featuring some form of concurrency; thus we have inaction nil, parallel composition P | Q, and action prefixing α.P. On top of this, we introduce a notion of distribution by locating processes P inside sites of the form [P], anonymous for simplicity, and by adding the migration capability go.P to processes, which, since sites are not natively named, allows processes to non-deterministically migrate to other sites. A distributed system is thus represented by a network consisting of a collection of sites spread in space, built by means of spatial composition N | M, which we abbreviate as ∏_{j∈J} [P_j] for a J-fold collection of sites. 0 stands for the empty network. We use fn(N) to denote the set of free names of a network N, defined as usual. The operational semantics of our calculus follows, captured by the relations of structural congruence and reduction.
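The syntax described in the prose can be summarised by the following grammar (reconstructed from the description above; the notation follows the text):

```latex
\alpha ::= a \;\mid\; \bar{a} \;\mid\; \tau
\qquad
P ::= \mathit{nil} \;\mid\; \alpha.P \;\mid\; P \mid Q \;\mid\; \mathsf{go}.P
\qquad
N ::= \mathbf{0} \;\mid\; [P] \;\mid\; N \mid M
```

Here $\alpha$ ranges over actions, $P, Q$ over processes, and $N, M$ over networks of anonymous sites.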
During project execution, the DM can be adjusted to include new constraints. Of course, a suitable DM helps decrease the effort placed on critical project phases [MARKO 2001]. The DM recursively uses other steps from the proposed method to provide the following information: whether the target OS is suitable for the target hardware platform; all hardware elements lacking OS support; and all software elements needing modification for the new hardware platform.
Adam Dunkels presented two proposals based on the first approach to this problem, called uIP  and lwIP . His proposals are based on redesigning TCP/IP as lightweight, separate software targeting tiny 8-bit microcontrollers. There are three weak points in Dunkels's proposals. First, there is no software model. Since communication protocols are complex software, modeling is a necessary process; software modeling and analysis can improve system maintenance, flexibility, extensibility, and ease of system understanding. Second, both uIP and lwIP target tiny embedded systems, which makes it difficult to use the solution for another microcontroller architecture. Lastly, there is no modularity, which makes it hard to customize the functionality of the system. It would be clearer if the authors of the system had presented the architecture.