Paravirtualization: Paravirtualization is based on a hypervisor layer that completely manages the interface with the hardware and on which different operating systems can be installed. Paravirtualization offers the guest operating system a special generic interface, which therefore requires special drivers to be integrated into the guest systems. Paravirtualization is a lower-level virtualization technique than isolation, and shares with it the need for a modified OS. More specifically, with paravirtualization it is no longer only the host OS that must be modified, but also every OS intended to run in the virtual environments. The heart of paravirtualization is the hypervisor, which runs close to the hardware and provides an interface that allows multiple guests to access resources concurrently. Each virtual machine must be modified to use this interface to access the hardware. Unlike isolation, several different OS families can run on a single physical server: it is possible to run Linux, NetWare, Solaris (and others) simultaneously on the same machine. Each OS has access to its own storage devices, its own memory, its own network interfaces and its own processors; each virtualized hardware resource is shared among the environments. Because small changes to the guest operating system are needed, closed systems, notably Microsoft Windows, are not supported.
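The hypercall-style interface described above can be sketched as a toy model (pure illustration; the class and method names are invented and correspond to no real hypervisor API): guests never touch hardware directly, but request resources through an explicit interface that the hypervisor arbitrates.

```python
# Toy model of a paravirtualized hypercall interface (illustrative only;
# Hypervisor, ParavirtGuest and hypercall_alloc_pages are hypothetical names).

class Hypervisor:
    """Owns the physical resources and exposes an explicit interface."""
    def __init__(self, total_pages):
        self.free_pages = list(range(total_pages))
        self.allocations = {}          # guest_id -> pages granted

    def hypercall_alloc_pages(self, guest_id, n):
        """Guests request memory through a hypercall, never directly."""
        if len(self.free_pages) < n:
            raise MemoryError("insufficient physical pages")
        pages = [self.free_pages.pop() for _ in range(n)]
        self.allocations.setdefault(guest_id, []).extend(pages)
        return pages

class ParavirtGuest:
    """A guest OS modified to use the hypercall interface."""
    def __init__(self, guest_id, hypervisor):
        self.guest_id = guest_id
        self.hv = hypervisor

    def map_memory(self, n):
        # Instead of manipulating page tables itself, the modified kernel
        # asks the hypervisor for pages explicitly.
        return self.hv.hypercall_alloc_pages(self.guest_id, n)

# Two different guest "families" share one physical machine:
hv = Hypervisor(total_pages=8)
linux = ParavirtGuest("linux", hv)
solaris = ParavirtGuest("solaris", hv)
a = linux.map_memory(3)
b = solaris.map_memory(3)
```

The point of the sketch is that each guest receives its own slice of the resource pool through the hypervisor-mediated interface, so no two guests ever hold the same physical page.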
Even after the incorporation of LL, integrating models delivered in forms other than .exe files, PowerSim files, or MapBasic files, or models that exchange data by means other than text files, will require either changes to the LIANA source code or a wrapper (.exe) application written around the corresponding model. This problem can be solved by adding to LIANA interfaces to the most common integration standards that are emerging, or expected to emerge, in modern EDSS (Argent, 2004; Blind and Gregersen, 2004). These interfaces can be developed either by extending the C++ framework with new classes (similar to the integration of MapBasic® models in MOIRASF) or as "proxy" .exe applications integrated with the system (similar to the integration of Powersim® models in MOIRASF).
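Such a proxy wrapper can be sketched as follows (a hypothetical illustration, not LIANA code: the executable, the file names, and the key=value text format are all assumptions):

```python
# Sketch of a "proxy" wrapper around an external model executable that
# exchanges data through text files. model_input.txt, model_output.txt
# and the key=value format are invented for illustration.
import pathlib
import subprocess

def format_input(params):
    """Serialize model parameters as key=value lines."""
    return "\n".join(f"{k}={v}" for k, v in params.items())

def parse_output(text):
    """Read key=value lines produced by the model back into a dict."""
    result = {}
    for line in text.splitlines():
        key, _, value = line.partition("=")
        result[key] = float(value)
    return result

def run_external_model(exe_path, params, workdir="."):
    """Write the input file, invoke the wrapped model, collect its output."""
    wd = pathlib.Path(workdir)
    (wd / "model_input.txt").write_text(format_input(params))
    subprocess.run([exe_path], cwd=wd, check=True)   # the wrapped .exe model
    return parse_output((wd / "model_output.txt").read_text())
```

The wrapper isolates the file-format conventions in two small functions, so supporting a new exchange format would mean swapping `format_input`/`parse_output` rather than touching the framework itself.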
The first step in building any complex system is to formalize the reasons for which the system must be built. However, specifying goals for power distribution system operation can be a slippery task. Besides achieving acceptable states of affairs, the goals should agree with the mission of the power distribution utility as an enterprise, respect standards and regulations, follow internal policies, foster sustainability, and protect the interests of customers and stakeholders. All these features vary with the business administration and life cycle of the power distribution utility. Therefore, it is neither reasonable nor useful for our purposes to stipulate as an objective the coverage of all possible goals of a power distribution utility. Even assuming this task were possible, most certainly some of the stipulated goals would not fit the internal processes and infrastructure of certain power distribution utilities, and the research would decay into producing unsuitable generalizations, feeding the architecture design with poor directives. On the other hand, choosing a particular power distribution utility for a case study would lead to excessive particularization, where some of the concepts we intended to approach about the smart/modern grid paradigm would be distorted by the particular views or current intentions of a business administration. Since one of the main research purposes behind building the system lies in showing how the trends quoted at the beginning of this chapter can be approached using the notion of intelligence provided by a block-oriented agent-based architecture, goals were obtained, featured and refined by focusing on these trends and their resulting implications. Following this reasoning, we present how a goal mapping can ultimately give rise to agent capabilities and plans to be deployed in support of power distribution system operation.
Hence, interested designers can customize the application to their goals and plans of interest, or even develop their own goal mappings for their particular interests by following an analogous procedure.
III. CONTROL PATH AND COPROCESSOR ARCHITECTURE
Any formal representation of the functionality of the commands shown in Table II assumes a given hardware architecture, bringing into evidence the corresponding control and data flow operations. The latter determine the blocks required in the coprocessor data path, which will be implemented with regular sequential circuits. There is a higher degree of freedom regarding the implementation of the control path, which can be hardwired or microprogrammed, and the formal representation of each test command will be influenced by the decision to implement the control path as either a Moore or a Mealy state machine. The former only updates its outputs (the control signals to the data path) on the rising edge of the system clock, while the latter can update its outputs at any moment.
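The Moore/Mealy timing difference can be made concrete with a small behavioural sketch (the two-state machine and its input sequence are hypothetical, not the coprocessor's actual control path): driven by the same inputs, the Moore machine's output follows the state registered at the previous clock edge, while the Mealy machine's output can react within the same cycle.

```python
# Behavioural sketch of a toy two-state control path, first as a Moore
# machine, then as a Mealy machine. States and inputs are invented.

def moore_step(state, inp):
    """Moore: output is a function of the current state only, so it
    changes one clock edge after the input that caused the transition."""
    next_state = {"IDLE": "RUN" if inp else "IDLE", "RUN": "IDLE"}[state]
    output = 1 if state == "RUN" else 0          # state only
    return next_state, output

def mealy_step(state, inp):
    """Mealy: output is a function of state AND input, so it can assert
    in the same cycle the input arrives."""
    next_state = {"IDLE": "RUN" if inp else "IDLE", "RUN": "IDLE"}[state]
    output = 1 if (state == "IDLE" and inp) else 0   # state and input
    return next_state, output

# Drive both machines with the same input sequence.
inputs = [0, 1, 0, 0]
s_mo = s_me = "IDLE"
moore_out, mealy_out = [], []
for i in inputs:
    s_mo, o = moore_step(s_mo, i); moore_out.append(o)
    s_me, o = mealy_step(s_me, i); mealy_out.append(o)
# The Mealy output pulse appears one clock cycle before the Moore one.
```

In hardware terms, the Moore variant yields glitch-free control signals synchronized to the clock, at the cost of one cycle of latency; the Mealy variant is faster but its outputs can change asynchronously with the inputs.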
Abstract. This study concerns the development of an embedded system with low computational resources and low power consumption. It uses the NXP LPC2106, with an ARM7 processor architecture, for acquiring, processing and classifying images. This embedded system is designed to detect and recognize traffic signs. Taking into account the processor's capabilities and the desired features of the embedded system, a set of algorithms was developed that requires low computational resources and little memory. These features were accomplished using a modified Freeman method in conjunction with a new "ear pull" algorithm proposed in this work. Each of these algorithms was tested with static images, using code developed for MATLAB and for the CMUcam3. The road environment was simulated, and experimental tests were performed to measure the traffic-sign recognition rate in a real environment. The technical limitations imposed by the embedded system increased the complexity of the project; nevertheless, the final results provide a recognition rate of 77% in road tests. Thus, the embedded system's features exceeded the initial expectations and highlight the potential of both algorithms that were developed.
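For orientation, a minimal sketch of the classic 8-direction Freeman chain code that the modified method builds on (the paper's "ear pull" refinement is not reproduced; the direction convention and example contour below are the textbook version, not necessarily the authors'):

```python
# Classic 8-direction Freeman chain code: each step between consecutive
# boundary pixels is encoded as a direction index 0..7.
FREEMAN = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
           (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Encode a pixel boundary (list of (x, y) points, 8-connected,
    closed) as a list of Freeman direction indices."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        codes.append(FREEMAN[(x1 - x0, y1 - y0)])
    return codes

# A unit square traversed point by point:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
```

Because the code depends only on relative steps, it is translation-invariant and compact (3 bits per boundary pixel), which is what makes chain-code methods attractive on memory-constrained microcontrollers like the LPC2106.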
There are several ways to validate the new architecture. We can look at the validation of the work from two different perspectives: the Information Systems perspective and the contribution perspective. Validating the contribution perspective of the research means showing that the work being performed brings a new, valuable contribution to the field of investigation; that it is not just a hack, in the sense of doing something and stumbling upon something new without understanding whether it is useful or not, but that proper research was conducted towards a predicted result and that results were achieved during the work. This point is assured by the work-justification step, where we justify why this work is useful for modern society and why the architecture proposed here will solve a set of known problems in the modern Information Systems architecture field. Design science allows us to build and follow the plan, according to the well-defined steps from the paper of Hevner et al., and to guarantee that the research was performed correctly and that the solution proposed at the end of the research is actually a valid solution for the problems presented here.
In the most basic cloud-service model, and according to the IETF, providers of IaaS offer computers (physical or virtual machines) and other resources. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles. IaaS-cloud providers supply these resources on demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds. To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility-computing basis: cost reflects the amount of resources allocated and consumed.
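The utility-computing billing principle can be sketched in a few lines (the resource names and unit rates below are invented for illustration; they are not any provider's real prices):

```python
# Back-of-the-envelope utility-computing bill: cost is the sum of each
# metered resource times its unit rate. Rates are illustrative only.
RATES = {"vcpu_hours": 0.05, "gb_ram_hours": 0.01, "gb_storage_month": 0.02}

def iaas_bill(usage):
    """Sum metered resource consumption times the corresponding rate."""
    return sum(RATES[res] * qty for res, qty in usage.items())

# e.g. 2 vCPUs and 4 GB RAM running for 100 hours, plus 50 GB of storage:
bill = iaas_bill({"vcpu_hours": 200, "gb_ram_hours": 400,
                  "gb_storage_month": 50})
```

The key contrast with flat-rate hosting is that the bill tracks allocation and consumption rather than a fixed capacity reservation.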
Separation kernels are also often used in high-assurance systems. Rockwell Collins and the U.S. Department of Defense presented a formal security policy for a separation kernel aimed at a Multiple Independent Levels of Security (MILS) architecture. A security policy is an important requirement of the Separation Kernel Protection Profile (SKPP) and is basically a formal specification of what is allowed in the system. The properties proved were based on previously defined theorems about exfiltration, mediation and infiltration [64, Sect. 3]. The authors use the Common Lisp programming language to describe the security policy and ACL2 to prove the stated theorems and the aforementioned security properties. The work also shows the example of a firewall that relies on the separation-kernel security policy. The Rockwell Collins AAMP7 microprocessor also implements the properties of a separation kernel in hardware, in order to achieve Common Criteria EAL7 certification. The proof is achieved through a high-level model that implements partitioning and a low-level design of the microprocessor implementation code. The latter is translated manually and then executed in the ACL2 theorem prover.
OO languages, like C++, are useful for OS development, but the OO paradigm does not provide specific support for porting an OS [FROHICH, 2001] [POLPETA 2005]. Family-Oriented Model. In this case, a set of programs is considered to be part of a family if they share sufficiently common features, to the point that it is more useful to study their common features than their differences [PARNAS 1976] [POLPETA 2005]. For instance, the source-code set involved in task control is usually called the task manager. Many concepts derived from this model are present in modern OS designs (e.g., Linux). However, this paradigm only provides support for OS design and development; it does not address how to port an OS to a new platform.
Abstract. The paper deals with the design of a web-based system for Computer-Aided Manufacturing (CAM). Remote applications and databases located in a "private cloud" are proposed as the basis of such a system. The suggested approach comprises: a service-oriented architecture using web applications and web services as modules; multi-agent technologies for implementing the information-exchange functions between the components of the system; and the use of a PDM system for managing technology projects within the CAM. The proposed architecture involves converting the CAM into a corporate information system that will provide coordinated functioning of subsystems based on a common information space, parallelize collective work on technology projects, and provide effective control of production planning. A system has been developed within this architecture which makes it rather simple to connect technological subsystems to the system and to implement their interaction. The system makes it possible to produce a CAM configuration for a particular company from the set of developed subsystems and databases, specifying appropriate access rights for the employees of the company. The proposed approach simplifies the maintenance of software and information support for CAM subsystems owing to their central location in the data center. The results can be used as a basis for CAM design and testing within the learning process for the development and modernization of the system algorithms, and can then be tested in the extended enterprise. Keywords: production planning, multi-agent technologies, PDM system, web services, cloud computing, web-based system, CAM architecture, web-based CAM.
Abstract: Problem statement: The Software Communications Architecture (SCA) was developed to improve software reuse and interoperability in Software Defined Radios (SDR). There have been performance concerns since its conception. Arguably, the majority of the problems and inefficiencies associated with the SCA can be attributed to the assumption of modular distributed platforms relying on General Purpose Processors (GPPs) to perform all signal processing. Approach: Significant improvements in cost and power consumption can be obtained by utilizing specialized and more efficient platforms. Digital Signal Processors (DSPs) present such a platform and have been widely used in the communications industry. Improvements in development tools and middleware technology have opened the possibility of fully integrating DSPs into the SCA. This approach takes advantage of the exceptional power, cost and performance characteristics of DSPs, while still enjoying the flexibility and portability of the SCA. Results: This study presents the design and implementation of an SCA Core Framework (CF) for a TI TMS320C6416 DSP. The framework is deployed on a C6416 Device Cycle Accurate Simulator and a TI C6416 development board. The SCA CF is implemented by leveraging OSSIE, an open-source implementation of the SCA, to support the DSP platform. OIS's ORBExpress DSP and DSP/BIOS are used as the middleware and operating system, respectively. A sample waveform was developed to demonstrate the framework's functionality. Benchmark results for the framework and sample applications are provided. Conclusion: Benchmark results show that using OIS's ORBExpress DSP ORB middleware decreases the software memory footprint and increases system performance compared with PrismTech's e*ORB middleware.
Adam Dunkels presented two proposals based on the first approach to this problem, called uIP and lwIP. His proposals are based on redesigning TCP/IP as lightweight, separate software targeting tiny 8-bit microcontrollers. There are three weak points in Dunkels's proposals. First, there is no software model. Since communication protocols are complex software, modeling is a necessary process: software modeling and analysis can improve system maintenance, flexibility, extensibility and ease of system understanding. Second, both uIP and lwIP target tiny embedded systems, which makes it difficult to use the solution on another microcontroller architecture. Lastly, there is no modularity, which makes it hard to customize the functionality of the system. It would be clearer if the authors of the system had presented the architecture.
In this study, we provided a new approach focused on the concept of the product as a key element (the product at the heart of PLM): the product implicitly embeds the information about itself. For this, it is necessary to add a new dimension to the PLM system in which the information flow is closed horizontally and vertically. Starting from emerging technologies such as product-embedded device identification and mobile agents, we proposed an architecture that closes the loop of product lifecycle management by integrating the end-of-life phase. To this end, we extended the EOL to the use phase. In other terms, this widening aims to involve the final users in the PLM process (involvement of the customer). Through this involvement we aim to minimize the launch phase based on the 'Intelligent Product' feedback, both tacit and explicit. Furthermore, to gather product lifecycle data during the use phase, a PEID such as RFID has been introduced. Moreover, the necessary software components (the product information counterpart) and their relations have been addressed: the mobile-agent architecture to satisfy the new requirements of the extended enterprise. In addition, to streamline several product lifecycle operations based on the proposed architecture, a general scenario for creating knowledge from the end-of-life phase has been introduced.
In a 2007 article, Barroso and Hölzle proposed the notion of energy-proportional computing and maintained that it should be a primary goal for computer design. They argue that datacenters are designed to keep server workload between 10% and 50% of maximum utilization. This is done to ensure that throughput and latency service-level agreements will be met and that there will be room for handling component failures and planned maintenance. The problem is that the region under 50% utilization is the least energy-efficient operating region of servers, due to static power consumption (depicted in Figure 2.8a). As seen, memory/storage components consume both dynamic power (used to change memory contents) and static power (used for data retention and component availability). The static power used by SRAM and DRAM goes mostly to refreshes made necessary by power leakage; in disks, most static power is used to keep the disk spinning and ready to answer requests with low latency. As Barroso and Hölzle put it, "essentially, even an energy-efficient server still consumes about half of its full power when doing virtually no work". This problem is aggravated by the fact that the costs are reflected not only in the server electricity bill, but also in the energy for cooling the datacenter and in infrastructure provisioning costs, which increase proportionally to the server energy budget.
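A first-order linear power model makes the argument concrete (the 50% static floor is the article's rough figure; the linear model itself is a common approximation, not taken from the article):

```python
# Why sub-50% utilization is energy-inefficient: with a large static
# (idle) power floor, useful work delivered per watt collapses at low load.

def power(utilization, idle_fraction=0.5, peak_power=1.0):
    """Linear model: static floor plus a dynamic part scaling with load.
    idle_fraction=0.5 reflects Barroso and Hölzle's "about half" figure."""
    return peak_power * (idle_fraction + (1 - idle_fraction) * utilization)

def efficiency(utilization):
    """Work delivered per unit power, normalized so full load gives 1.0."""
    return utilization / power(utilization)

# At 10% load the server already draws 55% of peak power, so efficiency
# is only about 0.18 of what it achieves at full utilization.
low, full = efficiency(0.10), efficiency(1.0)
```

A perfectly energy-proportional server would have `idle_fraction = 0`, making efficiency constant across the whole utilization range; the gap between the two curves is exactly what the energy-proportionality agenda targets.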
The next articles introduce a new subject: fracture. A group of investigators developed a numerical model to evaluate the fracture characterisation parameters of a fractured gear tooth (Sfakiotakis et al., 1997). The influence of heat-convection characteristics and mechanical loading conditions is examined, and stress intensity factors are evaluated for a wide range of operating parameters useful for design purposes. Another paper, by Hu and his colleagues (2009), concerns a transient coupled finite element model developed to compute the temperature and stress fields in cast billets, so as to predict the defects of I-type billets made from magnesium alloy; it also seeks the causes of, and solutions for, surface cracks and shrinkage during the casting process. The simulation is performed in ANSYS and the main goal is to optimise the parameters. The next study was completed by a group of researchers who modelled a thermo-mechanical interface for failure analysis of concrete subjected to high temperature (Caggiano & Etse, 2015). This model is an extension of a fracture-energy-based interface formulation which now includes thermal damage induced by high temperature and/or fire. Some analysts have also suggested a similar model for thermally induced rock damage, based on the particle simulation method (Xia, 2015). The mechanism of surface failure due to temperature rise is a very important problem in gear design. Atan (2005) considered that this subject was not fully analysed and examined the mechanisms of thermal stresses and thermal cycling in the contact zone during gear mesh, the point being to derive design criteria for modifying the contact stresses due to thermal stresses. The effects of the material, oil-film thickness, surface roughness and geometric operating parameters are illustrated, and the effects of load on the temperature rise and on the modification parameters are also evaluated.
Server virtualization has become popular in data centers since it provides an easy mechanism to cleanly partition physical resources, allowing multiple applications to run in isolation on a single server. Virtualization helps with server consolidation and provides flexible resource-management mechanisms, particularly in DCNs. We note that virtualization is not a new technology, but it has regained popularity in recent years because of the promise of improved resource utilization through server consolidation. According to , a data center is the consolidation point for provisioning the multiple services that drive an enterprise's business. In , the authors list the data center hardware and software components. The hardware components are firewalls, intrusion detection systems, content switches, access switches and core switches. The software components are IPSec and VPN, antivirus software, network management systems and an access-control server. However, for effective security implementation in a virtualized DCN, this work goes further and proposes a more secure data center design that is programmable, strongly isolated, and flexible, using the OFSDN approach in our context.
Embedded systems are usually embedded in a larger system for some specific purpose other than providing general-purpose computing. They are often developed with off-the-shelf microprocessors/digital signal processors (DSPs)/microcontrollers and ASICs/FPGAs to minimize cost and development time. In these systems, the hardware, software and associated components are optimized for the given application, or given set of tasks, under the prevailing operating conditions, keeping the size, cost and performance requirements in view. In recent years, the design and development of heterogeneous embedded systems has gained tremendous importance and poses great challenges, owing to its wide and diverse areas of application, ranging from home appliances and communication to real-time and distributed control systems in defense/aerospace missions [1,2].
The complexity of an embedded system can change from product to product, depending on the task it must perform. Therefore, embedded-system designers must have knowledge of different areas that are sometimes handled separately. Hardware design requires knowledge of digital and/or analog electronics and, at the same time, of electromagnetic-compatibility issues that cannot be neglected in high-frequency operation or in products that must work in very restrictive environments such as those found in hospitals. In turn, the designer must design the software required by the hardware, allowing it to work as expected. Subjects like
A signal-conditioning circuit is incorporated to convert the signal into a voltage or a current. The signal-conditioning circuit is normally connected to the sensor when the sensor output is not compatible with the DAQ card. It offers diverse functions such as linearization, attenuation, amplification, filtering, isolation, and excitation. Upper- and lower-limit ranges can be set for the input parameters and the output signal so that any abnormality triggers a notification. Special types of software can be used to monitor the input parameters and the generated output signal.
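The limit-checking behaviour described above can be sketched as follows (the channel names and limit values are invented for illustration; a real DAQ package would provide its own alarm configuration):

```python
# Sketch of upper/lower-limit supervision of acquired channels.
# Channels and their (lower, upper) limits are hypothetical examples.
LIMITS = {"voltage_V": (0.0, 5.0), "current_mA": (4.0, 20.0)}

def check_sample(channel, value):
    """Return None if the value is within range, otherwise a
    notification string describing the abnormality."""
    lo, hi = LIMITS[channel]
    if value < lo:
        return f"{channel} below lower limit ({value} < {lo})"
    if value > hi:
        return f"{channel} above upper limit ({value} > {hi})"
    return None
```

In a monitoring loop, any non-`None` return value would be forwarded to the operator as the notification the text describes.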
In Ghana, the maintenance of traffic signals is a fundamental task of the Department of Urban Roads, which is responsible for the operation, maintenance, improvement and repair of all traffic-signal installations. At present, there are 89 junctions and 4 pedestrian crossings controlled by traffic signals within the city of Accra. Two main contractors, Facol Roads Limited and Signal Limited, are employed to assist with the maintenance. Both deal with technical faults such as replacing fused bulbs in signal heads and push buttons, aligning signal heads and poles, resetting controllers in times of malfunction, and tracing and repairing cables in manholes, signal heads and pole compartments. The existing system of routine maintenance has been observed to be very expensive over the years, time-consuming and inherently inefficient. This problem calls for a highly efficient telemetric control system that will monitor all traffic-light intersections and also establish some control over them from a base station. This paper addresses the design of such a telemetry system through the selection of the hardware system architecture and of appropriate software for the design of a human-machine interface to monitor the state of traffic-light intersections and to take action to resolve faulty ones.