ABSTRACT: “Cloud” computing, a comparatively recent term, builds on decades of research in virtualization, distributed computing, utility computing, and, more recently, computer networking, web technology, and software services. Cloud computing represents a shift away from computing as a product that is purchased towards computing as a service delivered to consumers over the Internet from large-scale data centers, or “clouds”. While cloud computing is gaining popularity in the IT industry, academia appears to be lagging behind developments in this field. Cloud computing also implies a service-oriented architecture, reduced information technology overhead for the end user, greater flexibility, reduced total cost of ownership, and on-demand services, among other benefits. This paper discusses the concept of “cloud” computing, some of the issues it tries to address, related research topics, and a “cloud” implementation available today.
The choice of hardware infrastructure used for virtualization is left to the implementer — virtual machines (VMs) can be hosted on virtually any modern hardware platform. Such platforms can be either cluster installations with hardware virtualization support or multiple lower-end nodes connected by a fast and reliable network. In the second case, VM load balancing and migration can be crucial for ensuring the efficient use of hardware resources. Fortunately, most commercial and open-source cloud management software provides VM migration support.
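The load-balancing idea mentioned above can be sketched as a simple greedy placement policy. This is a minimal illustration only, assuming each host reports free memory and each VM declares a memory demand; host and VM names are hypothetical, and real cloud managers use richer metrics (CPU, network, affinity) and live migration rather than static placement.

```python
def place_vms(hosts, vms):
    """Assign each VM to the host with the most free memory (greedy).

    hosts: dict host_name -> free memory (MB)
    vms:   dict vm_name   -> demanded memory (MB)
    Returns dict vm_name -> host_name, or raises if a VM does not fit.
    """
    free = dict(hosts)
    placement = {}
    # Place large VMs first to reduce fragmentation.
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)
        if free[host] < demand:
            raise RuntimeError(f"no host can fit {vm} ({demand} MB)")
        free[host] -= demand
        placement[vm] = host
    return placement

placement = place_vms({"node1": 8192, "node2": 4096},
                      {"vm-a": 4096, "vm-b": 2048, "vm-c": 2048})
# → vm-a and vm-b land on node1, vm-c on node2
```

A real scheduler would re-run such a policy periodically and migrate VMs whose hosts become overloaded.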
Coming also from the DMTF, CIM [DMTF, 2003] is an open standard that defines how systems, networks, applications, and services can be represented as a common set of objects and relationships between them. The specification is intended to support consistent management and exchange of semantically rich elements throughout the network, independent of their manufacturer or provider. CIM also provides the means to actively control and manage these elements. CIM comprises the CIM Schema and the CIM Infrastructure Specification, both used in the WBEM architecture. The CIM Infrastructure Specification defines a UML-based architecture and concepts, including a language in which the CIM Schema and its extensions are defined, and a method for mapping CIM to other information models such as SNMP. The CIM Schema defines a collection of objects and the relationships between them, providing a standard base for the elements to be managed. It tries to cover the majority of traditional elements in an ICT environment, such as computer systems, operating systems, networks, services, and storage. To accommodate product- and vendor-specific features, the CIM Schema can be extended so that such features are represented seamlessly alongside its common base. The capability to use generic information defined by CIM models, covering a broad range of generic applications that can interoperate with each other, was the main reason the original Object Linking and Embedding (OLE) for Process Control (OPC) specifications were a success from the start. Since the main objective of the OPC standards is to ensure the consistent exchange of data between all OPC-enabled automation components and the control system, the management aspect is also included. Management communication is thus handled following the same premises: component data are defined using known CIM models, and data are set or retrieved.
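The object-and-relationship style of CIM can be illustrated with a toy model. This is only a sketch of the idea, not the real DMTF schema: the class names (ComputerSystem, OperatingSystem) echo CIM naming, but the association class and its fields here are simplified placeholders.

```python
class ManagedElement:
    """Common base class, mirroring CIM's idea of a shared root."""
    def __init__(self, name):
        self.name = name

class ComputerSystem(ManagedElement):
    pass

class OperatingSystem(ManagedElement):
    pass

class RunningOS:
    """Association object linking a system to the OS it runs,
    in the spirit of CIM association classes."""
    def __init__(self, system, os):
        self.antecedent = system   # the hosting ComputerSystem
        self.dependent = os        # the hosted OperatingSystem

host = ComputerSystem("server01")
os_instance = OperatingSystem("Linux 6.1")
assoc = RunningOS(host, os_instance)
```

Vendor extensions would be modeled as subclasses of these common classes, keeping the shared base intact.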
The use of OPC communication standards in the management domain tends to be comparable to the SNMP approach. A longer version of this part of device-management history is presented in [Westerinen and Bumpus, 2003]. In the same work, the authors envision a future of distributed management based on web-services protocols and common semantics and models.
To smooth the variation in the response times, a simple moving average (SMA) over 100 observations was applied (Cito et al., 2015). Figure 4 shows six graphs comparing the performance of the SMART architecture during the first experiment (with 18 users) and the second one (with 36 users). Graphs 4(a), 4(b), and 4(c) display the raw response times (y-axis) of each submission (x-axis) and the number of failures (exceptions) that occurred during the experiment for the teleconsulting, telediagnosis, and tele-education services, respectively. The graphs clearly show that no exceptions were detected in a total of 19,800 requests. Graphs 4(d), 4(e), and 4(f) display the response time (RT) and processing time (PT) (y-axis) over windows of 100 requests (x-axis) for the same services. When the number of simultaneous users was doubled, RT increased in approximately the same proportion for teleconsulting and telediagnosis, but practically quadrupled for tele-education. The PT values, however, remained practically the same relative to RT in Figures 4(d), 4(e), and 4(f) for each activity. The data of the second experiment is more volatile, probably because of the network layer. Around the time points between 1,500 and 2,000 requests there is a small increase in PT, visually noticeable for teleconsulting, but not large enough to be considered a significant change.
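The SMA smoothing step can be sketched as follows. The window of 100 matches the text; the sample data below is synthetic, not taken from the paper's experiments.

```python
from collections import deque

def sma(values, window=100):
    """Return the simple moving average of `values` over `window` points.

    For the first window-1 points the average is taken over the values
    seen so far, so the output has the same length as the input.
    """
    buf = deque(maxlen=window)
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

smoothed = sma([10, 20, 30, 40], window=2)
# → [10.0, 15.0, 25.0, 35.0]
```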
Vehicle-to-vehicle (V2V) communication networks are, as their designation suggests, networks formed by vehicles equipped with wireless communication devices that can communicate with each other. In V2V networks, each vehicle detects, within a certain radius, other vehicles in range, and can report its position, velocity, direction, and other characteristics. This kind of communication has lately been one of the fastest-growing fields of interest in telecommunications. Vehicles with such capabilities can form a special type of mobile ad-hoc network with particular applications, known as a Vehicular Ad-hoc Network (VANET). VANETs are a special type of Mobile Ad-hoc Network (MANET) that supports communication between vehicles. According to , VANETs inherit some characteristics from MANETs, but also add new features that differentiate them from other mobile ad-hoc networks. These characteristics include high mobility, an open network with dynamic topology, limited connectivity, the potential to achieve larger scale, the fact that all nodes are providers, forwarders, and consumers of data, and wireless transmission that can suffer from considerable noise and interference.
The first category was originally of little interest to us, since these applications do not involve communication. However, the main idea underlying our architecture is to transparently support resource-constrained mobile devices with powerful proxy servers, and we are therefore currently exploring how to generalize this idea to support standalone applications as well. Applications in the second category will be used on multiple platforms: a user will have a version of his or her favorite word processor executing on a laptop as well as on the more powerful desktop in the office. This requires the exchange and synchronization of documents between the machines. Depending on the prevailing view of available network connectivity, two approaches are imaginable; Windows CE and MS Office exemplify the first. To facilitate access to the Internet, only the client side of the application is adapted to function well in the dynamic and resource-constrained mobile environment. The architecture proposed below is intended for applications in this category. Vertically integrated business applications are often structured as client-server applications, and their backends (servers) have to support both existing wired desktops and wireless mobile devices. One example is a bank, where the back office has to support account managers in branch offices as well as mobile customer service representatives.
In Maha Abousharkh and Mouftah (2011a), the development of an SOA-based middleware for Wireless Body Area Networks (WBANs) was proposed. The architecture presented in the article is composed of several edge nodes, a central node, and a central unit. The proposed middleware forms the communication interface between the sensors. The target user of this solution is a patient under home care who needs continuous monitoring, or even the elderly. In such cases, the WBAN would detect emergency situations and send the patient's information to the responsible medical team. The authors used Web Services as a potential solution to guarantee interoperability and to address the challenges of equipment use and configuration. The intent of the article was to attract more developers to this platform.
In this paper we propose a design pattern that supports the composition of web services on top of a Service-Oriented Architecture (SOA). We combine two existing design patterns, which together form the amalgamation underlying our proposed pattern. When the user first requests a service, the client machine issues an HTTP GET request to all service-providing servers in the network, asking each to send its WSDL (Web Service Description Language) document. The Case-Based Reasoning design pattern on the client machine then uses the WSDLs received from the different servers, together with the service request given as input by the user, to decide which web service provider best matches the user's request. This decision making is performed on the basis of cases, or statement-based rules, so that the WSDL matching the requested service can be used to fulfil it. Once a WSDL has been selected, on the condition that the input parameters of the requested service match the input parameters in the WSDL XML file, a SOAP message is used to send the request to the server.
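The selection step described above (matching the request's input parameters against each candidate's declared parameters) can be sketched as follows. This is an illustrative simplification: the WSDLs are represented as plain dicts of parameter names to types, whereas a real implementation would parse the WSDL XML, and the service names are made up.

```python
def best_match(request_params, candidates):
    """Return the name of the candidate service whose declared input
    parameters exactly match the request, or None if none matches.

    request_params: dict param_name -> type name
    candidates:     dict service_name -> dict of declared input params
    """
    for service, declared in candidates.items():
        if declared == request_params:
            return service
    return None

wsdls = {
    "WeatherService": {"city": "string"},
    "ForecastService": {"city": "string", "days": "int"},
}
chosen = best_match({"city": "string", "days": "int"}, wsdls)
# → "ForecastService"
```

A case-based variant would rank partial matches against stored cases rather than requiring exact equality.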
We evaluated the proposed system using two different approaches: a qualitative one and a quantitative one. The goal of the qualitative evaluation was to implement the main building blocks of the middleware architecture and to analyze the interaction between applications and the system; we also analyzed the impact of adopting the SOAP/XML representation. In the quantitative evaluation we performed simulations to show that adopting a network protocol that meets application requirements can improve overall WSN performance. WSN performance can be evaluated according to different metrics, such as average delay, average dissipated energy, and event delivery ratio. In this first stage of the work we are interested in measuring the average dissipated energy, defined as the ratio of the total energy dissipated per node in the network to the number of distinct events delivered to the sink node; it is directly related to WSN lifetime.
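The average dissipated energy metric defined above reduces to a short computation. The numbers below are synthetic, purely to show the shape of the calculation.

```python
def avg_dissipated_energy(energy_per_node, events_delivered):
    """energy_per_node: list of joules dissipated by each node.
    events_delivered: number of distinct events received at the sink.
    Returns energy per node per delivered event (lower is better)."""
    if events_delivered == 0:
        raise ValueError("no events delivered; metric undefined")
    per_node = sum(energy_per_node) / len(energy_per_node)
    return per_node / events_delivered

# 4 nodes dissipating 2 J each, 10 distinct events at the sink:
metric = avg_dissipated_energy([2.0, 2.0, 2.0, 2.0], 10)
# → 0.2 J per node per event
```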
Simulation results from previous research cannot be taken as standard guidelines for commercial use, because users may face dynamic, real-time deployment scenarios that were not considered in the simulation test bed. Once the programmer designs the application, evaluation in a practical setting often reveals dynamic conditions missing from the coding process, such as battery fluctuations or the unpredictable performance of laptops, which poses a great challenge to implementing hybrid protocols in a real-time test bed. For these reasons, it takes significantly more effort to create an ad-hoc routing protocol implementation than a simulation. The wireless business, as seen in mobile-commerce networks, seems most suited to fully online resource exchange and to online trading in local groups, where entities can easily meet to transfer digital content and payment as agreed. As this work shows, however, supporting such types of mobile commerce is more challenging than supporting wireless commerce within provider networks. This work highlights elements that can serve as guidelines for understanding how to build a more efficient, secure, and reliable m-commerce system. Further work could study communication in vehicular ad-hoc networks by estimating the average delay, packet drop, and packet delivery ratio, which will be our focus in future work. In summary, ad-hoc networks have the potential to become a serious part of tomorrow's 4G communication networks, and they can open up new business opportunities for network operators and service providers.
SOA facilitates the development of distributed systems spread across multiple machines connected over the Internet. Although distributing a processing task is usually seen as a way to increase performance, slow connections and reduced parallelism in SOA usually lead to higher response times, since information must be sent over the network. Additional overhead is introduced by the need for data serialization and deserialization. Higher response times can make SOA unsuitable for certain real-time applications. To assure a high degree of interoperability, several compromises have been made that negatively affect the performance of SOA applications. Using standard XML for web services enables interoperability between different platforms and operating systems, but it also increases application response time: XML text-based messages can be up to twenty times larger than equivalent binary messages, and they require operations such as validation and parsing before the contained data can be used. A possible approach to this problem is using binary XML, as described in .
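The size overhead of text-based XML can be seen with a tiny standard-library experiment. The message layout (a sensor id plus a temperature reading) is made up for illustration; real services would exchange SOAP envelopes or binary XML rather than this hand-built markup.

```python
import struct

sensor_id, temperature = 42, 21.5

# Text-based XML encoding of the reading.
xml_msg = (
    f"<reading><id>{sensor_id}</id>"
    f"<temp>{temperature}</temp></reading>"
).encode("utf-8")

# Same data as a packed binary record: one unsigned int + one double.
bin_msg = struct.pack("!Id", sensor_id, temperature)

print(len(xml_msg), len(bin_msg))  # the XML message is several times larger
```

Even for this trivial payload the XML form is roughly four times larger, before counting the cost of parsing and validation on the receiving side.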
From 2005 to 2010, the number of service-related journals more than doubled (from 9 to 20) according to Moussa (2010). Furthermore, attention from practitioners and from academic institutions around the world has increased. Industry leaders like Hewlett Packard, Oracle, Accenture, Electronic Data Systems, and British Telecom have also set up their own service science agendas. A growing number of university-affiliated service centers and academic-oriented networks have been established around the world (e.g., the California Center for Service Science, the Center for Service Management at Loughborough University, and the Latin American Service Research Network) (OSTROM et al., 2015).
A specific storage index (SI) per DHT is also preserved. For each partition locally stored, it contains the partition's PID, the addressing reference (AR) to its addressing service, and a storage counter (SC). Conveniently, if the AS subsystem is allowed to inspect the SI indexes of the SS subsystem, a routing chain may end sooner. Figure 4 magnifies the SS subsystem of s4, already visible in Figure 3; it shows the SI index for d2 and d1, including some AR back-references (also to s4).
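The structure of the storage index can be sketched as follows: one entry per locally stored partition, holding its PID, the addressing reference (AR) to its addressing service, and a storage counter (SC). Node names like "s4" and the PID values are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class SIEntry:
    pid: int        # partition identifier
    ar: str         # addressing reference (node running the AS)
    sc: int = 0     # storage counter

class StorageIndex:
    """Storage index of one DHT at one storage-service node."""
    def __init__(self):
        self._entries = {}

    def add(self, pid, ar):
        self._entries[pid] = SIEntry(pid, ar)

    def touch(self, pid):
        """Bump the storage counter of a locally stored partition."""
        self._entries[pid].sc += 1

    def lookup(self, pid):
        return self._entries.get(pid)

si = StorageIndex()
si.add(2, ar="s4")   # partition d2, addressed at node s4
si.touch(2)
```

Letting the AS subsystem call `lookup` directly is what allows a routing chain to terminate early, as noted in the text.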
A hard constraint of our architecture is the simultaneous support of mobility and QoS. We developed an innovative use of QoS Brokers (QoSBs), incorporated into a traditional DiffServ approach, to control and manage available resources efficiently, even for mobile users. The QoS Broker interacts with the AAAC system during the (required) user registration phase, receiving from it all user-specific information relevant for QoS provisioning: the Network View of the User Profile (NVUP). The NVUP describes the services subscribed by the user on the MN. The QoS Broker may then perform Service Admission Control (SAC) decisions on every service request made by the user's terminal, based on the NVUP and on the network state. For that, the QoS Broker also interacts with the ARs in its QoS domain; these interactions are required both for the ARs' QoS configuration and for service authorisation.
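The SAC decision described above can be sketched as a two-part check: is the requested service in the user's NVUP, and does the network state still allow its demand? The service names and the single-number capacity model are hypothetical simplifications of a real admission-control decision.

```python
class QoSBroker:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps  # remaining bandwidth in the domain

    def admit(self, nvup, service, demand_kbps):
        """Admit the request only if the NVUP includes the service and
        the domain still has enough capacity for its demand."""
        if service not in nvup:
            return False            # user has not subscribed this service
        if demand_kbps > self.capacity:
            return False            # network state cannot support it
        self.capacity -= demand_kbps
        return True

broker = QoSBroker(capacity_kbps=1000)
nvup = {"voice", "video"}
accepted = broker.admit(nvup, "video", 800)   # admitted; 200 kbps left
rejected = broker.admit(nvup, "voice", 300)   # refused; exceeds capacity
```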
Business process management has become an increasingly common activity in organizations. In this context, business process descriptions are considered a useful artifact both for identifying business processes and for complementing business process documentation. However, organizations do not always create business process descriptions to document their processes. In addition, such descriptions may not follow a specific format, and may therefore contain ambiguous or non-recurring sentences that make the process difficult to understand. This dissertation thus aims to develop an approach for generating business process-oriented texts. In the context of this work, a business process-oriented text is defined as a text that is structured, preserves as much information related to the business process as possible, and allows the quality of the process to be checked against BPMN 2.0 and against soundness. To achieve this goal, an analysis was performed of the literature and of 64 business process descriptions in order to define what business process-oriented texts should look like. In addition, an SOA-based architecture was developed in which the steps required to generate the business process-oriented text are assigned to individual services. The analysis identified 101 recurring sentence templates in business process descriptions, of which 13 were considered to have ambiguity issues according to the adopted criteria. Furthermore, a prototype was developed, and the process description produced by the approach was compared to the original process model through a process similarity technique. The findings on how business process-oriented texts should be defined can support other approaches in generating business process descriptions better suited to process analysts and domain experts.
Finally, the architecture can be enhanced with other services capable of providing additional functionalities that contribute to the creation and management of business process descriptions in organizations.
FSM-based testing has been studied for several decades (Moore, 1956; Gill, 1962), yet it continues to see recent advances (Simao and Petrenko, 2010a,b; Dorofeeva et al., 2010; Hierons and Ural, 2010; Pedrosa and Moura, 2012). Great effort has been spent on developing methods that generate effective test suites, i.e., methods that detect as many faults as possible. A so-called complete test suite, capable of revealing all faults from a given fault domain in an implementation, can be generated from a specification if some assumptions are made. One of them is that the maximum number of states in the implementation is known. When both specification and implementation have the same number of states, the generated test suite is called n-complete (Dorofeeva et al., 2005b). Several methods in the literature generate n-complete test suites, such as W, HSI, H, SPY, and P. These methods produce test suites with different characteristics that can only be compared experimentally. Few experimental studies comparing different FSM-based test methods can be found in the literature (Dorofeeva et al., 2005a; Simao et al., 2009a; Dorofeeva et al., 2010), and they fall short when it comes to considering recent contributions and analyzing the aspects relevant to the practical application of these methods in service-oriented applications.
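The fault-detection step underlying all of these methods can be sketched concretely: run each test (an input sequence) on both the specification FSM and the implementation FSM and compare the output sequences. The two tiny Mealy machines below are made up; methods such as W, HSI, H, SPY, and P differ in how they *generate* the test suite, not in this checking step.

```python
def run(fsm, initial, inputs):
    """Execute a Mealy machine: fsm maps (state, input) -> (next, output)."""
    state, outputs = initial, []
    for i in inputs:
        state, out = fsm[(state, i)]
        outputs.append(out)
    return outputs

spec = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
        ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}
# Faulty implementation: wrong output on transition ("s1", "b").
impl = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
        ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 1)}

def detects_fault(test_suite):
    """True if any test yields different outputs on spec and impl."""
    return any(run(spec, "s0", t) != run(impl, "s0", t) for t in test_suite)

found = detects_fault([["a", "b"], ["b", "a"]])
# → True: the test "ab" reaches s1 and exposes the wrong output
```

An n-complete suite is one guaranteed to return True for *every* faulty implementation in the fault domain, not just this particular one.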
After all three services have executed, the bindings for the translation and printer services are broken: the Web Services currently executing the translation and printing operations are terminated, and invoking these services again will result in failure. Lines 21-26 in Appendix A.6 of the TestTranslator class show that when a service invocation failure occurs, the index in the service instances list is stored and a boolean variable is set to true. The former is used by the middleware during service replacement to replace the failing instance, in the ServiceAppProperties class, with the new replacement. The latter is used in line 27 to check whether a failure occurred, in order to trigger the service binding replacement process (lines 30-31). A new binding is established for the translation service and the service is executed as if, from the user's perspective, no error had occurred.
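The failure-driven rebinding described above can be sketched generically: when invoking a bound service instance fails, the binding records the failure, replaces the dead instance, and retries transparently. The classes below and the "terminated service" simulation are illustrative, not the middleware's actual API.

```python
class ServiceUnavailable(Exception):
    pass

class Binding:
    """Wraps a list of interchangeable service instances; index 0 is
    the currently bound one."""
    def __init__(self, instances):
        self.instances = instances

    def invoke(self, *args):
        try:
            return self.instances[0](*args)
        except ServiceUnavailable:
            # Failure detected: drop the dead instance and rebind.
            self.instances.pop(0)
            if not self.instances:
                raise
            return self.instances[0](*args)

def dead_translator(text):
    raise ServiceUnavailable("service terminated")

def backup_translator(text):
    return text.upper()   # stand-in for a real translation service

binding = Binding([dead_translator, backup_translator])
result = binding.invoke("hello")
# → "HELLO": the failure triggered replacement, invisible to the caller
```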
A problem arises when the application is executing a method that belongs to an instance of a class that needs to be reloaded. In this case, the application is forced to finish executing the method before the new version of the class can be loaded; if the new version does not define the method, or defines a new version of it, the old version is executed anyway. However, these problems, known as the schema evolution problem, can be bypassed easily by adopting a modular approach and organizing the software into different class loaders. The following example illustrates this. A user subscribes to a service by adding an instance of the Service class to his profile. A first approach is to define the Service class as a single class containing both the data of the service and the specific methods for executing it. Instead, another approach (the one we use here) organizes the service as a modular entity made of two different classes: a Method class containing the methods for executing the service and a Data class containing the specific data of the service. In this way, we only need to reload the Data class if the user modifies the data (preferences) of the service, instead of reloading the whole class, thereby avoiding the potential problems associated with reloading an active class.
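The Data/Method split can be sketched as follows, transposed from Java class loaders to plain object composition: behavior and data live in separate classes, so updating preferences only replaces the Data object while the running Method object stays loaded. The class names follow the text; the "language" preference is an invented example.

```python
class Data:
    """Holds only the user-specific preferences of the service."""
    def __init__(self, language):
        self.language = language

class Method:
    """Holds only the behavior; it is handed a Data object to act on."""
    def __init__(self, data):
        self.data = data

    def greet(self):
        greetings = {"en": "hello", "fr": "bonjour"}
        return greetings[self.data.language]

service = Method(Data("en"))
first = service.greet()        # → "hello"

# User changes a preference: only the Data part is replaced;
# the Method object (the "active class") is never reloaded.
service.data = Data("fr")
second = service.greet()       # → "bonjour"
```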
The evolution of manufacturing systems and the emergence of decentralised control require flexibility at various levels of their lifecycle. Emerging methods, such as multi-agent and service-oriented systems, are major research topics in the effort to revitalize traditional production procedures. This paper gives an overview of the service-oriented approach in terms of platforms and engineering tools, from the perspective of automation and production systems. From the basic foundations to more complex interactions, service-oriented architectures and their implementation in the form of web services provide diverse, proven features that are welcome at different stages of the production system's life-cycle. Key elements are the concepts of modelling and collaboration, which support the automatic binding and synchronisation of individual low-value services into more complex and meaningful structures. Such interactions can be specified with Petri nets, a mathematically well-founded tool with features suited to the modelling of such systems. The appropriate combination of these methodologies should motivate the development of service-oriented manufacturing systems that embrace the vision of collaborative automation.
With this in mind, the advantages of developing a specific architecture for a more restricted context, based entirely on open-source technologies, standardized mechanisms, protocols, and audio codecs, seemed obvious. Thus, the aim of the project was to define such an architecture and develop a modular audio distribution system with similar capabilities that could be implemented on top of common IP networks. More specifically, the proposed solution is based on the Internet-standard network management protocol, used not only for communication between its entities but also as an abstract interface syntax for managing audio file collections. It also allows the use of open-source mechanisms and technologies for streaming audio from the server to the playing devices.