SLAs have been used since the late 1980s by telecom operators as part of their customer contracts. Since then, the SLA has become a standard protocol for business applications and Web services. Two main specifications have been designed to describe SLAs: 1) the Service Negotiation and Acquisition Protocol (SNAP), which supports reliable management of remote SLAs and describes the negotiation process in the system; and 2) the conceptual SLA framework for cloud computing, which describes the main characteristics of SLAs in cloud computing and explains the SLA parameters, specified by metrics, for four types of cloud services (i.e., IaaS, SaaS, PaaS, and Storage as a Service). Chenkang, W., et al. have suggested a different implementation structure for SLA parameters by adding another dimension related to both providers and users. For IaaS, the SLA metrics capture performance for the provider and experience for the users, while in PaaS both providers and users need stronger guarantees of integration capability and scalability. Finally, for SaaS, providers need to offer specialized functions such as multi-terminal support and customization, and users care about stable usability.
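The idea of metric-specified SLA parameters per service model can be sketched as a small compliance check. The metric names and thresholds below are illustrative assumptions, not taken from any of the cited specifications:

```python
# Minimal sketch of per-service-model SLA metrics and a compliance check.
# Metric names and targets are invented for illustration.

SLA_TEMPLATES = {
    "IaaS": {"cpu_availability_pct": 99.9, "max_provision_time_s": 120},
    "PaaS": {"platform_availability_pct": 99.5, "max_scale_out_time_s": 300},
    "SaaS": {"app_availability_pct": 99.0, "max_response_time_ms": 800},
}

def check_compliance(service_model: str, measured: dict) -> list:
    """Return the list of SLA metrics violated by the measured values."""
    violations = []
    for metric, target in SLA_TEMPLATES[service_model].items():
        value = measured.get(metric)
        if value is None:
            continue
        # Availability metrics must meet or exceed the target;
        # time-based metrics must not exceed it.
        if metric.endswith("_pct"):
            if value < target:
                violations.append(metric)
        elif value > target:
            violations.append(metric)
    return violations

print(check_compliance("IaaS", {"cpu_availability_pct": 99.95,
                                "max_provision_time_s": 200}))
# A provision time of 200 s exceeds the 120 s target, so it is flagged.
```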
The paper presents a novel load-balancing approach for a heterogeneous distributed environment. The scheduler takes into account a threshold value, based on the ratio of service rates, along with the queue length, to determine whether it is beneficial to migrate a given local task to another node in the system. A Markov process model is used to describe the behavior of the heterogeneous distributed system under the proposed policies. Kumar also proposes a load-balancing algorithm for fair scheduling and compares it to other scheduling schemes, such as Earliest Deadline First, Simple Fair Task Order, Adjusted Fair Task Order, and Max-Min Fair Scheduling, for a computational grid. It addresses fairness by using mean waiting time: tasks are scheduled by fair completion time and rescheduled by the mean waiting time of each task to balance the load. The scheme tries to provide an optimal solution so that execution time is reduced and the expected price for executing all jobs in the grid system is minimized. Anandharajan and Bhagyaveni propose finding the most efficient cloud resource through a Co-operative Power-Aware Scheduled Load Balancing solution to the cloud load-balancing problem. Their algorithm combines the inherent efficiency of the centralized approach with the energy efficiency and fault tolerance of a distributed environment such as the cloud. Shahu Chatrapati et al. propose a Competitive Equilibrium Scheme (CES) that simultaneously minimizes the mean response time of all jobs and the response time of each job individually. Ruay-Shiung Chang et al. propose an Adaptive Scoring Job Scheduling algorithm (ASJS) for a distributed grid environment to reduce the completion time.
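The threshold-based migration test described above can be sketched as follows. The exact decision rule combining queue length and the ratio of service rates is an illustrative assumption, not the paper's precise policy:

```python
# Sketch of a threshold-based migration decision: a node compares expected
# waiting times and migrates a task only if the remote node is expected to
# finish it sooner by more than a service-rate-based threshold. The exact
# rule is an illustrative assumption.

def should_migrate(local_queue: int, remote_queue: int,
                   local_rate: float, remote_rate: float) -> bool:
    """Return True if migrating one task to the remote node looks beneficial."""
    # Expected wait = (jobs ahead of the task, plus the task itself) / rate.
    local_wait = (local_queue + 1) / local_rate
    remote_wait = (remote_queue + 1) / remote_rate
    # Threshold scaled by the service-rate ratio guards against thrashing.
    threshold = local_rate / remote_rate
    return remote_wait + threshold < local_wait

# A fast, lightly loaded remote node attracts the task:
print(should_migrate(local_queue=8, remote_queue=1,
                     local_rate=1.0, remote_rate=2.0))
```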
Abstract—The aim of our article is to propose an online collaborative learning process composed of three iterative steps that combine the advantages of hybrid cloud computing, Web 2.0 tools, and online questionnaires to improve the quality of the educational content of the classroom course. It consists of transporting the course content generated in the classroom, managed by a private cloud, to its learning community, managed by a public cloud, where the community improves it using collaborative Web 2.0 tools and an online questionnaire. The content is later returned to the classroom's private cloud as tested and revised educational material, allowing it to be re-evaluated and updated according to the best proposals and opinions expressed by the learning community. In this context, we ask each user of the public cloud to complete one or more questionnaires, according to the targeted need, in return for access to the submitted documents. This approach benefits from the collaborative and communal intelligence offered by Web 2.0 and integrates the class within its community, allowing its learners and teachers to create personal and professional social relations, thus maximizing the sharing, collaboration, and wide dissemination of the content on the Internet.
Obscenity detection in images and videos is now crucial for social and ethical reasons. Research in this field started two decades ago. Most works are based on skin-color detection, which is not well suited to finding obscenity: many skin-like objects, such as beach photos, animal fur resembling human skin, and skin-colored paintings, inflate the false-positive and false-negative rates. In addition, most works perform well only on particular sets of image or video data. This research describes several aspects of obscenity detection, delineating the strengths, weaknesses, and possible extensions of prior work. Introducing new features and incorporating multiple classifiers and transfer learning will make the work more robust. Moreover, traditional multimedia cloud computing has been investigated in this study, and some new research ideas are proposed.
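To make the weakness of skin-color detection concrete, a minimal per-pixel rule of the kind prior works build on can be sketched as below. The RGB bounds are a commonly used rule of thumb, not the threshold of any specific cited system:

```python
# A minimal per-pixel skin-color test of the kind much prior work builds on:
# classify a pixel as "skin" if its RGB values fall inside a hand-tuned
# region (bounds are a rule-of-thumb assumption for daylight conditions).

def is_skin_pixel(r: int, g: int, b: int) -> bool:
    """Rough RGB skin test; returns True for skin-like colors."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_ratio(pixels) -> float:
    """Fraction of skin-like pixels - a crude per-image feature."""
    hits = sum(1 for (r, g, b) in pixels if is_skin_pixel(r, g, b))
    return hits / len(pixels) if pixels else 0.0

# A sandy beach tone is flagged as skin, illustrating the
# false-positive problem the text describes:
print(is_skin_pixel(194, 178, 128))
```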
Today most people use Internet services for their work. Cloud computing is a general term for anything that involves delivering hosted services over the Internet: a system whereby software programs and storage space can be accessed via the Internet. With cloud computing, we can save our data on the Internet, and a user can access many servers and applications; it provides users with a fast-access platform. Cloud computing evolved from the knowledge and experience of managed services, Internet services, and application service providers. End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and the user's data are stored on servers at a remote location. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service-provider interaction.
A considerable amount of research has been carried out on the protection of data in cloud storage. CloudProof is a system that can be built on top of existing cloud storage services, such as Amazon S3 and Azure blob storage, to ensure data integrity and confidentiality using encryption. To secure data in cloud storage, attribute-based encryption can be used to encrypt data under a specific access-control policy before storage, so that only users holding the required attributes and keys can access the data. Another technique for protecting data in the cloud involves scalable and fine-grained data access control. In this scheme, access policies are defined based on data attributes. Moreover, to overcome the computational overhead caused by fine-grained access control, most computation tasks can be handed over to an untrusted commodity cloud without disclosing data; this is achieved by combining attribute-based encryption, proxy re-encryption, and lazy re-encryption. 2) Protection from Data Loss: To prevent data loss in the cloud, different security measures can be adopted. One of the most important is maintaining a backup of all cloud data that can be accessed in case of data loss. However, the backup must also be protected so that security properties of the data, such as integrity and confidentiality, are maintained. Different data loss prevention (DLP) mechanisms have been proposed in research and academia for preventing data loss in networks, processing, and storage. Many companies, including Symantec, McAfee, and Cisco, have also developed solutions that implement data loss prevention across storage systems, networks, and endpoints.
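The attribute-based access-control idea can be illustrated with a small policy check. Real attribute-based encryption enforces the policy cryptographically during decryption; here only the policy-satisfaction logic is sketched, with hypothetical attribute names:

```python
# Illustrative sketch of attribute-based access control: a data item carries
# a policy over attributes, and only a user whose attribute set satisfies
# the policy may obtain access. Real ABE enforces this cryptographically;
# the attribute and policy names below are invented.

def satisfies(policy: dict, attributes: set) -> bool:
    """Evaluate a policy of the form {"all_of": [...], "any_of": [...]}."""
    all_of = policy.get("all_of", [])
    any_of = policy.get("any_of", [])
    if not all(a in attributes for a in all_of):
        return False
    # An empty "any_of" clause imposes no extra constraint.
    return not any_of or any(a in attributes for a in any_of)

policy = {"all_of": ["employee"], "any_of": ["finance", "audit"]}
print(satisfies(policy, {"employee", "audit"}))   # policy satisfied
print(satisfies(policy, {"employee", "intern"}))  # missing any_of attribute
```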
system, transportation, etc. The authority should specify the budgeted expenditures and revenues in the field of waste management for the short-term period of the plan, distributed over the relevant entries. If information is available, other economic aspects of the plan may be considered. As mentioned, economic indicators can provide useful information about economic efficiency and make it possible to compare the costs of different systems. Other calculations may also be of importance in assessing the efficiency and performance of the system. When the plan has been finalized and disseminated, its implementation begins. Besides implementing the measures indicated in the action plan, regular reviews of the implementation and monitoring of the performance of the waste management system are needed if the plan is to be an effective and active instrument in waste management and not just another dusty document on a shelf. The time and activity schedule of the plan provides a framework for the implementation phase, as it shows the tasks to be carried out, the persons or organizations responsible for implementation, and a time schedule for implementing each initiative. As such, implementation is simply about conducting and carrying out the measures in the plan. Implementation is, however, not as simple as that, and before carrying out the measures, more planning may be needed. Each of the measures can be considered a task involving planning to some extent, which means that some of the recommendations given in this document can also be applied to more limited planning tasks. Depending on the size and complexity of the task, the relevant parties for its implementation may, for example, have to be identified and involved, and terms of reference, including time and activity schedules, may have to be prepared. Monitoring and review of waste management performance is an important element in measuring the influence and success of the plan and will
Under intrusion or abnormal attacks, how to supply dependable system service autonomously, without degradation, is an essential requirement of network system services. Autonomic computing can overcome the heterogeneity and complexity of computing systems and has been regarded as a novel and effective approach to implementing autonomous systems that address system security issues. To cope with the decline in network service dependability caused by safety threats, we propose an autonomic method for optimizing system service performance based on Q-learning, from the perspective of autonomic computing. First, we obtain the operations by utilizing the nonlinear mapping relations of a feedforward neural network. Second, we obtain the executive action by perceiving changes in the state parameters of the network system's service performance. Third, we calculate the environment reward function value, integrating changes in system service performance and service availability. Finally, we use the self-learning characteristics and prediction ability of Q-learning to drive the system service toward optimal performance. Simulation results show that this method is effective at optimizing the overall dependability and service utility of a system.
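The learning loop above rests on the standard tabular Q-learning update, which can be sketched as follows. The state names, actions, and reward value are illustrative assumptions standing in for the paper's neural-network state mapping and service-performance reward:

```python
# Minimal sketch of the tabular Q-learning update behind the loop above:
# perceive a state, pick an action, receive the service-performance reward,
# and update Q(s, a). States, actions, and rewards here are invented.
import random

ACTIONS = ["throttle", "restart_service", "no_op"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = {}  # (state, action) -> estimated value

def choose_action(state: str) -> str:
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state: str, action: str, reward: float, next_state: str) -> None:
    """Standard Q-learning temporal-difference update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

update("degraded", "restart_service", reward=1.0, next_state="healthy")
print(round(Q[("degraded", "restart_service")], 3))  # 0.1 after one update
```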
Cloud computing in higher education opens avenues for better research, discussion, and collaboration. It also provides a software desktop environment, which minimizes hardware problems, and enables classes to be run at remote locations. Many institutes have moved their resources online, with libraries of hundreds of thousands of books that students can access at any time, and this will expand substantially within the coming few years. Some problems, such as platform security, technical standards, and regulatory and other services, are not yet well resolved in practice and await further research and exploration. Either way, the e-learning application model based on cloud computing will not stop its advance. As cloud computing technologies become more sophisticated and their applications increasingly widespread, e-learning will certainly usher in a new era of cloud computing.
The key to a SOA framework that supports workflows is monetization of its services, an ability to support a range of couplings among workflow building blocks, fault tolerance in its data- and process-aware service-based delivery, and an ability to audit processes, data, and results, i.e., to collect and use provenance information. The component-based approach is characterized by [13, 28] reusability (elements can be reused in other workflows), substitutability (alternative implementations are easy to insert, very precisely specified interfaces are available, run-time component replacement mechanisms exist, there is an ability to verify and validate substitutions, etc.), extensibility and scalability (the ability to readily extend the system component pool and to scale it, increase the capabilities of individual components, and have an extensible and scalable architecture that can automatically discover new functionalities and resources, etc.), customizability (the ability to customize generic features to the needs of a particular scientific domain and problem), and composability (easy construction of more complex functional solutions using basic components, reasoning about such compositions, etc.). Other characteristics are also very important, including reliability and availability of the components and services, the cost of the services, security, total cost of ownership, economy of scale, and so on. In the context of cloud computing, we distinguish many categories of components: from differentiated and undifferentiated hardware, to general-purpose and specialized software and applications, to real and virtual "images", to environments, to no-root differentiated resources, to workflow-based environments and collections of services, and so on. They are discussed later in the paper.
Several measures have been taken to improve campus security using the latest available technologies. Radio Frequency Identification (RFID) technology on campuses is mainly used for recording attendance, and this use has been extended to activity monitoring and individual security. This research paper proposes the framework of an RFID-integrated solution for enhanced campus IT security, covering traceability of people and campus IT assets, tracking of students' valuables, protection against exam-paper leakage, and issuance of authentic certificates. To meet the growing requirements of campus security systems, the use of RFID technology in existing campus applications has to be expanded, at which point certain RFID performance issues may arise. Designing scalable and reliable RFID-enabled applications is difficult because of the need to manage and handle the data banks generated by RFID tags, which in turn requires enhancing the entire IT infrastructure; this is inconvenient for small and medium enterprises (SMEs), such as academic institutions with budgetary constraints. Therefore, to take RFID campus implementations to the next level and create Internet of Things-based applications, campuses have to embrace the leading-edge technology of cloud computing, which offers both technological and economic benefits in supporting RFID technology.
Abstract—The integration of information and communication technologies in education, following the global trend, has attracted great interest in the Arab world: e-learning techniques are cast as services within a Service-Oriented Architecture (SOA), their inputs and outputs are combined with the components of Education Business Intelligence (EBI), and the result is enhanced to simulate reality through educational virtual worlds. This paper presents a creative environment, derived from both virtual and personal learning environments and based on cloud computing, which contains a variety of tools and techniques to enhance the educational process. The proposed environment focuses on designing and monitoring an educational environment based on reusing existing web tools, techniques, and services to provide a browser-based application.
While the low cost of computing today seems to be taken for granted, it is important to note the contributions of cloud computing in order to have a full perspective. In a 2009 overview of this technology, Qian et al. defined this loose term as a large but scalable conglomerate of cheap networked computers that provide IT services at low cost, with a shared and elastic pool of resources managed through dynamic resource scheduling. Cloud computing can be categorized by different terms depending on its application: Infrastructure as a Service (IaaS, for developers), Platform as a Service (PaaS, for developers), and Software as a Service (SaaS, for end users). The cloud is a key enabler of integration; still, its application to network operation optimization is rare. Configuration parameters, key performance indicators (KPIs), and other formats are not standardized, which can create difficulties for network operators who rely on equipment and software from multiple vendors to obtain a holistic view of their network.
For research institutes and small/medium-sized enterprises, reducing IT costs is especially important. For example, in a traditional school lab, software licenses and hardware constraints mean that many useful applications and platforms are not accessible to students "anytime and anywhere". This problem may be solved using PaaS in cloud computing. Through virtualization and other resource-sharing mechanisms, cloud computing can dramatically reduce user costs and meet the demands of large-scale applications. Using virtualization techniques, several platforms (Windows, Linux, or others) can be opened on a single physical machine, so that resources are shared better and more users can be served. Most cloud computing platforms are based on virtualized environments. A virtualized cloud computing lab has four major parts: software and hardware platforms provided by real and virtualized servers (narrowly speaking, PaaS resources); a resource-management node; database servers; and users who access the resources through the Internet or an intranet. Generally speaking, the above-mentioned platforms and users can all be called resources in the cloud. In the following sections, we consider a framework for the design and implementation of PaaS in the cloud, focusing especially on resource management. Section 3 discusses the design architecture and the major modules in the system; Section 4 introduces the implementation technologies and the operational environment; related work in the literature is introduced in Section 5; finally, a conclusion is provided in Section 6.
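The role of the resource-management node can be sketched as a simple slot allocator: each physical machine hosts a fixed number of virtual platform instances, and user requests are placed on the first machine with a free slot. The machine names and capacities are assumptions for illustration, not the system's actual design:

```python
# Illustrative sketch of a resource-management node: each physical machine
# hosts a fixed number of virtual platform slots, and incoming user requests
# are placed on the first machine with a free slot. Capacities and host
# names are invented for illustration.

class ResourceManager:
    def __init__(self, machines: dict):
        # machines: name -> number of free virtual platform slots
        self.capacity = dict(machines)
        self.assignments = {}  # user -> machine

    def allocate(self, user: str):
        """Place the user on the first machine with a free slot, else None."""
        for machine, free in self.capacity.items():
            if free > 0:
                self.capacity[machine] -= 1
                self.assignments[user] = machine
                return machine
        return None

    def release(self, user: str) -> None:
        """Free the user's slot so another user can be served."""
        machine = self.assignments.pop(user)
        self.capacity[machine] += 1

rm = ResourceManager({"host-a": 2, "host-b": 1})
print(rm.allocate("alice"), rm.allocate("bob"), rm.allocate("carol"))
# host-a host-a host-b
```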
Abstract—Cloud computing has become a reality, and many companies are now moving their data centers to the cloud. A concept often linked with cloud computing is Infrastructure as a Service (IaaS): the computational infrastructure of a company can now be seen as a monthly cost instead of a number of different factors. Recently, a large number of organizations have started to replace their relational databases with hybrid solutions (NoSQL databases, search engines, ORDBMSs). These changes are motivated by (i) improvements in the overall performance of the applications and (ii) the inability of an RDBMS to match the performance of a hybrid solution at a fixed monthly infrastructure cost. However, companies cannot always measure beforehand the impact of this sort of technological change (replacing an RDBMS with another solution) on the performance of their services. The goal of this systematic mapping study is to investigate the use of Service-Level Agreements (SLAs) in database-transitioning scenarios and to verify how SLAs can be used in this process.
Abstract—The rapid advance of Information and Communication Technology (ICT) has enabled higher education institutions to reach out and educate students, transcending the barriers of time and space. This technology supports structured, web-based learning activities, provides diverse multilingual and multicultural settings, and offers facilities for self-assessment. Structured collaboration has proven itself a successful and powerful learning method in the conventional education system. Online learners do not enjoy the same collaborative benefits as face-to-face learners, because the technology provides no guidance or direction during online discussion sessions. This paper presents a Web-Based Learning Environment (WBLE) from the perspective of social computing, designed to bring the benefits of collaborative learning to online learners. The paper also highlights how the deployment of social computing tools can support the creation of an open and socially shared information space for better collaboration among learners. Using Social Network Analysis (SNA) techniques, collaboration among twenty learners is explored, and metrics such as in-degree, out-degree, and betweenness, together with a collaborative network map, are presented.
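The two simplest SNA metrics named above can be sketched from a directed edge list, where an edge (a, b) means "learner a replied to learner b". The learner names and interactions below are invented for illustration; betweenness requires shortest-path counting (e.g. Brandes' algorithm) and is omitted:

```python
# Sketch of in-degree and out-degree over a directed interaction graph.
# Edge (a, b) means "learner a replied to learner b"; the data is invented.
from collections import Counter

edges = [("ann", "bob"), ("ann", "eve"), ("bob", "eve"), ("eve", "ann")]

out_degree = Counter(src for src, _ in edges)   # messages sent
in_degree = Counter(dst for _, dst in edges)    # messages received

# High out-degree marks active contributors; high in-degree marks
# learners who attract responses.
print(out_degree["ann"], in_degree["eve"])  # 2 2
```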
As mentioned before, Goudarzi et al. consider a resource-allocation problem: optimizing the total profit gained from multi-dimensional resources for multi-tier applications. Their aim is to apply resource-consolidation techniques that determine the set of active servers. Some resource-allocation approaches are based on static solutions and some on dynamic ones. Static solutions help with issues such as application scalability and load balancing of the web server; dynamic solutions have not addressed these issues, because they require a minimum response time and a high level of reliability from web applications. In addition, dynamic solutions have addressed cloud virtualization and reducing the time cost.
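The consolidation idea of determining the active servers can be sketched as a greedy bin-packing pass: VM demands are packed onto as few servers as possible so the rest can be powered down. This first-fit-decreasing heuristic, and the capacities and demands used, are illustrative assumptions, not Goudarzi et al.'s formulation:

```python
# Minimal sketch of server consolidation as first-fit-decreasing bin
# packing: pack VM CPU demands onto as few servers as possible so the
# remaining servers can be deactivated. Values are invented.

def consolidate(demands, capacity: float):
    """Greedy first-fit-decreasing packing; returns per-server loads."""
    servers = []  # running load of each active server
    for d in sorted(demands, reverse=True):
        for i, load in enumerate(servers):
            if load + d <= capacity:
                servers[i] += d  # fits on an already-active server
                break
        else:
            servers.append(d)  # open (activate) a new server
    return servers

loads = consolidate([0.5, 0.3, 0.4, 0.2, 0.6], capacity=1.0)
print(len(loads))  # number of servers left active
```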
Although cloud computing keeps growing, its users are aware of the risks and cite security guarantees as their main concern when adopting such services (Columbus, 2017a; Meeker, 2017). These concerns are not without cause: in recent years, several cases of privacy breaches have surfaced. Some of these are intended company policies, such as using data assumed private for advertising purposes (Rushe, 2013); other breaches may be the result of government-ordered surveillance programs (Cook, 2016; Greenwald and MacAskill, 2013). These kinds of leaks affect not only companies but also individual users (Hough, 2010; Lewis, 2014; Turner, 2016). Particularly sensitive information, such as health records, has also been the subject of several attacks (HCA News, 2018; O'Hara, 2017; Roston, 2017); with the recent growth in personal health app usage (Khalaf, 2014; Meeker, 2017), the need for data security increases. In light of these concerns, cloud services present themselves as a double-edged sword for their users: how can one leverage the advantages of the cloud without compromising privacy and security?
In fact, the SOAP and REST philosophies are very different. While SOAP is a protocol for distributed computing, REST adheres to Web-based design. Some authors, like Pautasso et al., recommend using REST for ad-hoc integration and SOAP/WS-* for enterprise-level applications, where reliability and transactions are mandatory. However, in a similar evaluation, Richardson and Ruby show how to implement transactions, reliable messaging, and security mechanisms in REST by relying on HTTP. This shows that REST's disadvantages can be tackled efficiently, even for enterprise-level applications. Regarding security requirements, while SOAP has the WS-Security framework, RESTful services can rely on the long and successfully employed HTTP security mechanisms. For example, it is possible to rely on HTTPS to address confidentiality, and on HTTP basic and digest authentication or service key pairs to address authentication. Furthermore, message-level security can be addressed through cryptography and timestamps, improving confidentiality and preventing replay attacks, as Amazon does in its S3 API. Also, some performance tests have shown REST to be a good solution for high availability and scalability, handling many requests with minimal degradation of response times compared to SOAP.
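The combination of a service key pair and a timestamp mentioned above can be sketched with an HMAC-signed request. The canonical string and header handling below are simplified assumptions in the spirit of S3-style signing, not Amazon's exact scheme:

```python
# Sketch of message-level security with a shared key and a timestamp:
# sign the request with an HMAC and reject stale timestamps to prevent
# replay. The canonical string format is a simplified assumption.
import hashlib
import hmac
import time

SECRET_KEY = b"demo-secret"   # shared service key (illustrative)
MAX_SKEW = 300                # reject requests older than 5 minutes

def sign(method: str, path: str, timestamp: int) -> str:
    """HMAC-SHA256 over a simplified canonical request string."""
    msg = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify(method: str, path: str, timestamp: int, signature: str,
           now: int) -> bool:
    """Check freshness first, then compare HMACs in constant time."""
    if abs(now - timestamp) > MAX_SKEW:
        return False
    expected = sign(method, path, timestamp)
    return hmac.compare_digest(expected, signature)

ts = int(time.time())
sig = sign("GET", "/bucket/object", ts)
print(verify("GET", "/bucket/object", ts, sig, now=ts + 10))    # fresh: True
print(verify("GET", "/bucket/object", ts, sig, now=ts + 3600))  # replay: False
```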
The use of cloud computing is growing, as Rackspace, for example, has shown. The human mind has no physical limits: imagination and creativity run as far as a person wills them to go. Computer software can doubtless make far greater use of this, too, once it frees itself from space and time. Therein lies the promise of cloud computing: computational power that is vast and ubiquitous. Forward-thinking organizations and businesses have begun exploring cloud computing in earnest, and many more appear poised to join them in the year ahead. They, and we, can hope for great things to follow.