In this paper, we show that Clouds and Grids share a great deal of commonality in their vision, architecture and technology, but they also differ in various aspects such as security, programming model, business model, compute model, data model, applications, and abstractions. We also identify challenges and opportunities in both fields. We believe a close comparison such as this can help the two communities understand, share and evolve infrastructure and technology within and across fields, and accelerate Cloud Computing from early prototypes to production systems. What does the future hold? We will hazard a few predictions, based on our belief that the economics of computing will look more and more like those of energy. Neither the energy nor the computing grids of tomorrow will look like yesterday’s electric power grid. “Cloud” or “Grid”, we will need to support on-demand provisioning and configuration of integrated “virtual systems” providing the precise capabilities needed by an end-user. We will need to define protocols that allow users and service providers to discover and hand off demands to other providers, to monitor and manage their reservations, and to arrange payment. We will need tools for managing both the underlying resources and the resulting distributed computations. We will need the centralized scale of today’s Cloud utilities and the distribution and interoperability of today’s Grid facilities. Unfortunately, at least to date, the methods used to achieve these goals in today’s commercial clouds have not been open and general purpose, but have instead been mostly proprietary and company-specific.
There are many different educational environments that serve the educational process based on the computer and its technologies. For example, Web 2.0 technologies provide teachers with new ways to engage students, and help students participate on a global level by using the network as a platform for information sharing, interoperability, user-centered design and collaboration on the World Wide Web.
comprehensive and commonly accepted set of standards. As a result, many standards development organizations were established in order to research and develop the specifications. Organizations like the Cloud Security Alliance, the European Network and Information Security Agency, the Cloud Standards Customer Council, etc. have developed best-practice regulations and recommendations. Other establishments, like the Distributed Management Task Force, the European Telecommunications Standards Institute, the Open Grid Forum, the Open Cloud Consortium, the National Institute of Standards and Technology, the Storage Networking Industry Association, etc., centered their activity on the development of working standards for different aspects of cloud technology. The excitement around the cloud has created a flurry of standards and open source activity, leading to market confusion. That is why certain working groups, like Cloud Standards Coordination, TM Forum, etc., act to improve collaboration, coordination, and information and resource sharing between the organizations active in this research field.
Cloud computing is a computing paradigm in which the various tasks are assigned to a combination of connections, software and services that can be accessed over the network. The computing resources and services can be efficiently delivered and utilized, making the vision of utility computing realizable. In various applications, services comprising a large number of tasks must execute with minimal inter-task communication. The applications are likely to exhibit different patterns and levels, and the distributed resources organize into various topologies for information and query dissemination. In a distributed system, resource discovery is a significant process for finding appropriate nodes. Earlier resource discovery mechanisms in cloud systems rely on recent observations. In this study, the resource usage distributions for groups of nodes with similar resource usage patterns are identified and kept as clusters; this is named the resource clustering approach. The resource clustering approach is modeled using CloudSim, a toolkit for modeling and simulating cloud computing environments, and the evaluation shows improved system performance in the usage of resources. Results show that resource clusters are able to provide high accuracy for resource discovery.
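The grouping step behind such a resource clustering approach can be illustrated with a minimal sketch: nodes whose resource-usage vectors fall within a distance threshold of a cluster representative are grouped together. This greedy single-pass scheme, the threshold value, and the node names are assumptions for illustration, not the algorithm evaluated in CloudSim.

```python
from math import dist

def cluster_nodes(usage, threshold=0.15):
    """Greedy single-pass clustering: a node joins the first cluster
    whose representative (first member) is within `threshold` of its
    usage vector; otherwise it starts a new cluster."""
    clusters = []  # list of (representative_vector, [node_ids])
    for node_id, vec in usage.items():
        for rep, members in clusters:
            if dist(rep, vec) <= threshold:
                members.append(node_id)
                break
        else:
            clusters.append((vec, [node_id]))
    return [members for _, members in clusters]

# Usage vectors: (cpu_fraction, mem_fraction) per node.
usage = {
    "n1": (0.80, 0.70),
    "n2": (0.82, 0.68),   # similar to n1
    "n3": (0.10, 0.20),
    "n4": (0.12, 0.22),   # similar to n3
}
print(cluster_nodes(usage))  # -> [['n1', 'n2'], ['n3', 'n4']]
```

A resource discovery query can then be answered against a cluster representative instead of probing every node individually.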
research institutes and small/medium-size enterprises, reducing the IT cost is especially important. For example, in the traditional school lab, because of software licenses and hardware constraints, many useful applications and platforms are not accessible to students “anytime and anywhere”. This problem may be solved using PaaS in cloud computing. Through virtualization and other resource-sharing mechanisms, cloud computing can dramatically reduce user costs and meet large-scale applications’ demands. Using virtualization techniques, it is possible to run several platforms (Windows, Linux or others) on a single physical machine, so that resources can be shared better and more users can be served. Most cloud computing platforms are based on virtualized environments. In a virtualized cloud computing lab, there are four major parts: software and hardware platforms provided by real and virtualized servers (narrowly speaking, PaaS resources); the resource management node; database servers; and users who access resources through the Internet or an intranet. Generally speaking, the above-mentioned platforms and users can all be called resources in the Cloud. In the following sections, we consider a framework for the design and implementation of PaaS in the Cloud, focusing especially on resource management. Section 3 discusses the design architecture and major modules in the system; Section 4 introduces the implementation technologies and operational environment; related work in the literature is reviewed in Section 5; finally, a conclusion is provided in Section 6.
The services supplied by clouds are basically encapsulated by one of the three main service delivery models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). They are the building blocks for unfolding Anything-as-a-Service (XaaS) solutions specifically customized to customer requirements. IaaS mixes novel virtualization techniques with current technologies, allowing the running of Operating Systems (OSes) or even the building of entire virtual data centers. PaaS allows developing applications in a consistent manner via cloud platforms that run them remotely, while SaaS enables enjoying pre-built software with little control over the application flow. These models can run on public or private clouds, or on a hybrid version of the two. Adopting the public model means accessing the subscribed services through the Internet from anywhere on the globe. IaaS clouds usually have some management interface to control Virtual Machines (VMs) and to arrange a virtual data center, while VMs are accessed via standard remote connection protocols. The authentication to those interfaces is, therefore, of the utmost importance, mostly because they are exposed to Internet dangers, contrarily to traditional management tools that sit deep within the trusted perimeter of a company on conventional networks. This dissertation first identifies such problems by reviewing authentication approaches and pointing out their advantages and weaknesses. For example, a single compromised cloud account constitutes an inherently more dangerous threat when compared to traditional website accounts, because an attacker gains control over VMs and potentially over security-related configurations as well. This can result in data and money losses for both customers and providers, since the malicious attacker can terminate VM instances running crucial business applications.
Community Cloud: A community cloud aggregates the sustainability of Green Computing, the distributed resource provision of Grid Computing, the control of Digital Ecosystems and the self-management of Autonomic Computing. It competes with vendor clouds and makes use of its users’ resources to form a cloud, with the nodes taking the roles of consumer, producer and coordinator. This concept removes the dependence on cloud suppliers. This cloud is a social structure, since the community takes ownership of the cloud. As the nodes act in self-interest, having one control center would be impractical, so it is crucial to reward the nodes in order to redirect their computing power to the cloud and not to themselves. In a community cloud, each user would have an identity that supplies a service or website to the community. Since this cloud model isn’t owned by a company, it is not tied to that company’s lifespan; therefore it ends up being more resilient and robust to failures, and immune to cascading system failures. The biggest challenge faced by this model is QoS: since it is a heterogeneous system, the nodes will most likely need to reach critical mass to satisfy all the QoS requirements of the different systems.
Abstract— With the growing demand for higher education in IT services, institutions are adopting cloud computing technology to meet their needs. In traditional computing, we install software programs on a computer and update the hardware as per our requirements. Documents that we create or save are stored on our computer. Documents are accessible on our own network, but they can’t be accessed by computers outside the network. Using cloud computing, software programs aren’t run from one’s personal computer, but are rather stored on servers accessed via the Internet. Cloud computing is going to prove to be of immense benefit to students as well as teachers due to its flexibility and pay-as-you-go cost structure. Cloud computing provides the resources and capabilities of Information Technology (e.g., applications, storage, communication, collaboration, infrastructure) via services offered by a CSP (cloud service provider). Public cloud computing—delivering infrastructure, services, and software on demand through the network—offers attractive advantages to higher education. This computing approach is based on a number of existing technologies, e.g., the Internet, virtualization and grid computing. Higher education institutions have various departments and many students with up-to-date hardware and software requirements; cloud computing has the capacity for scaling and elasticity, which is perfect for such an environment.
Within the “Information Era”, the world has become increasingly dependent on information exchanged through digital media, and it is clear that the paradigm of traditional IT solutions is evolving rapidly towards emerging (and increasingly convincing) areas like Cloud Computing. This is due not only to the differentiating characteristics of Cloud Computing services (e.g. on-demand self-service, ubiquity, rapid elasticity, flexibility, among others) and to the greater effectiveness in the management and use of IT resources, but also, from a business point of view, to the inherent commercial benefits, such as cost containment (of both capital and operational costs), which fit perfectly into the constantly changing business needs of organizations. Although the advantages of using Cloud Computing services are easily identified from a business point of view, many potential consumers are reluctant to use these services to host their information assets because, at least at the first stage, they will have to deal with the unknown (being accustomed to traditional computing environments), as well as with the risks and security threats inherent to these environments, which result from the high degree of exposure to the Internet. In this context, the Cloud Computing model has particularities that distinguish it from traditional computing models, insofar as the risks are different for each service model in the Cloud (IaaS, PaaS, SaaS) as well as for each implementation model (Private, Public, Community, Hybrid).
Based on that, there is a need for a methodology which, from an IT and information security perspective, not only supports the decision-making of organizations that consume cloud services with regard to the implementation of appropriate risk management and mitigation mechanisms, but also enables those organizations to assess their maturity level regarding the implemented controls (which help mitigate cloud security risks) and, consequently, to forecast the security areas that should be improved. This, in turn, helps organizations achieve a satisfactorily mature state that enables the use of cloud services in a more proper and secure way (“security readiness”).
Dr. Neeraj Kumar received his Ph.D. in CSE from SMVD University, Katra (J&K), India, and was a postdoctoral research fellow at Coventry University, Coventry, UK. He has been working as an Associate Professor in the Department of Computer Science and Engineering, Thapar University, Patiala (Pb.), India, since 2014. Dr. Neeraj is an internationally renowned researcher in the areas of VANETs & CPS, Smart Grid & IoT, Mobile Cloud Computing & Big Data, and Cryptography. He has published more than 150 technical research papers in leading journals and conferences from IEEE, Elsevier, Springer, and John Wiley. His papers have been published in some of the high-impact-factor journals, such as IEEE Transactions on Industrial Informatics, IEEE Transactions on Industrial Electronics, IEEE Transactions on Information Forensics and Security, IEEE Transactions on Dependable and Secure Computing, IEEE Transactions on Power Systems, IEEE Transactions on Vehicular Technology, IEEE Systems Journal, IEEE Wireless Communication Magazine, IEEE Vehicular Technology Magazine, IEEE Communication Magazine, IEEE Networks Magazine, etc. Apart from the journals, he has also published papers in some of the core conferences of his area of specialization, such as IEEE Globecom, IEEE ICC, IEEE Greencom, and IEEE CSCWD. He has guided many research scholars leading to Ph.D. and M.E./M.Tech. degrees. His research in the areas of smart grid, energy management, VANETs, and cloud computing is supported by funding from DST, CSIR, UGC, and TCS, with total research funding from these agencies of more than 2 crores. He is a member of the cyber-physical systems and security research group. He has international research projects under Indo-Poland and Indo-Austria joint research collaborations, in which teams from both countries will visit Thapar University, Patiala, and Warsaw University, Poland, and the University of Innsbruck, Austria, respectively.
He has an h-index of 25 (according to Google Scholar, March 2017), with 2500 citations to his credit. He is a member of the editorial boards of the International Journal of Communication Systems (Wiley) and the Journal of Network and Computer Applications (Elsevier). He has visited many countries, mainly for academic purposes. He is a visiting research fellow at Coventry University, Coventry, UK. He has many research collaborations with premier institutions in India and with different universities across the globe. He is a member of the IEEE.
Computation- and data-intensive geoscience analytics are becoming prevalent. To improve scalability and performance, parallelization technologies are essential . Traditionally, most parallel applications achieve fine-grained parallelism using message passing infrastructures such as PVM  and MPI  executed on computer clusters, supercomputers, or grid infrastructures . While these infrastructures are efficient for computing-intensive parallel applications, when the volumes of data increase, the overall performance decreases due to the inevitable data movement. This hampers the usage of MPI-based infrastructures in processing big geoscience data. In addition, these infrastructures normally have poor scalability, and allocating resources is constrained by the computational infrastructure.
Bertino et al.  proposed the temporal RBAC (TRBAC) model, which enables and disables a role at run-time depending on user requests. In , the authors argue that in some applications certain roles need to be static and stay enabled all the time, while only the users and permissions are dynamically assigned. In this context, they proposed a generalized TRBAC (GTRBAC) model that advocates role activation instead of role enabling. A role is said to be activated if at least one user assumes that role. GTRBAC supports the enabling and disabling of constraints on the maximum active duration allowed to a user and on the maximum number of activations of a role by a single user within a particular interval of time. In , the authors present an XML-based RBAC policy specification framework to enforce access control in dynamic XML-based web services. However, both GTRBAC and X-RBAC cannot provide trust- and context-aware access control (critical for dynamic web services, characteristic of cloud computing environments), and rely solely on identity- or capability-based access control. In , the authors propose an enhanced hybrid version of the X-RBAC and GTRBAC models, called the X-GTRBAC model. X-GTRBAC relies on the certification provided by trusted third parties (such as any PKI Certification Authority) to assign roles to users. X-GTRBAC also considers the context (such as time, location, or environmental state at the time the access requests are made) to directly affect the level of trust associated with a user (as part of the user profile), and incorporates it in its access control decisions. The access privileges for a user/role are based on a threshold (i.e. the trust level) established from the requestor’s access patterns; if the user appears to deviate from his/her usual profile, the trust level for the user is automatically reduced to prevent potential abuse of privileges.
Such a real-time feature of X-GTRBAC suits web-based cloud computing environments with diverse customer activity profiles.
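The core idea of trust-aware access control described above can be sketched minimally: a role's privileges are granted only if the requestor's current trust level meets that role's threshold, and trust is lowered when the request deviates from the usual profile. The data structures, penalty value, and function names are assumptions for illustration, not the X-GTRBAC specification.

```python
def decide_access(user, role, request_context, role_trust_threshold):
    """Grant the role's privileges only if the user's current trust
    level meets the role's threshold. Trust is lowered when the
    request deviates from the user's usual access profile."""
    if request_context["location"] not in user["usual_locations"]:
        user["trust"] -= 0.2   # penalize an anomalous access pattern
    return user["trust"] >= role_trust_threshold[role]

alice = {"trust": 0.9, "usual_locations": {"office"}}
thresholds = {"auditor": 0.8, "viewer": 0.5}

print(decide_access(alice, "auditor", {"location": "office"}, thresholds))  # True
print(decide_access(alice, "auditor", {"location": "cafe"}, thresholds))    # False
```

The second request is denied because the unusual location lowers Alice's trust below the auditor threshold, even though her identity and role assignment are unchanged.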
, IMVU, OpenSimulator and Open Wonderland are the most popular. However, there are other possible alternatives, for example Virtual MTV, Kaneva, Active Worlds, Lively and There. For an extensive list of currently available 3D application servers, see the joakaydia wiki or the work of Freitas  (which presents a comparison between alternatives). In some 3D application servers it is possible to own and develop land (e.g. Active Worlds or OpenSimulator). In others it is necessary to pay to own land (e.g. Second Life™). Most virtual worlds provide facilities to chat, walk or play online games. Some of the means of simulating the real world include a market using virtual currency; Second Life™ and IMVU are examples of such systems. Other applications are focused on education and support learning objectives (e.g. Media Grid or Project Wonderland). Another difference between existing virtual worlds is the possibility of access to their source code. The Open Source Metaverse Project (OSMP), OpenSimulator and Project Wonderland are examples of open source virtual world applications. Second Life™ and There, on the other hand, do not provide access to source code. One additional distinction that separates all these applications is the capability of users to run their own server on a local network. This allows them to maintain their world and provide access to other users. OpenSimulator and OSMP are application servers that offer this feature. Other relevant desirable features include modularity, flexibility and extensibility (e.g. OSMP). These are very important because new functionalities can
Presently, cloud computing has been emerging as a hot topic since late 2007. Industry and academia are starting projects related to cloud computing. For example, Microsoft has published its cloud computing system, the Windows Azure Platform ; other examples include Amazon Elastic Compute Cloud  and IBM’s Blue Cloud ; and HP, Intel Corporation and Yahoo! Inc. recently announced the creation of a global, multi-data-center, open source cloud computing test bed for industry, research and education . In the last few years, virtualization has introduced some novel system techniques so that the cloud provider can transparently satisfy its cloud customers’ requirements without impacting their own system utilization. Cloud computing differs from grid computing in this regard: it can run in conjunction with the original business workloads. Moreover, novel virtualization technologies, e.g. live migration and pause-resume, give rapid and transparent solutions, so that interference may not occur between the original systems and the cloud workloads . Consequently,
The proposed paper first presented a resource allocation model for green cloud computing environments and proposed an optimal resource allocation method, considering that both resource processing capacity and bandwidth are distributed concurrently for each request and returned on an hourly basis. The distributed resources are assigned to each service request. It has been established by simulation evaluation that the proposed optimal resource allocation technique (ORAT) can decrease the request loss probability and, as a consequence, decrease the total amount of resources utilized, compared with the existing ITTPS. Then, this paper proposed basic measures for attaining fair resource distribution amongst several users in a cloud computing environment, which endeavours to assign resources in proportion to the estimated quantity of resources demanded by each user. The proposed ORAT enables resource allocation amongst multiple users without a large drop in resource efficiency, in contrast with the existing ITTPS method, which does not consider the fair allocation of resources.
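The fair-distribution idea, allocating a divisible resource (processing capacity or bandwidth) in proportion to each user's estimated demand, can be sketched as follows. This is an illustrative proportional-share rule, not the ORAT algorithm itself; the function name, user names, and demand figures are invented for the example.

```python
def fair_share(demands, capacity):
    """Allocate a divisible resource in proportion to each user's
    estimated demand, never exceeding the demand itself."""
    total = sum(demands.values())
    if total <= capacity:
        return dict(demands)          # everyone is fully satisfied
    scale = capacity / total          # shrink all shares by one factor
    return {user: d * scale for user, d in demands.items()}

# Processing capacity and bandwidth would each be divided the same way.
demands = {"u1": 40, "u2": 40, "u3": 120}   # requested CPU units
print(fair_share(demands, capacity=100))
# -> {'u1': 20.0, 'u2': 20.0, 'u3': 60.0}
```

Under contention every user keeps the same fraction of its request (here 50%), which is one simple way to make the allocation "fair in proportion to the estimated quantity demanded".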
In the following sections, we list advantages and disadvantages of grid and cloud concepts which affected our research most (see Table 1 for a brief overview). Criteria are extracted from the literature, most notably Foster et al. (2008), containing a general comparison with all vital issues, Mell and Grance (2011), Hamdaqa and Tahvildari (2012), and Foster and Kesselman (2003). The discussed issue of security of sensitive and valuable data did not apply to our research and operational setting. However, for big and advanced operational weather forecasting this might be an issue due to its monetary value. Because the hardware and network are completely out of the end user’s control, possible security breaches are harder or even impossible to detect. If security is a concern, detailed discussions can be found in Cody et al. (2008) for grid computing, and in Catteddu (2010) and Feng et al. (2011) for cloud computing.
The key to a SOA framework that supports workflows is monetization of its services, an ability to support a range of couplings among workflow building blocks, fault tolerance in its data- and process-aware service-based delivery, and an ability to audit processes, data and results, i.e., to collect and use provenance information. The component-based approach is characterized by [13, 28]: reusability (elements can be re-used in other workflows); substitutability (alternative implementations are easy to insert, very precisely specified interfaces are available, run-time component replacement mechanisms exist, there is an ability to verify and validate substitutions, etc.); extensibility and scalability (ability to readily extend the system component pool and to scale it, increase capabilities of individual components, have an extensible and scalable architecture that can automatically discover new functionalities and resources, etc.); customizability (ability to customize generic features to the needs of a particular scientific domain and problem); and composability (easy construction of more complex functional solutions using basic components, reasoning about such compositions, etc.). There are other characteristics that also are very important. Those include reliability and availability of the components and services, the cost of the services, security, total cost of ownership, economy of scale, and so on. In the context of cloud computing we distinguish many categories of components: from differentiated and undifferentiated hardware, to general purpose and specialized software and applications, to real and virtual “images”, to environments, to no-root differentiated resources, to workflow-based environments and collections of services, and so on. They are discussed later in the paper.
The three main aspects of cloud computing are Software as a Service, Platform as a Service and Infrastructure as a Service. A SaaS provider typically hosts and manages a given application in their own data centre and makes it available to multiple tenants and users over the Web. Some SaaS providers run on another cloud provider’s PaaS or IaaS service offerings. Oracle CRM On Demand, Salesforce.com, and NetSuite are some of the well-known SaaS examples. Platform as a Service (PaaS) is an application development and deployment platform delivered as a service to developers over the Web. It facilitates development and deployment of applications without the cost and complexity of buying and managing the underlying infrastructure, providing all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet. This platform consists of infrastructure software, and typically includes a database, middleware and development tools. A virtualized and clustered grid computing architecture is often the basis for this infrastructure software. Some PaaS offerings have a specific programming language or API. For example, Google AppEngine is a PaaS offering where developers write in Python or Java, while EngineYard is based on Ruby on Rails. Sometimes PaaS providers have proprietary languages, like Force.com from Salesforce.com and Coghead, now owned by SAP. Infrastructure as a Service (IaaS) is the delivery of hardware (server, storage and network) and associated software (operating system, virtualization technology, file system) as a service. It is an evolution of traditional hosting that does not require any long-term commitment and allows users to provision resources on demand. Unlike PaaS services, the IaaS provider does very little management other than keeping the data centre operational, and users must deploy and manage the software services themselves--just the way they would in their own data centre.
Amazon Web Services Elastic Compute Cloud (EC2) and Simple Storage Service (S3) are examples of IaaS offerings.
sessions from end-users dynamically, and allocates a desktop session on demand for the end-user’s request. The fraction of the resources to be chosen is determined through the dynamic generation of a performance model for the requested remote desktop session. The dynamic generation takes place using pre-generated application performance models for the applications that would execute within the requested remote desktop session. Indeed, the strength of Grid technologies is based on the scheduling and access control of resources, with the goal of increasing the utilization and agility of enterprise infrastructure.
The cloud computing system checks user behaviour every day and decreases the risk point if the user uses the cloud computing service for more than one hour. Many people use cloud computing services, so huge logs arise from transactions between systems, user information updates, mass data processing, and so on; it is therefore very difficult to analyse the logs in an emergency. To make log analysis better, I proposed a method that divides log priority according to security level.
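A minimal sketch of such security-level-based log prioritization follows; the level names, their ordering, and the log-line format are assumptions for illustration, not the actual scheme proposed above.

```python
import re

# Illustrative mapping from security level to analysis priority:
# lower numbers are examined first in an emergency.
SECURITY_PRIORITY = {"CRITICAL": 0, "ERROR": 1, "WARN": 2, "INFO": 3}

def prioritize(log_lines):
    """Sort log lines so that high-security-level entries come first."""
    def level_of(line):
        m = re.search(r"\b(CRITICAL|ERROR|WARN|INFO)\b", line)
        return SECURITY_PRIORITY[m.group(1)] if m else 4  # unknown -> last
    return sorted(log_lines, key=level_of)

logs = [
    "INFO  user profile updated",
    "CRITICAL unauthorized VM termination attempt",
    "WARN  repeated login failures",
]
print(prioritize(logs)[0])  # -> "CRITICAL unauthorized VM termination attempt"
```

Dividing logs by priority in this way lets an analyst inspect the highest-security-level entries first instead of scanning the full transaction log during an emergency.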