pricing and leave some consumer surplus to the customers in order to be more attractive. From the customers' point of view, usage-based pricing was also found more attractive because of the higher consumer surplus. Reference  explored cloud provider pricing models using cluster analysis and found common business models: one cluster includes niche providers who use fixed pricing, and another includes mass players using pay-per-use pricing models. A possible explanation for the fixed prices is the lock-in situations prevalent among niche players' products. Reference , which researched costing schemes, offers a decision model that calculates the financial trade-off between private clouds and public clouds with respect to the workloads. The model takes cloud bursting into consideration as a third option alongside the two costing options. Cloud bursting is a deployment model that enables vendors to manage varying demand for resources and to supply a stable quality of service under the chosen pricing scheme. Several researchers have studied pricing models in order to explain anomalies in consumer decisions. Reference  found that consumers wish to maximize their usage while minimizing their costs. The researchers also identified biased decisions of two kinds: cases of fixed-price bias, in which consumers prefer a fixed-price model although they would pay less on a pay-per-use tariff, and cases of pay-per-use bias, in which consumers prefer a pay-per-use tariff although they would pay less on a fixed-price tariff. Reference  states that a possible cause of the fixed-price bias is an insurance effect that leads consumers to pay more for budget confidence. Reference , which surveyed pricing models, found a fixed-price bias among half of the surveyed consumers and a pay-per-use bias among one quarter of them. Those researchers state that the insurance effect has a significant influence on the flat-rate bias, while the pay-per-use bias is influenced by flexibility effects.
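The flat-rate and pay-per-use biases described above reduce to a simple break-even comparison. The sketch below uses purely illustrative prices (a hypothetical $100 flat fee and $2 per unit, not figures from the cited studies):

```python
def break_even_usage(flat_fee, unit_price):
    """Usage level at which a flat tariff and a pay-per-use tariff cost the same."""
    return flat_fee / unit_price

def cheaper_tariff(expected_usage, flat_fee, unit_price):
    """Return the tariff a purely cost-minimizing consumer would choose."""
    return "flat" if flat_fee < expected_usage * unit_price else "pay-per-use"

# Illustrative prices: $100/month flat vs. $2 per usage unit.
# Below 50 units, pay-per-use is cheaper; above it, the flat tariff wins.
threshold = break_even_usage(100, 2)   # 50.0 units
```

A consumer expecting 30 units who nevertheless chooses the flat tariff exhibits the flat-rate bias; the insurance effect described above explains why such a consumer may still prefer paying the extra $40 for budget certainty.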
This means that the more the city deploys sensor/actuator networks, the more rings will appear, resulting in more accurate models for analysing the behaviours and dynamics of the city. That is the reason a good cloud strategy is needed to scale the management of KPIs, indicators and actions. Hence, the Smart City project in Guadalajara, following the metrics principles of the Cohen Wheel KPIs, requires an architecture for migrating the metropolitan area of Guadalajara to the cloud. This is the main problem and challenge presented in this paper. A further issue is that in the metropolitan area of Guadalajara, as in every city composed of interconnected municipalities, each municipality has its own autonomous infrastructure and budget. Since all municipalities are interconnected, the challenge is to connect all their data centers while respecting their autonomy. The proposed solution is to create a private cloud that supports the three types of cloud services. As a use case for creating a methodology to estimate the performance and cost of the private cloud integration among the interconnected municipalities, we identified sensors, open data and processing requirements as an example that can serve as a reference for all KPIs of the Smart City in Guadalajara.
The three main aspects of cloud computing are software as a service, platform as a service and infrastructure as a service. A SaaS provider typically hosts and manages a given application in their own data centre and makes it available to multiple tenants and users over the Web. Some SaaS providers run on another cloud provider's PaaS or IaaS service offerings. Oracle CRM On Demand, Salesforce.com, and NetSuite are some of the well-known SaaS examples. Platform as a Service (PaaS) is an application development and deployment platform delivered as a service to developers over the Web. It facilitates the development and deployment of applications without the cost and complexity of buying and managing the underlying infrastructure, providing all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet. This platform consists of infrastructure software, and typically includes a database, middleware and development tools. A virtualized and clustered grid computing architecture is often the basis for this infrastructure software. Some PaaS offerings have a specific programming language or API. For example, Google AppEngine is a PaaS offering where developers write in Python or Java, while EngineYard targets Ruby on Rails. Some PaaS providers have proprietary languages, like force.com from Salesforce.com and Coghead, now owned by SAP. Infrastructure as a Service (IaaS) is the delivery of hardware (server, storage and network) and associated software (operating systems, virtualization technology, file system) as a service. It is an evolution of traditional hosting that does not require any long-term commitment and allows users to provision resources on demand. Unlike PaaS services, the IaaS provider does very little management other than keeping the data centre operational, and users must deploy and manage the software services themselves, just the way they would in their own data centre.
Amazon Web Services Elastic Compute Cloud (EC2) and Simple Storage Service (S3) are examples of IaaS offerings.
Cloud computing combines features such as virtualization, high capacity, low cost and service orientation (Zhang et al., 2010a) that incentivize the development of a new model for processing, storing and sharing information. GI is also among the types of information suitable for the cloud computing environment. The best-known geographic application on cloud computing is Google Maps (Velte et al., 2010), which is used by thousands of people every day. The OGC standard Keyhole Markup Language (KML) is being used to share geographic information, and its expansion requires the implementation of applications that support it; however, the amount of data generated is a problem for geoprocessing. The development of technologies that process information on the Internet is needed to avoid these data problems. There are several types of GI generators, such as GPS devices, sensors and weather stations, that require online geoprocessing. Usually, GI is related to continuous variables that require specialized software applications or techniques such as geostatistics.
Cloud computing was first introduced by Terso Solutions in 1998, after being inspired by salesforce.com. In 2011, two other companies adopted cloud computing. It provides asset management, data security and cost reduction. Cloud computing has been gaining increased attention in IT circles in recent years. It efficiently allows data and applications to be managed by the server, reducing the burden of server management by employing virtualization technology. It is provisioned for a price over the Internet. RFID coalesced with cloud computing mainly concentrates on rendering a more secure and efficient system [10, 11]. RFID acts as an entry point for linking the cloud to support mobile systems.
The popularization of the term can be traced to 2006, when Amazon.com introduced the Elastic Compute Cloud. The underlying concept of cloud computing dates to the 1950s, when large-scale mainframe computers, seen as the future of computing, became available in academia and corporations, accessible via thin clients/terminal computers, often referred to as "dumb terminals" because they were used for communications but had no internal processing capabilities. To make more efficient use of costly mainframes, a practice evolved that allowed multiple users to share both physical access to the computer from multiple terminals and the CPU time. This eliminated periods of inactivity on the mainframe and allowed a greater return on the investment. In the 1990s, telecommunications companies, which had previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extends this boundary to cover all servers as well as the network infrastructure. As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing. They experimented with algorithms to optimize the infrastructure, platform and applications, to prioritize CPUs and to increase efficiency for end users.
be loaded on "bare metal", or into an operating system/application virtual environment of choice. When a user has the right to create an image, that user usually starts with a "NoApp" or base-line image (e.g., Win XP or Linux) without any but the most basic applications that come with the operating system, and extends it with his/her applications. Similarly, when an author constructs composite images (aggregates of two or more images, which we call environments, that are loaded synchronously), the user extends the service capabilities of VCL. An author can program an image for sole use on one or more hardware units, if that is desired, or for sharing of the resources with other users. Scalability is achieved through a combination of multi-user service hosting, application virtualization, and both time and CPU multiplexing and load balancing. Authors must be component (base-line image and applications) experts and must have a good understanding of the needs of the user categories above them in the Figure 2 triangle. Some of the functionalities a cloud framework must provide for them are image creation tools, image and service management tools, service brokers, service registration and discovery tools, security tools, provenance collection tools, cloud component aggregation tools, resource mapping tools, license management tools, fault-tolerance and fail-over mechanisms, and so on. It is important to note that the authors, for the most part, will not be cloud framework experts, and thus the authoring tools and interfaces must be appliances: easy to learn and easy to use, and they must allow the authors to concentrate on the "image" and service development rather than struggle with the intricacies of the cloud infrastructure.

2.3.3. Service Composition

Similarly, services integration and provisioning experts should be able to focus on the creation of composite and orchestrated solutions needed for an end-user.
They sample and combine existing services and images, customize them, update existing services and images, and develop new composites. They may also be the front for
Cloud computing has a set of shared attributes: standard service, solution packaging, self-service, elastic scaling and usage-based pricing. The cloud has three different service models: (i) Software as a Service (SaaS), which uses the provider's application over a network; instead of purchasing the software, the cloud user rents it on a pay-per-use model. (ii) Platform as a Service (PaaS), which deploys user applications in the cloud; the cloud provider gives application developers an environment in which they develop applications and offer those services through the provider's platform. (iii) Infrastructure as a Service (IaaS), which deals with renting processing, storage and network capacity; the basic idea is to offer computing services such as processing power, disk space, etc., based on usage.
We chose to use Jyaguchi in our experiment because cloud computing technology is developing rapidly, as are the methods for obtaining data. Consequently, the requirement for data mining in this infrastructural environment has dramatically increased. At present, however, there are few data analysis tools for processing the large-scale data that floats in the cloud service environment, and data mining technology is gradually emerging against this backdrop. In fact, massive data mining over cloud services could be a very important guide for scientific research and business decision making. In order to propose a prototype data mining technique in cloud services utilizing sequential pattern mining, we have made use of the Jyaguchi architecture, with which the authors have substantial experience.
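As a minimal illustration of the kind of computation involved (not Jyaguchi's actual algorithm), the core of sequential pattern mining is counting how many user sessions contain a candidate pattern as an ordered subsequence; the session data below is invented for the example:

```python
def is_subsequence(pattern, session):
    """True if the events of `pattern` occur in `session` in order (gaps allowed)."""
    events = iter(session)
    return all(event in events for event in pattern)

def support(pattern, sessions):
    """Fraction of sessions that contain the pattern."""
    return sum(is_subsequence(pattern, s) for s in sessions) / len(sessions)

# Hypothetical usage logs from a cloud service:
sessions = [
    ["login", "search", "compute", "logout"],
    ["login", "compute", "logout"],
    ["search", "login", "compute"],
]
```

A real miner (e.g., a PrefixSpan-style algorithm) would grow frequent patterns incrementally instead of testing each candidate, but the support count above is the quantity every such algorithm maintains.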
A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E. coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5–78.2) for E. coli and 53.5% (95% CI: 34.4–72.6) for the human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5–303.1) and 173.9% (95% CI: 134.6–213.1) higher for the E. coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for the analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.
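The relative cost differences reported above are plain percentage calculations. A small sketch, using made-up dollar amounts chosen only to reproduce a 257.3% gap (not the study's raw figures, which the abstract does not give):

```python
def percent_more_expensive(cost_a, cost_b):
    """How much more expensive cost_a is than cost_b, as a percentage of cost_b."""
    return (cost_a - cost_b) / cost_b * 100.0

# Hypothetical example: a run costing $35.73 on one service vs. $10.00
# on another is 257.3% more expensive on the first.
gap = percent_more_expensive(35.73, 10.00)
```

Note that "257.3% more expensive" means the dearer service costs about 3.57 times as much, not 2.57 times; this distinction between relative difference and ratio is worth keeping in mind when reading such comparisons.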
Economically, "the most interesting thing about cloud computing is not the technology, but the new evolving social standards and business models, and the ramifications of egalitarianism on a global scale". On this view, everything is offered as a service. Based on our understanding of the essence of what clouds promise to be, we propose that cloud computing is a usage model in which resources are delivered; that is, resources such as hardware, software and applications are provided as scalable, "on demand" services via a public network in a multi-tenant environment. The network providing the resources is called the 'Cloud'. All resources in the 'Cloud' can be used whenever needed, as utilities, and scaled virtually without limit.
Cloud computing is still in its infancy. It is an emerging technology that will bring about innovations in terms of business models and applications. The widespread penetration of smartphones will be a major factor in driving the adoption of cloud computing. However, cloud computing faces challenges related to privacy and security. Because of the varied security features and management schemes within the cloud entities, security in the cloud is challenging. Security issues ranging from system misconfiguration, lack of proper updates, or unwise user behaviour to remote data storage that can expose users' private data and information to unwanted access can plague cloud computing. The intent of this paper is to investigate the security-related issues and challenges in the cloud computing environment. We also propose a security scheme for protecting services, keeping in view the issues and challenges faced by cloud computing.
Abstract—Nowadays, cloud computing is booming in most of the IT industry. Most organizations are moving to cloud computing for various reasons. It provides an elastic architecture accessible through the Internet, and it eliminates the setting up of high-cost computing infrastructure for IT-based solutions and services. Cloud computing is a pay-per-use model providing on-demand network access to a shared pool of configurable computing resources, delivered as Software as a Service, Platform as a Service and Infrastructure as a Service. In this paper, a survey of security issues at different levels, such as the application level, host level and network level, is presented.
In the most basic cloud-service model, and according to the IETF, providers of IaaS offer computers – physical or virtual machines – and other resources. IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles. IaaS-cloud providers supply these resources on-demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds. To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.
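Utility-based IaaS billing amounts to summing metered consumption times a per-unit rate for each resource. The rates and the example VM below are hypothetical placeholders, not any provider's actual price list:

```python
# Hypothetical per-unit rates; real providers publish their own price lists.
RATES = {"vcpu_hours": 0.04, "ram_gb_hours": 0.005, "storage_gb_months": 0.10}

def utility_bill(usage):
    """Bill = sum over resources of (metered consumption x per-unit rate)."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A 2-vCPU, 8 GB VM running a full 720-hour month, plus 50 GB of storage:
month = {"vcpu_hours": 2 * 720, "ram_gb_hours": 8 * 720, "storage_gb_months": 50}
```

The key property of this model, as the paragraph notes, is that an idle but allocated resource still accrues charges: the meter runs on allocation, not on useful work.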
The cloud's economies of scale and flexibility are both a friend and a foe from a security point of view. The management of security risk involves users, the technology itself, the cloud service providers, and the legal aspects of the data and services being used. The massive concentrations of resources and data present a more attractive target to attackers, but cloud-based defenses can be more robust, scalable and cost-effective than traditional ones. To help reduce the threat, cloud computing stakeholders should invest in implementing security measures to ensure that data is kept secure and private throughout its lifecycle.
Cloud security covers several categories. Reference  surveyed the research publications on cloud security issues, addressing vulnerabilities, threats, and attacks. In order to understand security risks, the authors identify the basic concepts underlying vulnerabilities and threats, and classify them as follows: virtualization elements, multi-tenancy, cloud platform and software, data outsourcing, data storage security, and standardization and trust. The authors then address the security risks and the topics involved in managing the risks of each category. Reference  states that cloud threats are due to the complex virtualized infrastructure and dynamic nature of the cloud, and that they can be categorized into three kinds: (1) Multiple Users – a virtualized cloud layer such as IaaS can host various virtual machines and provide access to different users from around the globe; this kind of sharing is responsible for information leakage. (2) Minimal Control – users of the cloud are not aware of the location of the physical server; since all physical servers belong to the providers' data centers, the users do not know where their VMs are, and the provider is not aware of the contents of a VM or its applications, which opens the way to security threats. (3) Single Point of Control – all the virtualized servers are connected through one or a limited number of network interface cards (NICs). This in turn causes more vulnerabilities in the virtual environment: any compromise of the security of a VM or of the physical server will lead to the compromise of the other, and will enable a hacker to gain access to the physical server. Reference  presents the results of a case study identifying real-world information security documentation issues for a Global Fortune 500 organization, should the organization decide to implement cloud computing services in the future.
According to , security risks can be categorized into the following domains: Governance and Enterprise Risk Management; Legal Issues; Compliance and Audit Management; Information Management and Data Security; Interoperability and Portability; Traditional Security, Business Continuity and Disaster Recovery; Data Centre Operations; Incident Response; Application Security; Encryption and Key Management; Identity, Entitlement and Access Management; and Virtualization. CSA's experts identified nine critical threats, ranked in descending order of severity: Data Breaches, Data Loss, Account Hijacking, Insecure APIs, Denial of Service, Malicious Insiders, Abuse of Cloud Services, Insufficient Due Diligence, and Shared Technology Issues. This list of threats
research institutes and small/medium-sized enterprises, reducing the IT cost is especially important. For example, in the traditional school lab, because of software licenses and hardware constraints, many useful applications and platforms are not accessible to students "anytime and anywhere". This problem may be solved using PaaS in cloud computing. Through virtualization and other resource-sharing mechanisms, cloud computing can dramatically reduce user costs and meet the demands of large-scale applications. Using virtualization techniques, it is possible to open several platforms (Windows, Linux or others) on a single physical machine, so that resources can be shared better and more users can be served. Most cloud computing platforms are based on virtualized environments. In a virtualized cloud computing lab, there are four major parts: the software and hardware platforms provided by real and virtualized servers (narrowly speaking, PaaS resources); the resource management node; the database servers; and the users who access resources through the Internet or an intranet. Generally speaking, the above-mentioned platforms and users can all be called resources in the cloud. In the following sections, we consider a framework for the design and implementation of PaaS in the cloud, focusing especially on resource management. Section 3 discusses the design architecture and the major modules in the system; Section 4 introduces the implementation technologies and the operational environment; related work in the literature is reviewed in Section 5; finally, a conclusion is provided in Section 6.
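The resource management node described above must, at minimum, map requested virtual platforms onto physical hosts with finite capacity. A deliberately simplified first-fit sketch of that mapping (not the paper's actual implementation; capacities here are abstract VM slots):

```python
def first_fit(requests, capacity):
    """Place each requested virtual platform on the first physical host
    that still has enough free capacity (measured in abstract slots)."""
    free = dict(capacity)          # don't mutate the caller's capacity table
    placement = {}
    for vm, need in requests.items():
        for host in free:
            if free[host] >= need:
                free[host] -= need
                placement[vm] = host
                break              # VM placed; requests that fit nowhere are skipped
    return placement

# Hypothetical lab: two hosts with 4 slots each, three platform requests.
hosts = {"host1": 4, "host2": 4}
vms = {"windows-lab": 2, "linux-lab": 3, "db-server": 2}
```

Production resource managers add admission control, live migration and load balancing on top of this basic bin-packing decision, but the capacity-tracking core is the same.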
The results of studies on the use of FeSiMg5 magnesium alloy in the modern cored wire injection method for the production of nodular and vermicular graphite cast irons are described. The injection of a length of Mg cored wire is a treatment method which can be used to process iron melted in an electric induction furnace. This paper describes the results of using a high-magnesium ferrosilicon alloy in cored wire (Mg recovery 47-70%) for the production of vermicular and nodular graphite cast irons in at least 13 foundries. The results of calculations and experiments have indicated the length of cored wire to be injected, based on the initial sulfur content and the weight of the treated melt. The results of numerous trials have shown that the magnesium cored wire process can produce high-quality nodular and vermicular graphite irons under the specific industrial conditions of the above-mentioned foundries. It has also been proved that, in the manufacture of nodular graphite iron, the cost of the nodulariser in the form of elastic cored wire is lower than the cost of the FeSiMg5 master alloys.
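The passage states that the wire length was calculated from the initial sulfur content and the melt weight, but the formula itself is not reproduced. A generic magnesium-balance sketch of such a calculation, with every parameter value purely illustrative and not taken from the cited trials, might look like this:

```python
def wire_length_m(melt_kg, s_initial, s_final, mg_residual,
                  mg_recovery, mg_per_metre_kg):
    """Magnesium balance: the reaction Mg + S -> MgS consumes
    24/32 = 0.75 kg of Mg per kg of S removed. Mg for desulfurization
    plus the residual-Mg target, divided by the recovery, gives the Mg
    to inject; dividing by the wire's Mg content per metre gives length."""
    mg_needed_kg = melt_kg * (0.75 * (s_initial - s_final) + mg_residual) / mg_recovery
    return mg_needed_kg / mg_per_metre_kg

# Illustrative values only: 10 t melt, S reduced from 0.020% to 0.008%,
# 0.045% residual Mg target, 50% recovery, 0.06 kg Mg per metre of wire.
length = wire_length_m(10_000, 0.00020, 0.00008, 0.00045, 0.50, 0.06)
```

Contents are given as mass fractions (0.020% = 0.00020); the actual treatment formulas used by the foundries would also account for alloy composition and temperature losses.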
The data collected using the above qualitative data collection methods is just the raw material that researchers gather from different aspects of the world related to their research problems and questions. Qualitative data is collected in different forms, such as objects, photos, video recordings of behaviours, and choice patterns in computer materials. But words are frequently the raw material that qualitative researchers further analyze using different data analysis techniques. Many methods are available for researchers to analyze qualitative data, depending on the qualitative researcher's basic philosophical approach. According to Miles and Huberman, the process of qualitative data analysis is made up of three parallel flows of activity: data reduction, data display, and conclusion drawing and verification. Hence most qualitative researchers use the data reduction method for the analysis of collected data in order to seek its correct meaning for particular research.
The increasing demand for coal to meet the requirements of the country has paved the way for exploiting coal by opencast mechanized mining at ever-increasing stripping ratios. While the shovel has been the most widely used equipment, increasing use of the dragline is being made in view of its high capacity and lower operating cost. The replacement of the teeth in shovels and draglines has been a matter of serious concern for mine operators due to the associated cost and the idling of this equipment. Every Ground Engaging Tool (GET) manufacturing company claims a good tooth life, but such data relate to ideal conditions and are generally far from the actual conditions of the mine. In actual conditions, the life of a tooth is much less than what the manufacturing companies claim. Bucket teeth require replacement under the following two conditions: i) Breakage of tooth