In , the author notes that the US National Institute of Standards and Technology (NIST), an agency of the Commerce Department's Technology Administration, has created a cloud computing security group. The group sees its role as promoting the effective and secure use of the technology within government and industry by providing technical guidance and promoting standards. NIST has recently released its draft guide to adopting and using the Security Content Automation Protocol (SCAP), which identifies a suite of specifications for organizing and expressing security-related information in standard ways, along with related data such as identifiers for software flaws and security configuration issues. Its applications include maintaining enterprise system security. Beyond NIST's efforts, the industry itself can shape an enterprise approach to cloud security. Due diligence, combined with a policy of self-regulation that ensures security is effectively implemented across all clouds, can also help facilitate law-making. By combining industry best practices with the oversight that NIST and other entities are still developing, we can effectively address the future security needs of cloud computing.
The privacy-preserving image recovery service in TISR that we propose to explore is also akin to the literature on secure computation outsourcing, which aims to protect both the input and output privacy of outsourced computations. With the breakthrough on fully homomorphic encryption (FHE), recent work by Gennaro et al.  shows that a theoretical solution is already feasible. The idea is to represent any computation as a garbled combinational circuit and then evaluate it on encrypted input using FHE. However, such a theoretical approach is still far from practical, especially when applied in the context of image sensing and reconstruction. Both the extremely large circuit and the huge operational complexity of FHE make the general solution impossible to handle in practice, at least in the foreseeable future. Researchers have therefore also been working on specific designs for securely outsourcing specialized computation tasks, such as scientific computations, sequence comparisons, matrix multiplications, and modular exponentiations.
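One of the specialized tasks mentioned above, matrix multiplication, illustrates why tailored designs can be practical where general FHE is not. The following is only our illustrative sketch of a masking-based outsourcing idea, not a protocol from the cited works: the client blinds its matrices with invertible masks, the server multiplies the blinded matrices, and the client unmasks the result. For clarity the masks are fixed unimodular (integer-invertible) matrices; a real scheme would draw fresh random masks per query.

```python
def matmul(X, Y):
    # Plain row-by-column matrix product over nested lists.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Client's private inputs (hypothetical data).
A = [[3, 1], [4, 2]]
B = [[5, 0], [2, 7]]

# Client-side masks with integer inverses (det = 1).
P     = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]
Q     = [[2, 1], [1, 1]]
Q_inv = [[1, -1], [-1, 2]]

# Server (untrusted) sees only the masked matrices P*A and B*Q.
C_masked = matmul(matmul(P, A), matmul(B, Q))   # equals P*(A*B)*Q

# Client unmasks locally to recover A*B; the server never saw A, B, or A*B.
C = matmul(P_inv, matmul(C_masked, Q_inv))
assert C == matmul(A, B)
```

The server performs the expensive n^3 work on blinded data only, which is the general flavor of the specialized outsourcing designs the paragraph refers to.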
RFID, originally used for automatic identification and tracking, is now extensively used for remote storage and retrieval of data. RFID tags are similar to barcodes but are far more efficient, easy to manufacture, and usable in environments hostile to barcodes [1, 2]. An RFID system comprises a tag, usually affixed to the object to be identified; an antenna that emits radio waves to excite the tag; and a control unit governing the read/write commands between the RFID readers and the tags [3, 4]. Tags carrying fixed-format data also contain a wireless communication IC about the size of a sesame seed. A newer addition to the system, called a signpost, activates nearby tags at about 123 kHz, enabling identification of tags at specific locations. The tags consist of a transmitter that sends out the carrier and a receiver that picks up the backscattered signals.
We chose to use Jyaguchi in our experiment because cloud computing technology is developing rapidly, as are the diverse methods of obtaining data. Consequently, the requirement for data mining in this infrastructural environment has increased dramatically. At present, however, there are few data analysis tools for processing the large-scale data that floats in the cloud service environment, and data mining technology is only gradually emerging against this backdrop. In fact, massive data mining over cloud services could be a very important guide for scientific research and business decision making. To propose a prototype data mining technique in cloud services utilizing sequential pattern mining, we have made use of the Jyaguchi architecture, with which the authors have substantial experience.
after the deployment will be more complicated, expensive, and risky. This paper concentrates on the aforementioned security concerns of cloud computing and also discusses the integrity issues that arise. Keeping this in mind, an efficient framework called the User Accountability Framework (UAF) has been introduced, which provides accountability, integrity, and security for the data stored in the cloud. Accountability is provided by keeping data usage trackable and transparent. Moreover, one of the main inventive features of the proposed work lies in its ability to handle powerful yet lightweight accountability, which combines aspects of usage control, access control, and authentication. That is, the data owner can not only track the service level agreements but also impose usage and access control rules as needed. Allied with the accountability feature, two distinct modes are developed for auditing: push mode and pull mode. In push mode, logs are periodically sent to the data owner, while pull mode represents an alternative approach whereby the user or some other authorized party can retrieve the logs as needed. Integrity verification is done to verify the correctness of the data.
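The two auditing modes can be sketched as follows. This is a minimal illustration of the push/pull idea only; the class name, record fields, and push interval are our assumptions, not details from the framework itself.

```python
import time

class AccountabilityLogger:
    """Illustrative sketch of push/pull auditing (names are hypothetical)."""

    def __init__(self, push_interval_s=3600):
        self.records = []
        self.push_interval_s = push_interval_s
        self.last_push = time.time()

    def log_access(self, user, action, item):
        # Every data access is recorded, keeping usage trackable.
        self.records.append({"t": time.time(), "user": user,
                             "action": action, "item": item})

    def maybe_push(self, send):
        # Push mode: periodically deliver accumulated logs to the data owner.
        if time.time() - self.last_push >= self.push_interval_s:
            send(self.records)
            self.records = []
            self.last_push = time.time()

    def pull(self, requester, authorized):
        # Pull mode: an authorized party retrieves the logs on demand.
        if requester not in authorized:
            raise PermissionError("not authorized to audit")
        return list(self.records)
```

In this sketch the access-control aspect appears only as the `authorized` check on pull; a full implementation would also enforce the owner's usage rules at `log_access` time.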
habits, searching habits, etc. A number of cases show that even when only large amounts of seemingly harmless data are collected, personal privacy can be exposed. In fact, the security implications of big data are broader: the threats people face are not limited to leaks of personal privacy. Like other information, big data faces many security risks during storage, processing, transmission, and so on, and it needs data security and privacy protection. However, data security and privacy protection are more difficult in the big data era than in the past (for example, data security in cloud computing). In cloud computing, the service providers control the storage and operation of the data.
We applied the software architecture under development to demonstrate the global morphometric characteristics of the Earth and the Moon, as well as parameters of the other terrestrial planets (Florinsky, 2008), in the form of virtual globes. For these purposes, the morphometric parameters of the Earth, Mars, and the Moon were computed using 15'-gridded global digital elevation models (DEMs) as the initial data (Florinsky and Filippov, 2015). The digital terrain models (DTMs) with morphometric attributes derived from the DEMs were produced by the method for spheroidal equal angular grids (Florinsky, 1998; Florinsky, 2012). To estimate the linear sizes of spheroidal trapezoidal windows in DTM calculation and smoothing, the standard values of the major and minor semi-axes of the Krasovsky ellipsoid were used for the Earth (Fig. 2); the Moon was treated as a sphere (Fig. 3).
The authors of  found that, even under an average workload, the latter condition was met more commonly. Their results also showed that most of these iterations were wasteful and only increased the migration time and the bandwidth used. To address this, they proposed a different heuristic to determine when to end the iterative page-copying stage. Their proposal tracks the number of pages remaining to be sent in a short history. If there are fewer pages to send than any entry recorded in the history, they enter the final iteration (to prevent the optimization from performing poorly, the migration is also terminated early if an increasing trend in the number of remaining pages is detected). In addition, to adapt to lower-bandwidth scenarios, CloudNet uses content-based redundancy with a block-based scheme. Simply put, this divides the content (RAM and disk) into fixed-size blocks and calculates a hash for every block. When sending data over the network, if both the source and destination have the hash of a block in their caches, the source can send a small 32-bit index into the cache instead of the actual block, saving bandwidth.
Internally, TCP queues up outgoing data and sends a packet to the other machine only once enough data is in the queue (Nagle's algorithm). This can be a problem when data amounts are very small, which is likely since embedded devices send essential information in very small amounts in order to be energy efficient. This means data might be held back if no more data needs to be sent. A socket option called TCP_NODELAY fixes this behavior: it makes the protocol send data immediately as it is written rather than waiting for enough data to be queued. Another problem is TCP's in-order delivery: because it orders all packets, if a packet fails to arrive, the receiver must wait for its retransmission before the rest of the data packets can be delivered. This can delay the reception of data.
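For example, in Python the option is set on the socket before sending, as in this minimal sketch:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm so that small
# writes (e.g., sensor readings from an embedded device) are sent
# immediately instead of being queued until enough data accumulates.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# The kernel reports the option as enabled (non-zero).
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
sock.close()
```

The trade-off is more, smaller packets on the wire, which is usually acceptable for latency-sensitive devices that send little data.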
pCloud embeds a state-of-the-art PIR protocol in a distributed environment by utilizing a novel striping technique, and can retrieve arbitrarily large blocks of information with a single query. We present a comprehensive solution that includes a data placement policy, result retrieval, and authentication mechanisms, which confirm the effectiveness and practicality of our scheme. Specifically, compared to the traditional client/server architecture, pCloud reduces the query response time by orders of magnitude, and its performance improves linearly with the number of peers. Whether you call it cloud computing or utility computing, the omnipresent power of high-speed internet connections and linkages to databases, applications, and processing power will change the way we work in communications. We may use cloud computing tools without being aware of what they actually are. We may learn our way into cloud computing applications that seem strange today. We will find entirely new applications in the cloud to help send messages more effectively to target audiences. Young practitioners today will stand back and listen with wry smiles to their seniors who talk about the "good old days" when the PC first appeared and early local area networks provided the first hint of productivity. In fact, they are smiling already at the old fogies.
herokuapp.com domain, custom domains, custom SSL endpoints, and maintaining multiple environments. Each router also maintains an internal per-app request queue. When processing an incoming request, a router sets up an 8 KB receive buffer and begins reading the HTTP request line and request headers. Up to 1 MB of response data can be buffered before the rate at which the client receives the response affects the dyno; even if the dyno closes the connection, the router keeps sending the response buffer to the client. Heroku lets us run an application with a customizable configuration, and Ruby is the best choice in this case. Also, in this paper we used Git, which keeps data in the .git/objects subdirectory. Git heuristically ferrets out renames and copies between successive versions of files. To determine whether a file has changed, Git compares its current status with the status cached in the index; if they match, Git can skip reading the file again [3, 7, 9, 15]. Some of the factors for choosing a solution when designing an expert system are presented in Table 1.
could serve as a guide to help users and providers make decisions about risk mitigation in their organizations. Reference  proposes a comprehensive conceptualization of Perceived IT Security Risks in the CC context, based on six distinct risk dimensions grounded in an extensive literature review, Q-sorting, and expert interviews. Second, a multiple-indicators and multiple-causes analysis of data collected from 356 organizations supports the proposed conceptualization as a second-order aggregate construct. The final set of six security risk dimensions is: Confidentiality, Integrity, Availability, Performance, Accountability, and Maintainability risks. Each risk dimension is further divided into risk items, 31 in total. For example, Performance risk is divided into network risks, scalability risks, underperformance risks, and internal performance risks. Reference  presents a method to assess security risks, comprising a cohesive set of steps both to identify a complete set of security risks and to assess them. The method is based on the integration of qualitative and quantitative models that focus on formal evaluation and assessment. To assess risks, they are categorized into six view perspectives: the Threat, Resource, Process, Risk Assessment, Management, and Legal views. To summarize, there is no single framework describing all CC risk factors.
In recent years, mobile communication and mobile computing devices have become increasingly popular. The computing power of lightweight netbooks and smartphones keeps growing. Many electronic devices, such as medical diagnosis and healthcare instruments, electrical facilities, automotive equipment, and home appliances, are gradually moving toward network interconnection. At the same time, the network communication environment is well constructed: broadband networks and wireless communication networks are becoming faster and more widespread. These facts, together with people's desire for flexible, convenient, and location-independent access to data and services, have brought forth the era of cloud computing. In cloud computing, data and application software are moved from traditional local hosts to remote data centers and application servers, providing on-demand service, heterogeneous and ubiquitous network access, location-independent resource pooling, rapid resource elasticity, and usage-based pricing. Cloud computing may also provide value-added services, such as automatic data backup and group collaboration support. Users can use various client devices, whether desktop PCs or lightweight thin clients such as netbooks and smartphones, to subscribe to services from cloud service providers at relatively low software and hardware cost, relieved of the complexity of direct hardware maintenance.
The Internet of Things (IoT) is characterised by the heterogeneity of the devices used, which leads to information exchange problems. To address these problems, the Plug'n'Interoperate approach is used, where the steps needed to perform the information exchange between devices are described by interoperability specifications (IS) and are operated by the devices. However, more than one IS can exist to describe the information exchange between a given pair of devices, so choosing the most suitable IS requires measuring the information exchange described by each one. Some methods for this already exist, but they rely on a deep understanding of the IS and the data formats involved. To overcome this, an advanced measurement method is presented. It measures the data transfer provided by an IS without requiring specific knowledge about it, relying only on an abstract view of the data transfer, and it provides results that allow benchmarking the overall interoperability performance of the IoT environment, thus allowing different ISs to be compared without the need to be specialized in them.
Sustainability, appropriate use of natural resources, and a better quality of life for citizens have become prerequisites that are changing the traditional concept of the city into that of a smart city. A smart city needs to use latest-generation information technologies (IT) and hardware to improve the services and data it offers and to create a balanced environment between the ecosystem and its inhabitants. This paper analyses the advantages of using a private cloud architecture to share hardware and software resources on demand. Our case study is Guadalajara, which has nine municipalities, each of which monitors air quality. Each municipality has its own set of servers to process information independently, together with information systems for transmitting data to, and storing data with, the other municipalities. We analysed the behaviour of the carbon footprint during the years 1999-2013 and observed a seasonal pattern. Our proposal therefore has the municipalities adopt a cloud-based solution that allows managing and consolidating infrastructure, minimizing maintenance costs and electricity consumption and thereby reducing the carbon footprint generated by the city.
Abstract— Data protection in cloud computing has constantly been an issue of discussion. There will always be something left undone, incomplete, insecure, and unestablished, but there will always be a chance of improvement as well. Data access should be monitored whenever there is a move from the client side, and privacy checks via internal controls should be performed to ensure the confidentiality of sensitive data. In this paper, a new entity is introduced for managing data accessibility and for assigning apt controls with respect to the business levels by employing the Statement of Applicability (SOA) concept. The application of the internal controls described in this paper also carries significant meaning. The issue of data security is likewise considered, and the sense of data protection is addressed by implementing the respective internal controls. Service Level Agreements (SLAs) will also be discussed, as it is necessary to involve a third party for the smooth running of cloud applications.
This mobile device sensor data is associated with information about the distinct locations where users spend their time throughout the day (e.g., home, work, shopping centers, restaurants, etc.). From the data we have collected, we are particularly interested in identifying locations where people spend a great deal of time, and in associating these locations with information about the environment obtained from geographic information system data sources. Since users carry mobile devices at all times, it is possible to identify transportation mode, driving style, traffic information, parking processes, and road conditions. Figure 3 shows, at a high level, the process of turning sensor data from mobile devices into knowledge. The big data created can be associated and stored in a user mobility profile, in a cloud database, with information about times and routes (an XML graph with times and GPS coordinates). It is possible to present the route representation for a month with associated information on the transportation mode, the number of times the route was performed, and the temporal periods, and thus to represent the time a user spent in each location. All these profiles can be mined to extract useful knowledge for public transportation operators and municipal authorities. This is more or less free information about users' habits in a city.
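The per-location time representation mentioned above can be sketched with a toy aggregation over stay records. All location labels and timestamps below are invented for illustration; a real profile would be derived from clustered GPS traces.

```python
from collections import defaultdict
from datetime import datetime

# Toy stay records: (location label, arrival, departure).
stays = [
    ("home", datetime(2024, 5, 1, 0, 0),   datetime(2024, 5, 1, 8, 30)),
    ("work", datetime(2024, 5, 1, 9, 0),   datetime(2024, 5, 1, 17, 0)),
    ("home", datetime(2024, 5, 1, 17, 45), datetime(2024, 5, 2, 0, 0)),
]

def dwell_hours(stays):
    # Aggregate the total time spent at each significant location.
    total = defaultdict(float)
    for place, arrive, leave in stays:
        total[place] += (leave - arrive).total_seconds() / 3600
    return dict(total)

profile = dwell_hours(stays)  # e.g., hours per location for one day
```

Aggregating such per-day profiles over a month, together with route counts and transport modes, yields the kind of mobility profile the paragraph describes.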
Subhasri P. et al.  proposed a multi-level encryption algorithm to secure data in the cloud. The proposed algorithm uses the rail fence and Caesar cipher algorithms. Initially, the plaintext is encrypted using the rail fence technique. A position value i is assigned to each letter in the encrypted text, and the ASCII value of each character is generated. A key is then assigned and applied to the text using the formula E = (p + k + i) % 256, where p denotes the plaintext character, k the key, and i the position. The algorithm outputs the ASCII character of the resulting decimal value. The key used for encryption is not generated by the scheme itself, and maintaining the position of each character in the text requires additional storage; the authors do not mention where the characters' position details are maintained.
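The two stages can be sketched as follows. This is our illustrative reconstruction of the described scheme, shown with decryption to verify the round trip; the paper does not fix the rail fence parameters, so we use two rails for simplicity, and all function names are ours.

```python
def rail_fence_encrypt(text):
    # Two-rail fence: characters at even positions, then odd positions.
    return text[::2] + text[1::2]

def rail_fence_decrypt(cipher):
    half = (len(cipher) + 1) // 2
    evens, odds = cipher[:half], cipher[half:]
    out = []
    for e, o in zip(evens, odds):
        out += [e, o]
    if len(evens) > len(odds):   # odd-length input leaves one extra char
        out.append(evens[-1])
    return "".join(out)

def stage2_encrypt(text, key):
    # E = (p + k + i) % 256, with i the character position.
    return bytes((ord(c) + key + i) % 256 for i, c in enumerate(text))

def stage2_decrypt(data, key):
    return "".join(chr((b - key - i) % 256) for i, b in enumerate(data))

def multilevel_encrypt(plaintext, key):
    return stage2_encrypt(rail_fence_encrypt(plaintext), key)

def multilevel_decrypt(cipher, key):
    return rail_fence_decrypt(stage2_decrypt(cipher, key))
```

Note that the position i is implicit in the character index here, which illustrates the storage remark above: decryption only works if positions can be reconstructed, so any reordering of the ciphertext would require the position details to be stored alongside it.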
One of the modern developments in the internet is Cloud Computing (CC) technology. This technology quickly became popular thanks to its properties, in which every kind of facility is offered to users in the form of a service. CC is an internet-based service model, as it provides users easy access to a set of elastic computing resources over the internet, on demand. In this model, users access resources according to their needs, regardless of where the service is located or how it is delivered. Various types of computing systems have tried to offer such services to users, among them cluster computing, grid computing, and, more recently, CC. The CC architecture provides a range of services based on IT customers' needs. Naturally, any new change or concept in the IT environment has its own specific problems and complexities; CC is no exception, and it poses many challenges to experts in this field, such as load balancing, security, reliability, ownership, backing up data, data portability, and supporting
So far, in cloud computing, distinct customers have accessed and consumed enormous amounts of services through the web, offered by cloud service providers (CSPs). Although the cloud offers security-as-a-service to its clients, people are still afraid to use services from cloud vendors. Many solutions, security components, and measurements have been brought to bear on the cloud security issue, yet only a 79.2% security outcome has been obtained by the various scientists, researchers, and other cloud-based academic communities. To address the problem of cloud security, the proposed model, "Quality-based enhancement of user data protection via fuzzy rule-based systems in the cloud environment", helps cloud clients access cloud resources through remote monitoring and management (RMMM); the services currently being requested and consumed by cloud users can be better analyzed with a managed service provider (MSP) than with a traditional CSP. Normally, people try to secure their own private data by applying key management and cryptography-based computations, which again leads back to the security problem. The goal is therefore to provide a good-quality security target result by making use of fuzzy rule-based systems (constraint and conclusion segments) in the cloud environment. Using this technique, users may obtain an efficient security outcome through the Apache CloudStack simulation tool.