A novel erasure based data transmission for cloud storage

Abstract— With the continuous growth in cloud-based applications and storage requirements on the cloud, many commercial cloud-based storage service providers are making the market more co…
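
Although the excerpt above is cut short, the erasure coding named in the title can be illustrated with a minimal single-parity sketch in Python: the object is split into k blocks plus one XOR parity block, so any one lost piece can be rebuilt from the pieces that survive. Real erasure-based cloud systems use stronger (k, n) codes such as Reed-Solomon that tolerate several losses; the data and layout below are illustrative, not taken from the paper.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Split the object into k equal blocks and append one parity block.
data = b"cloud object payload, padded...."   # length divisible by k
k = 4
size = len(data) // k
blocks = [data[i * size:(i + 1) * size] for i in range(k)]
stored = blocks + [xor_blocks(blocks)]        # upload k + 1 pieces to k + 1 nodes

# Any single lost piece (data or parity) is the XOR of the remaining k.
lost = 2
survivors = [b for i, b in enumerate(stored) if i != lost]
assert xor_blocks(survivors) == stored[lost]
```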


REVIEWING SOME PLATFORMS IN CLOUD COMPUTING

Several business cloud models have rapidly evolved to harness the technology by providing computing infrastructure, data storage, software applications and programming platforms as services. However, the inter-relations inside a cloud computing system remain ambiguous, and the feasibility of inter-operability between the core cloud computing services is debatable. Furthermore, every cloud computing service has its own interface and employs a different access control protocol. No unified interface exists to support integrated access to cloud computing services, although portals and gateways can provide a unified web-based user interface. An introduction to the cloud computing domain, its components and their inner relations is therefore necessary to help researchers achieve a better understanding of this novel technology. The rest of this paper is structured as follows. The motivation for this study is introduced in Section II. Section III addresses the three layers in cloud computing, while Section IV illustrates some cloud computing platforms. We conclude the paper and outline further work in Section V.

Secure Erasure Code-Based Cloud Storage with Secured Data Forwarding Using Conditional Proxy Re-Encryption (C-PRE)

Rishi Chandra, a product manager for Google Enterprise, outlined in an interview what he believes are the key trends that will drive movement toward cloud-based enterprise applications. Chandra cited consumer-driven innovation, the rise of power collaborators, changing economics, and a lowering of barriers to entry as the chief reasons why the cloud model is being so widely adopted. Innovation behind the success of cloud services ultimately depends on the acceptance of the offering by the user community, and that acceptance changes the economics considerably. As more users embrace such innovation, economies of scale allow implementers to lower costs, removing a barrier to entry and enabling even more widespread adoption. In this section, we will present some of the applications that are proving beneficial to end users, enabling them to be “power collaborators.” We will look at some of the most popular SaaS offerings for consumers and provide an overview of their benefits and why, in our opinion, they are helping to evolve our common understanding of what collaboration and mobility will ultimately mean in our daily lives. We will examine four particularly successful SaaS offerings, looking at them from both the user perspective and the developer/implementer perspective. Looking at both sides of these applications will give you a much better understanding of how they are truly transforming our concept of computing and making much of the traditional desktop-type software available to end users at little to no cost from within the cloud.

Guaranteeing Data Storage Security in Cloud Computing

Several trends are opening up the era of cloud computing, an Internet-based development and use of computer technology. Ever cheaper and more powerful processors, together with the software-as-a-service (SaaS) computing architecture, are transforming data centers into pools of computing services on a huge scale. Increasing network bandwidth and reliable yet flexible network connections make it possible for users to subscribe to high-quality services from data and software that reside solely on remote data centers.

Cloud Storage and its Secure Overlay Techniques

Plutus: Plutus aims to provide strong security even with an untrusted server. Its main feature is that all data are stored encrypted and all key distribution is handled in a decentralized manner [13]. All cryptographic and key management operations are performed by the clients, so the server incurs very little cryptographic overhead. The mechanisms Plutus uses to provide basic file system security are: detecting and preventing unauthorized data modification, differentiating between read and write access to files, and changing a user’s access privileges [13]. With this technique, Plutus enables secure file sharing without placing much trust in the file servers; in particular, it uses cryptographic primitives to protect and share files [13]. It provides protection against data leakage attacks on the physical device, allows users to set arbitrary policies for key distribution, and enables better server scalability. Aggregating keys reduces the number of keys that users need to manage, distribute, and receive. Most of the implementation complexity lies on the client side.
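
As a rough illustration of the key-aggregation idea (a sketch of the concept only, not Plutus’ actual filegroup/lockbox format), the client below encrypts each file under its own key and wraps that key with a shared group key, so readers of a filegroup only ever receive one key. The class and method names are invented for the example:

```python
from cryptography.fernet import Fernet

class FileGroup:
    """Files with identical sharing attributes share one lockbox key."""
    def __init__(self):
        self.lockbox_key = Fernet.generate_key()   # handed to readers out-of-band

    def encrypt_file(self, plaintext: bytes):
        file_key = Fernet.generate_key()           # unique per-file key
        ciphertext = Fernet(file_key).encrypt(plaintext)
        wrapped_key = Fernet(self.lockbox_key).encrypt(file_key)
        return wrapped_key, ciphertext             # both stored on the untrusted server

    @staticmethod
    def decrypt_file(lockbox_key: bytes, wrapped_key: bytes, ciphertext: bytes):
        file_key = Fernet(lockbox_key).decrypt(wrapped_key)
        return Fernet(file_key).decrypt(ciphertext)
```

The server only ever stores ciphertext and wrapped keys, so it performs no cryptographic work; all key generation and unwrapping happens at the clients, matching the division of labour described above.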

Privacy-Enhanced Dependable and Searchable Storage in a Cloud-of-Clouds

Google’s Encrypted BigQuery is an interesting approach, but it is geared towards databases and big-data analytics. It also restricts the operations allowed, which the programmer has to take into account when deciding the database schema; the wrong schema can be very costly in performance and bandwidth, and it is not always easy to predict which queries will be made in the future. RACS does not provide privacy and offers only two connectors for existing cloud providers plus a connector for a network file system. Cloud-RAID provides privacy, availability and integrity, avoids vendor lock-in, and dynamically adjusts which clouds files should be stored in to better suit the user’s needs; however, there are no operations on encrypted data, and all encryption keys are stored locally in a database, leaving key distribution entirely to the user. NCCloud translates a RAID-6 architecture almost directly to cloud storage and then makes some improvements to reduce network traffic and optimize the cost of recovery in case of failure; however, it lacks privacy and integrity. InterCloud RAIDer provides availability, integrity and some weak privacy with non-systematic erasure codes: an attacker who gets access to enough chunks of a file can reconstruct it, which would not be the case with the help of an encryption scheme. It also improves on storage overhead, but it is geared more towards private users with little to moderate amounts of outsourced data. RAMCloud offers an interesting solution that tries to minimize latency. Although the results presented in the paper look promising, it is designed to be deployed in a single data center, which raises the problem of vendor lock-in; and since all replication stays within the same data center, availability might suffer in case of a data-center failure, which would not be a problem for other replication services. When deployed with servers in different data centers, the performance gains might not be as good. OpenReplica also provides an interesting approach to replication and the linearization of operations by using Paxos to guarantee strong consistency; however, running Paxos for every request can become a bottleneck if too many client proxies are used.
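
The critique of InterCloud RAIDer above suggests the standard remedy: encrypt before dispersal, so that no single provider (or small coalition of providers) learns anything from its chunks. Below is a minimal sketch, assuming a hypothetical `put(name, blob)` API on each provider object; real cloud-of-clouds systems would use a proper (k, n) erasure code rather than the single XOR parity stripe shown here:

```python
from functools import reduce
from cryptography.fernet import Fernet

def disperse(data: bytes, providers, k: int) -> bytes:
    """Encrypt locally, then stripe the ciphertext over k providers
    plus one parity stripe on provider k+1 (illustrative only)."""
    key = Fernet.generate_key()                  # stays with the user / key service
    ct = Fernet(key).encrypt(data)
    ct += b"\x00" * (-len(ct) % k)               # pad for an even split; strip after reassembly
    size = len(ct) // k
    stripes = [ct[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), stripes)
    for i, (provider, stripe) in enumerate(zip(providers, stripes + [parity])):
        provider.put(f"stripe-{i}", stripe)      # hypothetical provider API
    return key                                   # losing the key loses the file
```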

Review of Access Control Models for Cloud Computing

Bertino et al. [11] proposed the temporal-RBAC (TRBAC) model, which enables and disables a role at run-time depending on user requests. In [12], the authors argue that in some applications certain roles need to be static and stay enabled all the time, while only the users and permissions are dynamically assigned. In this context, they proposed a generalized TRBAC (GTRBAC) model that advocates role activation instead of role enabling: a role is said to be activated if at least one user assumes it. GTRBAC supports enabling and disabling constraints on the maximum active duration allowed to a user and on the maximum number of activations of a role by a single user within a particular interval of time. In [13], the authors present an XML-based RBAC policy specification framework (X-RBAC) to enforce access control in dynamic XML-based web services. However, both GTRBAC and X-RBAC rely solely on identity- or capability-based access control and cannot provide the trust- and context-aware access control that is critical for dynamic web services, characteristic of cloud computing environments. In [14], the authors propose an enhanced hybrid of the X-RBAC and GTRBAC models, called the X-GTRBAC model. X-GTRBAC relies on certification provided by trusted third parties (such as a PKI Certification Authority) to assign roles to users. It also lets context (such as time, location, or environmental state at the time access requests are made) directly affect the level of trust associated with a user (as part of the user profile) and incorporates it in access control decisions. The access privileges for a user/role are based on a threshold (the trust level) established from the requestor’s access patterns: if a user appears to deviate from his or her usual profile, the trust level for that user is automatically reduced to prevent potential abuse of privileges. This real-time feature makes X-GTRBAC well suited to web-based cloud computing environments with diverse customer activity profiles.
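
The trust-threshold logic lends itself to a compact sketch. The rule below is an illustration of the idea only, not X-GTRBAC’s XML policy language: it lowers a user’s effective trust as observed behaviour deviates from the stored profile, and grants a permission only when a certified role holds it and the trust clears the permission’s threshold. All names and the penalty factor are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Permission:
    name: str
    required_trust: float                      # threshold set by the policy

@dataclass
class Role:
    name: str
    permissions: set = field(default_factory=set)

@dataclass
class User:
    certified_roles: set                       # roles certified by a trusted third party
    base_trust: float                          # from the user profile

def effective_trust(base: float, deviation: float, penalty: float = 0.5) -> float:
    """Reduce trust as behaviour deviates from the usual access pattern."""
    return max(0.0, base - penalty * deviation)

def authorize(user: User, role: Role, perm: Permission, deviation: float) -> bool:
    return (role.name in user.certified_roles
            and perm.name in role.permissions
            and effective_trust(user.base_trust, deviation) >= perm.required_trust)
```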

Data Mining-driven Manufacturing Process Optimization

Prediction use cases focus on the forecast of certain process characteristics. They are always targeted, as the user has to pre-define the relevant characteristics to predict. Prediction can be done ex-ante or in real time. Ex-ante prediction comprises the forecast of process characteristics before their first execution, i.e., during process planning and design. As no execution data is available for a novel process, similarity inspections of existing processes have to be conducted to derive corresponding predictions; this is not detailed in this article. In the following, we look at real-time prediction, i.e., forecasting process characteristics during the actual execution of the process. Based on the current state of a running process as well as information about completed executions in the past, data mining-driven predictions can be made. A concrete use case is metric-oriented real-time prediction: the user selects a metric and defines whether numeric or nominal forecasts should be made. The former refers to forecasting exact values using numeric prediction techniques, e.g., regression [28]; the latter focuses on predicting categorized metrics, in analogy to the metric-oriented RCA. On this basis, predictions can be made at defined stages of the process, e.g., after the completion of each production step. Each stage defines a restricted data basis for generating prediction and classification models, using data of past process executions as training data; these models are then employed to make predictions about the process in execution. Metric-oriented real-time prediction is valuable for all target groups, including production workers on the shop floor, with each target group using its own specific metric selection. In analogy to the failure-oriented RCA, we could imagine a failure-oriented real-time prediction as well, to forecast likely failures during process execution.
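
A minimal sketch of the stage-wise setup, assuming (for simplicity) that one numeric feature becomes observable per production step and that the target metric is categorized; the model choice and data layout are illustrative, not taken from the article:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_stage_models(past_runs, n_stages):
    """One classifier per stage: the model for stage s is trained only on
    the features that are observable once step s has completed."""
    models = []
    for s in range(1, n_stages + 1):
        X = np.array([run["features"][:s] for run in past_runs])
        y = np.array([run["metric_class"] for run in past_runs])
        models.append(RandomForestClassifier(n_estimators=100).fit(X, y))
    return models

def predict_running_process(models, observed_features):
    """Forecast with the model that matches how far the live process has got."""
    stage = len(observed_features)
    return models[stage - 1].predict([observed_features])[0]
```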

J. Microw. Optoelectron. Electromagn. Appl., vol. 16, no. 2

The design and development of a compact CRLH-TL-based UWB BPF is presented in this paper. The CRLH TL is designed by coupling two unit cells of via-less CRLH TL. The filter is compact (11.9 × 6 mm²) and is excited by a CPW feed. Owing to the absence of vias, the filter eases the fabrication process and saves the time taken to build the vias. The developed UWB BPF shows a measured insertion loss of less than 0.6 dB and a return loss of more than 10.4 dB throughout the 3.12–12.43 GHz passband, with a fractional bandwidth of 120%. Moreover, the filter also shows a low (maximum deviation of 0.98 ns) and flat group-delay response throughout the passband. Because of its compact size and good performance, the proposed UWB BPF may find potential applications in modern small-sized wireless communication systems.

Network traffic characterization: Cloud Storage at the Universidade do Minho

The third most requested service relates to Google. Google Inc. started by providing a search engine, but quickly grew and today offers its users everything from productivity software (SaaS) and an e-mail service (Gmail) to social networks and a web browser (Google Chrome), among others. It also provides a cloud storage service, Google Drive. However, in the chart shown in 4.1, this service is grouped with the remaining CSPs in the Cloud Storage category, so the accesses represented in the chart refer to accesses to the Internet search engine. Even though it is the third most accessed application in the sample, this does not reflect its overall access at UM, since it only covers accesses via HTTPS. Only users authenticated in their Google account can search using the encrypted protocol, as the company states in [25], or those who access the HTTPS URL directly. For the rest, and by default, searches are always over HTTP.

Computer Based Asset Management System For Commercial Banks

…because of the need for permeable and flexible boundaries in an organization’s decision-making, and because communication between the various levels is paramount for the overall success of the management process. The decision-making levels do, however, have different scopes and require different data and information inputs for decision-making to be carried out effectively and efficiently. The strategic decision-making level is the broadest and most comprehensive. It pertains to strategic decisions concerning all types of assets and systems within the civil engineering environment, one of them being the transportation sector. Within transportation, it may consider all the different modes and all assets pertaining to these modes. The strategic level of decision-making is concerned with generic and strategic resource allocation and utilization decisions within the constructed environment. The network decision-making level pertains to determining the overall agency-wide maintenance, rehabilitation and construction strategies, and works programs. This decision level considers system-wide decisions, but its scope is narrower than the strategic level’s. Overall budget allocation and transportation planning are the key focus areas. This decision level is often broken down into program and project selection levels (Haas et al. 1994).

An Enhanced Trusted Image Storing and Retrieval Framework in Cloud Data Storage Service Environment

In this paper, we initiate the investigation of these challenges and propose a novel outsourced image recovery service (TISR) architecture with privacy assurance. For simplicity of data acquisition at the data owner’s side, TISR is specifically designed under the compressed sensing framework. The acquired image samples from data owners are sent to the cloud, which can be considered a central data hub responsible for image sample storage and for providing on-demand image reconstruction service for data users. Because reconstructing images from compressed samples requires solving an optimization problem [11], it can be burdensome for users with computationally weak devices, like tablets or large-screen smartphones. TISR aims to shift such expensive computing workloads from data users to the cloud for faster image reconstruction and less local resource consumption, without introducing undesired privacy leakage of the possibly sensitive image samples or the recovered image content. To meet these challenging requirements, a core part of the TISR design is a tailored lightweight problem transformation mechanism, which can help the data owner/user to…
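
The excerpt breaks off above, but the general flavour of such a problem transformation can be sketched. In compressed sensing the cloud solves a basis-pursuit problem, min ||x||_1 s.t. y = Ax; since the L1 norm is invariant under signed permutations, the owner can hand the cloud a masked matrix A' = AP (with P a secret signed permutation) and unmask the returned solution locally. This is a generic illustration of the transformation idea, not the paper’s actual TISR mechanism:

```python
import numpy as np
from scipy.optimize import linprog

def l1_solve(A, y):
    """Basis pursuit min ||x||_1 s.t. Ax = y, as a linear program over [x, u]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])            # minimize sum(u)
    A_eq = np.hstack([A, np.zeros((m, n))])                  # Ax = y
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),    #  x - u <= 0
                      np.hstack([-np.eye(n), -np.eye(n)])])  # -x - u <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                  A_eq=A_eq, b_eq=y, bounds=(None, None))
    return res.x[:n]

rng = np.random.default_rng(0)
n, m, k = 64, 32, 4
x = np.zeros(n)                                 # k-sparse test signal
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)        # sensing matrix
y = A @ x                                       # compressed samples

perm = rng.permutation(n)                       # secret signed permutation P
signs = rng.choice([-1.0, 1.0], size=n)
A_masked = A[:, perm] * signs                   # A' = A P, sent to the cloud

x_masked = l1_solve(A_masked, y)                # cloud does the heavy solving
x_rec = np.zeros(n)
x_rec[perm] = x_masked * signs                  # local unmasking: x = P x'
print(np.linalg.norm(x_rec - x))                # ~0 when recovery succeeds
```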

A Novel K-Means Based Clustering Algorithm for High Dimensional Data Sets

Abstract— Data clustering is an unsupervised method for extracting hidden patterns from huge data sets. Achieving both accuracy and efficiency on high-dimensional data sets with an enormous number of samples is challenging. For really large, high-dimensional data sets, vertical data reduction (dimensionality reduction) is usually performed before applying clustering techniques, at the cost of sacrificing the quality of the results: dimensionality-reduction methods inevitably cause some loss of information, may damage the interpretability of the results, and can even distort the real clusters, so extra caution is advised. Furthermore, in some applications data reduction is not possible, e.g., personal similarity, clustering of customer preference profiles in customer recommender systems, or data generated by sensor networks. Existing clustering techniques normally operate on one large space with high dimension; dividing the big space into subspaces horizontally can lead to higher efficiency and accuracy. In this study we propose a method that uses a divide-and-conquer technique with equivalence and compatibility relation concepts to improve the performance of K-means clustering on high-dimensional datasets. Experimental results demonstrate good accuracy and speed-up.
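
A minimal sketch of the divide-and-conquer pattern the study builds on: cluster horizontal chunks of the data independently, then cluster the pooled chunk centers to obtain the final centers. This illustrates the general idea only; the authors’ equivalence/compatibility-relation machinery is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def divide_and_conquer_kmeans(X, k, n_chunks=8, seed=0):
    """Divide: run K-Means on each horizontal chunk of the data.
    Conquer: run K-Means again on the pooled per-chunk centers."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    centers = []
    for chunk in np.array_split(idx, n_chunks):
        km = KMeans(n_clusters=k, n_init=5, random_state=seed).fit(X[chunk])
        centers.append(km.cluster_centers_)
    final = KMeans(n_clusters=k, n_init=5, random_state=seed).fit(np.vstack(centers))
    return final.cluster_centers_, final.predict(X)
```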

MQARR-AODV: A NOVEL MULTIPATH QOS AWARE RELIABLE REVERSE ON-DEMAND DISTANCE VECTOR ROUTING PROTOCOL FOR MOBILE AD-HOC NETWORKS

In this paper, a novel multipath QoS-aware reliable reverse on-demand distance vector routing protocol (MQARR-AODV) for MANETs is proposed. MQARR discovers two paths and performs data transmission along the two paths in parallel. To select the best two reliable routes, the proposed protocol uses three parameters: the MAC overhead, the eminence (quality) of the link, and the node residual energy. The proposed protocol provides high energy efficiency, security and load balancing, thus prolonging the network lifetime and enabling highly reliable communication. If one path fails, data transmission continues automatically on the backup path. Simultaneously, finding multiple paths in a single route discovery reduces the routing overhead incurred in maintaining the connection between source and destination nodes. The simulation results show that the proposed scheme is better than AODV in discovering and maintaining routes. The performance analysis shows that the frequency of an on-demand route…
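
How the three parameters might combine into a route choice can be sketched as a scoring function; the weights and field names below are assumptions made for illustration, not values from the protocol:

```python
from dataclasses import dataclass

@dataclass
class Route:
    path: list
    mac_overhead: float        # normalised 0..1, lower is better
    link_quality: float        # normalised 0..1, higher is better
    residual_energy: float     # minimum along the path, 0..1, higher is better

def score(r: Route, w=(0.4, 0.4, 0.2)) -> float:
    return w[0] * r.link_quality + w[1] * r.residual_energy - w[2] * r.mac_overhead

def select_routes(candidates):
    """Pick the best two reliable routes; data flows on both in parallel, and the
    second automatically carries on alone if the first fails."""
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[0], ranked[1]
```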

Securing Data Transfer in Cloud Environment

A key hurdle to moving IT systems to the cloud is the lack of trust in the cloud provider. The cloud provider, in turn, also needs to enforce strict security policies, which requires additional trust in the clients. To improve the mutual trust between consumer and cloud provider, a good trust foundation needs to be in place. Cloud computing can mean different things to different people: the privacy and security concerns of a consumer using a public cloud application will surely differ from those of a medium-sized enterprise using a customized suite of business applications on a cloud platform, and each brings a different package of benefits and risks. What remains constant, though, is the real value that the user seeks to protect. For an individual, the value…

Aeration strategy for controlling grain storage based on simulation and on real data acquisition

…bulk under levels that reduce insect and mold activity. Although this control strategy requires reliable equipment and software, it is powerful enough to meet the needs of optimum grain storage, electrical energy saving, data simulation and data saving. As the AERO controller evaluates the relationships between simulated and real data, it can be used in different regions, during different seasons and with different aeration systems, automatically adjusting its set points. This control strategy can also be easily adapted to different data acquisition systems, as it can be used to monitor and control several storage structures.

Influences of in-cloud aerosol scavenging parameterizations on aerosol concentrations and wet deposition in ECHAM5-HAM

Predicted aerosol concentrations, burdens and deposition have been compared between simulations that implemented the new diagnostic scheme, the prescribed scavenging fractions of the standard ECHAM5-HAM, and the prognostic aerosol cloud processing approach of Hoose et al. (2008a, b). The prescribed-fractions approach was the least computationally expensive, but it was not as physically detailed as the diagnostic and prognostic schemes and could not represent the variability of scavenged fractions, particularly for submicron-size particles and for mixed- and ice-phase clouds. As a result, the diagnostic and prognostic schemes are recommended as preferable to the prescribed-fraction scheme. The global and annual mass burdens increased by up to 30% and 15% for sea salt and dust, respectively, and the accumulation-mode number burden increased by nearly 50%, for the prognostic scheme relative to the diagnostic scheme. Aerosol mass concentrations in the middle troposphere increased, by up to one order of magnitude for black carbon, for the diagnostic and prognostic schemes compared to the prescribed scavenging-fraction approach. Thus, uncertainties in the parameterization of in-cloud scavenging can lead to significant differences in predicted middle-troposphere aerosol vertical profiles, particularly for mixed- and ice-phase clouds. Additionally, we recommend that the next generation of aerosol microphysical models give careful attention to the representation of impaction processes, particularly in mixed- and ice-phase clouds, and for dust at all cloud temperatures. Different impaction parameterizations changed the global and annual mean stratiform dust mass removal attributed to impaction by more than two orders of magnitude, which illustrates the considerable uncertainty related to in-cloud impaction scavenging. For the prognostic scheme, excluding parameterized impaction increased the global, annual accumulation-mode number burden by nearly 60%.

A new meta-data driven data-sharing storage model for SaaS

In Section 2, we describe five techniques for designing and implementing a multi-tenant database schema that can extend the default data model to address a tenant’s unique needs. However, these techniques have drawbacks and limits. The “Extension Table” technique is limited to small-tenant applications, as the number of tables grows with the number of tenants and the variety of their business requirements. The “Universal Table” technique enables tenants to extend their tables in different ways according to their needs, but it has the obvious disadvantage that the rows need to be very wide, even for narrow source tables, and the database has to handle many null values. Furthermore, indexes are not supported on universal-table columns, as the shared tenant columns might have different structures and data types; this leads to the necessity of adding additional structures to make indexes available in this technique. The “Pivot Tables” technique can eliminate NULL values and selectively read from a smaller number of columns, but it has more columns of metadata than actual data, and reconstructing an n-column logical source table requires (n−1) aligning joins along the Row column, which leads to a much higher runtime overhead for interpreting the metadata. “Chunk Folding” seems a good choice; Stefan Aulbach, in his paper, described the advantages of this technique through extensive experimental data. However, this technique is still in the phase of theoretical research and lacks an effective vertical partitioning algorithm to obtain the most appropriate results. The last technique, “XML Table”, makes the data model arbitrarily extensible while retaining the cost benefits of using a shared database. If the customer requires a considerable degree of flexibility to extend the default data model, I think it is the best approach to take if the ISV wishes to use a shared database. However, this technique is limited to extending fields in a table; sometimes customers need to define their own objects, for example in an ERP system.
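
To see why pivot tables trade storage for reconstruction cost, compare the two layouts below (illustrative Python data rather than SQL; the column names are invented). The universal table keeps one wide row per record with many NULLs, while the pivot table stores one (row, column, value) triple per non-NULL cell and must re-join them to answer queries:

```python
# Universal table: one wide row per logical record, unused columns left as None.
universal = [
    {"tenant": 1, "row": 0, "col1": "Acme", "col2": "NY",  "col3": None},
    {"tenant": 2, "row": 1, "col1": "Beta", "col2": None,  "col3": "42"},
]

# Pivot table: one triple per non-NULL cell; no NULLs, but mostly metadata.
pivot = [
    {"tenant": 1, "row": 0, "col": "name",   "val": "Acme"},
    {"tenant": 1, "row": 0, "col": "city",   "val": "NY"},
    {"tenant": 2, "row": 1, "col": "name",   "val": "Beta"},
    {"tenant": 2, "row": 1, "col": "credit", "val": "42"},
]

def reconstruct(pivot_rows, tenant):
    """Rebuild logical rows; in SQL this is the (n-1) aligning joins the text mentions."""
    rows = {}
    for cell in pivot_rows:
        if cell["tenant"] == tenant:
            rows.setdefault(cell["row"], {})[cell["col"]] = cell["val"]
    return list(rows.values())

print(reconstruct(pivot, tenant=1))   # [{'name': 'Acme', 'city': 'NY'}]
```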

Data storage system for wireless sensor networks

With CoAP/Observe, a client may store notifications in its cache and use a stored notification, as long as it is fresh, to serve other clients requesting it, without contacting the origin server. This means that the support for caches and proxies incorporated in Observe allows registrations and transmission of notifications, from different clients and servers, to be carefully planned so that energy saving and bandwidth utilization are optimized. This avoids energy depletion of nodes and increases network lifetime, with a positive overall impact on delay. In this thesis, a careful planning of registration steps for data/notification storage is proposed: a framework to plan registration steps through proxies, together with the aggregation and scheduling of notification deliveries at such proxies, so that maximum energy saving is achieved, bandwidth is used efficiently, and the overall delay is reduced. This tool can be used offline by network managers or can be incorporated at a manager node responsible for the management of observation requests.
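
The caching behaviour described above can be sketched in a few lines: a proxy keeps each notification for its freshness window (the CoAP Max-Age) and serves repeat requests from the cache, only falling back to the origin sensor when the entry is stale. This is a conceptual sketch, not a CoAP implementation; `fetch_from_origin` stands in for a real Observe registration.

```python
import time

class ObserveProxy:
    """Cache of the latest notification per resource, keyed by URI."""
    def __init__(self):
        self.cache = {}                 # uri -> (payload, expiry timestamp)

    def on_notification(self, uri, payload, max_age):
        # Each server notification stays usable for its Max-Age window.
        self.cache[uri] = (payload, time.monotonic() + max_age)

    def get(self, uri, fetch_from_origin):
        entry = self.cache.get(uri)
        if entry and time.monotonic() < entry[1]:
            return entry[0]             # fresh: the constrained node stays asleep
        payload, max_age = fetch_from_origin(uri)   # stale or missing
        self.on_notification(uri, payload, max_age)
        return payload
```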

Mapping tropical forest biomass with radar and spaceborne LiDAR: overcoming problems of high biomass and persistent cloud

Despite the high biomass and persistent cloud cover, we have produced a high-resolution (100 m) map of AGB over Lopé National Park in Gabon. This estimate was made possible through a novel fusion of radar and spaceborne LiDAR data. Also, using a conservative error-estimation method optimised for carbon payments for REDD+, we have shown that these estimates at park level have a ±25% uncertainty, due to…
