Cloud Data Storage for Group Collaborations

Abstract—Cloud computing has been an important development trend in information technology. By moving data and application software from traditional local hosts to network servers, cloud computing provides more flexible and convenient access to data and services, with lower software acquisition and hardware maintenance costs. Cloud computing may also provide value-added services, such as automatic data backup and group collaboration support. Many studies on cloud computing have been proposed in the literature; however, how these methods could be used for group collaborations is still unclear. In this paper we develop a secure cloud data storage scheme that can be used for group collaborations.

Protecting user privacy in the Cloud: an analysis of terms of service

(2) the need to adopt a common definition of cloud services, while at the same time recognizing the different types of services and the potentially different assessment required for each group. According to the National Institute of Standards and Technology, a cloud service is based on "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [15]. For the purposes of this study, the services were subdivided into three groups according to their main activity: (i) storage; (ii) collaboration; and (iii) IaaS/PaaS. While the first two groups are mainly addressed to ordinary users, the services of the third group are targeted primarily at corporate users and are offered in exchange for a fee. The hypothesis that guided this subdivision is that the more similar the services are, the more their practices regarding users' privacy and data protection should coincide.

Integration of browser-to-browser architectures with third party legacy cloud storage

In the next evaluation scenario, we measure the scalability of propagating data between two separate Legion groups that can only share information via Antidote. To do so, the first evaluation setup is composed of one Antidote instance, two Legion objects server instances and two Legion nodes connected to separate objects servers. The second setup uses two Antidote nodes connected to different Legion groups. In this scenario, the time measured starts when a Legion node issues an update and ends when the other group's Legion node receives the update. To propagate updates between Legion groups, Antidote is used as a synchronization point, where one Legion node updates the storage system and the other fetches the updates. In Figure 5.3 we measure the time of propagating messages between the two Legion nodes with an increasing number of operations in each message. As we can see, the time needed scales nearly linearly with the number of operations, consistent with the results presented before.
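
The measurement methodology described here can be illustrated with a minimal sketch: one node writes a batch of operations through a shared synchronization point, and the clock stops when the other group's node observes the update. The SharedStore class below is a hypothetical in-memory stand-in for Antidote, not the Legion/Antidote API.

```python
import time
import threading

# Minimal sketch of the propagation measurement described above; the store
# class is an illustrative stand-in for the real synchronization point.

class SharedStore:
    """Hypothetical stand-in for the shared store (Antidote in the text)."""
    def __init__(self):
        self._data, self._lock = {}, threading.Lock()
    def put(self, key, value):
        with self._lock:
            self._data[key] = value
    def get(self, key):
        with self._lock:
            return self._data.get(key)

def measure_propagation(store, key, n_ops):
    payload = [("op", i) for i in range(n_ops)]  # message with n_ops operations
    start = time.monotonic()
    store.put(key, payload)                      # group-1 node issues the update
    while store.get(key) != payload:             # group-2 node fetches the update
        time.sleep(0.001)
    return time.monotonic() - start

for n in (10, 100, 1000):
    print(n, measure_propagation(SharedStore(), "doc", n))
```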

Failure Analysis of Storage Data Magnetic Systems

This paper presents conclusions about the corrosion mechanisms in magnetic data storage systems (hard disks), based on the inspection of 198 units that were in service in nine climatic regions characteristic of Mexico. The results make it possible to identify trends in the failure modes and the factors that affect them. In turn, this study analyzed the causes of mechanical failure and of deterioration by atmospheric corrosion. The results obtained from field sampling demonstrate that hard disk failure is fundamentally mechanical. Deterioration by environmental effects was found in read-write heads, integrated circuits, printed circuit boards and some of the electronic components of the device's controller card, but not on the magnetic storage surfaces. Corrosion of the disk surface can therefore be ruled out as the main kind of failure due to environmental deterioration. To avoid any problems in a magnetic data storage system it is necessary to ensure the sealing of the system.

Retrieval methods of effective cloud cover from the GOME instrument: an intercomparison

of ozone, especially in the troposphere, are dependent on a correct description of the partially cloudy scenes in the field of view (Burrows et al., 1999; Hoogen et al., 1999; van der A et al., 1998; Munro et al., 1998; Koelemeijer and Stammes, 1999a; Newchurch et al., 2001; Hsu et al., 1997; Thompson et al., 1993). This is due to the high albedo of clouds, which interferes with the detection of the absorption signal of the target species. The high optical thickness of a cloud often causes it to shield the "sight" of the air below, thus making it impossible to retrieve information from that part of the atmosphere. Also, the scattering nature of a cloud makes the calculation of the path length through which a photon has traveled before it reaches the detector difficult. For these reasons, clouds are often used as the reflecting lower boundary of the atmosphere, and everything below may be parameterized as a ghost column or is not treated at all in retrieval radiative transfer calculations. The calculation of photo-dissociation rates of many species in the atmosphere is influenced by the presence (or absence) of clouds. For example, scattering of light at a cloud layer can increase the diffuse backscattered radiation above this layer and at the same time shield the layers below from the strong direct component of sunlight, leading to significant differences in the actinic flux (van Weele and Duynkerke, 1993; Los et al., 1997).
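
The excerpt does not state the retrieval relation itself, but effective cloud cover in such algorithms is conventionally defined through an independent-pixel mixing of a clear and a cloudy reflectance; assuming that is the definition the compared methods share, the relation is:

```latex
% Measured reflectance R as a mix of clear and cloudy contributions;
% solving for the effective cloud fraction c_eff used in the retrievals.
\[
R = c_{\mathrm{eff}}\,R_{\mathrm{cloud}} + \bigl(1 - c_{\mathrm{eff}}\bigr)\,R_{\mathrm{clear}}
\quad\Longrightarrow\quad
c_{\mathrm{eff}} = \frac{R - R_{\mathrm{clear}}}{R_{\mathrm{cloud}} - R_{\mathrm{clear}}}
\]
```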

Using Neo4J geospatial data storage and integration

The API insertion page is responsible for querying the various online geo-database services and returning all relevant places of interest. This is done by utilizing the Factual API, the GeoNames service endpoint and the Nominatim OpenStreetMap service. The user is prompted to insert a place name to look up, which then returns all the related results, ordered by relevance and source. Each result consists of several attributes (which may vary depending on the source and the quality and completeness of data) presented in a table. As different sources can often return inconsistent results (for example, GeoNames does not keep addresses), it becomes important to create a schema-less structure to store our data. Several relationships of interest can also be found here, as a natural hierarchy of results is present: place names are located at a particular address, which belongs to a locality, which belongs to a country. Certain place names are also labeled according to several filters, such as recreations, universities, restaurants or transportation entities. Other data, such as business schedules and reviews, are also included in the database as custom attributes; however, they are not shown on the webpage to avoid cluttering. Once a desirable result is found, the user can press the 'Add' button, which brings up a confirmation form prior to inserting data (Figure 18).
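
A sketch of how the hierarchy described above (place, address, locality, country) and the schema-less custom attributes could be persisted with the official Neo4j Python driver is shown below. The node labels, relationship types and property names are assumptions for illustration, not the project's actual schema.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Illustrative sketch: store one lookup result as a small graph. MERGE makes
# the insert idempotent, and `SET p += $extra` attaches arbitrary custom
# attributes, matching the schema-less design the text argues for.

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

INSERT_PLACE = """
MERGE (c:Country  {name: $country})
MERGE (l:Locality {name: $locality})-[:IN]->(c)
MERGE (a:Address  {text: $address})-[:IN]->(l)
MERGE (p:Place    {name: $name, source: $source})-[:LOCATED_AT]->(a)
SET   p += $extra  // schema-less custom attributes (schedules, reviews, ...)
"""

with driver.session() as session:
    session.run(INSERT_PLACE,
                name="Central Cafe", source="Nominatim",
                address="12 Main St", locality="Lisbon", country="Portugal",
                extra={"category": "restaurant", "rating": 4.5})
driver.close()
```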

Dependable data storage with state machine replication

The first thing to notice when comparing the times to execute the different types of transaction is that configurations with clients in TC invoking queries directly to the database servers hosted in the public clouds have the worst results. This is true for both read- and write-dominated workloads. Schedule Modification, for instance, takes 1306 milliseconds, on average, to be executed in EC2, but takes 73% less time when executed with SteelDB in CoC (318 ms). This is explained by the optimization made in SteelDB to execute all the statements first in the master replica and replicate the data only at commit time. That transaction execution invokes only one wide area network (WAN) message, plus the messages from the ordering protocol of BFT-SMaRt, against eleven without SteelDB. The gain in performance also holds for the State Report transaction, which takes 881 ms in EC2 against 293 ms in CoC, 67% less. This is likewise explained by the optimization in which read operations go only to the master, as we assume that component to be trusted.
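
The commit-time replication optimization described above can be sketched as follows. All class and method names are illustrative, not the SteelDB API: statements execute only on the master, and the accumulated write set crosses the WAN once, at commit, instead of once per statement.

```python
# Conceptual sketch of master-first execution with batched commit-time
# replication (illustrative names, not the SteelDB implementation).

class Replica:
    def __init__(self):
        self.db = {}
        self.wan_messages = 0            # counts wide-area messages received
    def apply(self, write_set):
        self.wan_messages += 1           # one WAN message for the whole batch
        self.db.update(write_set)

class MasterFirstTransaction:
    def __init__(self, master, replicas):
        self.master, self.replicas = master, replicas
        self.write_set = {}
    def write(self, key, value):
        self.master.db[key] = value      # executes locally on the master only
        self.write_set[key] = value
    def commit(self):
        for r in self.replicas:          # replicate the batch at commit time
            r.apply(self.write_set)

master, backups = Replica(), [Replica() for _ in range(3)]
tx = MasterFirstTransaction(master, backups)
for i in range(11):                      # eleven statements, as in the text
    tx.write(f"k{i}", i)
tx.commit()
print(backups[0].wan_messages)           # 1 batched message instead of 11
```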

DESIGN AND IMPLEMENTATION OF A PRIVACY PRESERVED OFF-PREMISES CLOUD STORAGE

overcoming the limitations of existing work and acquiring their strengths. For instance, we developed an efficient third-party auditing process as suggested in prior work (Ateniese et al., 2007; Wang et al., 2010), but further enhanced its efficiency by implementing partially homomorphic cryptography, so that users are not only able to store and audit their data files but can also perform transactions on their encrypted data. Venkatesh et al. (2012) proposed an RSA-based cryptography technique which allows users only to encrypt their data; it lacks the capability to process the encrypted data. Similarly, as Ranchal et al. (2010) argued that involving a TPA may add risk to the confidentiality of the client's data, we overcame this issue by implementing a sound steganography process which protects the client's significant parameters from malicious activities of the TTPA, so that the user's parameters can be stored without security concerns. Prasadreddy et al. (2011) provided a solution for securing client data by storing the data and their associated keys with isolated CSPs; while this is a good suggestion, we believe that involving multiple CSPs may increase communication and security complexity. To improve on this approach, we implemented encoding of the cryptographic keys both in storage and during transfer. Keys and data can only be decrypted by the relevant privileged authorities, due to the implementation of RBAC and SCG. To overcome data violation issues, we further enhanced the existing work by implementing a data backup and recovery process, in which the user's data is efficiently recovered from secondary or backup cloud storage without any loss or damage.
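
The passage relies on partially homomorphic cryptography to compute on ciphertexts. As an illustration of that property only (not the paper's actual construction), here is a toy Paillier implementation showing that the product of two ciphertexts decrypts to the sum of the plaintexts:

```python
import math, random

# Toy Paillier cryptosystem (Python 3.9+). Parameters are deliberately tiny
# and NOT secure; real deployments use primes of ~1024 bits or more.

p, q = 293, 433                 # demo primes
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)            # modular inverse of lambda mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

c1, c2 = encrypt(20), encrypt(22)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a server can aggregate values it cannot read.
assert decrypt((c1 * c2) % n2) == 42
print("homomorphic sum:", decrypt((c1 * c2) % n2))
```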

Enabling and Sharing Storage Space Under a Federated Cloud Environment

Redundancy at the disk (volume) level is a basic requirement for providing high-availability systems, i.e., systems able to recover quickly from faults and bring their services back into operation; but for this to be achieved, the volumes cannot rest on the servers' internal disks, since human intervention would then be needed to move them to another server. Thus, from the 1990s onwards, storage networks, Storage Area Networks (SANs), became common in data centers (DCs), built on Fibre Channel (FC) infrastructures interconnecting servers and disk arrays, with the latter providing data volumes to the servers (which format them using either a traditional file system, such as ext3, XFS or NTFS, or a specialized file system, such as GFS or GPFS, that allows the volume to be shared simultaneously by several servers). Thus, if a server fails, another one can quickly "map" the failed server's volume(s), access their content and bring the service(s) back online. Already into the 21st century, the existence of two distinct infrastructures in DCs, one using FC technologies and protocols to "move" blocks of data, and the other using Ethernet and TCP/IP technology to move "other information", eventually led to a "merger" of the two kinds of usage into a single network based on Ethernet and TCP/IP, with a new protocol, iSCSI, created to replace FC.

A new meta-data driven data-sharing storage model for SaaS

In the experiments, we simulate a real multi-tenant scenario in a client/server model by sending query and update requests from many tenants concurrently, and then evaluate the solutions by analyzing the response time and TPS data captured during those experiments. The experiment simulated the four kinds of scenarios described in Table 1. Clients are designed to be able to simulate many tenants; in the experiment we set the number of tenants from 1 to 100, and every tenant had 50 users in parallel. Every simulated tenant would submit all four kinds of scenarios mentioned above to the server and record the execution time. We divide the experiments into two groups by the number of simulated tenants and collect average response time as the evaluation indicator. The comparisons are shown in Fig. 5: the horizontal axis shows the different request classes, and the vertical axis shows the response time in milliseconds. The experiment was run on a SQL Server database server with a 3.0 GHz Intel Xeon processor and 1 GB of memory.
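
A minimal sketch of the load-generation scheme described above is given below: each simulated tenant issues the four request scenarios with 50 concurrent users, and the average response time is collected as the indicator. The send_request function is a placeholder for the real client/server call; scenario names are assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Illustrative multi-tenant load generator, not the paper's test harness.

SCENARIOS = ["query_simple", "query_join", "update_single", "update_batch"]

def send_request(tenant_id, scenario):
    time.sleep(0.01)                     # stand-in for the real round trip

def run_tenant(tenant_id, users_per_tenant=50):
    timings = []
    for scenario in SCENARIOS:           # each tenant submits all scenarios
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=users_per_tenant) as pool:
            list(pool.map(lambda u: send_request(tenant_id, scenario),
                          range(users_per_tenant)))
        timings.append(time.monotonic() - start)
    return mean(timings)

print("avg response:", mean(run_tenant(t) for t in range(10)), "s")
```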

A Survey on Cloud Storage Systems and Encryption Schemes

CP-ABE was suggested as an alternative to KP-ABE; it is, in effect, the inverse of KP-ABE. In KP-ABE, attributes describe the ciphertexts, while access policies are built into the users' keys. The limitation of the KP-ABE system is that encryptors are not allowed to create the access policies; this in turn led to the development of CP-ABE [2]. In KP-ABE the key provider is responsible both for granting the keys and for creating the access policies, so the entire system rests on trust in that party. CP-ABE is used to realize complex access control over ciphertexts even in the case of untrusted servers. In this system the attributes denote the user's credentials, and the person who encrypts the data designs the access policy that determines who is able to decrypt it. This is similar to Role Based Access Control (RBAC). There is a chance of a collusion attack if the attributes describing the ciphertext are combined; CP-ABE eliminates collusion attacks more efficiently than KP-ABE, using a private-key randomization technique to generate keys. Overall, CP-ABE is found to be more efficient than KP-ABE.
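
The CP-ABE access decision (attributes in the key must satisfy the policy attached to the ciphertext) can be illustrated without any cryptography. The sketch below models a policy as an AND/OR tree and checks whether a given attribute set satisfies it; the representation is an assumption for illustration, not an ABE scheme.

```python
# Conceptual sketch of the CP-ABE access decision only: decryption succeeds
# iff the user's attributes satisfy the policy the encryptor attached.

def satisfies(policy, attributes):
    op = policy[0]
    if op == "ATTR":                       # leaf: a single required attribute
        return policy[1] in attributes
    children = [satisfies(p, attributes) for p in policy[1:]]
    return all(children) if op == "AND" else any(children)

# policy: (doctor AND cardiology) OR auditor
policy = ("OR",
          ("AND", ("ATTR", "doctor"), ("ATTR", "cardiology")),
          ("ATTR", "auditor"))

print(satisfies(policy, {"doctor", "cardiology"}))  # True  -> can decrypt
print(satisfies(policy, {"doctor", "radiology"}))   # False -> cannot decrypt
```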

Securing Data Transfer in Cloud Environment

In [3] the author notes that the US National Institute of Standards and Technology (NIST), an agency of the Commerce Department's Technology Administration, has created a cloud computing security group. This group sees its role as promoting the effective and secure use of the technology within government and industry by providing technical guidance and promoting standards. NIST has recently released its draft guide to adopting and using the Security Content Automation Protocol, which identifies a suite of specifications for organizing and expressing security-related information in standard ways, as well as related data, such as identifiers for software flaws and security configuration issues; its applications include maintaining enterprise system security. In addition to NIST's efforts, the industry itself can shape an enterprise approach to cloud security. If due diligence is applied and a policy of self-regulation is developed to ensure that security is effectively implemented among all clouds, then this policy can also help in facilitating law-making. By combining industry best practices with the oversight NIST and other entities are still developing, we can effectively address the future security needs of cloud computing.

Data storage system for wireless sensor networks

With CoAP/Observe, a client may store notifications and use a stored notification, as long as it is fresh, to serve other clients requesting it, without contacting the origin server. This means that the support for caches and proxies incorporated in Observe allows registrations and transmission of notifications, from different clients and servers, to be carefully planned so that energy saving and bandwidth utilization are optimized. This avoids energy depletion of nodes and increases network lifetime, with a positive overall impact on delay. In this thesis, a careful planning of registration steps for data/notification storage is proposed: a framework to plan registration steps through proxies, together with the aggregation and scheduling of notification deliveries at such proxies, so that maximum energy saving is achieved, bandwidth is used efficiently and the overall delay is reduced. This tool can be used offline by network managers or can be incorporated at a manager node responsible for the management of observation requests.
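
The proxy behaviour described above, serving a stored notification while it is fresh so the constrained origin node is not contacted, can be sketched as follows. Freshness here follows the CoAP Max-Age idea; the fetch_from_origin callable and all names are illustrative, not a CoAP library API.

```python
import time

# Minimal sketch of a caching Observe proxy: fresh notifications are served
# from the cache; only stale or missing entries reach the origin server.

class ObserveProxy:
    def __init__(self, fetch_from_origin, max_age=60):
        self.fetch = fetch_from_origin
        self.max_age = max_age            # seconds a notification stays fresh
        self.cache = {}                   # resource -> (payload, stored_at)

    def get(self, resource):
        entry = self.cache.get(resource)
        if entry and time.monotonic() - entry[1] < self.max_age:
            return entry[0]               # fresh: serve without waking the node
        payload = self.fetch(resource)    # stale or missing: contact origin
        self.cache[resource] = (payload, time.monotonic())
        return payload

proxy = ObserveProxy(lambda r: f"reading for {r}", max_age=30)
print(proxy.get("/sensors/temp"))         # origin contacted once
print(proxy.get("/sensors/temp"))         # served from cache while fresh
```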

Secure Deduplication for Cloud Storage Using Interactive Message-Locked Encryption with Convergent Encryption, To Reduce Storage Space

indicate the first record (15). Block-level deduplication operates at the sub-file level: as the name suggests, the file is broken into segments, blocks or chunks, which are examined for redundancy against already stored data. The common way to detect redundant data is to assign an identifier to each chunk of data using a hash algorithm, which generates an ID for that particular chunk. The ID is then compared against a central index. If the ID is already present, a pointer reference is created instead of storing the data again. If the ID is new and does not exist, the chunk is unique: it is stored and the ID is added to the index. The size of the chunk to be checked varies from vendor to vendor: some use fixed block sizes, others use variable sizes, and some may also vary the size of a nominally fixed block. Fixed block sizes typically range from 8 KB to 64 KB; the main trade-off is that the smaller the chunk, the more likely duplicate data is to be detected, and the less data ultimately needs to be stored on the server. The main problem with fixed-size chunks is that, when a file is modified, reusing the previously computed chunk boundaries risks failing to recognize the same redundant data segments: the blocks of the file are shifted or changed, and everything downstream of the change moves, offsetting the rest of the comparison (15). Variable block-level deduplication compares data blocks of varying sizes, which reduces the chance of missed matches. It uses algorithms to decide a variable block size; the data is divided based on the algorithm's determination, and the resulting blocks are stored in the subsystem.
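
The fixed-size variant described above reduces to a few lines: hash each chunk, look the hash up in a central index, store only unseen chunks, and keep a per-file "recipe" of chunk IDs as the pointers. A minimal sketch (illustrative names, in-memory index) follows.

```python
import hashlib

# Minimal fixed-size block-level deduplication: known chunks are replaced by
# a pointer (here, the SHA-256 hash itself); only new chunks are stored.

BLOCK_SIZE = 8 * 1024                      # fixed block size, e.g. 8 KB

index = {}                                 # chunk hash -> stored chunk bytes

def deduplicate(data: bytes):
    recipe = []                            # sequence of chunk IDs for the file
    for i in range(0, len(data), BLOCK_SIZE):
        chunk = data[i:i + BLOCK_SIZE]
        chunk_id = hashlib.sha256(chunk).hexdigest()
        if chunk_id not in index:          # new chunk: store it once
            index[chunk_id] = chunk
        recipe.append(chunk_id)            # duplicates keep only the pointer
    return recipe

recipe_1 = deduplicate(b"A" * 16384 + b"B" * 8192)
recipe_2 = deduplicate(b"A" * 16384)       # entirely duplicate chunks
print(len(index))                          # 2 unique chunks stored in total
```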

An Enhanced Secure and Authorized Deduplication Model in Cloud Storage System

To make data management scalable in cloud computing, deduplication [...] has been a well-known technique and has attracted more and more attention recently. Data deduplication is a specialized data compression…


Enhancing Accountability for Distributed Data Sharing in the Cloud

The main concept behind cloud computing is that computation is done at a remote location, basically in a virtualized environment implemented on large servers [2]. Cloud computing gives a new way of hosting and processing data by providing scalable and often virtualized resources. Nowadays many commercial cloud service providers offer services, including Amazon, Google, Microsoft, Yahoo and Salesforce. The main advantage behind the success of this technology is that anyone can use it without being an expert in the underlying technology infrastructure. While enjoying the facilities brought by this emerging technology, users have also started worrying about the fate of their data, as they do not know on which machine their data is stored or who is processing it [12], [14]. This worry has raised many security issues, and it is a known fact that service level agreements (SLAs) alone cannot give the desired security for users' data. The cloud is a layered architecture in which user data is processed by many service providers, making it practically impossible for the user to track their data [3].

Survey of Data Security Challenges in the Cloud

A pollution control system performs the following actions: gathering pollution data, analyzing the data to identify pollution levels, and initiating corrective measures. With the advent of new technologies, we can achieve effective pollution monitoring and control using a combination of different technologies. The Internet of Things (IoT) can be used for data collection and monitoring via various sensors; similarly, data analytics and machine learning can be used for better prediction. In this paper we will see how we can integrate these technologies to achieve more effective pollution monitoring and control measures.

Braz. J. Phys. vol. 30, no. 2

detector data from the front-end electronics to the event storage…


AROCRYPT: A CONFIDENTIALITY TECHNIQUE FOR SECURING ENTERPRISE’s DATA IN CLOUD

Subhasri P. et al. [16] proposed a multi-level encryption algorithm to secure data in the cloud. The algorithm uses the rail fence and Caesar cipher techniques. Initially, the plaintext is encrypted using the rail fence technique. A position value i is assigned to each letter of the encrypted text, and the ASCII value of each character is generated. A key is then assigned and applied to the text using the formula E = (p + k + i) % 256, where p denotes the plaintext character, k the key and i the position; the algorithm produces the ASCII character of the equivalent decimal value. The key used for encryption is not generated by the scheme, and maintaining the position of each character in the text requires additional storage. The authors do not mention where the character position details are maintained.
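
A sketch of the scheme as summarized here, a rail-fence transposition followed by the positional substitution E = (p + k + i) % 256, is given below. The rail count and key value are assumptions for illustration; the cited paper may differ in details.

```python
# Illustrative reconstruction of the two-stage scheme from the text:
# stage 1 transposes the characters (rail fence), stage 2 substitutes each
# character using its code p, a key k, and its position i.

def rail_fence(text, rails=2):
    rows = [[] for _ in range(rails)]
    rail, step = 0, 1
    for ch in text:                        # zig-zag the characters over rails
        rows[rail].append(ch)
        if rail == 0:
            step = 1
        elif rail == rails - 1:
            step = -1
        rail += step
    return "".join("".join(r) for r in rows)

def substitute(text, key):
    # E = (p + k + i) % 256, as stated in the excerpt
    return "".join(chr((ord(ch) + key + i) % 256) for i, ch in enumerate(text))

ciphertext = substitute(rail_fence("store me in the cloud"), key=7)
print(ciphertext.encode("latin-1"))        # raw bytes of the ciphertext
```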

Architecture of a Big Data analysis system in the cloud computing model

Making data available through the cloud requires defining a set of restrictions and access-permission policies for users, so that the data is accessed by those who need it without losing its integrity. All of this can be implemented through the user management mechanisms built into online infrastructure platforms, as is the case with Microsoft Azure, which allows the use of security protocols such as Secure Sockets Layer (SSL) and Hyper Text Transfer Protocol Secure (HTTPS). The correct implementation of access management policies leads users to trust cloud solutions, especially solutions for analyzing critical data. One important feature, which the adopted solution does not fully support, is the possibility for the databases to provide security at the level of data encryption, so that the data can be accessed only by those with authorization (the decryption key), thus preventing even the Database Administrator (DBA) or the solution's developers from accessing the data.
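
The encryption-at-rest property described here, where only key holders (and not the DBA or developers) can read the data, can be approximated on the client side by encrypting values before they reach the database. A minimal sketch using the well-known cryptography package follows; the record contents and key-handling policy are assumptions for illustration.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Client-side encryption sketch: data is encrypted before storage, so the
# cloud database (and its DBA) only ever sees ciphertext. Key management,
# i.e. who holds and distributes the key, is the part the adopted platform
# did not fully cover in the text.

key = Fernet.generate_key()        # held by authorized users, not the DBA
fernet = Fernet(key)

record = fernet.encrypt(b"patient=123;reading=97.2")  # stored in the cloud
print(record)                      # opaque ciphertext as seen by the DBA
print(fernet.decrypt(record))      # only key holders recover the plaintext
```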
