Scada in a cloud-based architecture

A Smart Grid transforms the way power is distributed and used, adding intelligence throughout the grid to dramatically reduce outages and faults, handle current and future demand, increase efficiency and manage costs. To do this, a Smart Grid uses sensors, digital controls and analytic tools to automate and monitor the flow and delivery of energy to consumers. In addition, a Smart Grid enables a two-way flow of electricity and information between the end-points and the appliances. Through Smart Grids it is also possible to incorporate new sustainable energy sources such as wind and solar generation, and to interact locally with distributed power sources or plug-in electric vehicles. To support this new paradigm, Smart Grids will drive a widespread deployment of intelligent data sensors on the grid, improving both energy efficiency and demand response. All this equipment will require substantial computing power for control and management. In this scenario, cloud computing will provide substantial cost benefits. Using real-time information delivered through cloud computing gives consumers the possibility of understanding how energy is consumed and what they can do to improve it; this real-time feedback on energy consumption is expected to improve consumer behaviour. On the utility side, providing information about benefits can foster incentives for consumption adjustments and, as a result, reduce demand during peak usage periods. Concerning energy efficiency, using clouds allows us to reduce the computational power of data centres through efficient application management, increasing the efficiency of servers, power supplies and cooling systems. However, the risks incurred by migrating such critical activities to an unprotected cloud environment would be unacceptable. In our work we assess the risk to which a Smart Grid is exposed and the impact, at the financial, business-continuity and reputation levels, of an organization losing any of the major security attributes (confidentiality, integrity and availability). Based on those results and on the need for scalability underlying a Smart Grid's evolution, an architecture based on cloud computing is then proposed.

Cloud Computing: A study of cloud architecture and its patterns

used, will understand and interoperate with multiple identity schemes. OpenID is an open and decentralized standard for user authentication and access control, allowing users to log on to multiple services with the same digital ID. Any service provider can authenticate the user into the system. OAuth is again an open protocol that enables a user to grant a consumer site permission to access a provider site without any sharing of credentials [10]. SPML is used for XML-based identity management lifecycle (IDM LC) operations. This is extremely useful for an e-commerce web world where there are multiple service providers built on a common user space. The central identity system understands all the technologies used for authentication, such as SAML, OpenID, OAuth, etc. Let us assume that the central identity system is a collection of modules, each handling one technology and talking to a common user space and a policy database. The information is converted to different formats, depending on the technology used (OpenID, SAML, WS-Security, etc.), and conveyed to the participating service providers [Fig. 5]. A brief comparison of the three patterns is shown in Table 1.
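A minimal sketch of the dispatch idea described above, with hypothetical handler names and token fields: one module per token technology, each normalising the incoming assertion into the same user record backed by the common user space. Real deployments would, of course, validate signatures, audiences and nonces before trusting any assertion.

```python
# Sketch only (hypothetical handlers and token layouts): a central identity
# system dispatching on the authentication technology used.

def handle_openid(token):
    return {"user": token["claimed_id"], "source": "openid"}

def handle_saml(token):
    return {"user": token["subject"], "source": "saml"}

def handle_oauth(token):
    return {"user": token["resource_owner"], "source": "oauth"}

HANDLERS = {"openid": handle_openid, "saml": handle_saml, "oauth": handle_oauth}

def authenticate(technology, token, user_space):
    identity = HANDLERS[technology](token)      # normalise the assertion
    if identity["user"] not in user_space:      # check the common user space
        raise PermissionError("unknown user")
    return identity

users = {"alice@example.org"}
print(authenticate("openid", {"claimed_id": "alice@example.org"}, users))
```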

A collaborative architecture against DDOS attacks for cloud computing systems.

Some time later, when the DDoS is already under control due to the collaboration, the attack pattern changes and starts including packets that are only filtered by rules installed at the Security Service Function S2F4. This raises the resource usage of the corresponding filtering task τ from 0σ to 30σ (and, thus, S2F4's resource usage becomes 110σ). Unfortunately, S2F4 cannot offload the whole task to any other firewall, as that would just move the problem elsewhere. Nevertheless, it can collaborate with some of them to share the corresponding load, combining a "best effort" approach with optimization opportunities. For example, if τ took the same amount of resources wherever it was executed, S2F4 could ask S2F3 and S2F2 to invest 10σ each in this task, so the load on S2F4, S2F3 and S2F2 would be 90σ. It is possible, however, that running τ with only 10σ at S2F2 will not be as effective as at S2F4, since the volume of traffic at S2F2 is higher; after all, at S2F4, τ runs only on packets that have not been filtered by S2F3's tasks, whereas S2F2 sees the full load of packets to which τ must be applied, even if S2F2's regular filtering tasks are all applied prior to τ. Better still, S2F4 could report the need for collaboration to a centralized controller that can determine, through a resource-optimization approach, which nodes are best able to collaborate based on their current resource usage. It then becomes possible to decide on a reasonable workload division that ensures no single firewall gets overloaded.
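A minimal sketch of the controller-side decision just described, using the abstract σ units from the example and assuming (consistently with the 90σ figure above) that each firewall starts at 80σ; the firewall names and the greedy splitting rule are illustrative, not the paper's algorithm.

```python
# Sketch only: a centralized controller splits the extra load of a filtering
# task across the currently least-loaded firewalls, so that no single
# firewall ends up overloaded. Loads are in the abstract sigma units above.

def split_task_load(loads, extra, step=1):
    """loads: dict firewall -> current usage; extra: total load of tau."""
    shares = {fw: 0 for fw in loads}
    remaining = extra
    while remaining > 0:
        # Give the next slice to the firewall that is least loaded right now.
        target = min(loads, key=lambda fw: loads[fw] + shares[fw])
        slice_ = min(step, remaining)
        shares[target] += slice_
        remaining -= slice_
    return shares

if __name__ == "__main__":
    # Assumed starting loads: 80 sigma each; tau adds 30 sigma in total.
    current = {"S2F4": 80, "S2F3": 80, "S2F2": 80}
    print(split_task_load(current, extra=30, step=10))
    # -> 10 sigma each, leaving all three firewalls at 90 sigma, as above
```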

ART 2 - Cloud Computing for Higher Education Institutes Architecture

Cloud computing is an emerging technology paradigm that promises to provide a solution to the current financial crisis faced by HE institutes. The migration from traditional systems towards CC would enable HE institutions to cope with rapidly changing software and hardware needs at lower cost. It would help to standardize and update educational content, and enhance collaboration between HE institutes. HE institutes expect to cut 20% of their IT budget by moving most of their applications to the cloud. This represents a major shift in approach and provides a major opportunity to increase organizational efficiency, improve agility, and stimulate innovation. However, to support a smooth transition and optimal outcomes, HE institutes must first develop a comprehensive cloud-computing strategy that addresses the challenges unique to each institution. HE institutes are at the beginning of a transition period during which they face many challenges with respect to cloud adoption. In this paper, we have presented a five-phase strategy for implementing cloud computing in higher education. We have also proposed a CC architecture for HE institutes covering the various deployment models, service models and user domains. The correlations and dependencies between these models are elaborated. Finally, we provide a comprehensive list of recommendations for a successful and efficient migration from a traditional to a cloud-based system for an HE institute.

Bio-Cryptography Based Secured Data Replication Management in Cloud Storage

Cloud computing is a new way of achieving economical and efficient storage. A single-data-mart storage system is less secure because all data remain under a single data mart. This can lead to data loss due to different causes such as hacking, server failure, etc. If an attacker chooses to attack a specific client, he can target that client's fixed cloud provider and try to gain access to the client's information. This makes the attackers' job easy: both inside and outside attackers can benefit from data mining to a great extent (inside attackers refers to malicious employees at a cloud provider). A single-data-mart storage architecture is therefore the biggest security threat concerning data mining on the cloud, so this paper presents a secure replication approach that encrypts data using bio-cryptography and replicates it in a distributed data-mart storage system. The approach involves the encryption, replication and storage of data.
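A minimal sketch of the encrypt-then-replicate flow described above. Symmetric Fernet encryption from the `cryptography` package stands in for the paper's bio-cryptographic key (deriving the key from biometric data is out of scope here), and in-memory dictionaries stand in for the distributed data marts.

```python
# Sketch only: Fernet stands in for the bio-crypt key; dicts stand in for
# distributed data-mart storage nodes.
from cryptography.fernet import Fernet

class ReplicatedStore:
    def __init__(self, n_marts=3):
        self.key = Fernet.generate_key()   # would be derived from biometrics
        self.cipher = Fernet(self.key)
        self.marts = [dict() for _ in range(n_marts)]  # one dict per data mart

    def put(self, name, data: bytes):
        token = self.cipher.encrypt(data)  # encrypt once ...
        for mart in self.marts:            # ... replicate to every data mart
            mart[name] = token

    def get(self, name):
        # Read from the first mart that still holds the object.
        for mart in self.marts:
            if name in mart:
                return self.cipher.decrypt(mart[name])
        raise KeyError(name)

store = ReplicatedStore()
store.put("report.txt", b"confidential payload")
assert store.get("report.txt") == b"confidential payload"
```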

A Framework for Predicate Based Access Control Policies in Infrastructure as a Service Cloud

Infrastructure as a Service (IaaS) is the service with which enterprise IT integrates for on-demand services. The different cloud deployment models make it flexible enough to meet the requirements of users. As customers' policies are not the same, a Cloud Service Provider (CSP) needs a flexible architecture to accommodate the varied requirements of customers with respect to access control. The existing access control models such as Role Based Access Control (RBAC) and Attribute Based Access Control (ABAC) have limitations, and the combination of RBAC and ABAC also cannot offer fine-grained access control. We also studied the RBAC model offered by OpenStack and identified its limitations in catering to the diversified needs of customers. A one-size-fits-all policy cannot provide flexible access control for the aforementioned reasons; therefore a more flexible access control model is required. In this paper we propose a framework for Predicate Based Access Control (PBAC) in general and then implement it in OpenStack. Our empirical results reveal that the proposed framework can improve granularity with a fine-grained access control mechanism. Though our framework is at a primitive stage, it is a significant step forward in access control policies for IaaS clouds.
Keywords: authorization, predicate based access control, Infrastructure as a Service, OpenStack, fine-grained access control
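A minimal sketch of what predicate-based access control looks like, with a hypothetical policy format rather than the paper's OpenStack integration: each rule is an arbitrary predicate over the request's attributes, which is what gives finer granularity than fixed roles or simple attribute equality checks.

```python
# Sketch of predicate-based access control: policies are arbitrary predicates
# over the request context (hypothetical action names and attributes).

POLICIES = {
    "volume:delete": [
        lambda ctx: ctx["role"] == "admin",
        lambda ctx: ctx["role"] == "member"
                    and ctx["project"] == ctx["resource_project"]
                    and ctx["volume_gb"] <= 100,
    ],
}

def is_authorized(action, ctx):
    # Grant access if any predicate registered for the action holds.
    return any(pred(ctx) for pred in POLICIES.get(action, []))

request = {"role": "member", "project": "p1",
           "resource_project": "p1", "volume_gb": 40}
print(is_authorized("volume:delete", request))   # True
```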

Scheduling in cloud and fog architecture: identification of limitations and suggestion of improvement perspectives

Task scheduling refers to the allocation of the resources needed to complete a task's execution, with the intent that requests are served while respecting the defined constraints (Musumba and Nyongesa, 2013). Task scheduling is an essential process for improving the reliability and flexibility of these systems. It requires advanced algorithms able to choose the most appropriate available resource to perform each task. The systems deal with priority requests, priority tasks and/or tasks with strict QoS requirements; to guarantee their proper functioning and the execution of tasks within the set time limits, this analysis must be handled rigorously. An efficient task-scheduling algorithm must ensure efficient simultaneous processing of tasks independently of their workflow. In a fog architecture there is a demand for more sophisticated task-scheduling algorithms because data needs to flow between client devices, fog nodes and cloud servers (Swaroop, 2019). A scheduling algorithm should take two important decisions, which can be based on default values or on dynamic data obtained during task execution (Mahmud et al., 2016): determine which tasks can be performed in parallel, and define where to execute those parallel tasks. In fog there are two main stages involved: the resource provisioning phase, which detects, classifies and provides the resources necessary for the execution of the task, and the task mapping phase, in which a suitable server/processor is identified and the task is mapped to it (Mahmud et al., 2016).
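A minimal sketch of the two phases named above, with a hypothetical task and node model: a provisioning step that keeps only nodes with enough free capacity, followed by a mapping step that places each task on the most suitable of those nodes (falling back to the cloud tier when no fog node fits).

```python
# Sketch of the two fog-scheduling phases (hypothetical task/node model).

def provision(nodes, task):
    # Resource provisioning phase: keep only nodes with enough free capacity.
    return [n for n in nodes if n["free_cpu"] >= task["cpu"]]

def map_task(nodes, task):
    # Task mapping phase: pick the candidate with the most free capacity.
    candidates = provision(nodes, task)
    if not candidates:
        return None                      # escalate to the cloud tier
    best = max(candidates, key=lambda n: n["free_cpu"])
    best["free_cpu"] -= task["cpu"]
    return best["name"]

fog_nodes = [{"name": "fog-1", "free_cpu": 2}, {"name": "fog-2", "free_cpu": 4}]
for t in [{"cpu": 1}, {"cpu": 3}, {"cpu": 4}]:
    print(map_task(fog_nodes, t))        # fog-2, fog-2, then None (cloud)
```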

Framework for E-business design based on enterprise architecture

Recently, Mozambique has adopted ETA, and the country has seen a growing internet usage index over the last five years. Moreover, all mobile network operators offer an aggregated mobile banking service for subscribers, and although technology literacy in Mozambique is low, roughly 80% of urban mobile network subscribers have access to smartphones. Physical store owners are using non-traditional channels to keep in contact with regular customers via social networking platforms (e.g., WhatsApp, Facebook, etc.), and shoppers have started using the same means to share information related to promotional and limited-edition products. The Merkatos office has one fixed-line IP phone and one wireless router connected to the internet through the local landline network operator. All personal computers in the office connect to the internet through the installed router, and the designers use cloud storage services to store and share multimedia content. Additionally, the daily work of Merkatos' field representatives includes assembling a stand equipped with powerful speakers and memory card readers or a laptop to play advertisements in public squares.

A User-Centered and Autonomic Multi-Cloud Architecture for High Performance Computing Applications

Using a multi-agent system (MAS) to negotiate virtual machine (VM) migrations between the clouds, simulation results show that our approach could reduce power consumption by up to 46% while trying to meet performance requirements. Using the federation, we developed and evaluated an approach to execute a huge bioinformatics application at zero cost; moreover, we could decrease the execution time by 22.55% compared with the best single-cloud execution. In addition, this thesis presents a cloud architecture called Excalibur to auto-scale cloud-unaware applications. Executing a genomics workflow, Excalibur could seamlessly scale the applications up to 11 virtual machines, reducing the execution time by 63% and the cost by 84% when compared to a user's configuration. Finally, this thesis presents a software product line engineering (SPLE) method to handle the commonality and variability of infrastructure-as-a-service (IaaS) clouds, and an autonomic multi-cloud architecture that uses this method to configure itself and to deal with failures autonomously. The SPLE method uses an extended feature model (EFM) with attributes to describe the resources and to select them based on the users' objectives. Experiments performed with two different cloud providers show that, using the proposed method, users could execute their applications in a federated cloud environment without needing to know the variability and constraints of the clouds.

A fault- and intrusion-tolerant architecture for EDP Distribuição SCADA system

Almost ten years ago, EDP Distribuição had just a conventional SCADA platform with an application interface. At that time, the redundant SCADA servers were solely responsible for the monitoring and control of the power grid, which made them the only critical servers whose operation could not fail. More recently, the SCADA servers were connected with DMS servers to add distribution management capabilities to the more classical SCADA features. A new application was created to integrate both systems into a single and more powerful tool, GENESys. The new platform depends on both systems to operate correctly, since most of its processes require constant communication with both the SCADA and DMS systems. Nowadays, both the SCADA and DMS servers have some fault tolerance capabilities. These capabilities are related to error masking based on dual redundancy (primary-backup replication [19]). The objective is to maintain correct operation in the presence of a benign failure [39] in one of the servers, since in that case the other will maintain the service.
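A minimal sketch of the primary-backup masking described above, with hypothetical interfaces: the backup monitors the primary's heartbeat and takes over when the primary suffers a benign (crash) failure. Real SCADA servers would also replicate state between the pair.

```python
# Sketch of primary-backup replication with heartbeat-based failover
# (hypothetical interfaces; state replication omitted).
import time

class Server:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

def active_server(primary, backup, timeout=0.5):
    # Error masking: if the primary's heartbeat is stale, the backup takes
    # over, so a benign crash of one server does not interrupt the service.
    if time.monotonic() - primary.last_heartbeat <= timeout:
        return primary
    return backup

primary, backup = Server("scada-primary"), Server("scada-backup")
print(active_server(primary, backup).name)   # scada-primary (heartbeat fresh)
time.sleep(0.6)                              # primary crashes / stops beating
print(active_server(primary, backup).name)   # scada-backup takes over
```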

Priority Based Prediction Mechanism for Ranking Providers in Federated Cloud Architecture

consists of three phases, namely (i) discovery of service providers, (ii) ranking the shortlisted service providers and (iii) assigning the service to the best service provider. The customized broker-based federated architecture is shown in Figure 1. The Broker Manager (BM) collects information about the various levels of service offered by cloud service providers through a broker learning algorithm. Brokers manage the cloud providers' resources, collect their information and update it in the Broker Status Registry (BSR). The BM communicates with the brokers, discovers the appropriate providers for the requests and shortlists the providers. The Broker-based Learning Algorithm (BLA) helps to study the workload of the providers, understand the resource requirements of the service requests, analyse the suitability of the providers automatically and shortlist them. The clouds are managed by Cloud Brokers (CB) capable of handling service requests and managing virtual machines within the federated cloud systems. The components of the Broker Manager are the Differentiated Module (DM), Discovery of the Providers (DP) and Ranking of the Providers. It must support the management of the collaboration, which includes all involved service providers, partners, and end users or consumers. Cloud clients are the users of the cloud services, which offer different services towards business goals driven by resource sharing. The Application Program Interface (API) acts as an interface point for all kinds of users to interact with the services or offerings. This architecture is based on the differential module and was developed in the following steps.
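A minimal sketch of the rank-and-assign step performed by the Broker Manager, with hypothetical scoring weights and registry fields: score each shortlisted provider from the load and capability data held in the Broker Status Registry and assign the request to the best-ranked one. In the paper, the BLA learns this information from observed workloads rather than using fixed weights.

```python
# Sketch only: hypothetical Broker Status Registry layout and scoring weights.

registry = [
    {"provider": "cloud-A", "free_vms": 12, "load": 0.35, "price": 0.08},
    {"provider": "cloud-B", "free_vms": 4,  "load": 0.80, "price": 0.05},
    {"provider": "cloud-C", "free_vms": 9,  "load": 0.50, "price": 0.06},
]

def score(entry, need_vms):
    if entry["free_vms"] < need_vms:
        return float("-inf")              # cannot host the request at all
    # Prefer lightly loaded, cheap providers with spare capacity.
    return entry["free_vms"] - 10 * entry["load"] - 50 * entry["price"]

def assign(request_vms):
    ranked = sorted(registry, key=lambda e: score(e, request_vms), reverse=True)
    return ranked[0]["provider"]

print(assign(request_vms=6))   # cloud-A: plenty of spare capacity, low load
```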

Secure Cloud Architecture

implementations, which can twist strong encryption into weak encryption or sometimes no encryption at all. For example, in cloud virtualization, providers use virtualization software to partition servers into images that are provided to the users as on-demand services [24]. Although the use of those VMs in cloud providers' data centres provides a more flexible and efficient setup than traditional servers, they don't have enough access to entropy sources to generate the random numbers needed to properly encrypt data. This is one of the fundamental problems of cryptography: how do computers produce truly random numbers that can't be guessed or replicated? In PCs, the OS typically monitors users' mouse movements and keystrokes to gather random bits of data that are collected in a so-called entropy pool (a set of unpredictable numbers that encryption software automatically pulls from to generate random encryption passkeys). In servers, which don't have access to a keyboard or mouse, random numbers are also pulled from the unpredictable movements of the computer's hard drive. VMs, which act as physical machines but are simulated with software, have fewer sources of entropy: Linux-based VMs, for example, gather random numbers only from the exact millisecond timing of their internal clocks, and that is not enough to generate strong encryption keys [25].
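On Linux the kernel exposes the current size of this entropy pool, so the shortage described above can be observed directly; a minimal sketch (Linux-only path, threshold chosen only for illustration):

```python
# Sketch: inspect the Linux kernel's entropy pool. Low values on a freshly
# booted VM illustrate the shortage of entropy described above.

def available_entropy(path="/proc/sys/kernel/random/entropy_avail"):
    with open(path) as f:
        return int(f.read().strip())     # bits of entropy currently pooled

if __name__ == "__main__":
    bits = available_entropy()
    print(f"entropy pool: {bits} bits")
    if bits < 256:                       # illustrative threshold
        print("warning: little entropy available for strong key generation")
```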

SLA for E-Learning System Based on Cloud Computing

The SLA parameters are specified by metrics; these metrics define how the parameters of the cloud service can be measured. Usually, these metrics vary from one application to another, so in this paper the SLA parameters are specified only for E-Learning applications [6]. Most users are confused when defining the important parameters. For E-Learning applications, there are four types of services that providers can offer to users: IaaS, SaaS, PaaS, and Storage as a Service [6]. For each part of the SLA, the most important parameters that users can use to create a reliable model of negotiation with the service provider are defined [6].

Representing organizational structures in enterprise architecture: an ontology-based approach

The Department of Defense Architecture Framework (DoDAF) (US DEPARTMENT OF DEFENSE, 2010) is an approach for the development of Enterprise Architecture created and maintained by the US Department of Defense. DoDAF (current version, 2.02) is a specific-purpose, data-focused framework. It does not follow the traditional architecture arrangement (business, data, application and infrastructure), but instead specifies seven viewpoints: capability, data and information, operational, project, service, standards and systems. Each viewpoint is associated with several models which describe the specific content that permeates it. Despite the fact that DoDAF does not adopt the traditional architectural stratification, its various views permeate aspects of business, application and infrastructure. From this perspective, the Operational View has great relevance to us, since it describes business aspects, including common active-structure elements (the OV-4 model, Organizational Relationships). For DoDAF's architectural modeling process, the use of the UPDM modeling language is recommended (US DEPARTMENT OF DEFENSE, 2010)(OMG, 2014).

Software architecture based on XML messages in a project for secondary loan trading

The life of a syndicated loan will be dictated by various corporate events, and the loan agent will communicate them, as well as any decision or amendment to the investment, to the investors participating in the loan. The initial drawdowns will be allocated into contracts that, as per the credit agreement, will accrue interest at a rate that is periodically fixed. Considering this, we can identify two main types of loans: fixed-rate loans, which accrue at a fixed rate usually constructed from a spread established in the credit agreement plus a base rate, and floating-rate loans, which accrue interest at a rate that includes the spread but whose base rate is linked to a currency index that changes at each periodical prolongation, called a rollover. Throughout the life of a syndicated loan, each drawing in the loan will undergo periodical rollovers, with the most common being monthly and quarterly contracts. The tranches of the loan will also be gradually repaid back to the investors, either in mandatory repayments following the schedule pre-established in the credit agreement, called the amortization schedule, or in voluntary prepayments of larger amounts of the debt that take place when the issuer has excess cash available. Across the life cycle of the syndicated loan, the debt can be reorganized and restructured, which translates into splits and combinations of various tranches and drawings. In some cases, if the credit agreement allows it, the initially issued loan can be increased at the request of the issuer and the borrowers; however, since this requires additional funds from the existing lenders, it calls for their consent and involves the payment of compensation fees. If everything goes according to plan, at some point the investors are paid back their funds and the loan terminates. This can happen either at the maturity date defined in the credit agreement or at any point before.
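A minimal sketch of the two accrual types described above, with hypothetical field names and an assumed actual/360 day count: a fixed-rate contract uses the spread plus a base rate frozen in the credit agreement, while a floating-rate contract re-reads the currency index at every rollover.

```python
# Sketch only: hypothetical field names, actual/360 day count assumed.

def accrued_interest(principal, spread, base_rate, days):
    rate = spread + base_rate                 # annualised rate for the period
    return principal * rate * days / 360.0

# Fixed-rate drawing: the base rate is fixed in the credit agreement.
print(accrued_interest(1_000_000, spread=0.025, base_rate=0.01, days=90))

# Floating-rate drawing: the base rate is re-fixed at each rollover from the
# currency index, so each rollover period accrues at a different rate.
rollover_fixings = [0.012, 0.015, 0.011]      # index observed at each rollover
total = sum(accrued_interest(1_000_000, 0.025, fix, 90)
            for fix in rollover_fixings)
print(total)
```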

Elipse SCADA Handbook (Apostila de Elipse SCADA)

the Set Point will be disabled. This is a problem, and it happens because the SCADA keeps the state of the objects from the last execution of the application. To solve this problem, this Set Point object will be enabled. Remember that in a project the idea is to start the application running and never stop it again, so the command cannot be placed in StartRunning.


Development of a SCADA application

In the first semester I had the opportunity to develop a small SCADA system with the Lookout software, for the control of a virtual house in the Industrial Computing course of the MIEM. The work sparked my interest because this technology is very current. Thus, when the company C-ITA proposed the development of a system of this kind for monitoring the production of the solute used to impregnate tyre cords, I decided to accept the challenge. A visit to the company was made in order to get to know the facilities and the production line and to understand the objectives of the work more concretely.

Automatic coverage based neighbour estimation system: a cloud-based implementation

While the elasticity of AWS was to be commended, it was not without its limitations. Lambda could only hold up to 512 MB of data in temporary storage (required for file processing), only allocated between 128 and 3008 MB of memory, and a function was limited to a maximum processing time of 15 minutes, though a custom timeout (below that threshold) could be set. S3 does not allow an account to own more than 100 buckets, but that can be extended to 1000 buckets by submitting an extension form. EC2's limits depend on the region chosen for the requested computing operation. To deliver the deeper levels of functionality such as input/output, Google's GSON library [18] handled the conversion of Java objects into JSON files and vice versa (a process called serialisation/deserialisation), and OpenCSV [19] processed files in the comma/tab-separated values format (CSV/TSV). For the algorithmic process, a custom GeoHashing class translated geographical square coordinates into coded text strings and vice versa; the strings could hold up to 10 levels of precision, each additional level representing a smaller and more precise square area on the globe. A custom Quad Tree API created for video-game development [20] was modified to calculate
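The GeoHashing class mentioned above is custom, but the behaviour it describes (coordinates in, progressively more precise short strings out) matches standard geohash encoding; a minimal sketch of that standard algorithm follows, not necessarily the authors' implementation.

```python
# Sketch of standard geohash encoding: interleave longitude/latitude
# bisection bits, then pack every 5 bits into one base-32 character.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=10):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits = []
    use_lon = True                      # geohash starts with a longitude bit
    while len(bits) < precision * 5:
        rng, value = (lon_range, lon) if use_lon else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value >= mid:
            bits.append(1)
            rng[0] = mid                # keep the upper half of the interval
        else:
            bits.append(0)
            rng[1] = mid                # keep the lower half of the interval
        use_lon = not use_lon
    chars = []
    for i in range(0, len(bits), 5):    # 5 bits per base-32 character
        idx = 0
        for b in bits[i:i + 5]:
            idx = (idx << 1) | b
        chars.append(BASE32[idx])
    return "".join(chars)

print(geohash_encode(42.605, -5.603, precision=5))   # ezs42
```

Each extra character narrows the cell, which is why longer strings denote smaller, more precise squares.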

QoS Based Resource Management for Cloud Environment

The proposed CG algorithm starts with the cheapest assignment as an initial least-cost schedule, which assigns all tasks to the resource with minimum cost consumption. Then the CG algorithm repeats the rescheduling process to reduce the total execution time until no rescheduling is feasible with the remaining budget. Chase et al. [WC16] proposed the Multi-Cloud Workflow Mapping (MCWM) scheduling algorithm to minimize the total execution time under budget constraints in IaaS multi-cloud environments. Wu et al. [WLY+15] proposed the Critical-Greedy scheduling algorithm to minimize the workflow makespan under a user-specified financial constraint for a single-datacenter cloud. In the first step, the proposed Critical-Greedy algorithm generates an initial schedule in which the cloud meets a given budget for the workflow application; then, by iterative searching, it tries to reschedule critical tasks in order to reduce the total execution time until no more rescheduling is possible. Zeng et al. [ZVL15] introduced a Security-Aware and Budget-Aware workflow scheduling strategy (SABA) to minimize the total workflow execution time while meeting the data security requirements and budget constraints in cloud environments. Taking data security requirements into account, they defined two types of datasets, moveable and immoveable, to impose restrictions on data movement and duplication. For the scheduling phases, they introduced an objective function referred to as the Comparative Factor (CF) to balance execution time and consumption cost; the resource with the best CF is selected for task assignment.
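A minimal sketch of the budget-constrained rescheduling pattern shared by CG and Critical-Greedy, under a hypothetical task/resource model (`cost_on` and `time_on` are assumed methods): start from the cheapest schedule, then repeatedly move the task whose upgrade buys the largest runtime reduction per unit of extra cost while budget remains. It greedily reduces per-task runtimes; the real Critical-Greedy algorithm works on the workflow's critical path instead.

```python
# Sketch only: hypothetical task objects exposing cost_on(resource) and
# time_on(resource); budget and costs in the same monetary unit.

def initial_cheapest_schedule(tasks, resources):
    # Assign every task to its least-cost resource (the initial schedule).
    return {t: min(resources, key=lambda r: t.cost_on(r)) for t in tasks}

def reschedule_under_budget(tasks, resources, budget):
    schedule = initial_cheapest_schedule(tasks, resources)
    spent = sum(t.cost_on(schedule[t]) for t in tasks)
    improved = True
    while improved:
        improved = False
        best_move, best_gain = None, 0.0
        for t in tasks:
            cur = schedule[t]
            for r in resources:
                extra = t.cost_on(r) - t.cost_on(cur)
                saving = t.time_on(cur) - t.time_on(r)
                # Only consider moves that save time and stay within budget.
                if saving > 0 and spent + extra <= budget:
                    gain = saving / max(extra, 1e-9)   # time saved per cost
                    if gain > best_gain:
                        best_gain, best_move = gain, (t, r, extra)
        if best_move:
            t, r, extra = best_move
            schedule[t], spent, improved = r, spent + extra, True
    return schedule
```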

Euronet Lab: a Cloud Based Laboratory Environment

Another problem is the parallel job processing allowed by grid computing systems in laboratory applications. This is not applicable here because each connection can belong to a different instance, and very probably to a different user, so parallel processing could originate erroneous data transmission, since the data could belong to another user and process. This means that one user could receive (by mistake) the data from the v-lab operated by another user. On the other hand, this system presents a very high level of scalability because everybody can access the resources and, especially, because every teacher or even user can participate in the system in a constructive way by uploading new experiments and lab environments.
