Self-adaptive authorisation in cloud-based systems

Federal University of Rio Grande do Norte
Center of Exact and Earth Sciences
Department of Informatics and Applied Mathematics
Graduate Program in Systems and Computing
Academic Master's Degree in Systems and Computing

Self-adaptive Authorisation in Cloud-based Systems

Thomás Filipe da Silva Diniz

Natal-RN
April 2016

Thomás Filipe da Silva Diniz

Self-adaptive Authorisation in Cloud-based Systems

Master's dissertation presented to the Graduate Program in Systems and Computing of the Department of Informatics and Applied Mathematics of the Federal University of Rio Grande do Norte, as a partial requirement for obtaining the degree of Master in Systems and Computing. Research line: Distributed systems.

Supervisor: Nelio Cacho, PhD
Co-supervisor: Carlos Eduardo da Silva, PhD

PPgSC – Graduate Program in Systems and Computing
DIMAp – Department of Informatics and Applied Mathematics
CCET – Center of Exact and Earth Sciences
UFRN – Federal University of Rio Grande do Norte

Natal-RN
April 2016

Catalogação da Publicação na Fonte. UFRN / SISBI / Biblioteca Setorial
Centro de Ciências Exatas e da Terra – CCET.

Diniz, Thomás Filipe da Silva.
Self-adaptive authorisation in cloud-based systems / Thomás Filipe da Silva Diniz. - Natal, 2016. 60 f.: il.
Orientador: PhD Nélio Alessando Azevedo Cacho.
Coorientador: PhD Carlos Eduardo da Silva.
Dissertação (Mestrado) – Universidade Federal do Rio Grande do Norte. Centro de Ciências Exatas e da Terra. Programa de Pós-Graduação em Sistemas e Computação.
1. Sistemas distribuídos – Dissertação. 2. Sistemas autoadaptativos – Dissertação. 3. Controle de acesso – Dissertação. 4. Computação em nuvem – Dissertação. 5. Openstack – Dissertação. I. Cacho, Nélio Alessando Azevedo. II. Silva, Carlos Eduardo da. III. Título.
RN/UF/BSE-CCET CDU: 004.75

Master's dissertation under the title Self-adaptive Authorisation in Cloud-based Systems, presented by Thomás Filipe da Silva Diniz and accepted by the Graduate Program in Systems and Computing of the Department of Informatics and Applied Mathematics of the Federal University of Rio Grande do Norte, being approved by all members of the examining board specified below:

Dr. CARLOS ANDRE GUIMARÃES FERRAZ, External Examiner, UFPE – Universidade Federal de Pernambuco
Dr. CARLOS EDUARDO DA SILVA, Examiner External to the Program, UFRN – Universidade Federal do Rio Grande do Norte
Dr. THAIS VASCONCELOS BATISTA, Internal Examiner, UFRN – Universidade Federal do Rio Grande do Norte
Dr. NELIO ALESSANDRO AZEVEDO CACHO, President, UFRN – Universidade Federal do Rio Grande do Norte

Natal-RN, 2 May 2016.

Acknowledgments

Firstly, I am grateful to God for the good health and well-being that were necessary to complete this work. I would like to express my sincere gratitude to my supervisors, Nelio Cacho and Carlos Eduardo, for the continuous support of my MSc study and related research, and for their patience, motivation and immense knowledge. Their guidance helped me throughout the research. I could not have imagined having better supervisors and mentors for my MSc study. I must express my very profound gratitude to my wife Mileny, for providing me with unfailing support and continuous encouragement throughout my years of study and through the process of researching and writing this thesis. This accomplishment would not have been possible without you. I love you =D. Finally, I would like to express my gratitude to my mother Maristella, my grandmother Dadá, my sister Lara and my uncles Guilherme and Lúcio: thank you for the support given throughout this journey.

Life is not about how hard you hit, but how hard you can get hit and keep moving forward. That is how winning is done. (Rocky Balboa)

Self-adaptive Authorisation in Cloud-based Systems

Author: Thomás Filipe da Silva Diniz
Supervisor: Nelio Cacho, PhD
Co-supervisor: Carlos Eduardo da Silva, PhD

Resumo

Apesar dos grandes avanços realizados visando a proteção de plataformas de nuvem contra ataques maliciosos, pouco tem sido feito em relação a proteção destas plataformas contra ameaças internas. Este trabalho propõe lidar com este desafio através da introdução de auto-adaptação como um mecanismo para lidar com ameaças internas em plataformas de nuvem, e isso será demonstrado no contexto de mecanismos de autorização da plataforma OpenStack. OpenStack é uma plataforma de nuvem popular que se baseia principalmente no Keystone, o componente de gestão de identidade, para controlar o acesso a seus recursos. A utilização de auto-adaptação para o manuseio de ameaças internas foi motivada pelo fato de que a auto-adaptação tem se mostrado bastante eficaz para lidar com incerteza em uma ampla gama de aplicações. Ataques internos maliciosos se tornaram uma das principais causas de preocupação, pois mesmo mal intencionados, os usuários podem ter acesso aos recursos e, por exemplo, roubar uma grande quantidade de informações. A principal contribuição deste trabalho é a definição de uma solução arquitetural que incorpora autoadaptação nos mecanismos de autorização do OpenStack, a fim de lidar com ameaças internas. Para isso, foram identificados e analisados diversos cenários de ameaças internas no contexto desta plataforma, e desenvolvido um protótipo para experimentar e avaliar o impacto destes cenários nos sistemas de autorização em plataformas em nuvem.

Palavras-chave: Sistemas autoadaptativos, Controle de acesso, Computação em nuvem, OpenStack.

Self-adaptive Authorisation in Cloud-based Systems

Author: Thomás Filipe da Silva Diniz
Supervisor: Nelio Cacho, PhD
Co-supervisor: Carlos Eduardo da Silva, PhD

Abstract

Although major advances have been made in the protection of cloud platforms against malicious attacks, little has been done regarding the protection of these platforms against insider threats. This dissertation looks into this challenge by introducing self-adaptation as a mechanism to handle insider threats in cloud platforms, which is demonstrated in the context of OpenStack authorisation. OpenStack is a popular cloud platform that relies on Keystone, its identity management component, for controlling access to its resources. The use of self-adaptation for handling insider threats has been motivated by the fact that self-adaptation has been shown to be quite effective in dealing with uncertainty in a wide range of applications. Malicious insider attacks have become a major cause for concern, since legitimate, though malicious, users might have access to resources and, for example, steal a large amount of information. The key contribution of this work is the definition of an architectural solution that incorporates self-adaptation into OpenStack in order to deal with insider threats. For that, we have identified and analysed several insider threat scenarios in the context of the OpenStack cloud platform, and have developed a prototype used for experimenting with and evaluating the impact of these scenarios upon the authorisation system of the cloud platform.

Keywords: Self-adaptive Systems, Access Control, Cloud Computing, OpenStack.

List of figures

1. Authentication, Authorization, Audit (p. 19)
2. ABAC Access control mechanisms (p. 20)
3. OpenStack Services (p. 24)
4. OpenStack overview (p. 24)
5. MAPE-K Feedback loop (p. 25)
6. OpenStack/ABAC component mapping (p. 29)
7. Overview of target system adaptation (p. 31)
8. Architecture (p. 32)
9. Package Diagram of the Prototype (p. 36)
10. Probe/Monitor workflow (p. 37)
11. Analyse, plan and execute workflow (p. 38)
12. Architecture of our experimental deployment (p. 42)

List of tables

1. Summary of inside threat scenarios (p. 33)
2. Summary of responses (p. 33)
3. Summary of impacts (p. 34)
4. Analysis of inside abuse scenarios (p. 35)
5. Controller performance metrics (p. 44)
6. Elapsed time for the scenarios (p. 45)

Summary

1 Introduction (p. 12)
1.1 Motivation (p. 13)
1.2 Objectives (p. 15)
1.3 Work organization (p. 16)

2 Background (p. 17)
2.1 Insider Attacks (p. 17)
2.2 Identity Management (p. 18)
2.3 Cloud Computing (p. 21)
2.3.1 Service Models (p. 22)
2.3.2 Openstack (p. 23)
2.4 Self-adaptive systems (p. 25)
2.5 Conclusion (p. 27)

3 Adding Self-adaptation to OpenStack Authorization Mechanisms (p. 28)
3.1 User Access Control in OpenStack (p. 28)
3.2 Our approach (p. 30)
3.3 Insider Attacks Scenarios (p. 32)
3.4 Implementation details (p. 35)

4 Results and Validation (p. 41)
4.1 Environment description (p. 41)
4.2 Use Case (p. 42)
4.3 Experiments (p. 43)
4.4 Discussion (p. 46)

5 Related Works (p. 47)

6 Conclusion (p. 50)
6.1 Contribution (p. 50)
6.2 Limitations (p. 51)
6.3 Future works (p. 51)

References (p. 53)

Appendix A -- Scenarios Rules (p. 56)

1 Introduction

Cloud computing is an ever-evolving paradigm (MELL; GRANCE, 2011). According to (VAQUERO et al., 2008), a cloud is a pool of easily usable and accessible virtualized resources (such as hardware, platforms or services), which are configured dynamically according to demand. It also follows a pay-per-use model, in which the quality of the offered service is guaranteed through agreements. This paradigm is finding its place in the digital world, to the point that nowadays it is common for ordinary systems to interact with a cloud platform. Due to the large adoption of this type of service, a large number of cloud providers have appeared on the market. Currently, this kind of service is being adopted on a large scale by different segments of society, from ordinary people, who use public cloud services such as Google Drive or Dropbox in their daily lives, to enterprises that prefer to keep a local infrastructure with internal access, characterizing private clouds. Amazon Web Services1, Microsoft Azure2, as well as the open-source OpenNebula3, CloudStack4 and OpenStack5, are examples of well-known cloud providers today.

OpenStack is a set of software tools used to build public or private cloud infrastructures. Currently, it offers various services, such as data storage (Swift), processing (Nova), networking (Neutron), identity management (Keystone), orchestration (Heat) and database (Trove), among others. Since 2010, OpenStack has been evolving and improving its services each year, involving several companies and big open source projects, such as Ubuntu, IBM, Red Hat, Huawei, Dell and VMware, among others, thereby establishing itself as one of the most used IaaS platforms today. Some use cases of OpenStack cloud-based deployments are well known, such as CERN6,
1 https://aws.amazon.com/
2 https://azure.microsoft.com
3 http://opennebula.org/
4 https://cloudstack.apache.org/
5 https://openstack.org
6 http://docs.openstack.org/openstack-ops/content/cern.html

which has deployed a cloud with 4,700 processing nodes and approximately 120,000 cores. NeCTAR is a research institute that spans several sites, with approximately 4,000 cores per site7. CNC (Cloud Computing for Science) is another project, sponsored by the Brazilian NREN8, that aims to provide a large storage cloud based on OpenStack Swift. CNC9 has deployed an OpenStack environment that spreads throughout the Brazilian territory and aims to support 10,000 users, adding 10,000 new users per year.

Despite this rapid growth, we believe that aspects related to security and data privacy are challenges that still need to be addressed. The Cloud Security Alliance (CSA)10 and ENISA (European Network and Information Security Agency)11 list several security issues in cloud computing, such as data breaches, data loss, unsafe APIs, denial of service, internal attacks and abuse of services (e.g., DDoS). Among these security problems, the issue of insider attacks is particularly relevant. According to (SILOWASH DAWN CAPPELLI et al., 2012), although the number of external attacks is higher than that of internal ones, in 34% of the cases internal attacks caused more damage to the organization than external attacks, which did so in 31% of the cases. Although those numbers are not specific to insider attacks in cloud computing environments, they are an important fact to consider, since in the cloud the potential damage is bigger. Thus, this work focuses primarily on treating and exploring security aspects related to insider attacks in the context of an OpenStack cloud platform.

1.1 Motivation

When an internal attack takes place, the damage to the organization can be catastrophic, sometimes resulting in financial losses (COLE, 2015). In a cloud computing scenario, this damage can be magnified given the large amount not only of files and resources, but also of users who have access to the data.
A famous example of an internal attack took place in July 2010, when an intelligence analyst of the US Army accessed and published more than 250,000 secret documents from the US Department of Defence. Apparently the analyst had access to the system, i.e., he was an authorized user. However, there were insufficient mechanisms to detect that

7 http://docs.openstack.org/openstack-ops/content/nectar_deploy.html
8 National Research and Education Network
9 https://cnc.rnp.br/
10 https://cloudsecurityalliance.org/download/the-notorious-nine-cloud-computing-top-threats-in2013/
11 https://resilience.enisa.europa.eu

downloading 250,000 documents in a short period of time would characterize abnormal behaviour. CERT12 has a document that reports more than 700 cases of attacks by companies' internal employees and their consequences. One reported case happened at a mortgage company13. The organization notified an employee who worked as a software engineer on Unix systems that he would be fired because of an error in a system development. However, after the notification, he was allowed to finish his working day. Maliciously, he ran an algorithm that disabled monitoring systems and alerts, removed the credentials of more than 4,000 of the organization's servers, and deleted all data, including backups.

In general, many organizations have several processes that rely on information systems and computer infrastructure. These systems rely on human labour for activities related to the monitoring and auditing of malicious behaviour (ANALYTICS, 2008). In addition, a human system administrator is not able to monitor a large number of requests in the system, even if they are similar and occur in a short period of time. When a situation like this happens, it must be immediately identified and mitigated to prevent further damage to the systems.

Some efforts have been made to mitigate the occurrence of insider attacks (DUNCAN et al., 2013; GARKOTI; PEDDOJU; BALASUBRAMANIAN, 2014; STOLFO; SALEM; KEROMYTIS, 2012), but few of them consider that such an attack might come from a malicious user with authorized access to the data. In an internal attack, a user who has or had authorization to access the cloud system and its data abuses (BAILEY; CHADWICK; LEMOS, 2014) the service. Their intention is to misuse their access to adversely affect the system, compromising the confidentiality, integrity and availability of its data.
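The kind of abnormal behaviour described above, such as many similar requests in a short period of time, is exactly what an automated monitor can flag faster than a human administrator. A minimal sketch of such a check follows; the class name, thresholds and sliding-window strategy are illustrative assumptions of ours, not part of any real system:

```python
from collections import deque

class DownloadRateMonitor:
    """Flags a user whose download count within a sliding time window
    exceeds a threshold (threshold and window are illustrative values)."""

    def __init__(self, max_downloads=100, window_seconds=60):
        self.max_downloads = max_downloads
        self.window = window_seconds
        self.events = {}  # user -> deque of request timestamps

    def record(self, user, timestamp):
        q = self.events.setdefault(user, deque())
        q.append(timestamp)
        # discard events that fell out of the sliding window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        # True means the behaviour looks abnormal
        return len(q) > self.max_downloads

monitor = DownloadRateMonitor(max_downloads=3, window_seconds=10)
flags = [monitor.record("analyst", t) for t in (0, 1, 2, 3, 4)]
# the fourth and fifth requests exceed the threshold within the window
```

A real deployment would of course feed this from audit logs rather than explicit timestamps, and the response (alerting, revoking access) is the subject of the adaptation mechanisms discussed later.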
It is important to note that these insider attacks are different from those mentioned in (DUNCAN; CREESE; GOLDSMITH, 2012), which consider system components themselves as malicious agents. Self-adaptive systems are a good approach to treat these problems due to their efficiency and effectiveness in dealing with uncertainty in a wide range of applications, including some related to user access control (BAILEY; CHADWICK; LEMOS, 2014; PASQUALE et al., 2012; SCHMERL et al., 2014). Self-adaptive systems consist of mechanisms that allow them to change their own structure or behaviour at run time (OREIZY et al., 1999).

12 The CERT Division is part of the Software Engineering Institute, which is based at Carnegie Mellon University
13 www.cert.org/insider-threat/

These changes happen when the system needs to adapt to new requirements or new environmental conditions. IBM defines an autonomic element in terms of four main functions: Monitor, Analyse, Plan and Execute. These parts communicate with one another and exchange information through a knowledge base, implementing a feedback control loop called the MAPE-K control loop (IBM, 2006). In this loop, the Monitor element obtains, aggregates and filters status information about the target system and sends it to the Analyse element. The Analyse element evaluates the data sent by the Monitor in detail, in order to detect the need for adaptation of the target system. Once the need to adapt is detected, the Plan phase builds a sequence of steps with the goal of carrying out the adaptation of the target system. These four steps work together with a component called the knowledge base.

An example of such a solution is SAAF14 (BAILEY; CHADWICK; LEMOS, 2014). This framework is capable of modifying security policies in the authorization infrastructure at runtime using self-adaptation mechanisms. SAAF's objective is to monitor the usage of authorization infrastructures, analysing subject interactions, and to adapt the infrastructure accordingly. SAAF was applied in the context of an application called PERMIS. PERMIS has a particular architecture, which makes SAAF specific to it. Thus, applying SAAF to OpenStack would require considerable refactoring of the framework's source code, since OpenStack has a different authorization infrastructure.

1.2 Objectives

Considering the above issues, this study aims to propose an approach for self-adaptation of authorization in OpenStack authorization infrastructures. The solution is based on the MAPE-K (Monitor, Analyse, Plan, Execute) model (IBM, 2006) and uses the Kilo version of OpenStack. To this end, the following specific objectives are listed: 1.
Define an architecture for the solution based on MAPE-K concepts, with focus on the Analyse phase, that is, on identifying abnormal behaviour and providing options to mitigate it. The architecture considers an OpenStack cloud with processing and storage services enabled. 2. Design and implement a prototype as a proof of concept. 3. Identify insider attack scenarios in the context of OpenStack authorization, and evaluate the possible responses and the impacts of each response on the cloud environment.

14 Self-adaptive Authorisation Framework

These scenarios of internal attacks are based on the components of OpenStack authorization. This exercise aims to evaluate and discuss possible scenarios, responses to attacks, and the impacts created by these responses.

1.3 Work organization

This master's dissertation is structured in six chapters, including this introduction. The second chapter presents the main concepts used in this work, covering the following topics: insider attacks, identity management, cloud computing and self-adaptive systems. Chapter 3 begins with an overview of the proposed solution, followed by the description of the insider attack scenarios, and ends with implementation details of the solution. Chapter 4 presents the validation and results of our approach, including the environment description, a concrete use case, experiments and a discussion. Chapter 5 presents a selection of related works. Lastly, we conclude by presenting the main contributions, followed by the limitations and possible future works.
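The MAPE-K feedback loop that underpins the proposed approach can be illustrated with a minimal sketch. All names, the threshold value and the adaptation actions below are our own illustrative assumptions, not those of the actual prototype:

```python
class MapeK:
    """Minimal MAPE-K loop: four phases sharing a knowledge base."""

    def __init__(self):
        self.knowledge = {"threshold": 3}  # shared knowledge base

    def monitor(self, target):
        # obtain, aggregate and filter target-system status information
        return {"requests_per_minute": target["requests_per_minute"]}

    def analyse(self, data):
        # detect whether the target system needs to adapt
        return data["requests_per_minute"] > self.knowledge["threshold"]

    def plan(self):
        # build the sequence of steps that performs the adaptation
        return ["revoke_role", "notify_admin"]

    def execute(self, target, actions):
        # apply the planned actions to the target system
        for action in actions:
            target.setdefault("applied", []).append(action)

    def run_once(self, target):
        data = self.monitor(target)
        if self.analyse(data):
            self.execute(target, self.plan())
        return target

system = {"requests_per_minute": 10}
adapted = MapeK().run_once(system)
```

In this sketch the loop runs once over a dictionary standing in for the target system; the real controller described later in this work monitors OpenStack components continuously.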

2 Background

This chapter presents the main concepts used throughout this work. Section 2.1 describes some concepts about insider attacks. Section 2.2 introduces concepts related to identity management, with emphasis on the processes related to AAA (Authentication, Authorisation and Audit), exploring the access control models considered in this work (ABAC and RBAC). Section 2.3 presents some cloud computing concepts regarding its service models and deployment, followed by the main OpenStack concepts and how it works. The last section describes the main concepts related to self-adaptive systems, covering their characteristics and properties.

2.1 Insider Attacks

According to (SCHULTZ, 2002), an internal attack can be seen as a misuse of the system by authorized users. Thus, the first element of such an attack is the internal user. CERT defines an internal user as an employee, former employee, contractor or business partner who has access to system data or company information. To characterize an internal attack, this user needs to have the intention to abuse or take negative advantage of the company's data, affecting the confidentiality, integrity and availability of its systems (SILOWASH DAWN CAPPELLI et al., 2012). On the other hand, (COLWILL, 2009) divides insiders into two main groups: intentional and unintentional. In (SILOWASH DAWN CAPPELLI et al., 2012), only the first group is considered. However, some internal attacks caused by an innocent user may have high damage potential as well, for example inappropriate Internet use, which opens possibilities for virus and malware infection and exposure of the enterprise, affecting its reputation and future valuation. The CERT database contained about 700 registered cases of internal attacks in 2012. Looking at these cases, it was possible to categorize them by analysing the patterns in 371

of these attacks (SILOWASH DAWN CAPPELLI et al., 2012). Thus, the following categories were identified:

• Sabotage: when an internal user has access to some information and uses it with intent to harm a company, for example leaking sensitive information so that other companies can take advantage in a market dispute.

• Data theft: an internal attack category in which a user steals information with the intent of compromising privacy or obtaining confidential information.

• Fraud: happens when an insider uses the IT infrastructure for unauthorized operations on company data for personal gain, or steals information, leading to identity crimes.

• Others: cases in which an insider attack was performed with other intentions, or a combination of sabotage, data theft and fraud.

This work considers that insider attacks are carried out by users or former users of a particular organization who have access, over the network, to a particular system or to company information. In addition, it takes into consideration that they are authorized to access such data. Among the above categories, our case studies and implementation focus on data theft.

2.2 Identity Management

Identity management consists of an integrated system of policies, technologies and business processes that enables organizations to handle the identities (identity attributes) of their members (JØSANG; POPE, 2005). From the perspective of a service provider (SP), which makes services and resources available through the Internet, identity management allows an SP to know who its users are (by means of authentication) and to manage which services they are entitled to use (by means of authorization). Different identity management models have been proposed to deal with issues related to user authentication (BHARGAV-SPANTZEL et al., 2007) (JØSANG et al., 2005).
One such model is federated identity management, where different providers form an association, establishing a relationship of trust among them (CHADWICK, 2009). In this model,

Identity Providers (IdPs) are responsible for authenticating users, issuing messages containing authentication credentials. In this way, a user can access resources offered by different SPs using a single set of credentials issued by a federation IdP. Authorization services aim to control user access after the authentication process (Figure 1). Audit services involve the registration of all user requests and activities in the system for future analysis, completing the AAA tripod (authentication, authorization and auditing). To support robust systems, federated access control depends on authentication and on authorization models such as RBAC (Role Based Access Control) (SANDHU et al., 1996) and ABAC (Attribute Based Access Control) (HU et al., 2014).

Figure 1: Authentication, Authorization, Audit

The basic purpose of access control mechanisms is to protect objects, whether data, services, applications or distributed systems. The relevant operations involve the creation, deletion, discovery, reading and execution of objects. Access to these objects is requested by subjects. In general, a subject is an entity that performs operations on objects; it can be a human or a non-person entity (NPE), such as an autonomous system. The concept of role-based access control (RBAC) has its roots in the multi-user and multi-application systems of the 1970s and has since been implemented in many applications and web systems. According to (SANDHU et al., 1996), the central notion of RBAC is that permissions are associated with roles, and users are assigned to appropriate roles, which greatly simplifies permission management. For example, in a given system, a user logs in and the application identifies the user's role. Based on that role, the application queries the policy repository for the permissions associated with the role. Based on those permissions, the system is able to identify the actions allowed for that user.
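The RBAC flow just described, in which permissions attach to roles and users are assigned to roles, can be sketched in a few lines. The roles, users and permissions below are invented for illustration only:

```python
# policy repository: role -> set of permissions (illustrative values)
POLICIES = {
    "doctor": {"read_test_results", "update_records"},
    "staff": {"read_records"},
}

# user assignment: user -> role (illustrative values)
ROLES = {"alice": "doctor", "bob": "staff"}

def is_allowed(user, permission):
    """RBAC check: resolve the user's role, then look the
    permission up in the policies associated with that role."""
    role = ROLES.get(user)
    return permission in POLICIES.get(role, set())

allowed = is_allowed("alice", "read_test_results")  # doctor role grants this
denied = is_allowed("bob", "read_test_results")     # staff role does not
```

Note how permission management is simplified: granting every doctor a new permission is a single change to `POLICIES`, with no per-user updates.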
ABAC (Attribute Based Access Control) is an access control model that aims to inherit the best practices of previous models and to handle changeable, environment-specific attributes. As the name suggests, access control is based on attributes and takes

into account the following elements: subject, object, operation, policies and environmental conditions. Attributes are characteristics of a subject, of an object, or conditions of an environment, generally described by a name and a value. A subject can be a human user or another system/device that aims to access and perform actions on a given object; a subject is associated with one or more attributes. An object is the resource that an ABAC-based access control system protects, for example files, databases, web systems, cloud resources, etc. An operation is the execution of a request from the subject on an object. Finally, environment conditions are additional attributes used when making access control decisions; examples of this type of attribute are the current date/time, geographic location, etc. For example, a company has different levels of employees (manager, director, staff, etc.) and only a director is able to access the data centre room at the weekend. So, when the director (subject) enters with their credentials, the system evaluates policies about the subject, for example whether it has the Director attribute, as well as environmental conditions (weekend or weekday). After that, the subject (director) has access to the object (the data centre room).

Figure 2: ABAC Access control mechanisms.

In Figure 2, when a subject requests an operation on a given object, this request is intercepted by the PEP. The PEP has the role of protecting the object, redirecting the request to the PDP. The PDP must make a decision, allowing the subject's access or not. This is done first based on the PIP, which provides the PDP with the information necessary for decision-making, based on environment conditions and a repository of attributes (e.g., object attributes). In the background, the PDP also requests information from the system's policy repository. This policy repository is managed by the PAP, through which system administrators

(22) 21. or an administrative system manage the policies used. Only after consulting these components, the PDP may decide that the subject of the request will or will not be held in the given object. For example, in a Hospital Manager system a given user requests access to analyse some test results of a critical patient. This way, the gateway module of the system (PEP) intercepts this request and redirect it to a module able to allow or not this access (PDP). This process is performed with basis on additional information of the user (Department, employee, access level), for example, to see test results the user must be a Doctor(Attribute repository). Also is checked if the user request those results in a useful day (Environment conditions). With these information, the system decides(PDP) if the user have access or not to the test results and redirect the request (PEP) to the object (test results).. 2.3. Cloud Computing. The use of cloud services is being widely adopted. Nowadays, they can be found in a large number of companies, research center and universities. According to (MELL; GRANCE,. 2011), there are some essential features that all the cloud computing systems. must possess, such as on-demand service, network access, resource pooling, rapid elasticity, service measurement. • On-demand service: service distribution method where customers use cloud resources according to their needs. • Network access: cloud resources must be accessible over the network by a heterogeneous range of clients (laptops, smartphones, workstations, etc.). • Resource pooling: Resources are pooled to serve multiple consumers. These resources may be virtual and assigned on demand. Some examples of cloud computing resources are storage, processing, memory and network • Rapid Elasticity: Cloud property that offers more or less resources according to consumption required. It’s understood as a dynamic allocation and deallocation capacity of resources. 
• Service measurement: the cloud can itself measure the use of its resources through monitoring and management techniques. This may help both consumers and service providers with charging.

Regarding deployment models, cloud computing infrastructures follow different strategies, depending mainly on the physical location and on how resources are made accessible to users. According to (MELL; GRANCE, 2011), there are four main cloud deployment models: private cloud, community cloud, public cloud and hybrid cloud. In a private cloud, the infrastructure is provisioned for exclusive use by a single organization comprising many consumers. A public cloud provides services for open use by the general public; its ownership, management and operation can be carried out by a company, an academic institution, a government organization, or a combination of them. A hybrid cloud is made up of two or more cloud infrastructures (private, community or public) which remain separate entities but are linked by standardized or proprietary technology that enables data communication and application portability.

2.3.1 Service Models

Cloud computing offers its services according to certain models. These models work in a layered manner, so that the SaaS layer depends on a platform to run the software, and such software requires an infrastructure to host it. The main models are described below. Software as a Service (SaaS) is a model where users access applications in cloud infrastructures in such a way that clients do not need to install or manage software, or even manage servers, operating systems or storage infrastructure; examples include Google Docs1, Microsoft OneDrive2 and Dropbox3. In Platform as a Service (PaaS), users are provided with an environment where applications and programs, created or acquired by them, can be installed and developed with programming languages, libraries, services and tools supported by the cloud infrastructure. Some examples are Google App Engine4 (which currently supports applications developed in languages such as Python, Java, PHP and Go), Microsoft Windows Azure5, Heroku6, Cloud Foundry7, etc.
Infrastructure as a Service (IaaS) is the model in which computing infrastructure is provided to users through storage, processing and network services, and in which the customer can install and configure software in general, including operating systems. Some examples of

[1] http://www.google.com/docs/about/
[2] https://onedrive.live.com/
[3] https://www.dropbox.com/
[4] https://cloud.google.com/appengine/docs
[5] http://azure.microsoft.com/
[6] https://www.heroku.com/
[7] http://www.cloudfoundry.org/

IaaS are Amazon AWS [8], OpenNebula [9] and OpenStack [10]. Other service models are becoming known, such as Network as a Service (NaaS) and Access Control as a Service (ACaaS); we do not describe them here since they are not covered in (MELL; GRANCE, 2011).

2.3.2 OpenStack

OpenStack [11] is a free and open source platform that offers tools to build cloud infrastructures. It is composed of a set of software projects, which are used to provide different cloud services. The project gained momentum in 2010, when Rackspace Hosting and NASA released its first version, Austin, in which part of the code came from the Nebula platform (related to processing) and another part came from Cloud Files (related to storage). Since 2011, open source software communities have contributed to the project (e.g., the Ubuntu community). Since then, OpenStack has released at least two versions per year, the latest one being Liberty, released at the end of 2015.

Figure 3 presents some of the main projects of OpenStack: Nova (Compute), Swift (Object Storage), Neutron (Network), Horizon (Dashboard), Glance (Image), Cinder (Block Storage) and Keystone (Identity). Each one is responsible for providing a different kind of service in the cloud. Nova allows the management and provisioning of virtual machines in the cloud infrastructure, characterizing a processing cloud. Nova supports the most commonly used hypervisors (e.g., Xen, KVM, vSphere) and some emulation software (e.g., QEMU [12]). Neutron aims to provide virtual network services (virtual balancers, switches, etc.); for example, Neutron is used by Nova to provide software-defined networking to virtual machines. Swift is an object storage service with characteristics of scalability, availability, performance and data replication. Its architecture is based on two types of nodes: proxy and storage.
Proxy nodes receive client requests (e.g., upload, delete, list objects) and redirect them to the storage nodes where the data is stored. Keystone is the OpenStack identity management component, responsible for managing user access to cloud resources. Keystone uses an access control model based on tokens. According to Figure 4, the flow to access an OpenStack cloud resource begins

[8] aws.amazon.com
[9] http://opennebula.org/
[10] Openstack
[11] https://www.openstack.org/
[12] www.qemu.org/

Figure 3: OpenStack Services

when a certain user wants, for example, to create a virtual machine (VM) instance in Nova. It is important to note that OpenStack has a REST API for each provided service, to facilitate the interaction of users and clients with the cloud environment. To create the VM, the user first authenticates with Keystone by sending its access credentials (user name, password and tenant) and receiving an access token. The user then performs a request to the service, attaching the received token. The cloud infrastructure performs two internal authorization procedures before serving the request. The first is performed by the service together with Keystone: on receiving the request, the service checks whether the token is actually valid. After that, the service checks whether the request complies with its own policies, executes the requested action and sends a response to the user.

Figure 4: OpenStack overview
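As an illustration of the authentication step, the sketch below (Python) builds the JSON body that a client would send to Keystone's Identity API v3 endpoint POST /v3/auth/tokens. The credential values are hypothetical, and the actual HTTP call and the later service-side checks are omitted.

```python
import json

def build_auth_request(username, password, project, domain="Default"):
    """Build the body of a Keystone v3 POST /v3/auth/tokens request:
    password credentials scoped to a project (tenant)."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # The scope determines the tenant/project the token will be valid for.
            "scope": {"project": {"name": project, "domain": {"name": domain}}},
        }
    }

# On success, Keystone returns the token in the X-Subject-Token response header;
# the client then attaches it as X-Auth-Token to requests sent to Swift or Nova.
print(json.dumps(build_auth_request("alice", "secret", "demo"), indent=2))
```

A successful scoped response also carries the service catalog, from which the client discovers the endpoints of services such as Swift and Nova.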

2.4 Self-adaptive systems

A self-adaptive software system is able to modify its own structure and/or behaviour at run-time in order to deal with changes in its requirements, in the environment in which it is deployed, or in the system itself (ANDERSSON et al., 2009b). One way to adapt systems is through the implementation of a feedback control loop comprising the monitoring of the target system, data analysis, the planning of adaptation actions, their execution on the target system, and a knowledge base used to share data between all activities. One widely followed model providing these features is MAPE-K (Monitor, Analyse, Plan, Execute and Knowledge), defined by IBM (IBM, 2006).

Figure 5: MAPE-K Feedback loop

Figure 5 presents an overview of the MAPE-K phases and how they interact with the target system. The monitoring phase (Monitor) is responsible for obtaining, aggregating and filtering status information about the target system. This information is captured by the probes in a raw form. Once the information obtained by the Monitor has been handled and filtered, it is sent to the analysis phase. The analysis phase (Analyser) is responsible for evaluating the data sent by the Monitor in detail; the goal of this stage is to detect the need for target system adaptation. It is possible to split the analysis phase into two subphases: the problem domain and the solution domain. In the problem domain, the goal is to implement mechanisms that identify triggers for adaptation. Based on the detected problem, the solution domain aims to point out the possible adaptation solutions that fill this need. Once the need to adapt has been detected, the planning phase (Plan) builds a sequence of steps with the goal of ensuring the adaptation of the target system. Once this sequence is defined, the execution stage receives the execution plan and acts on the target system through the effectors. These four steps work together with a component called the knowledge base (Knowledge).
Communicating with the knowledge base, all components of the feedback

loop can use information to optimize and assist the processes of the MAPE-K cycle.

Self-adaptive systems can be categorized into two main approaches: top-down and bottom-up (CHENG et al., 2009). The first category has a centralized controller responsible for managing all aspects related to the system adaptation. The second is characterized by a decentralized approach, where the adaptive control is done by distributed components that individually do not have full knowledge of the system but, together, contribute to the adaptation of the target system. This approach is also usually referred to as self-organising.

Another way to categorize self-adaptive systems is according to their properties. In (IBM, 2006), IBM defines four main properties of self-adaptive systems, usually called self-* properties: self-healing, self-configuration, self-protection and self-optimisation, where:

• Self-Healing: capability to detect, diagnose and treat problems (software errors, exceptions, fault tolerance).
• Self-Configuration: capability to act on its own components (updating, installation, removal, reconfiguration) to better fit a situation.
• Self-Protection: capability to detect, identify and protect against attacks.
• Self-Optimisation: capability of a self-adaptive system to monitor its resources and tune them automatically.

Regarding the type of adaptation, there are also two approaches: parametric or structural (ANDERSSON et al., 2009a). Parametric adaptation consists of changing the parameters of the components according to the context. One problem of this approach is that the parameters are limited, which implies a fixed number of behaviours. Structural adaptation is different mainly because it allows the change of system components as needed, i.e., if a component does not provide a feature, it is possible to replace it with one that does.
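The difference between the two adaptation types can be made concrete with a small sketch (Python; all names here are hypothetical and not part of any real system): parametric adaptation changes a value inside an existing component, while structural adaptation replaces the component altogether.

```python
class ThresholdDetector:
    """Abuse detector whose behaviour is tuned by one parameter."""
    def __init__(self, max_downloads):
        self.max_downloads = max_downloads

    def is_abuse(self, downloads):
        return downloads > self.max_downloads

class RateDetector:
    """Alternative component offering a behaviour ThresholdDetector lacks."""
    def __init__(self, max_per_second):
        self.max_per_second = max_per_second

    def is_abuse(self, downloads, seconds=60):
        return downloads / seconds > self.max_per_second

detector = ThresholdDetector(max_downloads=5)
detector.max_downloads = 10                # parametric: only a parameter changes
detector = RateDetector(max_per_second=1)  # structural: the component is swapped
print(detector.is_abuse(120))              # 120 downloads in 60s: rate 2/s, abuse
```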
Another important aspect of the categorization of self-adaptive systems regards the type of the decision-making process, which can be static or dynamic (SALEHIE; TAHVILDARI, 2009). In the static approach, decisions are defined during the development stage of the self-adaptive software. On the other hand, dynamic decision making has its main

focus during program execution time and considers all the experience acquired during the execution to make the appropriate decision (OREIZY et al., 1999).

Given those characteristics, the solution proposed in this work adopts a top-down approach (using a central controller). The adaptation type is parametric, where only system parameters are modified. In addition, the property in focus is self-protection.

2.5 Conclusion

This chapter presented the information necessary to understand this work, including: insider attacks, identity management with emphasis on authorisation issues, cloud computing applied to OpenStack, and self-adaptive systems. This background is important because our work aims to provide a solution for dealing with insider attacks in an OpenStack authorisation infrastructure; for that, self-adaptive mechanisms are implemented to detect abnormal behaviour and mitigate it.

3 Adding Self-adaptation to OpenStack Authorization Mechanisms

This chapter presents the architecture of the proposed solution. First, we analyse the architectural components of OpenStack in order to map them in terms of ABAC ones (Section 3.1). After this analysis, we present an approach integrating the OpenStack architecture with self-adaptation mechanisms. Moreover, we have created several insider threat scenarios, identified possible responses, and analysed their impact on the cloud platform (Section 3.3). These scenarios were used as a basis for developing a prototype.

3.1 User Access Control in OpenStack

OpenStack employs the RBAC model for handling access control. A user in OpenStack is assigned a role associated with a tenant, which represents a cloud resource in one of the services provided by the platform. The service can specify policies associating the roles with permissions to conduct operations on the service, such as permission to download a particular object from the storage service. Since OpenStack employs the RBAC model, it is possible to identify all functional components necessary for providing authorisation. However, due to its distributed nature and its ability to support multiple heterogeneous services, these components are arranged in a different way when compared with traditional systems. In OpenStack, access decisions are computed at two different points when processing a user request: while Keystone, the first authorization point, is based on the RBAC model, in the second authorization point each service has an access control list (ACL) with its own policies. This heterogeneity demands some effort to map those different models and, based on that, to propose a self-adaptive infrastructure. Figure 6 presents a general view of the OpenStack architecture in terms of ABAC

components, identifying the components responsible for managing access control, along with the operation flow performed in the system during a user access attempt to a particular service, e.g., Swift.

Figure 6: OpenStack/ABAC component mapping

The flow begins with a user sending its credentials for performing authentication (Step 1) and receiving a token as the reply to a successful authentication (Step 2). The user then requests an operation in the cloud (Step 3). The Swift PEP intercepts this request, protecting the service from a possible unauthorised operation, and asks the PDP in Keystone (Step 4) to validate the token and to check whether the user has access permissions to this service. In order to validate the token, the Keystone PDP consults the Keystone PIP (Step 5a) and obtains its security policies (Step 5b) for deciding whether the user has access to the Swift service, returning its decision (Step 6). At this point, the first part of the authorisation has finished, but the OpenStack platform has an additional second authorisation step that is performed by the service. After consulting Keystone, Swift needs to evaluate the request against its own policies, checking whether the user has permission to conduct the requested operation. The Swift PEP then activates the Swift PDP to decide (Step 7) whether the user can conduct the requested operation (e.g., upload a file). The Swift PDP obtains the access control policies for the service (Step 8a) and

uses the Swift PIP (Step 8b) to obtain any information it needs for evaluating the access control policy. Once an access decision is made (Step 9), the Swift PEP allows the user to perform the requested operation (represented by Step 10 towards the Object) and returns to the user a response to the request (Steps 11 and 12). Each OpenStack service contains a Log component, as shown in Figure 6. These components represent the Audit service (as presented in Section 2.2) and are used to record the different activities related to access control within the system. Among the information logged, we can mention access requests, access control decisions, operations performed in the service, and unauthorised attempts.

3.2 Our approach

The idea of this work, in proposing an OpenStack cloud with self-adaptive authorization mechanisms, is that the dynamic evolution of authorization policies is capable of mitigating malicious user threats by limiting their scope of action. As discussed above, a particularity of OpenStack is the fact that there are two PDPs and two sets of policies in the same system. Because of this, solutions for dealing with authorisation, such as (BAILEY; CHADWICK; LEMOS, 2014), cannot be directly applied to OpenStack. For this reason, we have defined an architectural solution for allowing the addition of self-adaptive capabilities into OpenStack, which is presented in Figure 8, together with the flow of activities related to self-adaptation. Figure 7 presents a general view of our approach, where the MAPE-K loop adapts the target system, composed by the AAA mechanisms: Authentication, Authorization and Audit. The Monitor stage is responsible for obtaining information about the access control infrastructure (the target system) and its environment through the use of probes.
This information may include user attributes, access control policies, and event logs, such as access requests and authorisation decisions, which can be used to update behaviour models in the Knowledge. The Analyse stage is responsible for assessing the collected information in order to detect any malicious behaviour. This stage also identifies possible solutions for mitigating the perceived malicious behaviour and preventing future occurrences. The Plan stage is responsible for deciding what to do and how to do it, by selecting an appropriate solution for dealing with the malicious behaviour and producing the respective adaptation plan. Finally, the Execute stage adapts the authorisation infrastructure by means of effectors, following the instructions of the adaptation plan.
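The interplay of the four stages can be sketched as one iteration of a control loop. The Python sketch below is only an illustration of the MAPE-K idea, with hypothetical names and a trivial trigger; it is not the prototype's code.

```python
class Knowledge:
    """Shared data used by all MAPE stages."""
    def __init__(self):
        self.events = []   # filtered access-control events
        self.plan = []     # adaptation actions awaiting execution

def monitor(raw_log_lines, knowledge):
    # Keep only the lines relevant to authorisation decisions.
    knowledge.events += [l for l in raw_log_lines if "download" in l]

def analyse(knowledge):
    # Trivial trigger: many download events signal possible abuse.
    return len(knowledge.events) >= 5

def plan(knowledge):
    # Choose a response; here we always pick "disable user" (DU).
    knowledge.plan = ["DU"]

def execute(knowledge, effector):
    for action in knowledge.plan:
        effector(action)   # the effector applies the action to the target system
    knowledge.plan = []

# One iteration of the loop over synthetic probe output.
k = Knowledge()
applied = []
monitor(["download obj%d by alice" % i for i in range(6)], k)
if analyse(k):
    plan(k)
    execute(k, applied.append)
print(applied)  # ['DU']
```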

Figure 7: Overview of target system adaptation

Figure 8 presents the architecture of our approach, where a Controller implementing the MAPE-K feedback loop monitors the cloud platform and performs adaptations when a malicious behaviour is detected. The target system is composed of the different services provided by the OpenStack platform, including its identity service, Keystone. Each OpenStack service has its own set of probes and effectors, which allow the Controller to interact with OpenStack. The information collected by the probes includes the different activities related to access control that take place in the OpenStack platform, such as access requests and access control decisions. Each OpenStack service contains a Log component that can be queried for this information (Steps 1a and 1d). There are also probes for obtaining the access control policies currently in place (Steps 1c and 1f), and information about users (Step 1e) and about the objects being protected (Step 1b) by means of their respective PIPs. It is important to mention that, for the moment, we are not considering changes in the authentication mechanisms. The collected information is fed into the Monitor (Steps 2a and 2b). Steps 3, 4, and 5 represent the Controller activities previously described. Finally, the Execute stage employs effectors (Steps 6a and 6b), which alter the access control policies in place through the PAP of each service, i.e., the Keystone PAP (Step 7a) and the Swift PAP (Step 7b).

Figure 8: Architecture

3.3 Insider Attack Scenarios

This section describes some insider threat scenarios that are representative of an OpenStack cloud platform. These scenarios essentially capture data theft by malicious insiders, where users with legitimate access to the system abuse their rights to steal sensitive data. They also capture the distributed and heterogeneous nature of the OpenStack cloud platform, in which multiple services are protected by means of a two

step token-based authorisation. This has prompted us to perform an analysis of different insider threat scenarios and their impact on the cloud platform and its users.

Table 1: Summary of insider threat scenarios.
SCE#1: One user exploits one role for abusing one specific service
SCE#2: One user exploits one role for abusing several services
SCE#3: One user with several roles abuses one service
SCE#4: One user exploits several roles for abusing several services
SCE#5: Several users exploit one role for abusing one service
SCE#6: Several users exploit one role for abusing several services
SCE#7: Several users exploit several roles for abusing one service
SCE#8: Several users exploit several roles for abusing several services

OpenStack users can have access to different services with one or more distinct roles, and we have used this characteristic as the basis for defining the insider threat scenarios. For defining these scenarios, we have considered three variables: the number of users abusing the system, the number of roles involved in the abuse, and the number of services being abused. These variables can assume two values, one (1) or many (N). Based on this, we have defined a total of eight abuse scenarios, which are listed in Table 1, ranging from the case where one user exploits one role for abusing one specific service (SCE#1), one user with several roles abusing one service (SCE#3), and several users exploiting one role for abusing one service (SCE#5), to several users exploiting several roles and abusing several services (SCE#8).

Table 2: Summary of responses.
DU: Disable user
DR: Disable role
ER: Exchange user role to one with stricter permissions
RRA: Restrict role actions by modifying the permissions of a role regarding a service action
RUR: Remove user role by removing the role associated with the user in Keystone
DUT: Disassociate user's tenants by removing access to all tenants the user has access to
TSO: Turn the service off

In addition to the scenarios, we have identified possible responses that can be adopted by the MAPE-K controller, which are captured in Table 2. The responses can be executed either over Keystone or over the service being abused. Among the responses, we consider disabling a user (DU) or a role (DR) in Keystone, exchanging a user's role to another

(ER), completely removing a role (RUR) or a tenant (DUT) from the user, restricting role actions (RRA), and shutting down the service (TSO). These responses may have different levels of impact on the user, the role, or the service being accessed. It is possible that some of the responses disrupt access for legitimate users whilst removing access from insider attackers. Based on this, we have summarized these possible impacts in Table 3.

Table 3: Summary of impacts.
IMP1: User does not have access permissions to the cloud
IMP2: Access permissions are revoked for all users associated with a particular role
IMP3: Role is disabled in the system
IMP4: With the new role, access permissions for the user are restricted
IMP5: Service must be configured with new access permissions and restarted for deploying the modifications
IMP6: User does not have access permissions to any resource in the cloud
IMP7: Service must identify which role is used in the abuse, since the user is assigned to many roles
IMP8: Service(s) will become unavailable

Table 4 finally combines the information from the previous tables in order to present a complete picture of the identified insider threat scenarios, their possible responses over Keystone or the service, and the impact of these responses on users, roles and services. The first column of that table identifies the scenario number. The next three columns describe the scenarios in terms of the number of users, roles and services involved (summarised in Table 1). The following two columns identify the types of responses expected from a controller when handling abuse; these responses are associated either with Keystone or with the service, and they are summarised in Table 2.
For example, in scenario SCE#1 (see Table 4), once the abuse is identified by the controller, there is a set of responses that can be performed, either over Keystone or over the service, such as "DU" or "DR", meaning, respectively, "disable user" and "disable role". Finally, the last column identifies the types of impact that the scenario might have on users, roles and services, summarised in Table 3. It is important to note that one response may cause more than one impact. For instance, in the first scenario (SCE#1), if the response is to disable the user in Keystone (DU), the user loses access permissions to the cloud (IMP1), while disabling a role (DR) impacts all users assigned that particular role (IMP2), which might

hinder the use of the role in the future (IMP3). Although removing a role might be an inappropriate response when dealing with scenario SCE#1, it might be more efficient for scenario SCE#5, which considers that several users are abusing a service with the same role.

Table 4: Analysis of insider abuse scenarios.
SCE#1 (1 user, 1 role, 1 service). Keystone responses: DU (IMP1), DR (IMP2 and IMP3), ER (IMP4). Service response: RRA (IMP4 and IMP5).
SCE#2 (1 user, 1 role, N services). Keystone responses: RUR (IMP2), DUT (IMP6). Service response: RRA (IMP4, IMP5 and IMP6).
SCE#3 (1 user, N roles, 1 service). Keystone responses: DU (IMP1), ER (IMP4). Service response: RRA (IMP4, IMP5 and IMP7).
SCE#4 (1 user, N roles, N services). Keystone response: DU (IMP1).
SCE#5 (N users, 1 role, 1 service). Keystone response: DR (IMP2 and IMP3).
SCE#6 (N users, 1 role, N services). Keystone response: DR (IMP2 and IMP3). Service response: RRA (IMP4, IMP5 and IMP6).
SCE#7 (N users, N roles, 1 service). Service response: TSO (IMP8).
SCE#8 (N users, N roles, N services). Service response: TSO (IMP8).

3.4 Implementation details

Aiming to validate our approach, we have implemented a prototype of the solution. In this section, some implementation details are presented to clarify how the Controller works. It is important to note that we do not delve into each of the stages of the MAPE-K loop; only the main concepts are presented, in terms of the key components of our prototype. Figure 9 gives a general view of our solution, showing its package structure. There are two main packages: java and resource. The package java contains the probes and effector packages: the probes observe the logs created by Keystone, Swift and Nova, and the effectors act on those services (Nova and Swift). In the controller package, the MAPE-K loop functionalities are implemented, i.e., the Monitor, Analyse, Plan and Execute activities. The prototype was developed using the Java language with specific libraries of JBoss

Drools [1].

Figure 9: Package Diagram of the Prototype

The probes work by listening to the OpenStack log files that store the information needed by the Controller. This was implemented using thread mechanisms that monitor each new entry in the log file. The information is captured in a raw format. The probe module is composed of three main classes: App, LogFileTailer and LogFileTailerListener. The App is instantiated based on the log file path. Each log line captured by a probe is then passed to the Monitor (controller package). This module filters the log entries, keeping only the information of interest to the thresholds established for the analysis phase of the feedback loop. This filter is needed because records of other operations are saved in the same log. Once a significant log entry is passed to the Monitor, its data is standardized according to the model of the LogInfo class and extracted in order to be used by the Analysis module. The data needed by the analysis module are:

• Operation timestamp: the timestamp at which the operation was performed.
• Operation ID: unique identifier of the operation in the cloud; even if two or more operations are performed at the same time, their IDs are different. This ID is used as the id of the LogInfo object.
• Username: name of the user that performed the operation.
• Operation type: indicates which type of operation was performed. For example, in Swift it can represent a download, upload or delete action; in the Nova service, it can represent a user turning a virtual machine off, deleting it, and other operations.
• Tenant name: name of the tenant used by the user that performed the action.

[1] http://www.drools.org/

• Roles: represents a role or set of roles associated with the user performing the operation.

Figure 10: Probe/Monitor workflow

Figure 10 presents the workflow in detail from the beginning. First, in Step 1, the App class gets every new entry that appears in the log file. The entry is split and sent to the Monitor, where its relevance is checked (Step 2). This check is necessary because of the diversity of entries present in the log that do not represent significant information for the decision process. If the entry contains significant information, the Monitor calls the Analyse class method to save the entry; otherwise, the entry is ignored by the Monitor. This loop is executed for each new entry. The flow of Figure 10 continues in the Analyse class, where the new entry is stored and sent to the Drools mechanism to be compared against all registered rules (see Figure 11). Drools returns all activated scenarios to the Analyse class, which calls the Plan to choose an adaptation plan given the activated scenarios and all possible responses to mitigate attacks in those scenarios. The Planner builds a schedule for executing the adaptation and sends the sequence of steps to the Executor. The Executor calls the executeAdaptationAction method in the Effector class. It is important to note that many OpenStack services may exist, which, according to the proposed architecture, demands an effector dedicated to each service. This is represented in the diagram of Figure 11 by a cascade of boxes behind the OpenStackSwift Effector. At this point, the Drools tool becomes important to this work: it manages all the rules used to detect the insider attack scenarios. A rule in Drools is divided into two main blocks that use first-order logic: when <conditions> then <actions>. The first block describes all conditions that may activate the rule; the second block describes the action performed when the rule fires.
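The tailing behaviour of the probes can be sketched with a simplified, poll-based reader (Python; the prototype's actual LogFileTailer is thread-based, so this sketch is only an approximation of the idea):

```python
import io

def tail_new_lines(log, position):
    """Return the lines appended to `log` since `position`,
    together with the new read position."""
    log.seek(position)
    lines = log.read().splitlines()
    return lines, log.tell()

# Simulate a log file receiving a new entry between two polls.
log = io.StringIO()
log.write("ts=1 op-01 alice download demo consultant\n")
lines, pos = tail_new_lines(log, 0)        # first poll sees the first entry
log.write("ts=2 op-02 alice download demo consultant\n")
new_lines, pos = tail_new_lines(log, pos)  # second poll sees only the new entry
print(new_lines)
```

In the real prototype, a dedicated thread would call such a routine repeatedly against the file on disk, passing each new line to the Monitor.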

(39) 38. Figure 11: Analyse, plan and execute workflow In this way, we represent the scenarios described in section 3.3 in terms of rules that, when triggered, capture the identification of one of the insider threat. In these rules, we have assumed that one abuse constitutes the download of five or more objects in an interval of one minute. Once a scenario is identified, a notification is sent to the Plan component, which needs to make a decision about which response to employ among the ones available to deal with the respective scenario. As our intention was to validate the impact caused by each response, our Plan was configured to select the response being evaluated at the moment. Each response has been implemented as a parameterised script in order to allow the modification of the users, roles or permissions involved in the abuse. Drools rules are implemented by the rules package of Figure 9. Since the rules are able to interact with Java objects, the analysis module receives new log entries encapsulated in a object, that is passed to the drools rule file and analysed. For example the Rule that represents the scenario 1: 1 r u l e " Rule_Scenario1 " 2. when. 3. $r : LogInfo ( $id : idtrans ,. 4. $time : timestamp ). 5 6. $c : L o g I n f o ( i d t r a n s < $id ,. 7. $ r . username == username &&. 8. $ r . r o l e . s i z e ( ) == 1 &&. 9. $ r . serviceName == serviceName. 10 $time . getTime ( ) − timestamp . getTime ( ) < $ r .getMAX_DURATION 11. ).

(40) 39. 12 13 $m : MapLogResquest ( 14. ). then. 15 $m . s e t S c e n a r i o 1 ( $m . g e t S c e n a r i o 1 ( ) + 1 ) ; 16 $m . s e t S c e n a r i o 2 ( $m . g e t S c e n a r i o 2 ( ) + 1 ) ; 17 $m . s e t S c e n a r i o 5 ( $m . g e t S c e n a r i o 5 ( ) + 1 ) ; 18 $m . s e t S c e n a r i o 6 ( $m . g e t S c e n a r i o 6 ( ) + 1 ) ; 19 end The received object is a instance of the LogInfo class. Then, drools can access each attribute of the class as a local object. The operator when is a conditional operator that checks the condition to activate that rule. In the case of Rule_Scenario_1 we check if the entry belongs to the same user(line 7), associated to 1 role (line 8), abusing the same service (line 9), and if the request is in the time interval established in the threshold(line 10). It is important to note that the function getMax_DURATION gets the time interval hard coded in the core of the application, once we are proposing a prototype that aims to validate the proposed solution. So, if the condition is satisfied, the instructions in block then are executed. In this case, controller variables are incremented. Due the fact that in rules that represent scenarios 2, 5 and 6 the role 1 is probably activated, its controllers are also incremented (lines 15 to 18). This module first checks the thresholds, in other words, if it is detected that some user is with a high download rate (for examples 50 per second), a attack was detected. The second analyse is regarding how the attack was performed, according with OpenStack elements. The scenarios were specified in terms of rules (See page A) using drools, this way, if the rules are activated, a internal control is made in order to verify which rules were activated and how many times. Based on the activated rules we are able to identify possible responses that can be applied to mitigate the attack. These are sent by the analysis module to Plan. 
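The condition encoded by Rule_Scenario1 (same user, a single role, same service, requests closer together than MAX_DURATION) can also be expressed outside Drools. The Python sketch below counts, for a new entry, the earlier entries that would match it; the field names follow the LogInfo attributes described earlier, and the one-minute window comes from the threshold assumed in the rules.

```python
from collections import namedtuple

LogInfo = namedtuple("LogInfo", "idtrans timestamp username roles service")

MAX_DURATION = 60  # seconds; hard-coded in the prototype's core

def scenario1_matches(new, history):
    """Count earlier entries by the same user, holding a single role,
    on the same service, within MAX_DURATION of the new entry."""
    return sum(
        1 for old in history
        if old.idtrans < new.idtrans
        and old.username == new.username
        and len(new.roles) == 1
        and old.service == new.service
        and new.timestamp - old.timestamp < MAX_DURATION
    )

history = [LogInfo(i, 10 * i, "alice", ["consultant"], "swift") for i in range(5)]
new = LogInfo(5, 50, "alice", ["consultant"], "swift")
# Five earlier downloads within one minute: the scenario-1 counter fires.
print(scenario1_matches(new, history))  # 5
```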
The plan module chooses one possible response and sends this instruction to the executor module. The executor calls the appropriate effector to execute the adaptation action on the target system. The effectors implement mechanisms to modify the target system, in this case OpenStack. For that, OpenStack offers a REST API to interact with all services and

perform a large set of actions in the cloud. In this work, we have used version 3.0 of the API. An effector that acts directly on an OpenStack service needs to be able to manipulate the policies described in JSON, as well as the mechanisms to update these policies. In the case of Swift on the Kilo version, Swift uses ACLs to describe local policies, which demands that the Swift effector implement calls to the Swift 1.0 API.
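As an illustration, the DU and RUR responses each map to a single Identity API v3 call: disabling a user is a PATCH on /v3/users/{user_id}, and unassigning a role is a DELETE on the project role assignment. The sketch below only builds the requests an effector might send; the IDs are hypothetical and the HTTP transport is omitted.

```python
def disable_user_request(user_id):
    """DU response: disable the user account in Keystone."""
    return {
        "method": "PATCH",
        "path": "/v3/users/%s" % user_id,
        "body": {"user": {"enabled": False}},
    }

def remove_role_request(user_id, project_id, role_id):
    """RUR response: unassign a role from a user on a project (tenant)."""
    return {
        "method": "DELETE",
        "path": "/v3/projects/%s/users/%s/roles/%s"
                % (project_id, user_id, role_id),
    }

req = disable_user_request("a1b2c3")
print(req["method"], req["path"])  # PATCH /v3/users/a1b2c3
```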

4 Results and Validation

As described in Chapter 3, the incorporation of self-adaptive authorisation into OpenStack comprises many steps, and all of them need to be well integrated to ensure a secure cloud platform. The goal of this chapter is therefore to evaluate our approach, as well as to present important considerations and results about its behaviour. For that, we have deployed an OpenStack environment with our Controller and used it to simulate some of the scenarios described in the previous chapter. We then conducted performance experiments regarding the controller behaviour in different scenarios, as well as experiments regarding the elapsed time between the detection and the mitigation of an insider attack, to demonstrate the effectiveness of the controller.

4.1 Environment description

In order to validate the proposed approach, we have implemented a prototype of the MAPE-K controller for evaluating the scenarios and their impact. This prototype has been applied for monitoring and controlling an experimental OpenStack deployment, version Kilo, set up as a private cloud in our laboratory. Figure 12 presents the structure of our experimental deployment, which is distributed over five nodes. Each node is a physical machine with 8GB of RAM, a Core i7 processor and a 500GB disk. Two nodes are dedicated to storage (Storage Nodes), two nodes are dedicated to processing (Processing Nodes), and one acts as the OpenStack Management Node. The OpenStack Management Node contains the following OpenStack components: Swift Proxy, Nova Controller and Keystone. Swift Proxy is the component responsible for managing access to the storage service, while Nova Controller takes care of virtual machine management and Keystone deals with identity management. The MAPE-K controller is hosted in the OpenStack Management Node.

Figure 12: Architecture of our experimental deployment.

4.2. Use Case

An example of an insider attack aimed at stealing information is given below to illustrate the operation of our solution. ACME is an Information and Communication Technology company that runs a private cloud based on OpenStack, exploring the processing (Nova) and storage (Swift) services. Multiple users with different functions have access to the services offered by the cloud. The actions and privileges of each user vary according to the permissions associated with their roles. These roles are associated with users through OpenStack Keystone, and the permissions are set for each service according to its ACLs. Alice has been working for some time as a consultant on several ACME projects, and she needs full access to files stored on Swift, as well as to multiple folders within it. This is possible because her user is associated with the consultant role, which has full access to the system's files and folders. The consultancy is completed, but Alice's user remains enabled in the cloud. Days later she discovers that she still has access to

the system and starts abusing the service, indiscriminately downloading the company's current projects, since she does not know for how long this gap will remain open. This scenario characterises Alice as a malicious user. With a MAPE-K-based controller, the cloud monitors all download actions performed in it. In Alice's case, the system would detect an unusually high number of downloads in a short time, characterising abnormal behaviour. By identifying this abuse as coming from a single user (Alice in this case), the system would classify it as scenario SCE 1. Once the attack scenario is detected, the possible responses are: disable the user (DU), disable the user's role (DR), exchange the user's role (ER), or restrict the user's actions by modifying the role's permissions on the service. These different responses bring different impacts (as described in Table 3).

4.3. Experiments

The experimental part of this work consists of feasibility and performance experiments. The feasibility experiments focused on rule validation and, for that reason, considered a low number of users and requests. The second set of experiments consists of more realistic simulations, i.e., load tests with a considerable number of users and requests, in the environment described in Figure 12. Part of the feasibility experiments were performed in the private cloud deployed in our laboratory (as described in Figure 12) and another part in a simulated environment. The simulated environment is a local setup using synthesised logs, into which we inserted log entries equivalent to those created in the real environment when a user performs a cloud operation. This adaptation was necessary due to access restrictions on the real environment and the cost of test iteration, since every change would demand building a new version of the controller and redeploying it on the cloud server.
With the local environment, it is not necessary to generate a new version and deploy it remotely. Once the rules were implemented, it was possible to carry out experiments with larger user and request loads. In the second set of experiments, 100 different users were created in the OpenStack cloud, all associated with the same role and a single tenant. In order to generate the request load, the JMeter load-testing tool was used. It was configured to import a .csv file containing the credentials of the 100 users used to make requests in the

cloud. Furthermore, it was configured to perform two HTTP requests. The first is an authentication request, in which JMeter uses each user entry in the csv file to acquire an access token and saves it internally in a per-user variable. The second HTTP request attaches the token and performs an operation in the cloud, in this case the download of a file. If the request is successful it returns status 200; otherwise it returns an error status. The set of tests consisted basically of generating request loads that either did or did not violate a pre-established threshold. For these tests, it was established that a load of more than 50 download actions in less than 2 seconds would be considered abnormal behaviour. In this context, two sets of metrics were captured. The first set was related to the controller's performance and its impact when deployed on the same node as the cloud controller. The tool used to collect these measurements was htop1. The following data were obtained:

Table 5: Controller performance metrics (CPU and memory consumption)

                  Normal Behaviour         Abnormal Behaviour
Number of Users   CPU         Memory       CPU         Memory
1                 4.7%        1.7%         57%         5.2%
10                26%         3.3%         97%         10.2%
100               40.3%       5.3%         128%        9.7%

As can be seen in the table, the controller consumed less CPU and memory when users were behaving normally. The performance figures grow with the number of users, since the number of distinct entries the controller must analyse also grows. In this case, the resource consumption does not decisively affect the overall resources of the cloud. In the test that simulates an abuse situation, it is important to note that we do not analyse each user's behaviour individually: the experiment considers whether the behaviour of the set of users violates the threshold.
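The two-step flow driven by JMeter can be reproduced programmatically. The sketch below builds the Keystone v3 password-authentication body for each credential row of a CSV file; the endpoint URL and the CSV column names (username, password, project) are assumptions, and the actual network calls are omitted. In a real run, the token would be returned in the X-Subject-Token response header and attached to the download request as X-Auth-Token.

```python
import csv
import io

# Illustrative Keystone endpoint; a real deployment would use its own URL.
KEYSTONE_AUTH_URL = "http://controller:5000/v3/auth/tokens"

def build_auth_payload(username, password, project, domain="default"):
    """Keystone v3 password-authentication request body, scoped to a project."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {"name": username,
                                      "domain": {"id": domain},
                                      "password": password}},
            },
            "scope": {"project": {"name": project,
                                  "domain": {"id": domain}}},
        }
    }

def payloads_from_csv(csv_text):
    """One auth payload per credential row, mirroring the JMeter CSV data set."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [build_auth_payload(r["username"], r["password"], r["project"])
            for r in rows]
```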
We note, for example, that CPU and memory consumption increased, with CPU exceeding 100%: for 100 users we measured 128%, meaning that one full core of the cloud controller node plus 28% of a second core were in use during this test. Based on this example, it is possible to conclude that there are situations in which the resources required by the controller have a significant impact on the cloud server, potentially influencing the cloud's performance.

1 hisham.hm/htop/
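The threshold rule used in these tests (more than 50 download actions in less than 2 seconds) amounts to a sliding-window count over the download timestamps extracted from the log. A minimal sketch of such a check:

```python
from collections import deque

THRESHOLD = 50   # download actions
WINDOW = 2.0     # seconds

def is_abnormal(timestamps, threshold=THRESHOLD, window=WINDOW):
    """Return True if more than `threshold` download events fall inside any
    `window`-second interval. Timestamps are in seconds, in arrival order,
    as they would be read from the service log."""
    recent = deque()
    for t in timestamps:
        recent.append(t)
        # Drop events that have fallen out of the window ending at t.
        while recent and t - recent[0] > window:
            recent.popleft()
        if len(recent) > threshold:
            return True
    return False
```

Note that the function counts events across all users together, matching the experiment's choice of evaluating the behaviour of the set of users rather than each user individually.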

Table 6: Elapsed time for the scenarios

Number of Users   Elapsed Time   Detected Scenario
1                 0.508 s        Scenario 1
10                3.418 s        Scenario 5
100               3.378 s        Scenario 5

The second set of metrics, also collected during this experiment, relates to the elapsed time of the controller given the abnormal behaviours already described. The elapsed time is the interval between the instant at which the abnormal behaviour is detected and the instant at which the controller receives from the cloud API an HTTP 200 status indicating that the actions to mitigate the abnormal behaviour completed successfully. For each set of users we collected the elapsed time between the identification of the attack scenario and the end of the mitigation action, at which point all requests were interrupted. For the tests with 10 and 100 simultaneous users, the identified scenario was the same (Scenario 5), which is expected given the combination: N users, associated with 1 role, abusing 1 service. For 1 user, the elapsed time between the identification of the scenario and the end of the mitigation action was under 1 second, while in the other cases it was around 3 seconds. This is due to the number of requests needed to mitigate each attack. In the first case, only two requests are sent to mitigate the attack (Disable User): the first obtains a valid token, and the second disables the user, passing its ID as a parameter. In the second case, the request to obtain the token is also performed, but in order to disable the role it is first necessary to obtain its ID; thus, an additional request is made to retrieve all roles in detail. Once this information is available, the effector sends the request to disable the current role, passing its ID as a parameter. In the first case it is possible to optimise the response operation because the user ID is present in the log entry.
The role ID, however, is not registered in the log, so the effector needs to obtain it before disabling the role. The restriction of accepting only the role ID as a parameter is an OpenStack API2 decision, which justifies the flow adopted to mitigate scenario 5.

2 http://developer.openstack.org/api-ref-identity-v3.html
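The two mitigation flows can be expressed as simple request builders against the Keystone v3 API. Disabling a user is a single documented call (PATCH /v3/users/{id} with the enabled flag set to false). The base URL below is illustrative, and the role-revocation helper is an assumption about one way a deployment might neutralise a role, by removing the role grant, since the log does not carry the role ID and it must first be looked up:

```python
# Illustrative Keystone v3 endpoint; a real effector would read this
# from its configuration.
BASE = "http://controller:5000/v3"

def disable_user_request(user_id):
    """Keystone v3 call that flips a user's enabled flag to False."""
    return ("PATCH", f"{BASE}/users/{user_id}", {"user": {"enabled": False}})

def list_roles_request():
    """Needed first when only the role name is known from the policy,
    since the API identifies roles by ID."""
    return ("GET", f"{BASE}/roles", None)

def revoke_role_request(project_id, user_id, role_id):
    """One way to neutralise a role for a user: remove the role grant
    on the project (an assumption about the deployment's strategy)."""
    return ("DELETE",
            f"{BASE}/projects/{project_id}/users/{user_id}/roles/{role_id}",
            None)
```

The extra list_roles_request round trip is what accounts for the roughly 3-second elapsed time of the role-based mitigation, compared with under 1 second for disabling the user directly.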
