
(1) “Plug and Trace: A Component-Based Approach to Specify and Implement Traces”. By Rafael Ferreira Oliveira. M.Sc. Dissertation. Universidade Federal de Pernambuco. posgraduacao@cin.ufpe.br. www.cin.ufpe.br/~posgraduacao. RECIFE, AUGUST/2010.

(2) Universidade Federal de Pernambuco, Centro de Informática, Pós-graduação em Ciência da Computação. Rafael Ferreira Oliveira. “Plug and Trace: A Component-Based Approach to Specify and Implement Traces”. Trabalho apresentado ao Programa de Pós-graduação em Ciência da Computação do Centro de Informática da Universidade Federal de Pernambuco como requisito parcial para obtenção do grau de Mestre em Ciência da Computação. A M.Sc. Dissertation presented to the Federal University of Pernambuco in partial fulfillment of the requirements for the degree of M.Sc. in Computer Science. Advisor: Roberto Souto Maior de Barros. Co-Advisors: Jacques Robin and Pierre Deransart. RECIFE, AUGUST/2010.


(5) To my parents and my wife.

(6) Acknowledgements

“Do not be anxious about anything, but in everything, by prayer and petition, with thanksgiving, present your requests to God.” Philippians 4:6, Holy Bible.

I would first like to express my gratitude to God, who always stood by my side supporting me, leading me, and showing me how good it is to do everything in His presence. To all my family, especially my parents, for the solid educational foundation I received and for their zeal and encouragement throughout my Master's program. To my wife for her care and love, so important along my walk, and for her concern during my short and long absences. To my friends Arlucio Viana, Rilton Souza, Marcelo Pedro and Halley Bezerra, who not only lived with me during my stay in Recife but constantly encouraged me to keep working hard. To my classmate Marcos Aurelio for the constant and fruitful discussions on this work and for his partnership in our studies. To the professors Jacques Robin and Pierre Deransart for the cooperation and encouragement they gave me during the program. I am also particularly grateful to my advisor Roberto Souto Maior for graciously agreeing to guide me in completing this work, and for his objective guidance and support in improving all of its content. My thanks to all workmates and friends at CIn/UFPE and in Itapetinga: your contribution was indirect but, without a doubt, essential. Thank you all!

(7) Resumo

A análise de aplicações tem ganhado bastante valor comercial com o grande crescimento da heterogeneidade e da distribuição dos sistemas atuais, tanto lógica quanto fisicamente. Essa convergência de complexidade entre os ambientes de projeto, desenvolvimento e produção tem introduzido novos desafios no monitoramento, na análise e na melhoria desses sistemas. Além disso, as abordagens tradicionais têm oferecido cada vez menos valor para o gerenciamento dos atuais ecossistemas de aplicações, cada vez mais sofisticadas e distribuídas. Diante desse cenário, o projeto Plug and Trace integra duas propostas, a Meta-Teoria dos Rastros e o Desenvolvimento Baseado em Componentes, para prover uma maneira simples de embutir uma variedade de serviços de análise em qualquer tipo de aplicação. Dessa forma, nossa intenção é mudar a maneira como as ferramentas de análise são projetadas: de construir ferramentas de análise somente para aplicações específicas, para prover um framework de rastreamento independente de domínio e altamente reusável em qualquer domínio. Adicionalmente, com o intuito de fornecer aos sistemas atuais um framework com uma boa relação custo-benefício, focamos em automação usando a Engenharia Dirigida por Modelos, ou seja, fazer mais com menos, eliminando tarefas redundantes e manuais e facilitando o processo de extensão de nossa proposta sobre qualquer aplicação. Claramente essas vantagens representam uma contribuição para o domínio de Análise de Aplicações, no qual o projeto Plug and Trace simplifica o processo de conceber uma ferramenta de análise e facilita a análise de qualquer aplicação usando um framework comum.
Há também contribuições em outros domínios: no Desenvolvimento Baseado em Componentes, com a primeira proposta de componentização da Meta-Teoria dos Rastros, adornada com novos componentes genéricos de rastreamento; e, na Engenharia Dirigida por Modelos, com um framework de rastreamento baseado em quatro princípios (qualidade, consistência, produtividade e abstração), reduzindo a codificação manual e promovendo a reusabilidade de todo o framework. A fim de validar nossa proposta, apresentamos um estudo de caso que mostra como estender o framework Plug and Trace para o domínio da linguagem CHR. Palavras-chave: Análise de Aplicações, Rastro, Desenvolvimento Baseado em Componentes, Engenharia Dirigida por Modelos, CHR.

(8) Abstract

Application analysis has assumed a new business importance as the world moves increasingly towards heterogeneous and distributed systems, both logically and physically. This convergence of complexity across design, development and production environments has introduced new challenges regarding the monitoring, analysis and tuning of these systems. Furthermore, traditional approaches offer less and less value in managing today's sophisticated and distributed application ecosystems. Given these shortcomings, the Plug and Trace project integrates two proposals, Component-Based Development and the well-founded Trace Meta-Theory, to provide an easy way to embed a variety of analysis services into any kind of application. We thus envisage a change in the way application analysis tools are designed: from building analysis tools only for specific applications to providing a domain-independent trace framework that is highly reusable in any domain. Additionally, to enable a cost-effective adoption of the tracer framework in everyday systems, we focus on automation through Model-Driven Engineering, i.e., doing more with less, eliminating redundant and manual tasks and simplifying the extension of our proposal to any application. We advocate that these advantages represent a contribution to the domain of Application Analysis, in which Plug and Trace simplifies the process of conceiving analysis tools and facilitates the analysis of any application using a common tracer framework. There are also contributions in other domains: in Component-Based Development, by providing the first proposal applying the Trace Meta-Theory to component-based development, with generic components for tracing; and, regarding Model-Driven Engineering, a tracer framework based on four principles (quality, consistency, productivity and abstraction), reducing hand coding and promoting the reusability of the entire framework.
In order to validate our proposal, we present a case study showing how to extend the Plug and Trace framework to the domain of the CHR language. Keywords: Application Analysis, Trace, Component-Based Development, Model-Driven Engineering, CHR.

(9) Contents

List of Figures
Acronyms
1 Introduction
   1.1 Plug and Trace: Goals and Design Principles
   1.2 Scope of the Dissertation
   1.3 Envisioned Contributions
      1.3.1 Contributions to Application Analysis
      1.3.2 Contributions to CBD and MDE
      1.3.3 Contributions to Rule-Based Automated Reasoning
   1.4 Outline of the Dissertation
2 Software Engineering Background
   2.1 Component-Based Software Development
      2.1.1 Fundamental Changes From Traditional Software Development
      2.1.2 Software Components Specification
      2.1.3 Component-Based Development Process
         Building systems from components
         Building reusable components
   2.2 Model-Driven Engineering
      2.2.1 MDE Languages
         MOF
         UML
         OCL
      2.2.2 Model Transformations
   2.3 Chapter Remarks
3 Trace Meta-Theory
   3.1 Towards reusability and extensibility
   3.2 Generic Trace
      3.2.1 Generic Full Trace of a Family of Applications
      3.2.2 Generic Full Trace of an Application
   3.3 Querying Trace Events
   3.4 Chapter Remarks
4 The Plug and Trace Project
   4.1 Goal and Design Principles
   4.2 The Top-Level Plug and Trace Component
      4.2.1 Trace Receiver component
      4.2.2 Trace Driver component
      4.2.3 Trace Analyzer component
   4.3 Trace Event
   4.4 Generic Trace Schema
   4.5 The Plug and Trace Process
   4.6 Chapter Remarks
5 Extending Plug and Trace to CHR
   5.1 Case Study: A Debugging Tool for CHR
      5.1.1 Understanding the context: CHR by example
         Operational Semantics
      5.1.2 Modeling the trace events: ωt
      5.1.3 Instrumenting: A Debugging Tool for Eclipse Prolog
      5.1.4 Configuring the Plug and Trace framework: Connecting all pieces
      5.1.5 Evaluating the Analysis
   5.2 Chapter Remarks
6 Related Work
   6.1 Eclipse TPTP
      6.1.1 Strengths
      6.1.2 Weaknesses
   6.2 dynaTrace
      6.2.1 Strengths
      6.2.2 Weaknesses
   6.3 TAU Performance System
      6.3.1 Strengths
      6.3.2 Weaknesses
   6.4 InfraRED
      6.4.1 Strengths
      6.4.2 Weaknesses
   6.5 Chapter Remarks
7 Conclusion
   7.1 Contributions
      7.1.1 Contributions to Application Analysis
      7.1.2 Other related contributions
   7.2 Limitations and Future Work
Bibliography

(12) List of Figures

1.1 Increase and heterogeneity of the application architectures
2.1 CBD process as a combination of several parallel processes
2.2 Basic component specification concepts
2.3 The 4-level architecture of MDA
2.4 EMOF and CMOF package architecture
2.5 MOF Example
2.6 Simplified UML metamodel
2.7 Association between operations and OCL expressions
2.8 Class Invariants
2.9 Derived Attributes
2.10 OCL pre and post condition example
3.1 Virtual and Actual Trace
3.2 Roles in the TMT
3.3 Generic and specific traces
3.4 A unique abstract model for several observed processes
4.1 The Top-Level Plug and Trace Component
4.2 The Plug and Trace workflow
4.3 The Trace Receiver acts as a listener to trace events
4.4 The Trace Receiver component
4.5 The Trace Driver component
4.6 The Trace Analyzer component
4.7 An example of a trace event model
4.8 The Plug and Trace Process
4.9 Artifacts involved in the Plug and Trace instrumentation
5.1 Solver strictly connected to the debugging tool
5.2 ωt model
5.3 Visualizing a CHR execution
5.4 Running the CHR debugging tool
6.1 TPTP Project Architecture
6.2 dynaTrace Architecture
6.3 TAU Architecture
6.4 InfraRed Architecture

(14) Acronyms

CBD Component-Based Development
CBSE Component-Based Software Engineering
CHR Constraint Handling Rules
CIn Centro de Informática
TPTP Test & Performance Tools Platform Top-Level Project
FACEPE Fundação de Amparo à Ciência e Tecnologia do Estado de Pernambuco
GUI Graphical User Interface
INRIA Institut National de Recherche en Informatique et Automatique
IT Information Technology
MDA Model-Driven Architecture
MDE Model-Driven Engineering
OMG Object Management Group
OOD Object-Oriented Development
PIM Platform-Independent Model
PSM Platform-Specific Model
TMT Trace Meta-Theory
UFPE Universidade Federal de Pernambuco
UML Unified Modeling Language

(15) 1 Introduction

Modern application architectures are more complex than ever. Applications themselves have become ever more heterogeneous and distributed, both logically and physically. Globally distributed n-tier applications are more and more common. Service-oriented environments with components built by third parties, both commercial and open-source, are commonplace. Figure 1.1 illustrates this evolution: most enterprises today are integrating several types of technologies into their IT infrastructures. While this convergence of complexity across development, architecture and production has kept increasing over time, it has also introduced new challenges with regard to the monitoring, diagnosis and tuning of these complex application ecosystems. Furthermore, traditional analysis approaches offer less and less value in managing the performance, scalability and stability of today's sophisticated, distributed applications. A new generation of application analysis approaches is required. Below are the key limitations of these traditional analysis tools given today's application reality.

• No integrated lifecycle approach: in nearly all cases, traditional application analysis vendors amassed a cadre of tools that could be used for various tasks by different stakeholders throughout the application development lifecycle. Unfortunately, these tools have rarely been well integrated, forcing architects, developers and performance specialists to use human cycles and guess-work to correlate findings among themselves.

• No conditional trace: in most cases, it is very difficult to produce traces of applications that are both simple and complete. Simple in the sense of producing minimal information, to reduce and improve traffic data; and complete in the sense of showing everything the observer wants to see. Due to this fact, it is necessary

(16) Figure 1.1 Increase and heterogeneity of the application architectures. Source: The Application Performance Management Imperative, Forrester Research, 2009.

to add the concept of querying, where the observer can request just what he wants to see.

• Difficult to integrate into existing environments: no environment is homogeneous, with uniform hardware, development processes between teams, and application architectures. Therefore, analysis tools must be easily integrated with pre-existing systems, self-managed with complementary automation interfaces to fit existing processes, and highly extensible to adapt to future needs.

• Static applications only: as applications have become increasingly complex and dynamic, architects can no longer predict the exact runtime behavior of their applications. They know what their applications are supposed to do, but no one really knows how they actually behave and how transactions are really being processed under load. This is partly due to the increase in services being used and the widely distributed nature of today's multi-tiered applications. Dynamic code executes under load only, and the behavior of third-party code and frameworks is often impossible to determine even when the application is live.

Taken together, these limitations of traditional application analysis approaches, especially in light of the accelerating application complexity we are encountering, are driving the

(17) urgency for a new application analysis approach. This new approach must take into consideration the limitations described above and must anticipate future requirements. Our proposal changes the focus from building application monitoring tools only for specific technologies to providing a generic tracer framework that specifies and realizes services for the analysis of any kind of application. This is possible because we focus on more abstract artifacts, using a domain-independent and component-based approach. Our long-term goal is to provide the means for developing and deploying model-driven tracer components that support a cost-effective adoption of monitoring techniques in everyday systems. Our specific goal is to develop the kernel of the bottom component of the tracer framework, Plug and Trace. It will be the first domain-independent tracer framework: a model-driven, component-based and highly reusable debugging tool. Our work is to define the top-level architecture of Plug and Trace as well as three of its main sub-components: the trace receiver, to get and adapt the received trace events; the trace driver, the core of our framework; and the trace analyzer, the element to visualize and monitor traces. Finally, Plug and Trace is going to be the most reused component for the deployment of more advanced application analysis services. Today, Plug and Trace is already being reused i) to integrate any kind of application, ii) to design several GUIs to easily analyze and monitor applications on the fly, and iii) to implement a debugging tool for CHR [Sneyrs et al. (2003)]. For the future, we expect to achieve grid scalability by porting Plug and Trace to cloud computing platforms, such as Google App Engine¹, and by incorporating new built-in application analysis services, such as Application Performance Management [Khanna et al. (2006)], Transaction Performance Management [Gao et al. (2004)], End User Experience Management [Croll and Power (2009)], and Performance Management for Cloud-Hosted Applications [Vecchiola et al. (2009)].

1.1 Plug and Trace: Goals and Design Principles

The Plug and Trace project provides the first domain-independent tracer framework and a reusable debugging framework. It aims to provide services and artifacts to embed extraction and analysis services into any application. Each component of this framework can be used either as stand-alone software or assembled with others to provide a variety of analysis services that can be integrated in the most diverse domains. Furthermore, we will describe the process to build a tracer framework using our proposal. In order to provide a formally founded architecture, we use the Trace Meta-Theory (TMT), an approach that focuses particularly on providing semantics to tracers and to the produced traces. Our goal in this project is to provide the following deliveries:

• A platform-independent model that specifies generic trace components;

• A generic trace schema. This generic trace will also enable any debugging tool to be defined independently from its application and, conversely, tracers to be built independently from these tools;

• GUI components to interactively submit queries and inspect solution explanations at various levels of detail, with patterns to specify what trace information is needed.

To fulfill these requirements, the Plug and Trace architecture is based on the following principles:

1. To integrate different domains into a unique environment, called Plug and Trace;

2. To combine component-based development and model-driven architecture to produce reusable artifacts and a domain-independent framework;

3. To go one extra step towards an easy tool for application analysis;

4. Automation, i.e., to do more with less, eliminating redundant and manual tasks.

The application we are going to use as a case study is ECLiPSe Prolog³, a CHR⁴ solver. Versatility is the main reason that motivates our choice of rule-based constraint programming (and in particular CHR) to validate and test all steps of our framework. Talking specifically about CHR, it has matured over the last decade into a powerful and elegant general-purpose language with a wide spectrum of application domains [Sneyrs et al. (2003)]. The Plug and Trace project is the result of the cooperation between CIn/UFPE and INRIA, and was co-financed in 2008-2009 by FACEPE and INRIA.

¹ Google App Engine is a platform for developing and hosting web applications in Google-managed data centers. It is a cloud computing technology and virtualizes applications across multiple servers and data centers.
³ ECLiPSe Prolog is an open-source software system for the cost-effective development and deployment of constraint programming applications.
⁴ Constraint Handling Rules (CHR) is a declarative programming language.
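To make the receiver/driver/analyzer chain described in this chapter more concrete, here is a minimal sketch in Java. All type names and method shapes are illustrative assumptions of mine, not the dissertation's actual platform-independent model; it only shows how the three roles compose.

```java
// Illustrative sketch (not the actual Plug and Trace PIM): the three
// sub-components as plain Java types wired into a dispatch chain.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PlugAndTraceSketch {

    // A minimal trace event: a port (kind of action) plus an opaque payload.
    static class TraceEvent {
        final String port, data;
        TraceEvent(String port, String data) { this.port = port; this.data = data; }
    }

    // Trace Receiver: listens for raw events and adapts them to TraceEvent.
    interface TraceReceiver { TraceEvent adapt(String rawLine); }

    // Trace Driver: the core; routes adapted events to registered analyzers.
    static class TraceDriver {
        private final List<Consumer<TraceEvent>> analyzers = new ArrayList<>();
        void register(Consumer<TraceEvent> analyzer) { analyzers.add(analyzer); }
        void dispatch(TraceEvent e) { analyzers.forEach(a -> a.accept(e)); }
    }

    public static List<String> demo() {
        // Receiver that parses "port:data" lines produced by an observed process.
        TraceReceiver receiver = raw -> {
            String[] parts = raw.split(":", 2);
            return new TraceEvent(parts[0], parts[1]);
        };
        TraceDriver driver = new TraceDriver();
        List<String> shown = new ArrayList<>();
        // Trace Analyzer: here, a trivial "visualizer" collecting formatted events.
        driver.register(e -> shown.add(e.port + " -> " + e.data));
        for (String raw : new String[]{"call:fib(3)", "exit:2"}) {
            driver.dispatch(receiver.adapt(raw));
        }
        return shown;
    }

    public static void main(String[] args) { System.out.println(demo()); }
}
```

Because each role is behind an interface, any of the three pieces can be replaced independently, which is the component-based substitutability the framework relies on.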

(19) 1.2 Scope of the Dissertation

From the software engineering point of view, the core of our research is to investigate how the most recent advances in reusable Component-Based Software Engineering (CBSE) can be leveraged to build a versatile tracer framework that fulfills today's needs of application analysis. Regarding the Trace Meta-Theory, the main Plug and Trace architectural design principles are based on Deransart's theory; we reuse his principles and the roles necessary to specify tracers and traces. This project is, thus, an effort to harvest the benefits of the model-driven, component-based approach and the flexibility of an enhanced application analysis to provide reusable and extensible components. The design decisions of this dissertation involve the following topics:

• Precise metamodeling in UML 2.0 and OCL 2.0 of all computational languages used in Plug and Trace;

• Modeling of all Plug and Trace extensions using UML 2.0 and OCL 2.0;

• Specifying automated transformations between models, and between models and executable platforms, using the MOFScript language⁵;

• A component-based, model-driven approach to design the artifacts of the Plug and Trace project.

In the Application Analysis realm, the scope of our thesis includes:

• Mainly, developing a complete architecture to trace any application;

• Developing a generic trace schema to promote reusability of trace events; furthermore,

• Incorporating MDE to facilitate the modeling and generation of tracer structures.

⁵ MOFScript is a tool for model-to-text transformation, e.g., to support generation of implementation code or documentation from models.
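The MOFScript transformations themselves are not reproduced here. As a language-neutral illustration of the model-to-text idea they embody, the following hand-rolled Java sketch maps a trace-event model element (a name plus typed fields, an assumed model shape) to Java source text:

```java
// Hedged illustration of model-to-text transformation: a trace-event model
// element is turned into Java source. The model shape is an assumption.
import java.util.LinkedHashMap;
import java.util.Map;

public class EventModelToJava {

    public static String generate(String eventName, Map<String, String> fields) {
        StringBuilder src = new StringBuilder("class " + eventName + "Event {\n");
        // One field per model attribute, preserving declaration order.
        fields.forEach((name, type) -> src.append("  " + type + " " + name + ";\n"));
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("chrono", "long");
        fields.put("rule", "String");
        System.out.print(generate("Apply", fields));
    }
}
```

A real MOFScript transformation plays the role of `generate` here, but is driven by the metamodel rather than an ad hoc map, which is what makes the generation reusable across domains.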

(20) 1.3 Envisioned Contributions

Our work combines recent developments in three areas that traditionally do not interact very often. However, we believe that these techniques may contribute a great deal to each other, bringing meaningful mutual benefits. Firstly, application analysis is driving the urgency for monitoring tools that can be easily integrated with pre-existing systems, are self-managed with complementary automation interfaces to fit pre-existing processes, and are highly extensible to adapt to future needs. Secondly, component-based development, towards reusability and extensibility of the whole framework. Finally, a domain-independent approach, using model-driven engineering, to make the adaptation to any kind of domain easy. The following subsections summarize the contributions of our work in sub-fields of these areas.

1.3.1 Contributions to Application Analysis

Our intention is not to propose yet another tool, but to redefine the way applications should be built, analyzed and managed in production, by supporting analysis across the entire lifecycle and providing unprecedented insight into even the most complex applications.

1.3.2 Contributions to CBD and MDE

Although not yet widely adopted by industry, the Model-Driven Engineering (MDE) vision led by the OMG has already spawned a set of standards based on semi-formal, tightly-coupled artifacts that support the software development process with great flexibility. In particular, the pervasive UML is the most fundamental element, aggregating many facets of the vision. Basically, MDE proposes to raise the level of abstraction of the software process, prescribing that application development starts with a Platform-Independent Model (PIM), which is then transformed, manually or automatically, into other models, called Platform-Specific Models (PSM), until eventually reaching executable artifacts such as source and deployed code.
Given the above, our main contributions are: for model transformations, specifying rules for mapping trace events into executable Java code using the MOFScript language.

(21) For component-based development (CBD), providing a case study for specifying and realizing a trace framework by means of assembling components. For MDE, demonstrating its feasibility by building the first tracer framework using a model-driven approach.

1.3.3 Contributions to Rule-Based Automated Reasoning

The study of rule-based automated reasoning is not the main focus of this project but, since we have chosen this domain to validate the whole project, we intend to provide debugging tools, generic trace schemas and services for reasoning explanation facilities.

1.4 Outline of the Dissertation

The rest of this dissertation is organized in six further chapters. In Chapter 2 we provide a summary of the software engineering background used for the development of our application. Firstly, we give an overview of CBD, showing its foundations and principles, detailing the fundamental changes from traditional software development to CBD and describing how to specify a component. Then we proceed by briefly overviewing Model-Driven Engineering (MDE): its goals, principles and vision. We follow with a presentation of model transformation, and end the chapter with remarks focusing on our project. In Chapter 3 we discuss the Trace Meta-Theory (TMT). We start by stating the current difficulties in analyzing modern applications with traditional approaches of application analysis. Then we present the TMT as the basis of the entire framework that we will provide. We follow by discussing generic traces, our key ideas to promote their use, and how to query traces. Finally, we explain, in some remarks, how we will use this theory as the basis of our project. In Chapter 4 we present the overall architecture of Plug and Trace, its design principles and its complete PIM. Firstly, we detail the top-level Plug and Trace component. Then we proceed by explaining each sub-component involved in the framework. We follow by showing how to specify generic trace schemas that allow any application to be analyzed, and we discuss trace events, showing their structure and how MDE can improve their utilization. We end the chapter presenting some relevant points discussed. In Chapter 5 we present the way of extending our tracer framework to a given domain. We discuss which components should be reused and extended by creating a simple

(22) debugging tool for Constraint Handling Rules (CHR), a rule-based language. In Chapter 6 we present related work on application analysis, highlighting the differences from our method and showing some ideas and techniques used in this work that were studied and derived from these related works. Finally, in Chapter 7 we conclude this thesis, summarizing our contributions, pointing out current limitations and proposing future developments.

(23) 2 Software Engineering Background

In this chapter we describe the key software engineering technologies we use to develop the components of our proposed application. In particular, we present the ideas of component-based development and model-driven engineering, a set of principles and technologies that provide the structural basis of this thesis.

2.1 Component-Based Software Development

A software component encapsulates a set of basic functionalities whose need recurs in diverse applications. It contains metadata that specifies how to assemble these functionalities with those encapsulated in other components to build more complex functionalities through assembly. According to [Eriksson (2004)], “a component is a self-contained unit that encapsulates the state and behavior of a set of classifiers”. All the contents of a component, including its sub-components, are private; its services are available through provided and required interfaces. The key feature of CBSD is its ability to promote the reuse of software components. This is possible through full encapsulation and the separation of interfaces from implementation; furthermore, this separation of concerns enables a component to be a substitutable unit that can be replaced at design time or run-time by another component that offers equivalent functionality. In an assembly, a given component may act as both a server to some component and a client to another. The assembly structural metadata of a component includes its provided interfaces: the operations that are made available by connecting to the server ports of the component. It may also include required interfaces: the operations that the component expects to be available in the deployment environment through connections to its client ports.
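The provided/required-interface idea above can be sketched in Java, where a component exposes a provided port and declares a required port that the deployment environment wires in at assembly time. All names (`Logger`, `TraceSink`, `TraceBuffer`) are invented here purely for illustration:

```java
// Sketch of provided vs. required interfaces on a component.
public class ComponentPorts {

    interface Logger { String log(String msg); }          // required interface
    interface TraceSink { String receive(String event); } // provided interface

    static class TraceBuffer implements TraceSink {
        private final Logger logger; // client port, wired at assembly time

        TraceBuffer(Logger logger) { this.logger = logger; }

        @Override public String receive(String event) {
            // The component only sees the Logger contract, never its
            // implementation, so any contract-respecting Logger substitutes.
            return logger.log("event=" + event);
        }
    }

    public static String demo() {
        Logger console = msg -> "[logged] " + msg;  // one possible server component
        TraceSink sink = new TraceBuffer(console);  // assembly: connect the ports
        return sink.receive("call");
    }

    public static void main(String[] args) { System.out.println(demo()); }
}
```

The constructor argument plays the role of the client port: swapping in a different `Logger` changes the server component without touching `TraceBuffer`, which is exactly the substitutability the paragraph describes.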

2.1. COMPONENT-BASED SOFTWARE DEVELOPMENT. A component may also include assembly behavioral meta-data that describes the pre- and post-conditions of the operations provided and required at its ports in terms of its states and the states of its clients and servers in the assembly [Robin and Vitorino (2006)]. Such meta-data allows defining a contract between a client-server component pair. Such design by contract permits black-box reuse, which is ideal for leveraging third-party software and more cost-effective than the white-box reuse by inheritance in object-oriented frameworks. A component can be substituted at any time by another one that is internally different but respects the same contracts at its ports, without affecting the rest of the software.
2.1.1 Fundamental Changes From Traditional Software Development
Mature software development, in general, follows a well-defined process model. Considering that CBD is one of the many possible approaches to software development, it is worth discussing whether or not a generic software development process model is well suited for CBD. Several authors have argued against using a traditional process model in CBD, as described in the next paragraphs. Ning discusses several aspects of development which are common in CBD and require special attention when defining a suitable process model for CBD [Ning (1996)]. He contrasts CBD with object-oriented development (OOD), where typical development models such as the waterfall model [Royce (1970)] encourage opportunistic forms of reuse, rather than systematic approaches to it. In such process models, reuse is not regarded as a "first class activity", and it is up to the designers and developers to recognize opportunities for reuse. The lack of a "de facto" standard definition for components adds to the lack of systematic reuse by making the identification of potential reuse artifacts harder.
Aoyama has found several potential approaches to facilitate reuse in OOD, including software architectures, design patterns, and frameworks [Aoyama (1998)]. All these approaches to reuse are set during development or maintenance. An important contrast between OO reuse and component reuse is that components may have to be composed at run-time, without further compilation, using a plug and play mechanism. This requires components to be viewed as black boxes, accessible through their interfaces, and fosters the definition of architectures for which the components are developed, including the standards for connecting components in those architectures. Crnkovic et al. add to the discussion the existence of several kinds of CBD, including architecture-driven CBD and product-line CBD, and argue for the adoption of a process

model tailored for each of these varieties of CBD [Crnkovic et al. (2006)]. The model presented in Figure 2.1 illustrates how the CBD process model can be regarded as a combination of several processes that occur in parallel. With some adaptations, Crnkovic et al. define variations of this model to support architecture-driven and product-line CBD.
Figure 2.1 CBD process as a combination of several parallel processes.
A point common to these three studies [Ning (1996), Aoyama (1998), Crnkovic et al. (2006)] is the adoption of a modified version of some existing well-known process model, with a shift of focus in some activities and the introduction of parallel process flows for each of the participating organizations. It is also worth noticing the introduction of a third process, component assessment, which can be carried out by an organization independent of both the component developers and the component users.
2.1.2 Software Components Specification
UML, the de facto industry standard in object-oriented modeling, has great potential for component-based systems. Figure 2.2 depicts the basic concepts concerning component specification using a simplified UML metamodel, adapted from [Lüders et al. (2002)].

Figure 2.2 Basic component specification concepts.
A component exposes its functionalities by providing one or more access points. An access point is specified as an interface. A component may provide more than one interface, each interface corresponding to a different access point. An interface is specified as a collection of operations. It does not provide the implementation of any of those operations. Depending on the interface specification technique, the interface may include descriptions of the semantics of the operations it provides, with different degrees of formality. The separation between interface and internal implementation allows the implementation to change while maintaining the interface unchanged. It follows that the implementation of components may evolve without breaking the compatibility of software using those components, as long as the interfaces and their behavior, as perceived by the component user, are kept unchanged with respect to an interaction model. A common example is to improve the efficiency of the implementation of the component, without breaking its interfaces. As long as that improvement has no negative effect on the interaction model between the component and the component clients, and the component's functionality remains unchanged, the component can be replaced by the new version.
2.1.3 Component-Based Development Process
A CBD process includes all activities of a product or a system with components during its entire life, from the business idea for its development, through its usage and its completion of use.
Building systems from components
The general idea of the component-based approach is building systems from pre-defined components [Crnkovic et al. (2006)]. This assumption has several consequences for the system lifecycle. First, the development processes of component-based systems are separated from the development processes of the components; the components should have already been developed and possibly used in other products when the system development process starts. Second, a new separate process will appear: finding and evaluating the components. Third, the activities in the processes will be different from the activities in a non-component-based approach; for the system development the emphasis will be on finding the proper components and verifying them, and for the component development, design for reuse will be the main concern. System development with components is focused on the identification of reusable entities and relations between them, beginning from the system requirements and from the availability of already existing components [Goulão (2005)]. Much implementation effort in system development will no longer be necessary, but the effort required for dealing with components (locating them, selecting those most appropriate, testing them, etc.) will increase.
Building reusable components
The process of building components can follow an arbitrary development process model. However, any model will require certain modifications to achieve the goals; in addition to the demands on the component functionality, a component is built to be reused. Reusability implies generality and flexibility, and these requirements may significantly change the component characteristics. For example, there might be a requirement for portability, and this requirement could imply a specific implementation solution (like the choice of programming language, the implementation of an intermediate level of services, programming style, etc.). The generality requirements often imply more functionality and require more design and development effort and more qualified developers. The component development will require more effort in the testing and specification of the components.
The components should be tested in isolation, but also in different configurations. Finally, the documentation and the delivery will require more effort, since extended documentation is very important for increasing the understanding of the component.
2.2 Model-Driven Engineering
The term Model-Driven Engineering (MDE) is typically used to describe software development approaches in which abstract models of software systems are created and systematically transformed to concrete implementations [France and Rumpe (2007)]. MDE

combines process and analysis with architecture [Kent (2002)]. Higher-level models are transformed into lower-level models until the model can be made executable using either code generation or model interpretation. The best known MDE initiative is the Object Management Group (OMG) initiative Model-Driven Architecture (MDA), started in 1997 (http://www.omg.org/mda/). Model-Driven Architecture (MDA) provides a framework for software development that uses models to describe the system to be built [Mellor et al. (2002)]. MDA provides an approach in which systems are specified independently of the platform that supports them. The three primary goals of MDA are portability, interoperability and reusability through architectural separation of concerns [Miller et al. (2003)]. In the following we address some related principles and basic concepts needed to understand the MDE proposal:
• Reusable assets: the most valuable, durable, reusable assets produced during development are not code but models.
• Improving design and code: the most significant and cost-effective quality gains are achievable by improving design and models rather than by improving code.
• Extensibility: the benefits of careful, detailed, explicit modeling are not limited to the application under development but extend to all the processes, artifacts, languages, tools and platforms used for this development.
• Software process automation: a high degree of automation can be achieved by building a variety of models, each one with a different role in the process; by making each of these models machine-processable, expressing them in a semiformal notation devoid of natural language; by defining this notation itself as an object-oriented model; and by using model transformations to generate the target models from these source models.
To realize the MDA vision, a modeling language such as UML is not enough. It is also important to express the links among models (traceability) and transformations. It requires accessing elements not only at the model level but also at the modeling formalization level. A metaformalism is a language to define the constructors of another language as well as their structural relationships, such as composition and generalization. It thus defines an abstract syntax of a language, called a metamodel, that ignores the

ordering constraint among the constructors. A metamodel plays the role of a grammar, but at a more abstract level. MDE defines three levels of abstraction regarding model formalisms, plus the object level (Figure 2.3): the model, the formalism for modeling (metamodel) and the metaformalism. MOF (Meta-Object Facility) [OMGa, 2006] is the OMG choice to express the modeling formalisms, or metamodels, which in turn express the models. MOF expresses itself as a metametamodel. MOF reuses, at another level and for another purpose, the UML class diagram. Whereas in UML these diagrams at level M1 are used to model the application, MOF uses these diagrams at levels M2 and M3 to model languages. MOF allows an OO visual representation of computational language grammars. Furthermore, it extends the UML class diagram with a reflective API.
Figure 2.3 The 4-level architecture of MDA.
2.2.1 MDE Languages
An MDE approach must specify the modeling languages, models, translations between models and languages, and the process used to coordinate the construction and evolution of the models [Kent (2002)]. In the next section, we will briefly describe three standards: UML2, a modeling language; OCL2, a language to specify constraints on UML models;

and MOF, a standard to represent and manipulate metamodels.
MOF
MOF (Meta-Object Facility) [OMGa, 2006] is the OMG choice to express metamodels, which in turn express the models. MOF reuses the structural core of UML, a mature, well-known and well-tooled language. The main benefits of MOF over traditional formalisms such as grammars to define languages are: abstract instead of concrete syntax (more synthetic); visual notation instead of textual notation (clarity); graph-based instead of tree-based (abstracts from any reader order); entities (classes) can have behavior (grammar symbols do not); relations between elements include generalization and undirected associations instead of only composition and order; and specification reuse through inheritance. Figure 2.4 shows the package architecture of MOF. EMOF stands for essential MOF and is a subset of the complete MOF (CMOF) that closely corresponds to the facilities provided by most OO programming languages. A primary goal of EMOF is to allow simple metamodels to be defined using simple concepts while supporting extensions (by the usual class extension mechanism in MOF) for more sophisticated meta-modeling using CMOF.
Figure 2.4 EMOF and CMOF package architecture.
In essence, the Basic package contains the Core constructs except for associations, which appear only in Constructs. The Reflection package allows the discovery and manipulation of meta-objects and metadata. The Identifiers package provides an extension

for uniquely identifying metamodel objects without relying on model data that may be subject to change, and the Extension package supports a simple means for extending model elements with name/value pairs. Figure 2.5 shows an example of the metamodel for the Use Case diagram, one of the many UML diagrams. The diagram contains three metaclasses: Actor, System and UseCase. The metaclass Actor has a meta-attribute name of type String, the metaclass UseCase has a meta-attribute title of type String, and the metaclass System has a meta-attribute name of type String. There is a recursive meta-association which inherits from the metaclass Actor. There are also two more recursive meta-associations on the metaclass UseCase, namely extends and includes. Finally, there is an aggregation meta-association between the metaclasses System and UseCase.
Figure 2.5 MOF Example.
UML
The Unified Modeling Language (UML) [Rumbaugh et al. (2004)] is a graphical modeling language standardized by the OMG whose objective is to describe software systems, business processes and similar artifacts. It integrates most constructs from the object-oriented, imperative, distributed and concurrent programming paradigms. In Figure 2.6, we show a simplified metamodel of UML. We are only going to focus on the constructors used to represent Class Diagrams. They represent the structure and interfaces provided and required by the objects on the running system, without putting too much emphasis on the representation of complicated execution flows.
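As an illustration, the use-case metamodel of Figure 2.5 could be encoded as follows. This is a sketch of our own, using plain Python classes in place of MOF metaclasses; the metaclass and meta-attribute names follow the figure description, everything else is assumed:

```python
"""Sketch: the use-case metamodel (M2) encoded as Python classes, and an
M1 model built by instantiating those metaclasses."""
from dataclasses import dataclass, field


@dataclass
class Actor:
    name: str                                       # meta-attribute name : String


@dataclass
class UseCase:
    title: str                                      # meta-attribute title : String
    extends: list = field(default_factory=list)     # recursive meta-association
    includes: list = field(default_factory=list)    # recursive meta-association


@dataclass
class System:
    name: str
    use_cases: list = field(default_factory=list)   # aggregation System -- UseCase


# An M1 model instantiating the M2 metaclasses above:
login = UseCase(title="Log in")
withdraw = UseCase(title="Withdraw cash", includes=[login])
atm = System(name="ATM", use_cases=[login, withdraw])
customer = Actor(name="Customer")
```

The point of the encoding is the level shift: the classes play the role of metaclasses (level M2), while the objects built at the bottom are the model (level M1).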

Figure 2.6 Simplified UML metamodel.
In UML, we have the concept of InstanceSpecification, which allows the modeler to include an abstract vision of how the instances of the classes in the model are going to be organized at runtime. We also added the concept of Constraint, which annotates elements in the diagram and enriches its semantics. Constraints are often used to simplify the graphical rendering of the model by utilizing a more expressive language. In Figure 2.6, we also show the attribute isDerived of the meta-class Property, such that when it is true it indicates that the value of the attribute can be computed from the values of other attributes (and thus does not need to be stored). The default association in the meta-class Property defines the default value for an attribute, which is the value to be associated with an attribute in case no value is defined by the model.
OCL
The Object Constraint Language (OCL) makes it possible to adorn UML and MOF diagrams with constraints and make them far more semantically precise and detailed. In general, a constraint is defined as a restriction on one or more values of (part of) an object-oriented model or system [Warmer and Kleppe (1998)]. The main purpose of OCL is to augment a model with additional information that often cannot be expressed appropriately (if at all) in UML. This information is given by constraints, which in general are easier to specify in a textual notation than in a graphic-oriented language. UML modelers can use OCL to specify:
• Arbitrarily complex structural constraints among potentially distant elements of an application UML structural diagram or language metamodel; for this purpose OCL has the expressive power of first-order logic, and allows specifying class invariants

and derived attributes and associations;
• Arbitrarily complex algorithms that combine behavior of class operations or message passing; for this purpose, OCL is Turing-complete and allows specifying operation pre-conditions, read-only operation bodies, and read-write operation post-conditions.
Figure 2.7 shows the association between operations and OCL expressions. There are three kinds of constraints that might be associated with Operations: pre-conditions, post-conditions and body (for query-only operations).
Figure 2.7 Association between operations and OCL expressions.
OCL allows the specification of invariant conditions that must hold for the system being modeled. For example, it is possible to specify in OCL that the attribute balance of a class BankAccount cannot store negative values. This can be accomplished (Figure 2.8) using a simple constraint on both the class and the specific attribute; below we show it in OCL concrete syntax.
Figure 2.8 Class Invariants.
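The OCL concrete syntax announced above appears only inside Figure 2.8 in the original rendering; a plausible reconstruction of the invariant (the class and attribute names are taken from the text, the exact expression is our assumption) is:

```ocl
context BankAccount
inv: self.balance >= 0
```

The `context` clause names the classifier the constraint applies to, and the `inv` expression must evaluate to true for every instance of `BankAccount`.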

OCL allows developers to specify attributes or associations whose instances can be derived from those of others in the model. For example, Figure 2.9 shows a simple model adorned with an OCL expression to derive attributes for the class Customer: the OCL expression derives the value of the attribute golden by checking whether the customer has an account whose balance is greater than a given amount of money. This construction allows using OCL as a business rule specification language over business domains modeled as UML class diagrams.
Figure 2.9 Derived Attributes.
OCL pre-conditions may accompany an operation to detail the requirements for executing that operation, i.e. a pre-condition is a Boolean expression that must be true prior to the operation's execution. OCL post-conditions express the state of the system after the operation is executed, including changes to objects. The diagram below gives a simple example of a pre-condition and post-condition: the withdrawing operation is only allowed if the balance is greater than or equal to the required amount, and the new balance will be the previous value minus the amount withdrawn.
Figure 2.10 OCL pre and post condition example.
2.2.2 Model Transformations
Model transformation is the process of converting one model to another model of the same system [Judson et al. (2003)]. Because many aspects of a system might be of interest, various modeling concepts and notations can be used to highlight one or more particular perspectives, or views, of that system, depending on what is relevant at any point in time. Furthermore, in some instances, it is possible to augment the models with hints, or rules, that assist in transforming them from one representation to another. It is

often necessary to convert between different views of the system at an equivalent level of abstraction (e.g., from a structural view to a behavioral view), and a model transformation facilitates this. In other cases, a transformation converts models offering a particular perspective from one level of abstraction to another, usually from a more abstract to a less abstract view, by adding more detail supplied by the transformation rules. MDA practitioners recognize that transformations can be applied to abstract descriptions of aspects of a system to add detail [Brown (2004)], to make the description more concrete, or to convert between representations. Distinguishing among different kinds of models allows us to think of software and system development as a series of refinements between different model representations. These models and their refinements are a critical part of the development methodology in situations that include (i) refinements between models representing different aspects of the system, (ii) addition of further details to a model, or (iii) conversion between different kinds of models. Underlying these model representations, and supporting the transformations, is a set of metamodels. The ability to analyze, automate, and transform models requires a clear, unambiguous way to describe the semantics of the models. Hence, the models intrinsic to a modeling approach must themselves be described in a model, which we call a metamodel. For example, the static semantics and notation of the UML are described in metamodels that tool vendors use for implementing the UML in a standard way. The UML metamodel describes in precise detail the meaning of a class, an attribute, and the relationships between these two concepts [Brown et al. (2005)].
The OMG recognizes the importance of metamodels and formal semantics for modeling, and it has defined a set of metamodeling levels as well as a standard language for expressing metamodels: the Meta Object Facility (MOF). A metamodel uses MOF to formally define the abstract syntax of a set of modeling constructs.
2.3 Chapter Remarks
This chapter presented CBD and MDE, showing the principles and languages that establish the basis of our tracer framework. In order to promote reusable and extensible tracer artifacts, we will adopt CBD together with MDE, which provide full encapsulation and separation of concerns over the entire range of development stages, from requirements to modeling, implementation, testing, quality assurance and maintenance.

3 Trace Meta-Theory
In this chapter we present the Trace Meta-Theory, which sets the foundation of our entire tracer framework. We address its principles and the roles needed to specify tracers and traces. Furthermore, we discuss generic traces and how to query traces. First of all, it is necessary to understand what a Meta-Theory is. According to the definition given by systemic TOGA [GADOMSKI (1997)], a Meta-Theory may refer to the specific point of view on a theory and to its subjective meta-properties, but not to its application domain. Therefore, a theory T of the domain D is a meta-theory if D is a theory or a set of theories. A general theory is not a meta-theory because its domain D is not composed of theories. By the previous definitions, the Trace Meta-Theory (TMT) [Deransart (2008)] is a meta-theory because it provides a set of definitions about how to define trace theories for specific domains. The term trace may be interpreted as a sequence of communication actions that may take place between the observer and its observed process, where the trace can be described in terms of finite-length sequences of events representing each step of running a given process. There is also the tracer, which is the generator of the trace. According to [Deransart (2008)], TMT focuses particularly on providing semantics to tracers and the produced traces. Its semantics should be as independent as possible from those of the processes or from the ways the tracers produce them. To illustrate the previous concepts, let us suppose that we want to trace programs written in a given language called CHR 2. Figure 3.1 shows our scenario, where a CHR program is first translated into Prolog 3 and then executed in the SWI-Prolog 4 engine.
2 CHR is a high-level language for concurrent logical systems.
3 Prolog is a general purpose logic programming language.
4 SWI-Prolog is an open source implementation of the programming language Prolog.

Figure 3.1 Virtual and Actual Trace.
Suppose further that in our example we want to see the execution of CHR programs disregarding the states achieved in the underlying technologies, Prolog and SWI-Prolog. The remaining abstract states achieved during the execution, i.e. those regarding the CHR environment, form a virtual trace. When we extract trace events from the virtual trace, for example by materializing these events by logging the CHR execution in a file system, we produce an actual trace. Finally, there is the idea of a full trace, obtained when the parameters chosen to be observed about the process represent the totality of knowledge regarding the process. In our example, the totality is represented by CHR, Prolog and SWI-Prolog.
3.1 Towards reusability and extensibility
The TMT approach is mainly based on the concepts of actual and virtual trace and constitutes the starting point for studying the modular construction of tracers and traces. Figure 3.2 shows the different roles related to the conception of a tracer. The TMT distinguishes five roles.
1. Observed process
The observed process, or input process (the one that produces trace events), is assumed to be more or less abstract in such a way that its behavior can be described by a virtual trace, that is to say, a sequence of (partial) states. A formal description

(38) 3.1. TOWARDS REUSABILITY AND EXTENSIBILITY. Obs. Process. T^v. T^w Extractor. Full. OS. T^w Filter. Full. Partial. Querying. Filter. T^v. Rebuilder. Partial. Analyser. IS. Figure 3.2 Roles in the TMT. of the process, if possible, can be considered as a formal semantics, which can be used to describe the actual trace extraction. 2. Extractor This is the extraction function of the actual trace from the virtual trace. In the case of a programming language, usually requires modifying the code of the process. 3. Filter The role of the filter, or driver [Langevine and Ducassé (2005)], is to select a useful sub-trace. This element requires a specific study. It is assumed here that it operates on the actual trace (that produced by the extractor). The filtering depends on the specific application, implying that the produced trace already contains all the information potentially needed for various uses. 4. Rebuilder The reconstruction performs the reverse operation of extraction at least for a subpart of the trace, and then reconstructs a sequence of partial virtual states. If the trace is faithful (i.e. no information is lost by the driver) [Deransart (2009)], this ensures that the virtual trace reconstruction is possible. Also in this case, the separation between two elements (rebuilder and analyzer) is essentially theoretical; these two elements may be in practice very entangled. 5. Analyzer The element to visualize and monitor a trace, it may be a trace analyzer or any application. TMT defines that the whole process of providing tracers and trace can be visualized in three main views (Figure 3.2):. 25.

1. Observational Semantics (OS)
The OS formally describes the observed process (or a family of processes) and the actual trace extraction. Due to the separation into several roles, the actual trace may be expressed in any language. TMT suggests using XML. This makes it possible to use standard querying techniques defined for the XML syntax.
2. Querying
TMT discusses how to query a trace event, where it will be processed by the trace filter, on the fly, with respect to the conditions of the queries.
3. Interpretative Semantics (IS)
The interpretation of a trace, i.e. the capacity of reconstructing the sequence of virtual states from an actual trace, is formally described by the Interpretative Semantics. In the TMT no particular application is defined; its objective is just to make sure that the original observed semantics of the process has been fully communicated to the application, independently of what the application does.
3.2 Generic Trace
The Trace Meta-Theory also describes the motivation to build a generic trace format. This artifact is intended to facilitate the adaptation of analyzer tools to different domains. Furthermore, it enables analyzers to be defined almost independently from specific domains and, conversely, tracers to be built independently from these tools. For this reason it is qualified "generic". The generic trace format contains the definitions of the trace events and of what each tracer should generate when tracing the execution of a specific domain. As illustrated by Figure 3.3, each application may generate a specific trace with many particular events not taken into account by the generic trace. In order to produce generic trace events, it is thus required that each event match a generic event. For this match it is required that the subsequence of the specific trace which corresponds to the generic trace be a consistent generic trace, i.e.
a trace whose syntax and semantics follow from the specified trace schema and which can thus be understood by the analyzer tools. Notice that not all applications may be able to generate all the described generic events. Thus the generic trace format describes a superset of the generic events a particular tracer is able to generate.
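Since TMT suggests XML as the language of the actual trace, a generic trace can be pictured as a flow of XML events filtered with an XPath-style query. The element and attribute names below are illustrative inventions of our own, not the actual GenTra4CP schema:

```python
"""Sketch: a generic trace serialized in XML, queried with the limited
XPath syntax supported by the standard library."""
import xml.etree.ElementTree as ET

# Each <event> carries a sequential number (chrono), a port, the observed
# state, and port-specific attributes, mirroring the structure above.
actual_trace = """
<trace>
  <event chrono="1" port="wake">  <state store="c(1)"/> </event>
  <event chrono="2" port="apply"> <state store="c(2)"/> <rule name="r1"/> </event>
  <event chrono="3" port="wake">  <state store="c(3)"/> </event>
</trace>
"""

root = ET.fromstring(actual_trace)

# An XPath-style query playing the role of a trace driver filter:
# keep only the 'wake' events.
wake_events = root.findall(".//event[@port='wake']")
print([e.get("chrono") for e in wake_events])  # ['1', '3']
```

A real tracer driver would apply such queries on the fly, event by event, rather than on a fully materialized document; the snippet only illustrates the data shape and the querying idea.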

Figure 3.3 Generic and specific traces.
On the other hand, a "portable" analyzer tool should be able to extract from a specific trace, and to understand, the sub-flow of events corresponding to the generic trace. Figure 3.3 illustrates two cases: portable tools which use the generic trace only (Tools A, B and Y), and specific tools which use specific traces (Tool X). Both situations are acceptable. However, a specific tool which relies on specific trace events may be more difficult to adapt to another application. In short, TMT represents a generic trace as a sequence of trace events consisting of:
• a sequential event number;
• the port (the name of one of the semantics rules);
• the observed state of the observed process; and
• some specific attributes of the port.
3.2.1 Generic Full Trace of a Family of Applications
Consider Figure 3.4. It shows how different applications produce traces and the possibility of abstracting them into a unique trace. This common trace is used to specify the virtual and actual traces.

Figure 3.4 A unique abstract model for several observed processes.
This also illustrates how TMT proceeds to get a generic trace from any application: starting from an abstract, theoretical, sufficiently refined semantics which is (almost) the same as the one implemented in all applications.
3.2.2 Generic Full Trace of an Application
Now we consider again the case of an application written in CHR (Figure 3.1). There may be, for example, trace events regarding a specific domain, like CLP(FD) 6. In this case there exists a generic trace called GenTra4CP [Deransart & al (2004)]. This trace is generic for most of the existing CLP(FD) constraint solvers. Therefore a tracer of a CLP(FD) solver implemented in CHR should also produce this trace. But we may be interested in refining the trace considering that there are two layers: the layer of the application (CLP(FD)) and the layer of the language in which it is implemented (CHR). The most refined trace will then be the trace in the GenTra4CP format extended with elements of the generic full trace of CHR alone. The generic full trace of CLP(FD) on CHR is an extension of the application trace taking into account details of the lower layers.
6 CLP(FD) is particularly useful for modeling discrete optimization and verification problems such as scheduling, planning, packing, timetabling, etc.

3.3 Querying Trace Events
The TMT approach to trace querying is based on events and trace interrogation. This interrogation is processed by filtering the trace events, on the fly, with respect to the conditions of the queries. For this purpose, a tracer driver should contain a filtering mechanism: it will receive the filtering queries from the analysis process and send back filtered information to it. TMT suggests using XML. This makes it possible to use standard querying techniques defined for the XML syntax, like XPath [Clark et al. (1999)].
3.4 Chapter Remarks
This chapter presented the Trace Meta-Theory (TMT), an approach which focuses particularly on providing semantics to tracers and the produced traces. We showed its three views: the Observational Semantics, which produces the trace information; the Driver Component, a trace query processor; and the Interpretative Semantics, a front-end that takes the produced trace as input and shows it pretty-printed. This dissertation is focused on specifying the Observational Semantics and a Trace Driver in the context of a tracer framework and, as a case study, a debugging tool for CHR. Furthermore, we specify a generic CHR trace schema for debugging using XML Schema, with XML as the host language to specify the rules of this schema.

4 The Plug and Trace Project

In previous chapters we explained how application analysis tools have steadily evolved during the last decades. However, because the new generation of applications has increased in complexity across development, architecture and production, new challenges regarding the monitoring, diagnosis and tuning of these complex application ecosystems are still emerging. Furthermore, as soon as applications are used for mission-critical processes, performance and availability become important non-functional requirements.

The need for flexible and user-friendly trace explanation facilities has been increasing as time goes by. This is so because the way applications are built today has fundamentally changed. The possibility of analyzing dynamic and static properties of several applications using a common analyzer is an important issue to reduce the learning curve. The current scenario is that an analyzer of vendor X does not work with an analyzer of vendor Y, which does not work with another analyzer developed by vendor Z.

In order to solve the aforementioned problem, the Plug and Trace project provides the first domain-independent and reusable debugging framework. Its goal is to embed extraction and analysis services into any application. Each component of this framework can be used either as stand-alone software or assembled with others in order to provide a variety of analysis services that can be integrated in the most diverse domains.

We proceed to explain how our tracer framework can be used as the basis for a trace analysis that realizes much of the existing tracer tools. We then argue that the model-driven, component-based architecture is the choice most aligned with our primary goal of delivering such a suite of reusable artifacts that can be easily integrated in everyday software. This chapter details the architecture of Plug and Trace, our proposed realization of such a framework.

4.1 Goal and Design Principles

The main Plug and Trace architectural design principles are based on Deransart's theory [Deransart (2008)]. In a nutshell, our work is a first object-oriented mapping of this theory. First of all, let us introduce the requirements of our framework. The proposed framework should:

• be able to integrate with any kind of input process, addressing the entire test and performance life cycle, from early testing to production application monitoring, including test editing and execution, monitoring, tracing and profiling, and log analysis capabilities. The platform should support a broad spectrum of computing systems, including embedded, stand-alone, enterprise, and high-performance systems, permitting it to expand its support to encompass the widest possible range of systems;

• be built on a component-based architecture and be simple, intuitive and easy to reuse and operate. This project should build a generic, extensible, standards-based tool platform upon which software developers can create specialized, differentiated, and interoperable offerings for world-class analysis tools;

• contain a trace request, sent by the trace analyzer, which specifies the part of the trace that the trace analyzer wants to see. In other words, it consists of receiving all the execution events and analyzing them on the fly to show only the interesting information;

• permit its integration with the input process in a simple way, without compromising the performance of the input processes.
The following goals should be achieved in order to meet the aforementioned requirements:

• To integrate and manage different domains into a unique environment, called Plug and Trace;

• To combine component-based development and model-driven architecture to produce reusable artifacts and a domain-independent framework;

• To use a generic trace schema with the intention of maintaining a unique structure for the trace events produced by the input processes;

• To provide a set of services to facilitate the extension and reuse of the entire framework;

• To provide a set of views to easily analyze the produced trace events;

• To provide GUI components to interactively submit queries and inspect solution explanations at various levels of detail, with patterns to specify what trace information is needed;

• To provide a trace request processor, to analyze the requests sent by the trace analyzer;

• To support automation, i.e. to do more with less, eliminating redundant and manual tasks.

In the next sections we specify the whole framework in detail, showing its theoretical foundations, architecture and components.

4.2 The Top-Level Plug and Trace Component

Plug and Trace is designed for usage in any kind of application and across the entire lifecycle, including development, test, staging and production [Deransart and Oliveira (2009)]. Its architecture enables any application to be traced on the fly, an ideal solution for 24x7 production environments. The Plug and Trace framework describes all the phases involved, from collecting the trace events to analyzing this information.

Plug and Trace basically acts as a server collecting trace events from any kind of application. This is made possible by injecting hooks into the application to produce trace events, and this is the only source code change required in the input process. Afterwards, all trace management is performed through its three main sub-components: the TraceReceiver, the TraceDriver and the TraceAnalyzer. Figure 4.1 shows the components involved in the application analysis process using Plug and Trace.

• The Trace Receiver is a listener that takes as input the trace entries sent by any input process and forwards this trace to the Trace Driver component.
It also has the important function of adapting the trace entries, received in any format, to a common structure inside the Plug and Trace framework, called TraceEvent. This adaptation

1 24/7 is an abbreviation which stands for "24 hours a day, 7 days a week", usually referring to a business or service available at all times without interruption.

Figure 4.1 The Top-Level Plug and Trace Component

is performed by extending the TraceAdapter class. This class is shown in detail in Section 4.2.1.

• The Trace Driver, in a nutshell, provides the services and data structures necessary to process and filter the trace events. This component is intended to maximize the application analysis possibilities with minimum instrumentation and overhead.

• The Trace Analyzer provides services and some views to analyze the trace events. Furthermore, through this component, it is possible to adjust the level of detail shown in the views by tuning it on the fly, without restarting the target application.

Other elements included in this framework are the model templates. Their goal is to reduce hand coding by generating some artifacts using the MDE approach. These artifacts are detailed in Sections 4.3 and 4.4. The sequence diagram in Figure 4.2 presents how the components operate with one another and in which order.
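The adaptation step performed by the TraceAdapter can be sketched as follows. This is a minimal sketch: the class names mirror those of the framework, but the signatures, the fields of TraceEvent, and the sample "port:rule" textual trace format are assumptions for illustration only.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Common internal structure shared by all Plug and Trace components
// (the fields chosen here are illustrative).
class TraceEvent {
    final long chrono;   // sequential event number
    final String port;   // kind of event (call, apply, exit, ...)
    final Map<String, String> attributes = new LinkedHashMap<>();

    TraceEvent(long chrono, String port) {
        this.chrono = chrono;
        this.port = port;
    }
}

// Extension point: each input process provides its own adapter that
// translates raw trace entries into TraceEvent objects.
abstract class TraceAdapter {
    private long counter = 0;

    // Called for every raw entry received from the input process.
    final TraceEvent getTrace(String rawEntry) {
        return adapt(++counter, rawEntry);
    }

    protected abstract TraceEvent adapt(long chrono, String rawEntry);
}

// Example adapter for a hypothetical "port:rule" textual trace format.
class ColonSeparatedAdapter extends TraceAdapter {
    @Override
    protected TraceEvent adapt(long chrono, String rawEntry) {
        String[] parts = rawEntry.split(":", 2);
        TraceEvent event = new TraceEvent(chrono, parts[0]);
        if (parts.length > 1) {
            event.attributes.put("rule", parts[1]);
        }
        return event;
    }

    public static void main(String[] args) {
        TraceAdapter adapter = new ColonSeparatedAdapter();
        System.out.println(adapter.getTrace("call:leq").port); // prints call
    }
}
```

Whatever the format of the incoming entries, only the adapter has to change; the rest of the framework manipulates TraceEvent objects exclusively.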

[Sequence diagram: a Process repeatedly sends trace entries to the TraceReceiver, whose TraceAdapter converts each entry into a trace event; for each connected analyzer, the TraceDriver obtains its request, applies the corresponding TraceFilter and notifies the TraceAnalyzer of the matching events.]

Figure 4.2 The Plug and Trace workflow
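The internal loop of Figure 4.2 (get each analyzer's request, filter the event against it, notify on a match) can be sketched as follows. All class and method names here are illustrative, not the framework's actual API; a request is modeled simply as a predicate over events.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of the driver's per-event loop: for each registered analyzer,
// fetch its current request and forward only the matching events.
class TraceDriverSketch {

    // An analyzer registers a request (what it wants to see);
    // the notified events are collected here for inspection.
    static class Registration {
        final Predicate<String> request;                 // filtering condition
        final List<String> received = new ArrayList<>(); // notified events

        Registration(Predicate<String> request) {
            this.request = request;
        }
    }

    private final List<Registration> analyzers = new ArrayList<>();

    Registration register(Predicate<String> request) {
        Registration r = new Registration(request);
        analyzers.add(r);
        return r;
    }

    // Called once per incoming trace event: filter on the fly, notify matches.
    void sendTraceEvent(String traceEvent) {
        for (Registration r : analyzers) {
            if (r.request.test(traceEvent)) {  // doFilter(traceEvent, request)
                r.received.add(traceEvent);    // notify(traceEvent)
            }
        }
    }

    public static void main(String[] args) {
        TraceDriverSketch driver = new TraceDriverSketch();
        Registration callsOnly = driver.register(e -> e.startsWith("call"));
        driver.sendTraceEvent("call:leq");
        driver.sendTraceEvent("apply:leq");
        System.out.println(callsOnly.received); // prints [call:leq]
    }
}
```

Because filtering happens per event, an analyzer interested in a small part of the trace never sees (or stores) the rest, which is the point of driving the trace on the fly.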

In the first lifeline, the main loop is started; this interaction represents the execution of a given process sending trace entries to the Plug and Trace framework. This input process connects to the Trace Receiver and sends each trace entry. To maintain a common data structure for traces among our components, we have defined the Trace Event (see Section 4.3); to adapt the input data into Trace Events, the Trace Adapter gets these entries, converts them to trace events and forwards this information to the Trace Driver. The internal loop iterates through the connected Trace Analyzers, filtering and sending their requested information.

4.2.1 Trace Receiver component

In order to integrate our tracer framework with any kind of process, we provide the Trace Receiver component as a mechanism for integrating, receiving and translating traces to a common structure, called TraceEvent. This component is basically a listener that gets the trace entries sent by a connected input process and forwards these entries to another component called TraceDriver. Figure 4.3 illustrates its operation.

Figure 4.3 The Trace Receiver acts as a listener to trace events

As a listener of trace entries, the Receiver has the function of forwarding this information to the subsequent components. Figure 4.4 shows its structure. The TraceSocket class is our default implementation of the Receiver class. This class is specified as a Java Socket. Its operation is basically to run on a specific computer and use a socket that is

2 http://java.sun.com/j2se/1.4.2/docs/api/java/net/Socket.html
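The listening behavior of such a socket-based Receiver can be sketched as follows. This is a hypothetical sketch built on java.net.ServerSocket, not the dissertation's actual TraceSocket code; the reading loop is factored over a Reader so it can also be exercised without opening a real network connection.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

// Sketch of the default Receiver: a server socket that reads one trace
// entry per line and forwards each one. In the framework the forwarding
// target would be the TraceDriver; here entries are simply collected.
class TraceSocketSketch {
    final List<String> forwarded = new ArrayList<>();

    // Core loop: read entries line by line and forward each one.
    void forwardEntries(Reader in) throws IOException {
        BufferedReader reader = new BufferedReader(in);
        String entry;
        while ((entry = reader.readLine()) != null) {
            forwarded.add(entry); // sendTraceEntry -> TraceDriver
        }
    }

    // Accept a single connection and forward everything it sends.
    void listenOnce(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port);
             Socket client = server.accept()) {
            forwardEntries(new InputStreamReader(client.getInputStream()));
        }
    }

    public static void main(String[] args) throws IOException {
        TraceSocketSketch receiver = new TraceSocketSketch();
        receiver.forwardEntries(new java.io.StringReader("call:leq\napply:leq\n"));
        System.out.println(receiver.forwarded.size()); // prints 2
    }
}
```

Separating the line-reading loop from the socket setup keeps the translation step independent of the transport, which is what allows the Receiver to act as a listener for any connected input process.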
