
For Jury Evaluation

FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO

Test Automation in Web Environment

Jorge Miguel Guerra Santos

Mestrado Integrado em Engenharia Informática e Computação

Supervisor: Prof.ª Ana Paiva

Proponent: Eng.º Joel Campos


Test Automation in Web Environment

Jorge Miguel Guerra Santos

Mestrado Integrado em Engenharia Informática e Computação

Approved in oral examination by the committee:

Chair:

External Examiner:

Supervisor:


Abstract

In today's fast-moving world, it is a challenge for any company to continuously maintain and improve the quality and efficiency of software systems development. In many software projects, testing is neglected because of time or cost constraints, which leads to a lack of product quality, followed by customer dissatisfaction and, ultimately, increased overall quality costs. Additionally, as software projects become more complex, the number of hours spent on testing increases as well, but without the support of suitable tools, test efficiency and validity tend to decline. Some software testing tasks, such as extensive low-level interface regression testing, can be laborious and time-consuming to do manually. In addition, a manual approach might not always be effective in finding certain classes of defects. Test automation offers a possibility to perform these types of testing effectively. Once automated tests have been developed, they can be run quickly and repeatedly. However, test automation systems usually lack reporting, analysis and meaningful information about project status. The end goal of this research work is to create a prototype that can create and organize test batteries by recording user interaction, reproduce the recorded actions automatically, detect failures during test execution, generate reports and set up the test environment, all in an automated fashion, and to develop techniques to create more maintainable test cases. This tool can bring a technical advantage in automated web testing by creating new and more maintainable recorded test cases with minimal user action, allowing testers to better evaluate the software project status.


Acknowledgements

First of all, I would like to thank FEUP for serving as a place for personal and interpersonal growth, as well as for providing all the conditions for me to focus on my personal goals and to meet incredible people.

I thank Prof.ª Ana Cristina Ramada Paiva for guiding me throughout this whole research work, making sure that I stayed focused on the main goal, and for always managing her time in order to provide me with feedback about the work being developed and to clarify any doubts that I had.

Additionally, I would like to thank Glintt HS and its crew for allowing me to fit right in and for providing all the conditions I needed during this research work. I hope the work done throughout my stay will be of value to the company for a long time. More specifically, I thank Joel for always being available for any questions I had, even on the busiest days, and for letting me have the freedom to experiment with new ideas for the research work while giving me his feedback.

A special shout-out to the Andrés group for being such a tight-knit group of friends. You keep my sanity in check (most of the time), and I thank you for all the moments during our stay at FEUP; I hope this is just the beginning.

A special thanks to Simão for helping me throughout these academic years. I am eternally grateful that you had the initiative to work with me and for influencing me in a way that allowed me to adapt to the new academic life and form a new work ethic that has, without a doubt, led me this far.

Another individual thanks goes to JP. Thanks for being such an upbeat guy and for, on a random day, asking me to study together at FEUP at night. This turned into a tradition that helped me manage my time so as to be able to do a little bit of everything.

I thank my parents for always supporting me through thick and thin, while also allowing me the freedom to explore and focus on my interests and always giving me advice. I am thankful for having had the chance to be born to parents as incredible as you, and sorry you had to deal with me and my brother.

Speaking of which, thank you, my brother. You were always at my side throughout my life, and I hope it stays that way no matter where our paths lead.

Thank you,


“Only those who have patience to do simple things perfectly ever acquire the skill to do difficult things easily.”


Contents

1 Introduction 1
1.1 Context and Background . . . 1
1.2 Motivation . . . 1
1.3 Goals . . . 2
1.4 Structure . . . 2

2 Automated Web Application Testing 3
2.1 Software Testing . . . 3
2.2 Web Application Testing . . . 4
2.3 Test Automation . . . 5
2.3.1 Automated Web Application Testing Techniques . . . 7
2.4 Software Testing Metrics . . . 9
2.4.1 Automated Software Testing Metrics . . . 9
2.5 Tools Survey . . . 10
2.5.1 Automated Software Testing Tools . . . 10
2.5.2 Test Logging and Reporting Tools . . . 12
2.6 Conclusions . . . 13

3 Methodology 15
3.1 Requirements . . . 15
3.2 Technologies Comparison . . . 16
3.2.1 Automated Web Application Testing Tools . . . 16
3.2.2 Continuous Integration Tools . . . 17
3.2.3 Additional Technologies . . . 18
3.3 Test Cases . . . 19
3.3.1 Locators . . . 19
3.3.2 Recurrent Steps . . . 20
3.3.3 Implicit waits . . . 21
3.4 Test Environment . . . 21
3.4.1 Selection of Test Cases . . . 21
3.4.2 Automation . . . 22
3.4.3 Test Reports . . . 22
3.5 Conclusions . . . 23

4 Implementation 25
4.1 Method . . . 25
4.1.1 Test Case Structure . . . 25
4.1.3 Test Results file structure . . . 27
4.1.4 Test Selector file structure . . . 28
4.1.5 Test Environment Setup . . . 28
4.2 Architecture . . . 30
4.2.1 Selenium IDE . . . 30
4.2.2 Jenkins . . . 35
4.3 Prototype . . . 39
4.3.1 Selenium IDE . . . 39
4.3.2 Jenkins . . . 43
4.4 Discussion . . . 47
4.4.1 Method . . . 47
4.4.2 Results . . . 48

5 Conclusions and Future Work 51
5.1 Final Remarks . . . 51
5.2 Future Work . . . 52

References 53

A Configuration 55
A.1 Selenium IDE Configuration . . . 55
A.2 Jenkins Configuration . . . 56

B Procedure 63
B.1 Translated Selenese commands in NUnit C# . . . 63


List of Figures

2.1 Mike Cohn’s test automation pyramid . . . 6

3.1 Integration between technologies . . . 18

3.2 RecurrentSteps case used by multiple test cases . . . 20

3.3 Jenkins Build Process . . . 22

4.1 Jenkins Workspace . . . 29

4.2 Overview of the developed system . . . 30

4.3 Implementation model of the Test suite chooser plugin . . . 31

4.4 Implementation model of the Command builders . . . 31

4.5 Implementation model of the Selenium IDE’s user extension . . . 32

4.6 Implementation model of the saveTestCaseToProperties executable . . . 33

4.7 Implementation model of the C# NUnit Formatters . . . 34

4.8 Use Case diagram of Selenium IDE . . . 35

4.9 Implementation model of the tests selector Jenkins plugin . . . 36

4.10 Implementation model of the transferTestsToBuild executable . . . 36

4.11 Implementation model of the parseTestsProperties executable . . . 37

4.12 Implementation model of the extractTestResults executable . . . 38

4.13 Use Case diagram of Jenkins . . . 38

4.14 Context menu with Selenium IDE commands in Selenese . . . 40

4.15 Selenium IDE interface with a select command . . . 41

4.16 Test suite tree view . . . 42

4.17 Locator’s format in Selenium IDE . . . 43

4.18 Jenkins main interface . . . 44

4.19 Jenkins test selector interface with a screenshot . . . 44

4.20 Jenkins test selector interface with a failed screenshot . . . 45

4.21 Jenkins test result interface . . . 45

4.22 Cause of the failed test presented in the test result interface . . . 46

A.1 Selenium IDE’s general settings . . . 55

A.2 Selenium IDE’s format settings . . . 56

A.3 Jenkins’ Tests folder . . . 57

A.4 Jenkins’ Project Pre-Build Configuration . . . 59

A.5 Jenkins’ Project Build Configuration . . . 60

A.6 Jenkins’ Project Post-Build Configuration . . . 61


List of Tables

3.1 Web Automated Testing Tools Comparison . . . 16

3.2 Continuous Integration Tools Comparison . . . 17


Abbreviations

UI User Interface
GUI Graphical User Interface
IDE Integrated Development Environment
IT Information Technology
CI Continuous Integration
C&R Capture & Replay
API Application Programming Interface
URL Uniform Resource Locator
CSS Cascading Style Sheets
HTML HyperText Markup Language
DOM Document Object Model
XML Extensible Markup Language
JSON JavaScript Object Notation


Chapter 1

Introduction

This introductory chapter provides a brief overview of the problem addressed throughout this research work, first giving the background at a technical level. It then presents the motivation behind this dissertation and the main goals to achieve with this research work.

The last section describes the structure of this dissertation by introducing each of its chapters.

1.1 Context and Background

Testing is a vital part of the software development process and, as web applications become increasingly important in our world, it is crucial that they are tested properly and quickly. At first, web testing was focused on finding bugs and security issues by going through the source code at a low level, testing server and database communication. But as web applications become more and more advanced and dynamic, testing the functionality of the web application UI has become more important [LDD14].

The study and software developed throughout this research work were done in collaboration with Glintt HS, the Glintt Group's company focused on healthcare. Its core business is the development of software solutions for this market segment, where it is at the forefront at a national level. It is also present in Brazil, Poland and Angola, has around 300 employees and is headquartered in Porto.

1.2 Motivation

Glintt offers multiple web applications, and the tendency is for the number of applications, as well as their respective features, to increase. This naturally implies that the number of hours spent on tests will increase but, without the support of adequate tools, the efficiency and validity of the tests will tend to decline. With the creation and use of the correct tools, the investment made in creating them will easily pay off. The use of automated testing of web applications is becoming more common. However, it is still challenging to test UI functionality automatically. Most web applications are dynamic rather than static, which makes them complex to test automatically since their elements can change, and they are often comprised of different components built using different languages and techniques, which can also make automated testing difficult. In addition, test automation systems often lack reporting, analysis and meaningful information about project status.

1.3 Goals

The purpose of this research work is to study frameworks for automated web testing in order to create a prototype that takes into account software testers without experience in web testing. The prototype's goal is to use technologies adapted for this purpose, such as automated testing techniques, test reporting and continuous integration, and to implement new methods and techniques that allow the following features:

1. Create and organize test batteries through the recording of user actions on the web application.

2. Increase the robustness of the generated test cases.

3. Reproduce the recorded actions automatically.

4. Manage the test environment automatically, detect failures during testing and produce screenshots and graphical reports of the errors.

1.4 Structure

Besides the introduction, this dissertation is structured in four more chapters. Chapter 2 provides an overview of the state of the art of automated web application testing, as well as a survey of automated testing, logging and reporting tools.

Chapter 3 presents a comparison of frameworks for the main components of the prototype and specifies the technologies required for their integration. Additionally, it comprises a theoretical analysis of a set of methods relevant to the developed prototype.

Chapter 4 describes the prototype's architecture and implementation by analyzing each component that structures the developed prototype and the methods and techniques implemented. It finishes with a discussion of the results obtained.


Chapter 2

Automated Web Application Testing

The purpose of this chapter is to review the literature on automated web application testing. It begins by introducing software testing as a core activity in the software development process, followed by an assessment of its techniques related to this research work, namely web application testing and test automation. Their characteristics and associated testing methods are reviewed before proceeding to the analysis of automated software testing metrics. In addition, a tools survey is conducted, gathering automated software testing, logging and reporting tools, followed by the conclusions of the literature review.

2.1 Software Testing

Software testing has been widely used in industry as a quality assurance technique for the components of a software project, including the specification, the design and the source code. As software becomes more important and complex, defects in software can have a significant impact on users and vendors. Therefore, the importance of planning, especially planning through testing, is paramount. A company may devote as much as 40% of its time to testing to assure the quality of the software produced, because software testing is such a critical part of the process of developing high-quality software [PRPE13] [LDD14].

In software testing, a suite of test cases is designed to test the overall functionality of the software, whether it conforms to the specification document or exposes failures in the software (e.g., functionality or security failures). In practice, however, testing is usually the process of finding as many errors as possible and thus improving assurance of the reliability and quality of the software. This is because, in order to demonstrate the nonexistence of errors in software, one would need to test all possible permutations for a given set of inputs. Realistically, it is not possible to test all the permutations of a given set of inputs for a given program, even for a trivial one. For any non-trivial software system, such an exhaustive testing approach is essentially technologically infeasible.

The main goals of any testing technique (or test suite) are to demonstrate the presence of errors during a program execution and to discover new faults, or regression faults, in a previously successful test case [LDD14].

2.2 Web Application Testing

Ever since the creation of the World Wide Web, there has been an increased usage of web applications. A web application is a system typically composed of a database and web pages, also described as the back-end and front-end respectively, with which users interact over a network using a browser. A web application can be of two types: static, in which the contents of the web page do not change regardless of user input; and dynamic, in which the contents of the web page may change depending on user actions [DLP05] [DLF06].

Compared to traditional desktop applications, web applications are unique, which presents new challenges for their quality assurance and testing [LDD14]:

1. Web applications are multilingual. They usually consist of a server-side back-end and a client-facing front-end, and these two components are usually implemented in different programming languages [DLF06]. Moreover, the front-end is also typically implemented with a myriad of markup, presentation and programming languages, such as HTML, CSS and JavaScript, which pose additional challenges for fully automated CI practices, as test drivers for different languages need to be integrated into the CI process and managed coherently.

2. The operating environment of typical web applications is much more open than that of a desktop application. Such wide visibility makes these applications susceptible to various attacks, such as distributed denial-of-service (DDoS) attacks. Moreover, the open environment makes it more difficult to predict and simulate realistic workloads. Levels of standards compliance and differences in implementation also add to the complexity of delivering coherent user experiences across browsers [DLF06].

3. A desktop application is usually used by a single user at a time, whereas a web application typically supports multiple users [DLF06]. The effective management of resources (HTTP connections, database connections, files, threads, etc.) is crucial to the security, scalability, usability and functionality of a web application. The multi-threaded nature of web applications also makes it more difficult to detect and reproduce resource contention issues.

4. A multitude of web application development technologies and frameworks are being proposed, actively maintained and fast evolving. Such constant evolution requires testing techniques to stay current.

The aim of web application testing consists of executing the application using combinations of input and state to reveal failures. A failure is the manifested inability of a system to perform a required function, and may be caused by faults in the application implementation [DLF06]. In a web application, it is not possible to test faults separately and establish exactly which of them is responsible for each exhibited failure, because the application is strictly interwoven with the whole infrastructure (composed of hardware, software and middleware components).

Since the infrastructure mainly affects the non-functional requirements of a web application (such as performance, stability or compatibility), while the application is responsible for the functional requirements, web application testing has to be considered from two different perspectives:

• Non-functional testing: Comprehends the different types of testing that need to be executed to verify the conformance of the web application with the specified non-functional requirements. The most common testing activities are performance, load, stress, compatibility, usability and accessibility testing.

• Functional testing: Has the responsibility of uncovering failures of the application that are due to faults in the implementation of the specified functional requirements. Most of the methods and approaches used to test the functional requirements of traditional software can be used for web applications too. Testing the functionality relies on test models, testing levels, test strategies and testing processes.

Both perspectives are complementary and not mutually exclusive; therefore, a web application must be tested from both [DLF06].

2.3 Test Automation

Test automation consists of using a computer program to execute system or user transactions against an IT system, typically by means of an automated testing tool. Automated testing is typically used in functional regression testing, performance testing, load testing, network testing and security testing. The tools are very useful to speed up the test cycle, as they can replicate manual testing processes at a much faster rate [TSSC12]. An effective test automation strategy calls for automating tests at three different levels which are, as shown in Figure 2.1, Unit/Component, Acceptance and GUI tests.

Figure 2.1: Mike Cohn's test automation pyramid

Advantages

Test automation has its benefits, which include the development of tests that run faster, that are consistent, and that can be run over and over again with less overhead. As more automated tests are added to the test suite, more tests can be run each time thereafter. Manual testing never goes away, but those efforts can now be focused on more rigorous tests [PRPE13].

• It can save time and money: After each change to the software product, the tests have to be repeated to ensure the quality of the software. With test automation, only the initial cost is there; after that, the tests run over and over again at no additional cost, can be executed as many times as needed and are much faster than manual tests. However, since test automation is an investment, the testing effort may take more time or resources in the current release.

• Testing increases confidence in the correctness of the software: The test steps are repeated each and every time the source code changes, which maintains the accuracy of the software system throughout the several iterations of its development.

• Increase testing coverage: An automated software testing process can work through thousands of different complex test cases, which is not possible with manual testing. This allows more focus on the depth and scope of tests, which increases the quality of the software. Test automation also makes it easier for testers to test the software on multiple computers with different configurations.

• Helpful in testing complex web applications: Test automation is helpful for web applications where millions of users interact together, by creating virtual users to check the load capacity of the web application. It can also be used where the application GUI stays the same but features change frequently due to source code changes.

Challenges

It is important to point out that test automation actually makes the effort more complex, since there is now another software development effort added. Automated testing does not replace good test planning, the writing of test cases or much of the manual testing effort, and it has its own challenges.

• Regression test cases coverage: As the software expands after every release, it becomes so wide that it is a challenge to cope with the regression testing: verifying the new changes, testing the old functionality, tracking existing defects and logging new ones.

• 100% automation: It is a challenging job to automate the maximum number of scenarios possible, since it is practically impossible to automate each and every test case.

• Required skill set: The tester has to have some programming knowledge to write the scripts and should also know how to use the automation tools really well.

• Time to write automated tests: When a project has tight deadlines, it becomes difficult to write automated tests, review them and then execute them. The tester has to be very skilled to perform all this within the given time.

• Environment setup: To carry out testing of some applications, it is required to set up an environment. There may be certain tools which are required, or some pre-conditions to fulfill, all of which need to be set up properly to get accurate results.

Limitations

As with most forms of automated testing, setting a regression-testing program on autopilot is not a surefire solution, and some conscious oversight and input is generally still needed to ensure that tests catch all the bugs they should. When testers have the exact same suite of tests running repeatedly, night after night, the testing process itself can become static. Over time, developers may learn how to pass a fixed library of tests, and then their standard array of regression tests can inadvertently end up not testing much of anything at all. If regression testing becomes too automated, the whole point of doing it can backfire. It can end up guaranteeing a clear development trajectory for a development team while unwittingly ignoring components of the application, letting the end users stumble upon undetected glitches at their own peril. Walking along a single path of least resistance is easier than stopping to sweep the entire application after each new step, but it is worth the effort to take regression testing all the way by frequently scanning a little further afield and complementing automation with some manual tests.

2.3.1 Automated Web Application Testing Techniques

Model-based testing

Model-based testing is a software testing technique in which the test cases are derived from a model that describes the functional aspects of the system under test. Its main purpose is to create a model of the application. The test cases are derived on the basis of the constructed model and are generated according to either the all-statement or the all-path coverage criterion [LDD14]. The generated test suite includes inputs, expected outputs and the necessary infrastructure to execute the tests automatically. This technique depends on three key factors: the notation used for the data model, the test generation algorithm, and the tools that generate the supporting infrastructure for the tests.

Mutation Testing

Mutation testing is a fault-based testing technique based on the assumption that a program is well tested if all simple faults are predicted and removed; complex faults are coupled with simple faults and are thus detected by tests that detect simple faults. In this form of testing, some lines of code are randomly changed in a program to check whether the test cases can detect the change. It is aimed at detecting the most common errors that typically exist in a web application, and is mainly intended to ensure that testing has been done properly and to cover additional faults [LDD14].
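For illustration, a typical mutant changes a single operator in the program under test; a test suite is considered adequate for that fault class if at least one test fails on ("kills") the mutant. A minimal sketch in C#, with a hypothetical IsAdult function:

    // Original implementation under test.
    public static bool IsAdult(int age)
    {
        return age >= 18;
    }

    // Mutant: the relational operator was changed from >= to >.
    // A test such as Assert.IsTrue(IsAdult(18)) fails on this version,
    // thereby "killing" the mutant and showing the suite detects this fault.
    public static bool IsAdultMutant(int age)
    {
        return age > 18;
    }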

Scanning and Crawling

In scanning and crawling techniques, a web application is injected with input data that could result in malicious modifications of the database if not detected. These techniques are mainly intended to check the security of web applications, while aiming to improve the overall security of a web site [LDD14]. In order to achieve page coverage, testing tools are typically based on web crawling. They can automatically navigate links starting from a given URL and use automated input generation techniques to process forms [MTT+11].

Random Testing

In random testing, random inputs are passed to a web application, mainly to check whether the web application functions as expected and can handle invalid inputs [LDD14]. Actions are performed randomly, without knowledge of how humans use the application. This form of testing is good for finding system crashes, is independent of GUI updates and needs no effort in generating test cases. However, it is difficult to reproduce the errors found because of its randomness, which makes it unpredictable.

Fuzz Testing

Fuzz testing is a special type of random testing, where boundary values are chosen as input to test that the web site performs appropriately when rare input combinations are passed as input [LDD14].

Fuzzing is generally an automatic or semi-automatic process which involves repeatedly manipulating the target software and providing it with data to process. This process can be divided into identifying the target, recognizing the input, generating fuzzing data, executing the fuzzing data, monitoring abnormalities and determining availability [LDLZ13].

Concolic Testing

Concolic testing is a hybrid software verification technique that, similarly to random and fuzz testing, aims to cover as many branches as possible in a program. In this form of testing, random inputs are passed to a web application to discover different branches through the combination of concrete and symbolic execution [LDD14]. This approach addresses the problem of redundant executions and increases test coverage [SMA05].

Capture & Replay

C&R tools have been developed as a mechanism for testing the correctness of interactive applications with graphical user interfaces. Using a capture and replay tool, a quality-assurance person can run an application and record the entire interactive session [PRPE13]. The tool records all the user's events, such as the keys pressed or the mouse movements, in a log file. Given that file, the tool can then automatically replay the exact same interactive session any number of times without requiring a human user. By replaying a given log file on a changed version of the application, capture & replay tools support fully automatic regression testing of graphical user interfaces [LCRT13]. However, the generated tests are often brittle and carry high maintenance costs.
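As a rough illustration, a captured login session, once exported, amounts to a deterministic sequence of UI actions such as the following Selenium WebDriver C# sketch; the URL and element identifiers are hypothetical:

    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    class ReplaySession
    {
        static void Main()
        {
            // Replays the recorded interactive session step by step.
            IWebDriver driver = new FirefoxDriver();
            driver.Navigate().GoToUrl("http://example.com/login"); // hypothetical URL
            driver.FindElement(By.Id("username")).SendKeys("tester");
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("login-button")).Click();
            driver.Quit();
        }
    }

The brittleness shows up precisely here: if the "login-button" id changes, every recorded session containing this step must be re-captured or edited.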

2.4 Software Testing Metrics

As time proceeds, software projects become more complex because of increased lines of code as a result of added features, bug fixes, etc. Also, tasks must be done in less time and with fewer people. Complexity over time has a tendency to decrease test coverage and ultimately affect the quality of the product. Other factors involved over time are the overall cost of the product and the time in which to deliver the software. Carefully defined metrics can provide insight into the status of automated testing efforts [DGG09].

In software testing, a metric is a quantitative measure of the degree to which a system, system component or process possesses a given attribute. Most software testing metrics fall into one of three categories [DGG09]:

• Coverage: meaningful parameters for measuring test scope and success.

• Progress: parameters that help identify test progress, to be matched against success criteria. Progress metrics are collected iteratively over time and can be used to graph the process itself (e.g., time to fix defects, time to test, etc.).

• Quality: meaningful measures of testing product quality, such as usability, performance, scalability, overall customer satisfaction and defects reported.

2.4.1 Automated Software Testing Metrics

Automated testing metrics are used to measure past, present and future performance of the implemented automated testing process and related efforts and artifacts. They serve to enhance and complement general testing metrics, providing a measure of automated software testing coverage, progress and quality, instead of replacing them. These metrics should have clearly defined goals for the automation effort and relate to its performance in order to be meaningful [DGG09].

Some metrics specific to automated testing are as follows:

• Percent automatable: The percentage of a given set of test cases that is automatable, which can be represented by the following equation:

PA (%) = ATC / TC = (number of test cases automatable) / (total number of test cases)

• Automation progress: The number of tests that have been automated as a percentage of all automatable test cases. This metric is useful to track during the various stages of automated testing development.

AP (%) = AA / ATC = (number of test cases automated) / (number of test cases automatable)

• Percent of automated testing coverage: Determines what percentage of test coverage the automated testing is actually achieving. Various degrees of test coverage can be achieved, depending on the project and defined goals. Together with manual test coverage, this metric measures the total completeness of the test coverage and can measure how much automation is being executed relative to the total number of tests. However, it does not say anything about the quality of the automation. For example, 1,000 test cases executing the same or similar data paths may take a lot of time and effort to execute, but they do not equate to a larger percentage of test coverage. The goal of this metric is to measure the dimension of the automation, rather than the effectiveness of the testing taking place [DGG09].

PATC (%) = AC / C = (automation coverage) / (total coverage)
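As a purely illustrative worked example: in a project with 200 test cases, of which 160 are automatable and 120 have already been automated, PA = 160/200 = 80% and AP = 120/160 = 75%.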

2.5 Tools Survey

2.5.1 Automated Software Testing Tools

In an era of highly interactive and responsive software processes, test automation is frequently becoming a requirement for software projects. Test automation means using a software tool to run repeatable tests against the application to be tested, and there are a number of commercial and open source tools available to assist with the development of test automation.

2.5.1.1 Selenium

Selenium is an open source set of different software tools, each with a different approach to supporting test automation across different browsers and platforms [HK06]. The entire suite of tools results in a rich set of testing functions specifically geared to the needs of testing web applications of all types [BKW09]. These operations are highly flexible, allowing many options for locating UI elements and comparing expected test results against actual application behavior [Sel].

The tools and APIs that Selenium includes are Selenium IDE, Selenium Remote Control, Selenium WebDriver and Selenium Grid:

• Selenium IDE: A Firefox plugin which allows users to record and play back actions in the browser. Scripts are recorded in Selenese, a special test scripting language for Selenium that provides commands for performing actions in a browser (e.g., click a link, select an option) and for retrieving data from the resulting pages.

• Selenium Remote Control: The first tool in the Selenium project that allowed automation of web applications in browsers. It has been deprecated, although it is still functional in the project, and WebDriver is now the recommended tool for browser automation.

• Selenium WebDriver: The Selenium project's compact object-oriented API, which refers to both the language bindings and the implementations of the individual browser-controlling code. It consists of a set of libraries for different programming languages and drivers which can automate actions in browsers.

• Selenium Grid: Allows automated tests to be run remotely on multiple browsers and on other machines in parallel.

2.5.1.2 Watir

Watir (Web Application Testing in Ruby) is an open source family of Ruby libraries for automating web browsers that consists of three projects, also called gems, which are Watir, Watir-Classic and Watir-Webdriver [Wat].

• Watir: This gem loads either Watir-Classic or Watir-Webdriver, based on the browser and operating system being used.

• Watir-Classic: This is the original Watir gem that drives Internet Explorer.

• Watir-Webdriver: This gem allows the driving of additional browsers, like Chrome and Firefox. It is an API wrapper around the Selenium-Webdriver gem.

Watir is designed to mimic a user's actions, which means that these same directions can be used when creating an automated script.

Understanding the gem relationship is crucial because, when adding functionality to Watir, the code may need to consider which browser/gem is being used, since there are some API differences between the Watir-Classic and Watir-Webdriver projects, despite both striving to be API compatible through a common specification called Watirspec.

2.5.1.3 Sahi

Sahi is an open source testing tool that uses JavaScript to execute events on the browser, with the ability to record and play back scripts for any web application on any browser and any operating system. Some of the features of Sahi are in-browser controls, an intelligent recorder and text-based scripts. Acting as a proxy, Sahi intercepts traffic from the web browser and records the web browsing actions. It injects JavaScript to access elements in the web page, which makes the tool independent of the website or web application [Sah].

Sahi also has a proprietary version called Sahi Pro, which has all the features of Sahi OS and additionally stores reports in a database, takes snapshots, offers custom report generation and includes a script editor to create functions.

2.5.1.4 DalekJS

DalekJS is a UI testing tool that uses a browser automation technique in which the WebDriver JSON Wire protocol is used to communicate with the browsers. Tests are JavaScript-based and can check page properties such as title and dimensions, as well as perform actions such as clicking links and buttons and filling forms. DalekJS is still under development and is not recommended for production use by its creators [Gol].

2.5.2 Test Logging and Reporting Tools

After the testing cycle, it is important to provide information by communicating the test results and findings to the team so that risks can be assessed. There are different types of tools to handle this issue, such as test report tools, test frameworks and test management tools, which can be integrated with automated testing tools. What follows is a description of specific tools for each of those types.

2.5.2.1 Allure

Allure Framework is a flexible, lightweight, multi-language test report tool with the possibility to add screenshots and logs. It provides a modular architecture and web reports with the ability to store attachments, steps and parameters, among others.

2.5.2.2 Mocha

Mocha is an open source JavaScript test framework running on Node.js, featuring browser support and asynchronous testing. It runs tests serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases [Moc].

Mocha has multiple add-ons that allow the framework to be used with most JavaScript assertion libraries, to perform headless testing and test coverage, and to provide additional interfaces and reporters.

2.5.2.3 PractiTest

PractiTest is a proprietary test management tool that is able to manage requirements, tests and issues, and integrates with bug tracking, automation and continuous integration tools. Additionally, it is customizable enough to fit specific processes and can automatically generate reports through the use of dashboards.

2.5.2.4 Serenity BDD

Serenity BDD is an open source reporting library that enhances the writing of automated acceptance criteria in order to be more maintainable and better structured. It also produces rich, meaningful test reports that describe the test results and what features have been tested, documenting what was executed in a step-by-step narrative format that includes test data and screenshots for web tests.

2.6 Conclusions

In this chapter, an analysis of the state of the art of automated web application testing was made. It is clear that software testing is a vital part of the software development process and that more time is needed to guarantee the quality of the software. Consequently, test automation is becoming increasingly important. With the increased usage of web applications, and the fact that these applications are unique compared to traditional desktop applications, a number of testing techniques were designed for automated web application testing, along with specific testing metrics to better evaluate and measure the automated testing process.

A survey of software tools for test automation was performed in order to study frameworks that could be used for, or influence, the design and implementation of the prototype. The main conclusions of this survey are discussed in Section 3.2.


Chapter 3

Methodology

As explained in the introduction, the testing phase in software systems development is increasingly time-consuming but also increasingly important to guarantee product quality.

Regarding the creation of tests, as seen in section 2.5, there are tools that can capture and play back user actions when interacting with a GUI. These tools allow the fast creation of test cases, but the tests generated with this technique are often brittle, failing at times due to timing issues. The tests often need to capture the same actions to reach different parts of the application, which increases maintenance costs. For example, most web applications require a user to authenticate in order to access their services. With capture & replay tools, each test case would need to repeat the login steps in order to reach the component of the application whose GUI is under test. If the application changed in a way that made one of those test steps fail, it would be necessary to fix each test case that uses that step separately.

In addition, managing the test environment can become a time-consuming task, especially when it involves recorded test cases, which, as discussed above, can be created easily but often carry high maintenance costs.

To be able to create a prototype that eases the automated testing process, a study of the technologies to aggregate was necessary. A number of requirements were decided upon for the prototype, which filtered the technologies that could be used.

3.1 Requirements

1. Open-source Technologies: The technologies gathered need to be open source, because that allows free experimentation with the source code and tinkering with it in order to change or create new features. Furthermore, the cost of proprietary software could be an issue.

2. Capture & Replay feature: A C&R feature is essential to create a prototype that can record and play back user actions.

3. Continuous Integration: CI is required because it allows the prototype to run full end-to-end automated testing in a single automated build process that can be run on demand or periodically.

4. Test Logging and Reporting: The prototype needs to provide a good representation of the tests' execution output by identifying the step of the test where the failure occurred, through the use of a screenshot, and its cause.

3.2 Technologies Comparison

3.2.1 Automated Web Application Testing Tools

There are many web application testing tools available. These tools differ in functionality, features and usability, although their core functions are similar, as seen in Table 3.1.

Feature          | Selenium | SAHI OS | Watir | DalekJS | Telerik Test Studio
Open Source      | Yes | Yes | Yes | Yes | No
Capture & Replay | Yes | Yes | Yes | No | Yes
Inbuilt Logs     | Yes | Only in the proprietary version (Sahi Pro) | No | No | Yes
Inbuilt Reports  | Only with support from other open source software | Yes | No | No | Yes

Table 3.1: Web Automated Testing Tools Comparison

• Telerik Test Studio has a myriad of features, such as a comprehensive yet cost-effective automated testing suite and mobile application support. However, the fact that it is not open source makes it hard to build upon in order to implement new, innovative features.

• DalekJS does not implement the Capture & Replay feature, which is crucial to this work. Additionally, it is still in development and not recommended for production use by its creators.

• Watir does not have inbuilt logs, although a log file can be created. The main issue is that there are not many frameworks and tools that complement its lack of features.

• SAHI OS fulfills most of the requirements, as a log file can be created. Despite this, it does not allow users to view the recorded steps in the controller and it lacks documentation.

• Selenium's user action recorder (Selenium IDE) only works in Firefox, although the action playback feature works on the most popular web browsers, and it does not have inbuilt reports. However, it is extensively documented, and the Selenium WebDriver framework forms the basis for many other testing frameworks and tools.

Although Selenium has its issues, Selenium WebDriver offers a lot of flexibility. There are a great number of different frameworks and tools that complement its lack of features. It is well established when it comes to automated testing of web applications, as shown by the extensive available documentation and active community [CSRM14], which makes it stand out as the clear choice.

3.2.2 Continuous Integration Tools

Regarding continuous integration, there are a few powerful open source tools available, and they vary in their focus despite having similar features, as seen in Table 3.2.

Feature     | Jenkins | Travis CI | Buildbot | Strider
Open Source | Yes | Yes | Yes | Yes
Integration | Integrates with every major tool thanks to plugins | Git | - | Git
Platform    | Cross-platform | Hosted, accessed on GitHub | Python | Node.js

Table 3.2: Continuous Integration Tools Comparison

• Jenkins (formerly known as Hudson) is a continuous integration and continuous delivery application. It is relatively easy to install and configure, has a rich plugin ecosystem and is extensible enough to allow parts of the tool to be extended or modified to suit every project.

• Travis CI is a hosted service used to build and test projects continuously. It is easy to install and configure. However, it is not extensible and requires the project to be a Git project hosted on GitHub.

• Buildbot is a framework for automating software build, test and release processes. It is written in Python and based on the Twisted framework. It does not have a GUI; instead, it works through commands in a terminal or Python scripts.

• Strider is an open source continuous deployment/continuous integration platform that is written in Node.js/JavaScript and uses MongoDB as a backing store. It requires programming effort to set up and is customizable through plugins, which increases that effort.

The two main open source CI tools are Travis CI and Jenkins. On the one hand, both Buildbot and Strider involve programming effort in their setup and have little documentation available.

Strider, in particular, can be customized, but its plugins are used to extend its UI and backing store and do not integrate it with other tools.

On the other hand, Jenkins and Travis CI have extensive documentation available and an active community that helps newcomers and reports issues to the developers. However, Jenkins ended up as the tool chosen to implement the CI features required for this prototype. Even though it is comparatively harder to install and configure than Travis CI, its extensibility is a great boon. It allows for the creation or modification of plugins that grant better user interaction with the test environment's resources and better readability of the test results and the project status.

3.2.3 Additional Technologies

Having decided on the frameworks for the two main components of the prototype, C&R and CI, this section addresses the technologies chosen to integrate the two. That integration is visually represented in Figure 3.1.

Figure 3.1: Integration between technologies

Selenium's C&R component, Selenium IDE, uses its own language-independent convention called Selenese, which is used to specify the commands and any other parameters generated from the user's actions. However, in order for the test case to use the Selenium WebDriver API, it is necessary to export these commands into an object-oriented programming language. The language chosen was C#, along with the unit testing framework NUnit to run the tests. Note that these generated UI tests are integration tests and not unit tests: they attempt to verify that elements on the interface behave as expected, instead of following a series of objective logic tests to confirm the business logic.

Selenium IDE is a plugin for browsers based on the Mozilla platform. As a result, the development of new functionality for the IDE has to be done through Mozilla plugins. It should be noted that this requires the user to use the Firefox Developer Edition or Firefox Nightly instead of the standard version [Fir].

In the case of the CI component, the use of a Visual Studio C# NUnit project allows the automation of the test cases' build process, which searches for errors that occurred during each test case's creation and exportation.
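As a rough illustration of this export step (the actual translation rules are formalized in section 4.1.1 and appendix B), a recorded Selenese type command with target id=username and value tester could map to the following WebDriver call; the element name is hypothetical and an initialized IWebDriver named driver is assumed:

    // Selenese: type | id=username | tester  (illustrative)
    driver.FindElement(By.Id("username")).SendKeys("tester");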

3.3 Test Cases

In GUI testing, the generation of test cases has to deal with the domain size and with sequences, because GUIs have many operations to test in comparison to command-line interface systems. A small program can have hundreds of GUI operations, and in a large program the number of operations can easily be an order of magnitude larger. Additionally, some functionality of the GUI system may only be exercised with a specific sequence of GUI events. As an example, to open a file a user may have to click a "File" button in a toolbar, then select the "Open file..." operation and use a dialog box to specify the file.

GUI test cases involve testing UI components and attributes such as:

• The size, position, width and height of elements.

• Error messages that are displayed after an action.

• The screen in different resolutions.

• The presence and availability of fields.

• Text found in the web page's title and body.

The following sections describe the methods applied in the prototype that address the generation of test cases for GUI testing.
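For instance, GUI-level checks on the last two attributes above might look like this in the C#/NUnit style used by the prototype; the title text and element id are illustrative, and an initialized IWebDriver named driver is assumed:

    // Verify text found in the web page's title.
    Assert.IsTrue(driver.Title.Contains("Dashboard"));

    // Verify the presence and availability of a field.
    Assert.IsTrue(driver.FindElement(By.Id("search-box")).Displayed);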

3.3.1 Locators

Due to the fact that web applications are dynamic and are the result of different technologies, it is a challenge to guarantee the robustness and durability of the recorded test cases. A small change to the GUI could render a test case invalid.

In the standalone Selenium IDE, it is possible to locate elements through their ID or NAME, or by CSS, XPath or DOM.

The first two locators allow Selenium to test a UI element independently of its location on the page, so if the page's structure and organization changes, the test will still pass. However, the use of dynamic IDs, which change the ID of an element whenever the page generates it, is an issue, because the test will not be able to identify the element on different playbacks. The NAME locator does not have this issue, but it is a weaker locator, since it is possible to have several elements sharing the same name.

One of the main reasons for using the other alternatives is when there is no suitable ID or NAME attribute for the element to locate. The CSS locator uses the CSS selectors that bind style properties to elements in the document, while the DOM location strategy takes JavaScript that evaluates to an element on the page, which can simply be the element's location.

The XPath locator can be absolute or relative. Both support and extend beyond the methods of locating by ID or NAME attributes. However, as absolute XPaths contain the location of all elements from the HTML page's root, they are likely to fail with only the slightest adjustment to the web application. On the other hand, relative XPaths locate the element based on the relationship between the target element and a parent element with an id or name attribute, which is much less likely to change. This opens up new possibilities, such as locating the third check-box on the page.

Specific locators are required to handle these issues. They must allow the recorder to specify the element in a way that it is correctly identified during all test runs.
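To make the strategies concrete, the sketch below locates the same hypothetical "Save" button with each of them using the Selenium WebDriver C# API; the page URL and all selectors are illustrative only:

    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    class LocatorExamples
    {
        static void Main()
        {
            IWebDriver driver = new FirefoxDriver();
            driver.Navigate().GoToUrl("http://example.com/settings"); // hypothetical page

            // Location-independent strategies.
            IWebElement byId = driver.FindElement(By.Id("save-button"));
            IWebElement byName = driver.FindElement(By.Name("save"));

            // CSS selector, as used for binding style properties to elements.
            IWebElement byCss = driver.FindElement(By.CssSelector("form#settings button.save"));

            // Absolute XPath: brittle, fails on the slightest structural change.
            IWebElement byAbsXPath = driver.FindElement(By.XPath("/html/body/div[2]/form/div[3]/button[1]"));

            // Relative XPath anchored on a stable parent id: much more robust.
            IWebElement byRelXPath = driver.FindElement(By.XPath("//form[@id='settings']//button[1]"));

            driver.Quit();
        }
    }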

3.3.2 Recurrent Steps

In web applications, it is common to repeat a set of actions to reach a certain point of the application, such as logging in, viewing the user profile or going to the settings.

Recording user actions would leave several steps repeated across different test cases and, in case of a failure in one of those steps, every one of those test cases would have to be fixed separately. To avoid this problem, a specific type of file was developed that allows the tester to record and store sequences of actions that are commonly used to navigate through the GUI. When recording a test case that uses those steps, the user can then reference them as a single command of the test case, as exemplified in Figure 3.2 and in the sketch below.
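A minimal sketch of the idea, with hypothetical URLs and element names: the shared login sequence lives in one place and each test invokes it, so a change to the login flow is fixed once rather than in every recorded test.

    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;
    using NUnit.Framework;

    [TestFixture]
    public class RecurrentStepsExample
    {
        private IWebDriver driver;

        // Shared "recurrent steps" sequence, referenced by many test cases.
        private void LoginSteps(string user, string password)
        {
            driver.Navigate().GoToUrl("http://example.com/login"); // hypothetical URL
            driver.FindElement(By.Id("username")).SendKeys(user);
            driver.FindElement(By.Id("password")).SendKeys(password);
            driver.FindElement(By.Id("login-button")).Click();
        }

        [SetUp]
        public void SetUp()
        {
            driver = new FirefoxDriver();
        }

        [Test]
        public void OpenSettingsPage()
        {
            LoginSteps("tester", "secret"); // recurrent steps reused here
            driver.FindElement(By.Id("settings-link")).Click();
            Assert.IsTrue(driver.Title.Contains("Settings"));
        }

        [TearDown]
        public void TearDown()
        {
            driver.Quit();
        }
    }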


3.3.3 Implicit waits

Waiting is having the automated task execution elapse a certain amount of time before continuing with the next step. There are two types of waits in Selenium: explicit and implicit waits.

An explicit wait is code defined by the user to wait for a certain condition to occur before proceeding further. The worst case of this is the use of Thread.Sleep(), which sets the condition to an exact period of time to wait. The Selenium WebDriver API, on the other hand, provides methods that wait up to a maximum amount of time.

An implicit wait is used to tell WebDriver to poll the DOM of the GUI for a certain amount of time when trying to find an element or elements that are not immediately available.

In order to avoid long waiting times or unnatural user actions during test recording, such as explicitly specifying that a test must hold until an element is present, the generated test case implicitly waits for page loads and for the presence of elements. For example, while recording a test case, if a user clicks on an element that opens another page, a single click command is recorded. Yet the exported C# test case code must first check whether the element is present in the GUI; after confirming that it is, the click is simulated and the test then checks whether the page has loaded before continuing. A sketch of this pattern follows the list below.

This approach is designed to:

• Avoid overloading the user with having to remember to explicitly add commands that wait for an element.

• Use the Selenium WebDriver API to implement different implicit timers for different occasions, such as waiting for input elements or error messages to become visible.
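A minimal sketch of the pattern described above, with a hypothetical URL and element id; note that the timeout API names differ slightly across Selenium .NET versions:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;
    using OpenQA.Selenium.Support.UI;

    class ImplicitWaitExample
    {
        static void Main()
        {
            IWebDriver driver = new FirefoxDriver();

            // Implicit wait: every FindElement call polls the DOM for up to 10 seconds.
            driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(10));

            driver.Navigate().GoToUrl("http://example.com"); // hypothetical URL

            // The lookup itself waits (implicitly) for the element to be present.
            driver.FindElement(By.Id("next-page-link")).Click();

            // Wait for the new page: poll document.readyState for up to 60 seconds.
            var pageLoad = new WebDriverWait(driver, TimeSpan.FromSeconds(60));
            pageLoad.Until(d => ((IJavaScriptExecutor)d)
                .ExecuteScript("return document.readyState").Equals("complete"));

            driver.Quit();
        }
    }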

3.4 Test Environment

The test environment used by the prototype automates its management and supports test execution. It aids the software testing process by providing a stable and usable environment in which to run the test cases. This includes building the test cases, selecting which test cases to run and managing both the test results and the screenshots. The decision of which CI tool to integrate into the prototype took into account the interaction with the test environment that it provided to the user. As mentioned in section 3.2, the tool chosen was Jenkins, which allows the development of custom features for its web UI thanks to its extensibility through the use of plugins.

3.4.1 Selection of Test Cases

For the purpose of running tests, it was necessary to allow the user to select the test cases to run, instead of always having to run all the generated tests. Doing so required the development of a new test selector page in the Jenkins web UI.

This page consists of a list, backed by a file that stores information regarding the test cases in the test environment, including those that were generated but not executed, and groups them by their test suite. The user is able to select specific test cases or whole test suites to execute. The test selector page is also used to visualize which tests failed and to see their associated screenshot, which is taken moments before the unsuccessful test exits.

3.4.2 Automation

Regarding the automation of the test environment's management, the process is conducted during the Jenkins build operation. This process is separated into two phases, Build and Post-Build, and is configured to execute a set of developed programs in a series of steps.

Figure 3.3: Jenkins Build Process

In the build phase, the first step is the automatic compilation of the test cases selected by the user in the Visual Studio C# NUnit project. Following their execution, a file with all the tests' results is generated.

During the second phase, the generated file with the test results is used to update the file behind the test selector page developed for the Jenkins interface, which refreshes its list, as mentioned in the previous section. In addition, an e-mail notification is sent to the configured addresses in a list of recipients. This notification is sent only on certain occasions, such as when a build fails, becomes unstable or returns to stable.

3.4.3 Test Reports

Reporting is an important part of any test execution, as it helps the user understand the result of the test execution, the point of failure and the reasons for the failure. Logging, on the other hand, is important for analyzing the execution flow or for debugging in case of any failures.

The test logging and reporting capabilities of the prototype consist of Jenkins plugins that provide graphical visualization of the historical test results. Additionally, the Jenkins web UI is used for viewing test reports, tracking failures, visualizing the test results trend and accessing the test logs.

3.5 Conclusions

The issue of minimizing the maintenance costs of automated C&R tests was separated into two main approaches. The first focuses on the creation of test cases, maintaining the advantages of C&R generated tests while increasing the tests' robustness. The prototype does so by implementing specific locator strategies, avoiding timing issues through the use of implicit waits and resorting to a new method called RecurrentSteps.

The other approach deals with the test environment's setup, interaction and automated management by resorting to continuous integration features. In addition, it provides a web UI that allows the user to select the test cases to run through the CI tool and offers graphical visualization of the tests' results.

Tools to develop both approaches were chosen, as well as complementary technologies to integrate the two main components that implement the methods and techniques analyzed in this chapter.


Chapter 4

Implementation

In this chapter, a description of the procedures and methods used in this research work and of the implementation of the developed prototype is conducted, following the more theoretical analysis in chapter 3. It begins by presenting a low-level technical approach towards the prototype's incorporated methods by describing the structure of the integrated technologies and how they interact with one another.

For the purpose of specifying the pieces of the puzzle and how they fit together, the next section analyzes the prototype's architecture. It details every component, including the changes made to existing technologies and the design of the new programs, through the use of implementation models.

After presenting the prototype's design, section 4.3 focuses on describing the main contributions and changes implemented during this research work for each main component.

The chapter concludes with the discussion of the methods implemented and the results obtained.

4.1 Method

The development of a specialized automated web testing tool entailed a set of methods to structure the test cases and organize the test environment.

4.1.1 Test Case Structure

Although the test cases were written in C# and run with NUnit, the prototype generates test cases from the recorded user actions through its C&R component, which uses its own language-independent convention (Selenese) to represent the commands received.

To structure the test cases, rules to translate from Selenese to C# were formalized, as seen in table 4.1. These rules are used by the developed C# formatters (see section 4.3.1, under "C# formatters").


Formalized general rules

• Setup test: The generated C# NUnit test case specifies the browser's driver where the tests will be executed, its configuration and two timers used in different circumstances. One timer is used on page loads and is defined to wait for a maximum of 60 seconds. The other timer is used whenever the test checks the presence of elements, waiting a maximum of 10 seconds. A detailed implementation of this process will be discussed in section 4.3.1 (a simplified sketch of these set-up and tear-down rules follows this list).

• Tear down test: Moments before the end of the test, the browser's window is safely closed, followed by asserting whether any error occurred during test execution and by the publication of the results in a .xml file.

• Implicit waits: Exported commands, such as click and type, will use the previously mentioned timers. They will first check the presence of the command's target element and then simulate its associated action. Then they check if the page is loaded and the test proceeds.
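The listing below is a hedged sketch of the first two rules, assuming the Firefox driver and NUnit attributes; the identifiers are illustrative, the actual generated code is the one discussed in section 4.3.1, and the publication of results in the .xml file (handled at execution time) is omitted from the sketch.

    using System;
    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;
    using OpenQA.Selenium.Support.UI;

    [TestFixture]
    public class GeneratedTestSketch
    {
        private IWebDriver driver;
        private WebDriverWait pageLoadWait;  // used on page loads (60 s maximum)
        private WebDriverWait elementWait;   // used to check element presence (10 s maximum)
        private bool errorOccurred;

        [SetUp]
        public void SetUpTest()
        {
            driver = new FirefoxDriver();    // browser driver and its configuration
            pageLoadWait = new WebDriverWait(driver, TimeSpan.FromSeconds(60));
            elementWait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            errorOccurred = false;
        }

        [TearDown]
        public void TearDownTest()
        {
            driver.Quit();                   // safely close the browser window
            Assert.IsFalse(errorOccurred);   // assert that no error occurred
        }
    }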

    User action                                Selenese                              NUnit C#
    Click                                      Commands: click and clickAndWait      Listing B.1
                                               Target: <locator>
    Type                                       Commands: type and sendKeys           Listing B.2
                                               Target: <locator>  Value: <content>
    Open web page                              Commands: open                        Listing B.3
                                               Target: <URL>
    Check if element is present                Commands: isElementPresent            Listing B.4
                                               Target: <locator>
    Select element from list independently     Commands: addSelection                Listing B.5
    from its position                          Value: <listText>

Table 4.1: Rules to handle user actions in Selenese commands and in NUnit C#
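As a concrete illustration of the last rule in table 4.1 (addSelection), the following hedged sketch uses Selenium's SelectElement helper to pick a list option by its visible text, which keeps the test independent of the option's position. The locator and text are hypothetical; the prototype's actual exported code is the one in Listing B.5.

    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public static class SelectionSketch
    {
        // Selects an option from a list element by its visible text rather
        // than by its index, so reordering the list does not break the test.
        public static void AddSelection(IWebDriver driver, By listLocator, string listText)
        {
            var list = new SelectElement(driver.FindElement(listLocator));
            list.SelectByText(listText);
        }
    }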

4.1.2 Test Case Life Cycle

The prototype works with multiple technologies, which require the test case to undergo changes from the moment it is generated to its execution. The main steps are as follows:

1. Record user actions: The first step to create a test case is to use the Selenium IDE to record user actions as the user navigates through the web application to test it. These actions will be represented as Selenese commands.

2. Extract test case commands to a C# file: By using a developed C# NUnit formatter, the Selenese test case will be exported into a C# file. Once complete, the exported test case is transferred to a Jenkins workspace, ready to be run or edited using Selenium WebDriver.


With regard to the test cases that use recurrent steps, a detailed description of the process will be presented in section 4.3.1, under "C# formatters".

3. Run test case through a CI tool: After setting up the test environment with continuous integration using Jenkins, the test case is executed using the NUnit console and the results will be saved in an XML file, which can be analyzed through the use of different test report tools (an example invocation follows).
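As a hedged illustration of this last step, assuming the NUnit 2.x console runner and illustrative file names (the actual paths and switches configured in the prototype's build step may differ), the execution command could look like this:

    nunit-console.exe SeleniumTests.dll /xml:TestResult.xml

The /xml switch tells the runner where to write the XML results file that is later consumed by the report tools.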

4.1.3 Test Results File Structure

Following the tests' execution, the results are stored in a generated XML file. The file structures the test cases in different types of XML components that contain the tests' result information. These types of components are ordered hierarchically and adopt the following structure.

1. Assembly: At the top is the Assembly component, which includes information regarding the NUnit project where the tests are located and contains all Namespace components.

2. Namespace: These components are derived from the user-specified test suite, which groups the test cases in other nested namespaces or in a TestFixture component.

3. TestFixture: It is related to NUnit's test case class and stores the respective TestCase component.

4. TestCase: It is the basic component of the test. It contains an attribute that is used to specify a unique identifier for the respective test case. In case of a failed test, this component will also nest a failure element with the message and stack trace of the failure that caused the test's unsuccessful execution.

As an example of this hierarchical behavior, suppose the user creates two test cases, one with the test suite "appTests.navigationTests" and the other with "appTests.authenticationTests". The test results XML file generated from executing these two tests will consist of an assembly component that has one namespace component called "appTests". This namespace will then have two child namespace components called "navigationTests" and "authenticationTests". Each one will have its associated TestFixture and TestCase component.

Regarding the components' attributes, excluding the TestCase component, all of them specify a type attribute which indicates the type of suite represented by the element (Assembly, Namespace, TestFixture). What follows is a description of the attributes that every type of component contains.

• name: The display name of the test as generated by NUnit.

• executed: Boolean variable that indicates if the test was executed, independently of the test's outcome.

• result: The basic result of the test. May be Success, Failed, Inconclusive or Skipped.


• success: Boolean variable that specifies if the test executed successfully or failed.

• time: The duration of the test in seconds, expressed as a real number.

• asserts: The number of asserts executed by the test.

Note that components with nested components will sum the results of their children. For example, a component will sum the time attributes of its nested components to update its own time. Additionally, if at least one of the nested components of a component results in a failed test, the component in question will update its attributes to indicate that it failed.
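Putting the hierarchy, attributes and aggregation together, a hedged sketch of a results file for the two-test example above could look like the following, assuming the NUnit 2.x result schema (which matches the attributes described above); names, timings and messages are illustrative, and the passing branch is abbreviated.

    <test-suite type="Assembly" name="SeleniumTests.dll" executed="True"
                result="Failure" success="False" time="12.305" asserts="3">
      <results>
        <test-suite type="Namespace" name="appTests" executed="True"
                    result="Failure" success="False" time="12.305" asserts="3">
          <results>
            <test-suite type="Namespace" name="navigationTests" executed="True"
                        result="Success" success="True" time="5.120" asserts="1">
              <!-- TestFixture and its successful TestCase omitted -->
            </test-suite>
            <test-suite type="Namespace" name="authenticationTests" executed="True"
                        result="Failure" success="False" time="7.185" asserts="2">
              <results>
                <test-suite type="TestFixture" name="LoginTest" executed="True"
                            result="Failure" success="False" time="7.185" asserts="2">
                  <results>
                    <test-case name="appTests.authenticationTests.LoginTest.Run"
                               executed="True" result="Failure" success="False"
                               time="7.185" asserts="2">
                      <failure>
                        <message>Element id=loginButton was not present</message>
                        <stack-trace>at appTests.authenticationTests.LoginTest.Run()</stack-trace>
                      </failure>
                    </test-case>
                  </results>
                </test-suite>
              </results>
            </test-suite>
          </results>
        </test-suite>
      </results>
    </test-suite>

Note how the failure propagates upwards and how each parent's time and asserts attributes are the sums of its children's.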

4.1.4 Test Selector File Structure

As mentioned in section 3.4.1, a file is required to store the test cases' information and group them by test suite in a list, allowing the user to select them through an interface. For this purpose, a file with a .properties extension was created, which contains the following five properties:

• tests: This property's value is a JSON array that contains a JSON object for each test.

• enableField: The name of the field that indicates whether a test is enabled or not. If the value in the specified field is false for some test, then that test will not be shown at all.

• groupBy: The field by which the list will group the tests. It is related to the Namespace component in the test results file.

• showFields: The fields that will be shown in the tests' tree list.

• fieldSeparator: The character that separates the fields in the tests' tree list.

This file serves as a configuration file for the list in the Test Selector interface, in addition to holding the JSON array of tests (a hypothetical example follows). The implemented interface can be seen in figure 4.20.

4.1.5 Test Environment Setup

For this research work, a test environment with continuous integration features was required in order to run full end-to-end automated testing in a single automated build process that could be run on-demand or periodically, with minimal user action.

In order to set up the test environment, Jenkins is used to create a workspace that structures the test environment and to automate the build process.

Figure 4.1 illustrates the breakdown of the Jenkins workspace contents:

• Folder BuildSeleniumTests: Pre-built Visual Studio NUnit project that is used to build the exported tests that were recorded with Selenium IDE using the C# NUnit formatter
