Academic year: 2021

Portfolio Composition using

Fundamental Analysis and Rule Optimization

Renato Afonso Garcia Henriques

Thesis to obtain the Master of Science Degree in

Electrical and Computer Engineering

Supervisor:

Prof. Rui Fuentecilla Maia Ferreira Neves

Examination Committee

Chairperson: Prof. Teresa Maria Sá Ferreira Vazão Vasques

Supervisor: Prof. Rui Fuentecilla Maia Ferreira Neves

Member of the Committee: Prof. Susana Margarida da Silva Vieira


Declaration

I declare that this document is an original work of my own authorship and that it fulfills all the requirements of the Code of Conduct and Good Practices of the Universidade de Lisboa.


Acknowledgments

I begin by thanking my supervisor, Professor Rui Neves, for his availability. His knowledge and experience were fundamental to the completion of this work.

I would also like to thank the people who stood by my side during this journey. Thanks to the colleagues with whom I shared the anxieties and achievements of the last 5 years, and also to all the friends who gave me unforgettable moments and many stories to tell.

A special thanks to my family. To my mother and to Carlos, who sacrificed so much for my well-being and whom I hope to be able to repay in the future. To Beatriz, for treating me well despite my being an ungrateful brother. To my father, for always encouraging me to invest in my education and for taking me to see the world. I also thank my grandfather Afonso, who supported me so much while I was growing up and who contributed greatly to the person I am today.

Finally, a big thank you to Mariana Galrinho, for her patience and for having been at my side throughout this stage. Without her support and motivation this work would not have been possible.


Resumo

Artificial intelligence and the rapid growth of computational power have revolutionized the way investments are made. Computers are now able to make faster and more accurate decisions thanks to their capacity to process large quantities of data in a short period of time. In this work, we propose a trading system that selects companies based on their financial performance and historical stock data to actively manage a stock portfolio. The proposed system starts by filtering companies with steady financial growth in order to restrict possible investments to stocks that are less vulnerable to unexpected price drops. A set of buying and selling rules is used to evaluate how close a stock is to its desirable conditions. The weights assigned to each rule are optimized so as to adapt this evaluation to the underlying market conditions, with the goal of maximizing the portfolio's profitability. We explore the use of Particle Swarm Optimization to solve this problem, testing adaptive variants and different topologies. The proposed strategy was simulated using daily data from the constituents of the S&P500 index during the period between January 2015 and January 2018, which allowed the approach to be tested under several market conditions. The system achieved an average annual return of 11.23% and outperformed the index during periods of high volatility. The financial filtering method showed promising results during bull markets.

Palavras-chave:

Trading system, Stock portfolio composition, Fundamental Analysis, Technical Analysis, Particle Swarm Optimization, Evolutionary Computation


Abstract

Artificial intelligence and the rapid growth of computational power have revolutionized the way financial trading is done. Computers are now able to make faster and more accurate decisions thanks to their capacity to process larger quantities of data in a shorter amount of time. In this work, we propose a trading system that selects companies based on their financial performance and historical stock data to actively manage a portfolio. The proposed system starts by filtering companies with steady financial growth in order to restrict possible investments to stocks that are less vulnerable to unexpected price drops. A set of trading rules is used to evaluate how far a stock is from its desirable conditions, buying and selling it accordingly. The importance weights given to each rule are optimized in order to adapt this evaluation to the underlying market conditions, with the goal of maximizing the portfolio's profitability. We explore the use of Particle Swarm Optimization for solving this problem, testing adaptive variants and different topologies. The proposed strategy was simulated using daily data from S&P500 index constituents during the period between January 2015 and January 2018, which allowed the approach to be tested under several market conditions. The system achieved an annual average return of 11.23% and outperformed the benchmark during periods of high volatility. The financial filtering method itself showed very promising results during uptrending markets.

Keywords:

Trading system, Portfolio Composition, Fundamental Analysis, Technical Analysis, Particle Swarm Optimization, Evolutionary Computing


Contents

Acknowledgments
Resumo
Abstract
List of Tables
List of Figures
List of Acronyms

1 Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Contributions
  1.4 Document Outline
2 Background
  2.1 Market analysis
    2.1.1 Stock Exchange Market
    2.1.2 Fundamental analysis
    2.1.3 Technical analysis
  2.2 Particle Swarm Optimization
    2.2.1 Evolutionary Computation
    2.2.2 Overview of Particle Swarm Optimization
    2.2.3 Variations and improvements
    2.2.4 Adaptive Particle Swarm Optimization
  2.3 Portfolio composition
    2.3.1 Portfolio theory
    2.3.2 Risk management techniques
    2.3.3 Performance metrics
  2.4 Rules Optimization
    2.4.1 Related works
3 Proposed system
  3.1 Overview
  3.2 System architecture
  3.3 Data Processing
    3.3.1 Financial Filters
    3.3.2 Trading Rules
  3.4 Rules Optimization
    3.4.1 Problem definition
    3.4.2 Particle Swarm Optimization
  3.5 Trading Simulator
  3.6 User Input
4 Experimental Results
  4.1 Definition of Financial Filters
    4.1.1 Analysis of the revenue and net income as fundamental indicators
    4.1.2 Prediction of revenue growth
  4.2 Comparison between Particle Swarm Optimization variants
  4.3 System performance
5 Conclusions and future work
  5.1 Conclusions
  5.2 Future work


List of Tables

2.1 An example of a stock portfolio.

2.2 Related works on the application of Evolutionary Computation techniques to the Rule Combination problem.

3.1 Description of each different type of stock price.

3.2 Summary of the score system used by every trading rule.

3.3 Rules defined for the EMA indicator.

3.4 Rules defined for the DC indicator. EMA(S) corresponds to the shorter average and EMA(L) to the longer average.

3.5 Rules defined for the Bollinger Band indicator. The Bollinger Bands are calculated using a simple moving average of 50 days and a standard deviation of 2.5.

4.1 Percentage of companies with positive growth of revenue and stock price for different numbers of years.

4.2 Average of the annual percentage change of the stock price of every company whose revenue grew Y years during the last X years. The average stock price change of every company used in each column is presented in the last row.

4.3 Number of companies used to calculate the annual average of the percentage change of the stock price of every company whose revenue grew Y years during the last X years in Table 4.2.

4.4 Annual average of the percentage change of the stock price of every company whose net income grew Y years during the last X years.

4.5 Number of companies used to calculate the annual average of the percentage change of the stock price of every company whose net income grew Y years during the last X years of the corresponding cell in Table 4.4.

4.6 Annual average price growth of companies that have had the last X years with positive net income.

4.7 Parameters used by the Trading Simulator during the analysis of the PSO variants.

4.8 Average and standard deviation of the global best position's fitness score for 10 executions of each algorithm.

4.10 Best solutions found by APSO of the best performing execution, for each training period.

4.11 Evaluation metrics of the Buy&Hold strategy of the S&P500 index and the system


List of Figures

2.1 Apple Inc.'s stock price chart from 2010-01-01 to 2018-12-28.

2.2 Subdivisions of algorithms from the Evolutionary Computation field.

2.3 Representation of ring topology and fully connected topology.

2.4 Apple Inc.'s stock price from 2007-10-01 to 2008-09-13. The difference between point H and point L corresponds to the biggest price decrease of this time series, i.e., its maximum drawdown.

2.5 Structure of the object of optimization used in [3].

2.6 Division of the training and testing periods with and without the sliding window.

3.1 General representation of the proposed trading strategy.

3.2 Flowchart with the complete system procedure.

3.3 Example of the operation of the Double Crossover method using an EMA of 50 days and an EMA of 200 days for the stock price of Apple Inc.

3.4 Mapping of the 7-dimensional solution vector.

3.5 Flowchart of the procedure of the Trading Simulator.

3.6 An example of the use of the ranking system to select which stocks to buy and sell.

4.1 Percentage change of revenue and stock price for each company in the last N years.

4.2 Percentage change of the revenue for each quarter compared to the corresponding quarter of the previous year.

4.3 Percentage change of revenue for each quarter compared to the corresponding prediction using the 4 previous quarters' results.

4.4 Performance of the three different strategies and comparison to the S&P500 index measured using the Return on Investment (%) from 2015-01-05 to 2018-01-03.

4.5 Average fitness score of the swarm and fitness score of the global best position of each algorithm for 50 iterations.

4.6 Mapping of the dates used for the test and training periods.

4.7 System performance, in terms of the ROI (%), from 2015-01-15 to 2018-01-18. It presents the average of every execution of the system and the Buy&Hold of the S&P500 index.

4.8 System performance, in terms of the ROI (%), over the period between 2017-02-15 and

4.9 System performance, in terms of the ROI (%), over the period between 2015-01-15 and 2018-01-18. It presents the average results of the system with and without the rule optimization system, and the benchmark, the S&P500 index.


List of Acronyms

APSO Adaptive Particle Swarm Optimization

BBANDS Bollinger Bands

COP Constrained Optimization Problem

DC Double Crossover

ELS Elitist Learning Strategy

EMA Exponential moving average

ESE Evolutionary State Estimation

GA Genetic Algorithm

Local-PSO Local Particle Swarm Optimization

Local-APSO Local Adaptive Particle Swarm Optimization

MDD Maximum Drawdown

PSO Particle Swarm Optimization

ROI Return on Investment

SMA Simple moving average


Chapter 1

Introduction

1.1 Motivation

The stock exchange market is known for moving vast amounts of capital. The fortunes of the top 1% richest people in the world derive mainly from investments held in this market. By trading stocks, investors seek to generate an income from their idle money, hoping to increase their wealth in the future. However, such a reward is not easily obtained. Stock investing, like most types of investment, carries the potential risk of losing part or all of the money invested.

While some ventures are successful and bring fortune to the investor, others, due to bad luck or mere negligence, may remain fruitless, failing to provide any profit or even devaluing the invested capital. However, no intelligent investor likes to leave his hard-earned money in the hands of fate. For this reason, over the years, this subject has been thoroughly studied by many experts and financial researchers. From the field of security analysis, two main approaches for examining investments emerged: Fundamental and Technical analysis.

While Fundamental analysis selects companies to invest in based on their financial performance, Technical analysis uses past market data and charting techniques to identify patterns and trends, with the intention of predicting future movements of a company's stock price. Both disciplines have proved fairly effective at finding lucrative investments. However, as they have generally been executed manually, they suffer from long reaction delays to market changes and are prone to human mistakes. The rise of the digital world and the increase in computational power brought forth intelligent, automatic trading systems capable of mitigating most of these limitations. Artificial Intelligence allowed the creation of systems capable of making faster and more accurate investment decisions than a human, mostly thanks to their ability to examine larger quantities of financial data in a shorter amount of time.

Nevertheless, the process of creating trading rules, or guidelines to decide when to buy or sell a company's shares, remains a big challenge due to the immense number of factors that move the stock market. Relations between meaningful financial events are hard to grasp even for experienced investors, and therefore difficult to translate into a machine. To circumvent such difficulties, a common approach is to select optimal combinations of rules and find suitable parameters for indicators using optimization techniques, leaving the selection of which trading rules to use to the investor or expert.
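As a concrete illustration of this approach, the sketch below combines the scores of two invented trading rules through tunable weights. The rule names, thresholds and weight values are assumptions made for the example, not the rules or weights used in this thesis.

```python
# Each rule maps a stock's current state to a score; an optimizer would tune
# the weight attached to each rule. All names and thresholds are illustrative.
rules = {
    "price_above_sma": lambda s: 1.0 if s["price"] > s["sma"] else -1.0,
    "rsi_oversold":    lambda s: 1.0 if s["rsi"] < 30 else 0.0,
}

def combined_score(stock, weights):
    """Weighted sum of the individual rule scores; the weights are the
    quantities an optimizer such as PSO would adjust."""
    return sum(weights[name] * rule(stock) for name, rule in rules.items())

stock = {"price": 105.0, "sma": 100.0, "rsi": 25.0}
print(combined_score(stock, {"price_above_sma": 0.6, "rsi_oversold": 0.4}))  # 1.0
```

A positive combined score would favor buying and a negative one selling; the optimizer's job is only to choose the weights, not the rules themselves.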

Problems with foundations like these are commonly classified as Rule Combination problems [1], and evolutionary computing algorithms are a typical choice for approaching them [2–8]. This class of algorithms has proved capable of efficiently finding nearly optimal solutions for a wide range of problems. Moreover, their adaptability makes them suitable for optimizing complex functions, which are typically hard to solve. Particle Swarm Optimization (PSO), a metaheuristic from this family of algorithms inspired by the social behavior observed in flocks of birds and schools of fish, has been praised for its simplicity and low computational complexity. However, its application to Rule Combination problems has hardly been explored.

Previous works that focused on developing trading strategies featuring rule combination showed promising results [3–8], performing well during downtrend markets, and provided good indications of being a viable approach for investing in the stock market. That said, a strategy capable of performing well in both uptrend and downtrend markets has yet to be presented. In addition, none of the works found in the literature have introduced a strategy capable of competing with the most followed American stock indices, such as the S&P500 or the Dow Jones Industrial Average. Ultimately, it is possible to conclude that there are still open research opportunities regarding this topic.

1.2 Objectives

The goal of this thesis is to develop a system capable of actively managing a stock portfolio in a profitable way. The application must be able to select the best companies to invest in by evaluating their financial performance, and to adapt the rules for buying and selling their shares according to the state of the stock market. Additionally, the trading strategy should perform well regardless of how the market is trending.

A further aim of this thesis is to explore the Particle Swarm Optimization framework in the context of a Rule Combination problem, and to compare it with a variant called Adaptive Particle Swarm Optimization (APSO), proposed in [9], which adapts the algorithm's parameters according to the stage of the optimization process to improve its efficiency. In addition, we intend to determine whether it is possible to improve the convergence speed of a local-topology PSO by applying the techniques employed by APSO.

In a concise and organized manner, the specific objectives of this thesis are:

– Develop a trading strategy capable of performing well regardless of how the stock market is trending;

– Explore the application of fundamental analysis to improve the performance of trading strategies during uptrend markets;

– Explore how a company’s revenue and net income affect the evolution of its stock price;


– Create a set of trading rules and use Particle Swarm Optimization to select the most prominent ones based on past market conditions;

– Analyze and compare the standard Particle Swarm Optimization with the Adaptive Particle Swarm Optimization;

– Determine whether the techniques employed by APSO can improve the convergence speed of Particle Swarm Optimization featuring a local topology.

1.3 Contributions

The main contributions of this thesis are:

– Implementation of a trading system capable of automatically managing a portfolio. The proposed strategy is able to outperform the S&P500 index during periods of high volatility;

– Application of Adaptive Particle Swarm Optimization to a Rule Combination problem;

– Development of a new PSO variant called Local-APSO, which consists of APSO featuring a local topology. The experimental results show that this variant outperforms the standard local-topology PSO;

– Introduction of three fundamental indicators, based on companies' revenue and net income, that can successfully find companies whose stock is less likely to devalue.

1.4 Document Outline

This document is composed of five chapters. In Chapter 2, we introduce the most important concepts needed to fully comprehend the developed system. Additionally, we discuss related works on Rule Optimization, inspecting their contributions to the field and searching for possible aspects where they can be improved. Chapter 3 presents the proposed system and describes its various stages in detail. Chapter 4 presents the experimental results and is divided into three sections. In Section 4.1, we perform several analyses of the influence of a company's revenue and net income on its stock price movement, and use the relations found to define a set of financial filters. In Section 4.2, we compare the performance of several variants of Particle Swarm Optimization to choose the most suitable one for the given problem. In Section 4.3, we evaluate the performance of the proposed system by testing it under several market conditions. Finally, Chapter 5 summarizes the main conclusions obtained throughout the thesis and proposes possible future research directions.


Chapter 2

Background

This chapter presents an overview of important concepts regarding stock investing, along with a review of Particle Swarm Optimization and its state-of-the-art variations. Additionally, we discuss related works in the field of Rule Combination.

2.1 Market analysis

In this section, we introduce the Stock Exchange Market and describe the main schools of stock analysis, Fundamental and Technical analysis.

2.1.1 Stock Exchange Market

A stock (also known as a "share") is a type of security that signifies ownership of a corporation and represents a claim on part of the corporation's assets and earnings. Thereby, one could state that a holder of stock is an owner of a company. This ownership is determined by the number of shares a person owns relative to the total number of shares. For example, if a company has 1000 shares available and one person owns 100 shares, that person would own, and have a claim to, 10% of the company's assets.

Stocks can be profitable to their owner in two ways: through trading or through dividends. When trading stocks, an investor can take two types of position: long positions and short positions. The former can be observed in the chart of Figure 2.1. When an investor buys Apple Inc.'s shares at instant A and later sells them at instant B, he ends with a profit of price_B − price_A ≈ $35. On the other hand, this approach has associated risks: if the investor bought Apple Inc.'s shares at instant B and sold them at instant C, he would end with a loss of price_C − price_B ≈ −$30. In addition, a trader can also perform a short sell from instant B to instant C to earn money. Short selling is the act of borrowing shares and selling them on the market, planning to buy them back later for less money and earning the price difference. However, as opposed to long trades, this type of position may lead not only to the loss of the invested money but also to additional debt, since in theory the price of the stock can increase without bound. For this reason, short selling is considered to have a high risk/reward ratio.
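The arithmetic of the two position types above can be sketched as follows; the dollar values are rough readings of points A, B and C from Figure 2.1, assumed for illustration only.

```python
# Approximate prices at the three instants marked on Figure 2.1 (assumed).
price_a, price_b, price_c = 155.0, 190.0, 160.0

def long_profit(buy, sell):
    """Profit of a long position: buy first, sell later."""
    return sell - buy

def short_profit(sell, buy_back):
    """Profit of a short position: sell borrowed shares, buy them back later."""
    return sell - buy_back

print(long_profit(price_a, price_b))   # long from A to B: 35.0
print(long_profit(price_b, price_c))   # long from B to C: -30.0
print(short_profit(price_b, price_c))  # short from B to C: 30.0
```

Note that both formulas are the same sale-minus-purchase difference; what differs is the order of the two transactions, which is why a short position's loss is unbounded while a long position can lose at most the purchase price.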


Figure 2.1: Apple Inc.’s stock price chart from 2010-01-01 to 2018-12-28.

Shares of publicly traded companies can be bought on the Stock Market, which is responsible for protecting the investor, regulating corporations, validating transactions, and more. The stock market's performance is usually characterized by how it is trending. Two terms widely used for identifying a trending market are: Bull market, when the market is going through an uptrend, and Bear market, when the market is going through a downtrend. When the stock market does not have a clear trend and is moving instead within a tight range of prices, a sideways market is said to be taking place.

Another important service often offered by the Stock Exchange Market is the creation and maintenance of market-level and sector-specific indicators. One of the most famous is the S&P 500 Index, a market-capitalization-weighted index of the 500 largest U.S. publicly traded companies by market value. Since these companies come from a wide range of industries, this index has become a leading indicator of the performance of the country's economy. It is also widely used as a benchmark by many hedge funds to compare their trading strategies against the overall market performance.
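A market-capitalization-weighted index of the kind described above can be sketched in a few lines. The company names, quotes and divisor below are invented for illustration (the real S&P 500 divisor is proprietary and adjusted over time).

```python
# Hypothetical constituents: name -> (share_price, shares_outstanding).
companies = {
    "AAA": (100.0, 1_000),
    "BBB": (50.0, 4_000),
    "CCC": (20.0, 5_000),
}

def index_level(quotes, divisor=1_000.0):
    """Cap-weighted index level: total market capitalization of the
    constituents scaled by a fixed divisor."""
    total_cap = sum(price * shares for price, shares in quotes.values())
    return total_cap / divisor

print(index_level(companies))  # (100000 + 200000 + 100000) / 1000 = 400.0
```

Because each constituent enters through its market capitalization, a 1% move in the largest company shifts the index more than a 1% move in the smallest, which is exactly the weighting property the text describes.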

From the previous example, we can conclude that stock investing carries associated risks. Therefore, investors need intelligent approaches to profit while keeping risk low, whether by timing the market correctly or by choosing better investments. However, the Efficient Market Hypothesis introduced by Eugene Fama [10] states that stocks always trade at their fair value, making it impossible for investors to purchase undervalued stocks or sell stocks at inflated prices. Because of this assumption, many investors tend to believe that it should not be possible to outperform the overall market through expert stock picking or market timing, and that the only way to achieve higher returns is to enter riskier investments. Nonetheless, stock analysis has been used successfully by many investors to find valuable investments. The two main types of stock analysis are Fundamental analysis and Technical analysis.

(25)

2.1.2 Fundamental analysis

The field of fundamental analysis provides means of evaluating an investment based on an in-depth analysis of related financial, economic, qualitative and quantitative factors. Its main goal is to predict the future evolution of a company's stock price based on its financial state. Fundamental analysis is also used to assign a quantitative value to the expected price of a stock and compare it with its current market price, estimating whether the stock is undervalued or overvalued.

The three most important tools for evaluating stocks using fundamental analysis are the Income statement, the Balance sheet and the Cash flow statement [11]. These are the financial statements of a company, whose purpose is to report its financial status and activity, and they can be used to evaluate its performance and changes in its position. These documents are provided by companies periodically, usually quarterly or annually. The measures of financial performance are commonly called fundamental indicators. They can be growth rates of specific figures, such as revenue or profit; ratios like the debt-to-equity ratio, which compares the debt taken on by a company with its shareholders' equity; or even the absolute value of some income figures.
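Two of the fundamental indicators just mentioned can be computed directly from reported figures, as sketched below. The field names and numbers are made up for the example and do not correspond to any real reporting API.

```python
# Illustrative figures taken from a hypothetical quarterly report.
report = {"revenue": 120.0, "revenue_prev_year": 100.0,
          "total_debt": 300.0, "shareholder_equity": 600.0}

def revenue_growth(r):
    """Year-over-year revenue growth rate."""
    return (r["revenue"] - r["revenue_prev_year"]) / r["revenue_prev_year"]

def debt_to_equity(r):
    """Debt taken on by the company relative to shareholder equity."""
    return r["total_debt"] / r["shareholder_equity"]

print(revenue_growth(report))  # 0.2 (20% growth)
print(debt_to_equity(report))  # 0.5
```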

It is important to understand the fundamental indicators used and their relevance when performing an analysis. Most indicators only provide meaningful information about a company's performance if they are compared with companies from the same sector or industry. For example, according to Buffett and Clark [11], the capital spent by a company on Research and Development (R&D) is an important indicator and must be taken into consideration when choosing a company to invest in. At the same time, this value only provides useful information if it is compared with companies offering similar products or services. A technology company typically spends a high percentage of its capital on R&D, while a company from the financial sector usually spends very little. An investor should not invest in the financial company for this reason alone, since a company from the Technology and Information sector is expected to spend more on R&D.

This type of analysis has been successfully applied by many investors over the years, one of the most famous being Warren Buffett. Advocates of fundamental analysis favor this approach since it tends to provide a more concrete and justifiable explanation for the price evolution of an asset, unlike other types of analysis that have a stronger subjective basis. On the other hand, due to the nature of this type of analysis and the fact that financial statements are only provided quarterly or annually, investors cannot expect to obtain high profits quickly, a condition unfavorable to some.

2.1.3 Technical analysis

Technical analysis focuses on identifying trading opportunities by examining trends in historical stock data, such as price movement or volume. This field was first introduced by the works of Charles Dow, and considers that the fundamental properties of a company are already reflected in its stock price, which is therefore the only relevant information an investor needs to analyze. Furthermore, technical analysis assumes that market price movements are not random, but move in identifiable patterns and trends that repeat over time, mainly due to the collective, patterned behavior of investors. Technical indicators typically fall into one of four categories: trend, momentum, volatility and volume.

Trend indicators: These indicators measure the direction and strength of a trend using some form of price averaging to establish a baseline. The core principle is that when price moves above the average, a bullish trend (prices increasing) may be taking place. Likewise, when prices fall below the average, it signals a bearish trend (prices decreasing). Examples of trend indicators include: Moving Averages, Moving Average Convergence Divergence (MACD), and Parabolic stop and reverse.
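The two averaging schemes named above can be sketched as follows. The price list is invented, and the EMA uses the common smoothing factor 2/(n+1), initialized from the first price; other initializations exist.

```python
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def ema(prices, n):
    """Exponential moving average with smoothing factor 2 / (n + 1)."""
    alpha = 2 / (n + 1)
    value = prices[0]
    for p in prices[1:]:
        value = alpha * p + (1 - alpha) * value
    return value

closes = [10.0, 11.0, 12.0, 13.0, 14.0]
print(sma(closes, 3))                   # (12 + 13 + 14) / 3 = 13.0
print(ema(closes, 3) > sma(closes, 5))  # EMA weighs recent prices more: True
```

A simple trend signal then follows the text's core principle: the last close above the average hints at a bullish trend, below it at a bearish one.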

Momentum indicators: These indicators help identify the speed of price movement by comparing prices over time. Typically, they appear as a line below the price chart that oscillates as momentum changes. A change in future prices may be signaled when there is a divergence between the price and a momentum indicator; the same can be done with volume. Examples of momentum indicators include: stochastic oscillators, the commodity channel index, and the relative strength index (RSI).
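As one example, the RSI mentioned above can be sketched from its textbook definition; this simple-average form ignores the exponential smoothing some charting packages apply, and the price series is invented.

```python
def rsi(prices, n=14):
    """Relative Strength Index over the last n price changes (0-100 scale)."""
    deltas = [b - a for a, b in zip(prices, prices[1:])][-n:]
    gains = sum(d for d in deltas if d > 0)
    losses = sum(-d for d in deltas if d < 0)
    if losses == 0:
        return 100.0  # only up-moves in the window
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

print(rsi([10.0, 11.0, 10.0, 11.0, 10.0, 11.0], n=5))  # gains 3, losses 2 -> 60.0
```

Readings above roughly 70 are conventionally taken as overbought and below 30 as oversold, which is how the oscillating line below the chart is usually interpreted.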

Volatility indicators: These indicators measure the rate of price movement. They are useful for gauging the stability of a given market and help traders predict when it may change direction. Examples of volatility indicators include: Bollinger Bands, average true range, and standard deviation.
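A minimal sketch of the Bollinger Bands named above, assuming their usual definition (an n-day simple moving average flanked by bands k population standard deviations away); the four prices are invented.

```python
def bollinger_bands(prices, n=20, k=2.0):
    """Middle band: n-day SMA; outer bands: k standard deviations away."""
    window = prices[-n:]
    mid = sum(window) / len(window)
    var = sum((p - mid) ** 2 for p in window) / len(window)
    std = var ** 0.5
    return mid - k * std, mid, mid + k * std

lower, mid, upper = bollinger_bands([22.0, 24.0, 26.0, 28.0], n=4, k=2.0)
print(mid)            # 25.0
print(upper - lower)  # the band is 2k = 4 standard deviations wide
```

The band width itself is the volatility reading: it expands when prices swing widely and contracts in quiet markets.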

Volume indicators: These measure the strength of a trend or confirm a trading direction based on some form of averaging of raw volume. They build on the observation that the strongest trends often take place while volume increases. Examples of volume indicators include: on-balance volume (OBV), the Chaikin oscillator, and the volume rate of change.
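The OBV named above has a particularly simple definition and can be sketched directly; the closes and volumes below are invented.

```python
def on_balance_volume(closes, volumes):
    """On-balance volume: add the day's volume on up days, subtract it on
    down days, leave it unchanged on flat days."""
    obv = 0
    series = [obv]
    for prev, cur, vol in zip(closes, closes[1:], volumes[1:]):
        if cur > prev:
            obv += vol
        elif cur < prev:
            obv -= vol
        series.append(obv)
    return series

closes = [10.0, 11.0, 10.5, 10.5, 12.0]
volumes = [0, 100, 50, 80, 200]
print(on_balance_volume(closes, volumes))  # [0, 100, 50, 50, 250]
```

A rising OBV alongside a rising price confirms the uptrend; a divergence between the two is the warning signal the text describes.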

2.2 Particle Swarm Optimization

In this section, we introduce Particle Swarm Optimization. We start by presenting Evolutionary Computation, the family of algorithms of which PSO is part, and explain some key features of these algorithms. Then, we describe the procedure of Particle Swarm Optimization in detail. Additionally, we present some variations and improvements that have been proposed for the original method over the years. Finally, we describe an adaptive variant of the algorithm called Adaptive Particle Swarm Optimization.

2.2.1 Evolutionary Computation

Evolutionary computation is a family of algorithms for global optimization inspired by biological evolution and social behavior. These algorithms are population-based trial-and-error problem solvers with a stochastic character. They are considered metaheuristics, since they are designed to find sufficiently good solutions to an optimization problem without a guarantee that a global optimum will be found.

Linking the concept of natural evolution with the field of problem solving, a natural environment can be compared to an optimization problem. Each individual (living being) is a candidate solution to the problem, and its fitness, i.e., its capability to achieve a certain goal, is a problem-dependent measure of the quality of the solution. Evolutionary algorithms typically apply a set of operations to a population of individuals, iteratively, to find and create individuals with higher fitness. In other words, new solutions are repeatedly tested until a global optimum or a sufficiently good solution is found in the amount of time available [12].
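The iterate-evaluate-vary cycle described above can be sketched with a deliberately tiny example; the toy fitness function, population size, selection rule and mutation scheme are all arbitrary choices for illustration, not any specific published algorithm.

```python
import random

def fitness(x):
    """Toy fitness function with its global optimum at x = 3."""
    return -(x - 3.0) ** 2

def evolve(pop_size=20, generations=100, seed=1):
    rng = random.Random(seed)
    # initial population of random candidate solutions
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half (elitist, so the best survives)
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # variation: mutate copies of the parents
        children = [p + rng.gauss(0, 0.5) for p in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(round(best, 2))  # a value close to 3.0, the optimum of the toy fitness
```

Each generation tests new solutions and discards the weaker ones, which is exactly the repeated trial-and-error search the paragraph describes.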

According to Hu et al. [1], Evolutionary Computation algorithms have the advantage of a reduced probability of being trapped in local optima, thanks to their population-based and stochastic characteristics. This is the opposite of classical local optimization methods, such as gradient-based and hill-climbing methods, whose deterministic rules give a higher chance of getting trapped in local optima. Furthermore, compared to a completely random search, these optimization methods successively direct their search toward better regions of the search space by resorting to fitness or objective functions. Although the stochastic nature of these methods cannot guarantee globally optimal results, Evolutionary Computation techniques are effective at generating nearly optimal results within a relatively short time. For these reasons, these algorithms are effective and robust in dealing with problems of discontinuity and non-linearity. The ease of representing solutions is also a reason why these techniques are convenient and suitable for addressing a wide range of problems with many different characteristics.

Figure 2.2: Subdivisions of algorithms from the Evolutionary Computation field.

In the field of Evolutionary Computation, there are two main sub-fields (Fig. 2.2): Evolutionary Algorithms and Swarm Intelligence. Evolutionary Algorithms represent optimization methods that search the space by resorting to techniques based on the Theory of Evolution of Charles Darwin, such as selection, reproduction and mutation. Some of these algorithms include Genetic Algorithms (GA), Evolutionary Programming, Evolutionary Strategy, and Genetic Programming. Swarm Intelligence, on the other hand, includes algorithms that simulate social behavior, which is regarded as an evolutionary process. These systems typically consist of a population of simple agents that communicate with each other and with their environment. The agents follow very simple rules while crossing the search space, and despite the absence of a centralized control system to indicate how each one should behave or move, the interactions between these agents lead to a global intelligent behavior, capable of finding global optima. Some of these algorithms include Ant Colony Optimization and Particle Swarm Optimization.

2.2.2 Overview of Particle Swarm Optimization

Particle Swarm Optimization is a population-based stochastic optimization technique, inspired by social behaviors in flocks of birds and schools of fish. The algorithm was introduced by Kennedy and Eberhart in 1995 [13], and its name and technical terminology are influenced by research on particle physics. According to several researchers [14, 15], the rising popularity of this algorithm is a result of its simplicity, which makes it easy to implement and adapt to several different applications, especially when compared to the GA. Another reason is that the PSO proved to be capable of delivering fairly good results for several different problems within a reasonable amount of time. The appeal of these features to many practitioners made this algorithm one of the most famous inside the domain of Evolutionary Computation.

In Particle Swarm Optimization, a population or swarm of particles moves around the search space to find solutions to a given continuous optimization problem. Each particle i has a position x_i ∈ R^n and a velocity v_i ∈ R^n. The position of a particle represents a solution to the problem and has a score (or fitness) given by a fitness function, f(x) : R^n → R. The velocity vector v_i is used to determine the next movement of particle i. During the optimization process, the swarm moves around the search space and the score of each particle's position is continuously calculated. The position where particle i found the highest score, p_i, is saved. At the same time, the swarm communicates in order to share the best position, g, found by any particle until the moment.

When the algorithm starts, the initial position of each particle is randomly generated using a uniform distribution between LB and UB, which correspond to the lower and upper bounds of the search space. The initial velocity of each particle is also randomly generated using a uniform distribution bounded by V_min and V_max, whose values are set by the user. Next, for each iteration k, the position of each particle is updated using the equation

x_i^k = x_i^{k-1} + v_i^k,    (2.1)

where v_i^k is given by

v_i^k = ω v_i^{k-1} + c_1 r_1 (p_i^{k-1} − x_i^{k-1}) + c_2 r_2 (g^{k-1} − x_i^{k-1}).    (2.2)

This updated velocity vector is composed of the weighted sum of three components: the velocity of the previous iteration, the difference between the best position that particle i had in the past and its current position, and the difference between the best position found by any particle and the particle's current position.

To introduce randomness into the search operation, in each iteration the scalars r_1 and r_2 are randomly drawn from the standard uniform distribution. The weights ω, c_1 and c_2 are predefined parameters. The inertia weight ω controls the influence of the previous velocity vector on the calculation of the current velocity. Its value influences the type of search that the swarm is performing, that is, whether it is focusing more on a global search or a local search. A large ω promotes exploration, i.e., the particles will try to search new areas across the search space. A small ω promotes exploitation, which is the exploration around the current search area of the particle. The parameters c_1 and c_2 are called acceleration coefficients, and correspond, respectively, to the learning rate for the personal influence and the learning rate for the social influence. As the name indicates, c_1 pulls the particle closer to the best position found by itself. This promotes the exploration of local niches and helps maintain a diverse swarm. The parameter c_2 controls the attraction of the swarm to the current globally best region, which helps with achieving a faster convergence.

The termination of PSO occurs either when all the particles present the same solution (convergence), when the previously defined fitness level is reached by at least one particle, or when a maximum number of iterations is performed. To sum up, Algorithm 1 presents the procedure of a canonical Particle Swarm Optimization for minimizing a function f (x).

Algorithm 1 Canonical Particle Swarm Optimization
1: Input: Maximum iterations T, swarm size Np, LB, UB, f, c1, c2, Vmin, Vmax
2: Step 1. Initialization
3: for each particle i = 1, 2, ..., Np do
4:     Initialize the particle's position with a uniform distribution: x_i^0 ∼ U(LB, UB)
5:     Initialize the personal best position to its initial position: p_i^0 = x_i^0
6:     Initialize the global best position to the minimal value of the swarm: g^0 = argmin f(x_i^0)
7:     Initialize the velocity: v_i^0 ∼ U(Vmin, Vmax)
8: end for
9: Step 2. Repeat until a termination criterion is met or k = T
10: for each particle i = 1, 2, ..., Np do
11:     Pick random scalars: r_1, r_2 ∼ U(0, 1)
12:     Update v_i^k using Eq. (2.2)
13:     Update x_i^k using Eq. (2.1)
14:     if f(x_i^k) < f(p_i^{k-1}) then
15:         Update the best known position of particle i: p_i^k ← x_i^k
16:         if f(x_i^k) < f(g^{k-1}) then
17:             Update the swarm's best known position: g^k ← x_i^k
18:         else
19:             g^k ← g^{k-1}
20:         end if
21:     else
22:         p_i^k ← p_i^{k-1}
23:     end if
24:     k ← k + 1
25: end for
26: Step 3. Return the best found solution, g^T
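The procedure above translates directly into code. The following is a minimal Python sketch of the canonical PSO for minimization; the function name, the parameter defaults and the clamping of positions to the search bounds are illustrative choices, not part of the original algorithm.

```python
import random

def pso_minimize(f, dim, lb, ub, n_particles=30, max_iter=200,
                 w=0.7, c1=1.5, c2=1.5, vmin=-1.0, vmax=1.0):
    """Minimize f over [lb, ub]^dim with the canonical PSO of Algorithm 1."""
    # Step 1: random positions and velocities, personal and global bests
    x = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_particles)]
    v = [[random.uniform(vmin, vmax) for _ in range(dim)] for _ in range(n_particles)]
    p = [xi[:] for xi in x]                      # personal best positions
    p_val = [f(xi) for xi in x]                  # personal best scores
    best = min(range(n_particles), key=lambda i: p_val[i])
    g_pos, g_val = p[best][:], p_val[best]       # global best

    # Step 2: iterate the velocity and position updates (Eqs. 2.2 and 2.1)
    for _ in range(max_iter):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p[i][d] - x[i][d])
                           + c2 * r2 * (g_pos[d] - x[i][d]))
                # Clamping to the search bounds is an addition for robustness
                x[i][d] = min(ub, max(lb, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < p_val[i]:                    # update personal best
                p[i], p_val[i] = x[i][:], fx
                if fx < g_val:                   # update global best
                    g_pos, g_val = x[i][:], fx

    # Step 3: return the best found solution
    return g_pos, g_val
```

For instance, minimizing the two-dimensional sphere function f(x) = x_1^2 + x_2^2 over [−5, 5]^2 with these defaults drives the best score close to zero.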

2.2.3 Variations and improvements

According to Ye et al. [16], there are two conditions that affect the performance of PSO the most. The first one is the balance between global exploration and local exploitation in order to obtain quality solutions faster. The second condition is the ability to maintain a diverse population or be able to jump out of local optima, both to avoid premature convergence. In the literature, many mechanisms and variations have been added to the Particle Swarm Optimization method to attempt to improve both these abilities.

Topology

While in the traditional PSO every particle communicates globally, many different topologies have been studied. The topology of a swarm describes the degree of connectivity between its members by defining a subset of particles with whom a particle can exchange information. The two original topologies proposed by Kennedy [17] correspond to the global-topology and the local-topology. In a global-topology, every particle shares its best position, p_i, with every other particle in order to find the best position found by the swarm, g. In a local-topology, on the other hand, each particle only shares its best position, p_i, with a defined number of other particles. This structure leads to multiple best positions, so the swarm is not attracted towards any single global best but rather to a combination of subswarm best positions. This decreases the convergence speed of the algorithm in comparison to the global version, but significantly improves the possibility of finding global optima.

Mendes et al. [18] tested different types of local topologies, as well as the global-topology, in terms of performance and effectiveness at achieving global optima for a set of test functions. Their results showed that the ring-topology was the most reliable at finding global optima of the five topologies tested, despite being one of the slowest in terms of convergence speed. Medina et al. [19] also compared the performance of PSO with a ring topology to PSO with a fully connected topology and showed that higher convergence rates can be obtained with the former. In a ring-topology, each particle is affected by the best performance of its two adjacent neighbors in the topological population. As the neighbors are closely connected, they react when one particle has a rise in its fitness. This reaction propagates proportionally with respect to the topological distance of the particles. This means that it is possible that one segment of the population is converging on a local optimum, while another segment is converging to a different point or continuing the search. Despite this, the optima will eventually pull the swarm to the same area. The ring topology and the fully connected topology (global-topology) are represented in Figure 2.3.

Figure 2.3: Representation of the ring topology (a) and the fully connected topology (b).
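As a concrete illustration of the ring topology, the sketch below computes the neighborhood best for one particle. The two-neighbor ring and the minimization setting are assumptions consistent with the description above; the function names are illustrative.

```python
def ring_neighbors(i, n):
    """Indices of the two adjacent neighbors of particle i on a ring of n particles."""
    return [(i - 1) % n, (i + 1) % n]

def local_best(i, best_vals, best_positions):
    """Best known position among particle i and its ring neighbors (lbest PSO)."""
    candidates = [i] + ring_neighbors(i, len(best_vals))
    j = min(candidates, key=lambda k: best_vals[k])  # minimization assumed
    return best_positions[j]
```

In an lbest update, this local best replaces the global best g in Eq. (2.2), so each particle is only attracted by its own neighborhood.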

Inertia weight and acceleration coefficients

The inertia weight ω has an important role in the Particle Swarm Optimization's convergence behavior. Selecting a proper value for ω allows a better balance between exploration and exploitation and, thus, better results. This weight is not necessarily static throughout the process; it can be changed along the execution of the algorithm. Shi and Eberhart [20] linearly decreased the inertia weight during the run from 0.9 to 0.4, and, for the test function analyzed, it provided better results than using a static value of ω. Suganthan [21] varied the inertia weight using a linear time-varying approach expressed as

ω_k = (ω_max − ω_min) · (k_max − k) / k_max + ω_min,    (2.3)

where k is the current iteration, k_max is the number of iterations and ω_k is the value of the inertia weight in the kth iteration. The variation of ω may also follow a non-linear approach. For example, (2.4) expresses the method proposed in [22] to control ω. The value of ω_{k=0} is defined initially by the user.

ω_{k+1} = (ω_k − 0.4) · (k_max − k) / k_max + 0.4    (2.4)

According to the authors, this method allows the inertia weight more time to fall off toward the lower values of the dynamic range, thereby enhancing exploitation. Additionally, with a distinct approach, Shi and Eberhart [23] use fuzzy sets and membership rules to dynamically update ω according to the fitness value of the global best position.
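Both schedules are straightforward to express in code. The sketch below implements Eqs. (2.3) and (2.4) as reconstructed here; the default bounds ω_max = 0.9 and ω_min = 0.4 follow the values reported for [20] and are otherwise illustrative.

```python
def linear_inertia(k, k_max, w_max=0.9, w_min=0.4):
    """Linear time-varying inertia weight (Eq. 2.3): decays from w_max to w_min."""
    return (w_max - w_min) * (k_max - k) / k_max + w_min

def nonlinear_inertia_step(w_k, k, k_max):
    """Non-linear update of Eq. (2.4): the weight falls off toward 0.4."""
    return (w_k - 0.4) * (k_max - k) / k_max + 0.4
```

The linear schedule yields ω_max at k = 0 and ω_min at k = k_max, while the non-linear recursion keeps ω above 0.4 and decays it progressively faster toward the end of the run.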

The acceleration coefficients also play an important role regarding the influence on the exploration and exploitation behavior. As stated previously, c_1 and c_2 control how much a particle should move towards its cognitive attractor (p_i) or its social attractor (g). As with the inertia weight, several time-varying approaches have been proposed. For example, Ratnaweera et al. [24] decrease the value of c_1 while increasing c_2 during the execution of the algorithm. As a result, particles favor exploration at early stages of the optimization process and exploitation closer to its end. Additionally, some works use adaptive methods to change the acceleration coefficients during the search process according to the current conditions of the swarm, such as space distribution [9] or success rate [25].

Constraint handling techniques

Numerous problems from fields such as engineering, finance and allocation are subject to constraints. Despite Particle Swarm Optimization not being able to solve constrained optimization problems (COP) using its standard formulation, several methods have been proposed for handling constraints [26]. A constraint handling mechanism commonly used by heuristics is the penalty function method. These techniques transform a constrained optimization problem into an unconstrained one by adding a certain value to the fitness function based on the amount of constraint violation or the number of constraints being violated. The main problem of these approaches is that they are usually problem-dependent and setting suitable penalty parameters is a difficult task [27].

Hu and Eberhart [28] proposed some simple changes to the canonical PSO method based on the preservation of feasible solutions. The two modifications compared with the canonical PSO are:

1. When calculating the best position of a certain particle, p_i, and the best position of all particles, g, only feasible solutions are taken into account;

2. All particles are initialized as feasible solutions.

These modifications are simple, easy to implement and effective. Their main disadvantage, however, is that, since infeasible regions of the search space are not explored, the valuable information existing in those regions cannot be extracted, and, due to diversity loss, there is a higher potential for premature convergence [29].

Pulido and Coello [30] presented a mechanism to handle constraints based on the closeness of a particle to the feasible region. While in the canonical PSO, p_i^k = x_i^k only if f(x_i^k) < f(p_i^{k-1}), the mechanism presented only replaces p_i^k by x_i^k in the following scenarios:

– p_i^{k-1} is infeasible, but x_i^k is feasible;

– Both p_i^{k-1} and x_i^k are feasible, but f(x_i^k) < f(p_i^{k-1});

– Both p_i^{k-1} and x_i^k are infeasible, but CV(x_i^k) < CV(p_i^{k-1}).

CV(x) is a function that calculates the constraint violation value of an infeasible solution, given by

CV(x) = Σ_{j=1}^{N} max(h_j(x), 0),    (2.5)

where h_j(x) represents the jth inequality constraint of the N constraints. The same process is applied when estimating the best global value, g. The main advantages of this feasibility-based rule are that it does not require additional parameters, which is one of the main drawbacks of penalty function methods, and that it does not discard potentially useful information from the particles in the non-feasible space. Cheng et al. [27] used a similar approach with the addition of a new term to the constraint violation function to take equality constraints into consideration. The problem with these approaches is that, with an existing over-pressure towards feasible regions, diversity decreases, and premature convergence becomes more likely [29]. For that reason, this method should be combined with additional techniques that aim to avoid premature convergence, thus minimizing its drawbacks.
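The three scenarios above can be sketched as a small acceptance test for the personal best update. Representing each constraint as a callable h_j with feasibility meaning h_j(x) ≤ 0 is an assumption made here for illustration.

```python
def constraint_violation(x, constraints):
    """CV(x) of Eq. (2.5): summed violation of inequality constraints h_j(x) <= 0."""
    return sum(max(h(x), 0.0) for h in constraints)

def accept_as_personal_best(x_new, p_old, f, constraints):
    """Feasibility-based rule of Pulido and Coello: should x_new replace p_old?"""
    cv_new = constraint_violation(x_new, constraints)
    cv_old = constraint_violation(p_old, constraints)
    if cv_new == 0 and cv_old > 0:       # feasible beats infeasible
        return True
    if cv_new == 0 and cv_old == 0:      # both feasible: compare fitness
        return f(x_new) < f(p_old)
    if cv_new > 0 and cv_old > 0:        # both infeasible: compare violation
        return cv_new < cv_old
    return False                         # infeasible never replaces feasible
```

The same predicate can be reused when deciding whether a candidate should replace the global best g.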

2.2.4 Adaptive Particle Swarm Optimization

In the previous section, we presented several changes to the PSO that could improve the quality of its solutions and some that can be used to increase its convergence speed. However, we found that there is usually a trade-off between both improvements. For example, increasing the convergence speed typically has a negative effect on the quality of solutions, since the swarm is more likely to converge to local optima. Therefore, to achieve a good balance between both goals, resorting to a suitable combination of techniques to improve PSO is a logical path to take.

Zhan et al. [9] combined two methods, an evolutionary state estimation (ESE) technique and an elitist learning strategy (ELS), to improve the convergence speed of the search process and to avoid possible local optima during the convergence state, respectively. The authors called this self-adaptive variant of the algorithm Adaptive Particle Swarm Optimization. Algorithm 2 describes the procedure used by this variant to minimize a function f(x).


During the execution of PSO, the population distribution changes with the evolutionary state. At an early stage, the particles are expected to be scattered in various areas of the search space, which means that the population distribution is more dispersive. As the evolutionary process proceeds, particles start to cluster together and converge to optimal areas, changing the population distribution again. By using this characteristic to estimate the stage of the evolutionary process, one could possibly adapt the parameters of the algorithm to each state, improving its search efficiency and speed. This is the principle behind the Evolutionary State Estimation technique used by the APSO.

The ESE technique starts by computing, in each iteration, the mean distance from each particle i to all the other particles using the Euclidean metric. These distances are used to calculate the evolutionary factor, f_evol. This value is used to determine the stage of the search process by assigning it to an evolutionary state using classification techniques. These evolutionary states are:

1. Exploration: the swarm is spread around the search space;

2. Exploitation: the particles are making use of the local information and are grouping towards possible local optima;

3. Convergence: the swarm seems to find the global optimal region and the particles begin to converge closer to this zone;

4. Jumping out: the global best particle jumps out of a local optimum toward a better optimum, likely far away from the majority of the swarm.
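The distance computation behind the ESE can be sketched as follows. The normalization f_evol = (d_g − d_min)/(d_max − d_min), where d_g is the mean distance of the globally best particle, follows the definition in Zhan et al. [9]; the handling of the degenerate fully converged swarm is an assumption added here.

```python
import math

def mean_distances(positions):
    """Mean Euclidean distance from each particle to all the others."""
    n = len(positions)
    dists = []
    for i in range(n):
        total = sum(math.dist(positions[i], positions[j])
                    for j in range(n) if j != i)
        dists.append(total / (n - 1))
    return dists

def evolutionary_factor(positions, best_index):
    """f_evol = (d_g - d_min) / (d_max - d_min), d_g being the global best's distance."""
    d = mean_distances(positions)
    d_min, d_max = min(d), max(d)
    if d_max == d_min:                   # degenerate swarm: fully converged
        return 0.0
    return (d[best_index] - d_min) / (d_max - d_min)
```

A small f_evol means the best particle sits inside a tight cluster (convergence), while a value close to 1 means it lies far from the rest of the swarm (jumping out).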

The PSO parameters are controlled depending on the present state. First, the inertia weight will follow the evolutionary states using a sigmoid mapping that depends on the value of f_evol. During exploration or in a jumping-out state, the inertia weight will be large, benefiting the global search. When an exploitation or convergence state is detected, the inertia weight decreases to assist the local search. Second, the control of the acceleration coefficients will directly depend on the evolutionary state. In an exploration state, c_1 is increased and c_2 decreased to help particles explore individually rather than support their convergence to the best particle, which is likely to be a local optimum at this stage. In an exploitation state, c_1 is slightly increased to emphasize the particles' search around their personal bests. At the same time, c_2 is slowly decreased to avoid the deception of a local optimum. During the convergence state, both c_1 and c_2 are slightly increased to draw particles closer to the expected globally optimal region. Finally, in a jumping-out state, c_1 is decreased and c_2 is increased to force the swarm to be drawn to the newly found region more quickly. To avoid adjustments that are too abrupt, the maximum and minimum increments that these coefficients take between iterations are bounded by the acceleration rate δ, which is a parameter of the APSO. Furthermore, both c_1 and c_2 are limited to the interval [1.5, 2.5], and their sum cannot be larger than 4.

This mechanism was able to significantly improve the convergence speed of the algorithm; however, at the same time, it was found to be insufficient, since it created a higher vulnerability to convergence around local optima. Therefore, in order to enhance the globality of the search algorithm and prevent the swarm from becoming trapped in these regions, an elitist learning strategy is used. This technique involves randomly choosing a dimension of g and changing its value through a Gaussian perturbation, whose magnitude is controlled by the elitist learning rate, σ. If this new position is found to have a higher score or fitness than the current global best position, then g is replaced by the new position. Otherwise, the new position is used to replace the particle with the worst fitness value of that iteration. The ELS is applied when the ESE technique identifies that the search is in a convergence state.

With both techniques in place, Zhan et al. [9] compared APSO with 7 other PSO variations and showed it to be very reliable and efficient in solving both unimodal and multimodal problems. Regarding the convergence speed, APSO used the least CPU time and the smallest number of function evaluations to reach acceptable solutions on 9 out of 12 test functions. The algorithm was also capable of finding the global optimum in 11 of the 12 test functions, presenting the highest mean reliability out of the 8 variations tested.

To conclude, this algorithm not only seems to improve the search efficiency of the standard PSO, but also frees the user from the need to set its acceleration coefficients and inertia weight, which, as discussed throughout this section, have a major impact on its performance. On top of that, APSO also deals with the problem of convergence to local optima, common in variants using self-adaptive parameters, by using the ELS.

2.3 Portfolio composition

In this section, we introduce concepts regarding portfolio composition, a strategic investment process used as a means to reduce the inherent risk of investing. In addition, we describe some risk management techniques used by investors, as well as a set of commonly used performance metrics.

2.3.1 Portfolio theory

One of the most common ways to invest is by composing a portfolio. A portfolio is a composition of assets, such as stocks or bonds, that reflects the investor's objectives and risk tolerance. In 1952, Harry Markowitz introduced the modern portfolio theory [31], which describes ways of diversifying and allocating assets in a financial portfolio with the goal of maximizing its expected return given the owner's risk tolerance. The goal is to minimize the idiosyncratic risk, which is defined as the "risk inherent to a particular investment due to its unique characteristics". When creating these types of portfolios, investors group different types of investments together so that when some of the assets fall in value, others rise, with the portfolio value staying even. Meanwhile, if the whole market is rising, then the portfolio's assets rise along with it, leading to a global increase in the return of the investments.

The process of creating a portfolio is usually composed of four steps: the creation of an investment policy statement, which intends to state the investor's goals and constraints; the development of an investment strategy that matches the objectives; the implementation of the created plan; and the control of the plan according to market changes.


Algorithm 2 Adaptive Particle Swarm Optimization (APSO)
1: Input: Maximum iterations T, swarm size Np, LB, UB, f, c1, c2, ω, Vmin, Vmax
2: Step 1. Initialization
3: for each particle i = 1, 2, ..., Np do
4:     Initialize the particle's position with a uniform distribution: x_i^0 ∼ U(LB, UB)
5:     Initialize the personal best position to its initial position: p_i^0 = x_i^0
6:     Initialize the global best position to the minimal value of the swarm: g^0 = argmin f(x_i^0)
7:     Initialize the velocity: v_i^0 ∼ U(Vmin, Vmax)
8: end for
9: Step 2. Repeat until a termination criterion is met or k = T
10: Execute the ESE technique
11: Update the values of c1 and c2 according to the estimated state
12: Update the value of the inertia weight ω
13: if State == Convergence then
14:     Randomly select a dimension d
15:     Update dimension d of g^{k-1} using a Gaussian perturbation, creating a new particle P
16:     if f(P) < f(g^{k-1}) then
17:         g^k ← P
18:     else
19:         x_w^k ← P, where w = argmax_{n=1,...,Np} f(x_n)
20:     end if
21: end if
22: for each particle i = 1, 2, ..., Np do
23:     Pick random scalars: r_1, r_2 ∼ U(0, 1)
24:     Update v_i^k using Eq. (2.2)
25:     Update x_i^k using Eq. (2.1)
26:     if f(x_i^k) < f(p_i^{k-1}) then
27:         Update the best known position of particle i: p_i^k ← x_i^k
28:         if f(x_i^k) < f(g^{k-1}) then
29:             Update the swarm's best known position: g^k ← x_i^k
30:         else
31:             g^k ← g^{k-1}
32:         end if
33:     else
34:         p_i^k ← p_i^{k-1}
35:     end if
36:     k ← k + 1
37: end for
38: Step 3. Return the best found solution, g^T

A passive management strategy consists mainly of composing a portfolio that mimics or tracks a market-weighted index. Since there is not a frequent need to buy or sell the positions of the portfolio, this strategy produces low transaction costs and, consequently, low management costs as well. Furthermore, since an index typically includes many types of companies, the portfolio gets good diversification, and hence lower risk. However, mimicking an index means that the portfolio will not be able to achieve a higher return than the index itself, which is not good enough for some investors. For that reason, they resort to active portfolio management. This strategy tries to outperform the market by selecting assets and making investment decisions based on research, forecasts and reliance on the investor's own experience. Since it usually relies on taking higher risks and spending more on transaction costs, this type of strategy may lead to higher losses. At the same time, this type of portfolio management permits the use of techniques to reduce the risk of the investments.


2.3.2 Risk management techniques

The unpredictability of the stock market brings a need to apply measures for managing risk. While choosing the right investments is a very important step to achieve profitable returns, additional techniques can be used to prevent big losses or retain gains.

When the price tendency does not follow as expected, a stop-loss may be used to prevent bigger losses. A stop-loss point is the price limit at which a trader will sell a company's stock to prevent losing more than a certain percentage of the initial investment. For example, for an established stop-loss of 20%, positions whose shares cost $1 must be closed when their value decreases below $0.80 as a safety measure. The stop price is calculated according to (2.6), where StopLoss is a value between 0 and 1 that reflects the percentage of the purchase price that the investor is willing to lose. P_buy is the price of the stock when it was bought.

P_stop = P_buy · (1 − StopLoss)    (2.6)

A stop-loss can also be used to help traders keep gains when their positions are already profitable. This type of stop-loss is called a trailing stop-loss and, similarly, can be calculated using (2.7), where P_highest is the highest price that a share achieved since the user bought it.

P_stop = P_highest · (1 − StopLoss)    (2.7)

Another technique commonly used by investors consists of setting take-profit points. Sometimes the price of a stock increases by an unusually high percentage in a short amount of time. Since this rate of growth is most times unsustainable, a price decrease is expected to follow shortly after. By defining a reasonable price level at which the investor closes the position in order to keep the profits, a higher overall return may be achieved. The take-profit can be calculated using (2.8), where TakeProfit is a value higher than 0.

P_profit = P_buy · (1 + TakeProfit)    (2.8)
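Equations (2.6) to (2.8) are simple enough to be written down directly; the sketch below does so, with illustrative function names.

```python
def stop_loss_price(buy_price, stop_loss):
    """Eq. (2.6): exit level protecting a fixed fraction of the purchase price."""
    return buy_price * (1 - stop_loss)

def trailing_stop_price(highest_price, stop_loss):
    """Eq. (2.7): exit level that follows the highest price since purchase."""
    return highest_price * (1 - stop_loss)

def take_profit_price(buy_price, take_profit):
    """Eq. (2.8): level at which the position is closed to lock in gains."""
    return buy_price * (1 + take_profit)
```

For the 20% stop-loss example above, a $1 share gives a stop price of $0.80; if the share later peaks at $1.50, the trailing stop rises to $1.20.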

2.3.3 Performance metrics

There are several metrics to measure the performance of a portfolio. Some estimate how much the investor profited, while others also take the risk into account. Here, we present the most widely used performance metrics and how they are calculated.

Return on Investment

The Return on Investment (ROI) is one of the main tools for evaluating a portfolio or any investment. This measure lets the investor calculate the growth of the value of an asset (or a group of assets) during a specific period of time. Since this metric only considers the return, its calculation is simple and its interpretation straightforward. The general formula for calculating the ROI for a single period of time is

ROI = (Gain − Cost) / Cost,    (2.9)

where Gain is the money earned from the investment and Cost is the money spent on it. The ROI formula can also be adapted to measure a portfolio's earnings. For example, Equation (2.10) can be used to calculate the ROI of the portfolio presented in Table 2.1.

ROI = [Σ_{i=1}^{4} CP_i · Sh_i − Σ_{i=1}^{4} PP_i · Sh_i] / [Σ_{i=1}^{4} PP_i · Sh_i] · 100 = 25%    (2.10)

Table 2.1: An example of a stock portfolio.

Stock   Purchase Price (PP)   Current Price (CP)   No. of Shares (Sh)
X       20                    25                   10
Y       10                    8                    5
W       5                     9                    30
Z       12                    11                   15
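The computation of Eq. (2.10) over Table 2.1 can be sketched as follows; the tuple representation of positions is an assumption for illustration.

```python
def portfolio_roi(positions):
    """Eq. (2.10): ROI in percent for (purchase_price, current_price, shares) tuples."""
    cost = sum(pp * sh for pp, _, sh in positions)
    value = sum(cp * sh for _, cp, sh in positions)
    return (value - cost) / cost * 100

# The portfolio of Table 2.1: (PP, CP, Sh) for stocks X, Y, W, Z
table_2_1 = [(20, 25, 10), (10, 8, 5), (5, 9, 30), (12, 11, 15)]
```

Here the total cost is 580 and the current value 725, so portfolio_roi(table_2_1) yields the 25% of Eq. (2.10).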

Sharpe Ratio

The success of a portfolio may not be judged solely on its return. The risk an investor took to achieve a certain return is also considered a crucial part of the performance measure of an investment. The Sharpe Ratio [32], one of the most used measures when evaluating a portfolio, follows this basis.

The formula to calculate the ratio is given by

SharpeRatio_p = E[R_p − R_f] / σ_p,

where the numerator corresponds to the expected value of the portfolio's excess return, i.e., of the portfolio return, R_p, over the risk-free rate, R_f. The risk-free rate is the return of an investment that does not provide any risk of a loss to an investor. Its value is typically the value of the 3-month Treasury yield provided by the Federal Reserve Bank of St. Louis, USA. In this work, we consider R_f to be 2%, which is approximately the average value of this yield. In the denominator, the volatility, σ_p, is calculated as the standard deviation of the portfolio excess return, also written as σ_p = sqrt(var[R_p − R_f]).

In addition to being useful for measuring the risk-adjusted return of a portfolio, it is also very helpful when comparing investment strategies. Consider two portfolios, A and B, with 10% and 12% average yearly returns, respectively. Regarding the volatility, portfolio A has a standard deviation of 4% while B has a standard deviation of 10%. For a risk-free rate of 3%, the Sharpe Ratio for each portfolio would be the following:

SharpeRatio_A = (10% − 3%) / 4% = 1.75

SharpeRatio_B = (12% − 3%) / 10% = 0.9

As can be observed, despite portfolio B having yearly returns higher than those of portfolio A, according to the Sharpe Ratio, it took considerably higher risks to achieve only a slightly better return. Using this measurement, an investor averse to risk would choose to hold portfolio A instead of portfolio B.
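The worked comparison above reduces to one line of arithmetic, sketched here with an illustrative function name.

```python
def sharpe_ratio(mean_return, risk_free, volatility):
    """Sharpe Ratio: expected excess return over the risk-free rate per unit of volatility."""
    return (mean_return - risk_free) / volatility
```

With the figures of the example, sharpe_ratio(0.10, 0.03, 0.04) recovers 1.75 for portfolio A and sharpe_ratio(0.12, 0.03, 0.10) recovers 0.9 for portfolio B.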

Sortino Ratio

Another common metric to measure the risk-return tradeoff is the Sortino Ratio. The calculation of this ratio is similar to that of the Sharpe Ratio, the only difference being the way the two ratios calculate the volatility. The Sharpe Ratio is indifferent to the direction of the volatility, so it treats upside and downside deviations equally. The Sortino Ratio, however, only considers the downside deviation during the calculation of the volatility. This means that it only considers the periods when the return was lower than the risk-free rate. The reasoning behind this is that the ratio should not be lowered when the investment has big positive deviations, since these are not risky movements but rather good occurrences for investors.

The formula to calculate the Sortino Ratio is given by

SortinoRatio_p = E[R_p − R_f] / σ_d,

where σ_d is the standard deviation of the downside. The remaining variables are the same as in the Sharpe Ratio. As with the Sharpe Ratio, a higher Sortino Ratio means a better investment. Its limitation is that, if the investment returns have not had many negative movements, the ratio may not be considered very conclusive.
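A sketch of the ratio over a series of periodic returns is given below. Note that several conventions exist for the downside deviation; the choices made here, namely measuring shortfalls relative to the risk-free rate and dividing by the total number of periods, are assumptions for illustration.

```python
def downside_deviation(returns, risk_free):
    """Standard deviation computed only from returns below the risk-free rate."""
    shortfalls = [min(r - risk_free, 0.0) for r in returns]
    return (sum(s * s for s in shortfalls) / len(returns)) ** 0.5

def sortino_ratio(returns, risk_free):
    """Sortino Ratio: mean excess return divided by the downside deviation."""
    mean_excess = sum(r - risk_free for r in returns) / len(returns)
    return mean_excess / downside_deviation(returns, risk_free)
```

Because positive deviations do not enter σ_d, a series with large gains and mild losses scores better here than under the Sharpe Ratio.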

Information Ratio

While the two previous ratios provide a measure of the excess return that an investment provides over the risk-free rate, the Information Ratio compares the returns with the most relevant benchmark index, such as the S&P500 index in the case of equities.

This ratio can be calculated using the expression

InformationRatio_p = E[R_p − R_b] / σ_a,

where the numerator is the active return, which is the expected value of the difference between the portfolio return, R_p, and the benchmark return, R_b. The denominator, σ_a, is the standard deviation of the active return, also called the tracking error. It can also be written as σ_a = sqrt(var[R_p − R_b]).

This measure is very useful to check whether the performance of a hedge fund, compared to the index it is tracking, is good enough for the investor to put money into it, since, otherwise, the investor could be better off by investing in the index directly, avoiding the additional costs associated with funds.

Maximum Drawdown

The Maximum Drawdown (MDD) is an indicator of the downside risk taken during an investment. It measures the largest peak-to-trough decline of the historical values of a portfolio before a new peak arrives. It is calculated as the difference between the lowest value (before a new high is established) and the peak value (before the largest drop), divided by this peak value. To help visualize, Figure 2.4 presents a chart with Apple Inc.'s stock price movement, with the lowest value marked L and the peak value marked H. Here, the maximum drawdown corresponds to the price fall from point H to point L, which is the highest price decrease (-40%) between two points during the total interval.


Figure 2.4: Apple Inc.'s stock price from 2007-10-01 to 2008-09-13. The difference between point H and point L corresponds to the largest price decrease of this time series, i.e., its maximum drawdown.

The Maximum Drawdown allows an investor to assess the relative riskiness of strategies. If an investor must choose between two strategies that provided the same return over the same period of time, the strategy with the lowest Maximum Drawdown is the best from a capital preservation standpoint, i.e., the best at preventing losses.
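The peak-to-trough calculation described above can be sketched as follows (illustrative code, not part of the original work):

```python
import numpy as np

def max_drawdown(prices):
    """Largest relative decline from a running peak, as a negative fraction."""
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)     # highest price seen so far
    drawdowns = (prices - running_peak) / running_peak
    return drawdowns.min()  # most negative value, e.g. -0.40 for a 40% fall
```
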

2.4 Rules Optimization

Rules optimization is the process of optimizing trading rules in order to achieve an investment goal, such as profitability, and may be performed in several ways. Hu et al. [1] separate works on Rules Optimization into three types, depending on the optimization objects or models used: Rule Parameter, Rule Combination and Rule Construction. The authors enumerate the algorithms from the field of Evolutionary Computation that are usually used to solve each of these types of problems.


Rule Construction problems consist of searching for the optimal structure of trading rules. Genetic Programming and Genetic Network Programming are usually the methods used to solve this type of problem. Rule Parameter problems deal with the optimization of the parameters of membership functions (such as moving averages) on which the trading rules are based; they may also deal with the optimization of the parameters of the rules themselves. Rule Combination problems combine rules using logical statements or weights that measure importance. These last two optimization objects are often optimized using Genetic Algorithms or Particle Swarm Optimization.

In this section, we present and discuss related work on Rule Combination problems. The goal is to analyze the objectives of each work, how the problem was formulated and which algorithms were used. We also look for limitations and aspects that could be improved.

2.4.1 Related works

Gorgulho et al. [4] proposed a trading strategy, based on the Rule Combination problem, that estimates the importance to give to each technical indicator in a set of several, so as to achieve the highest profit. The authors started by defining a discrete set of trading rules based on underlying technical indicators. From the current condition of each one, a certain score level is assigned. There are five score levels: Very Low Score, which assigns -1.0 points and indicates a strong sell signal; Low Score, which assigns -0.5 points and indicates a potential sell signal; High Score, which assigns 0.5 points and indicates a reasonable instant to buy; Very High Score, which assigns 1.0 points and marks a strong buy signal; and, finally, No Score, which provides no indication to buy or sell the position.

To find the appropriate moments to buy and sell a stock, they introduced a trading signal generator, which they called the classification model, given by (2.11) and subject to the constraints (2.12) and (2.13). The variables W_i are unknown weights that reflect the importance assigned to technical rule i, N is the number of technical indicators used, and Score(X, i) is the score level given by technical rule i to stock X on a given day.

ψ = Σ_{i=0}^{N} W_i · Score(X, i)    (2.11)

0 ≤ W_i ≤ 1    (2.12)

0 ≤ Σ_{i=0}^{N} W_i ≤ 1    (2.13)
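The classification model above can be sketched in a few lines; the weights and scores in the usage example are hypothetical, chosen only to satisfy constraints (2.12) and (2.13):

```python
import numpy as np

def classification(weights, scores):
    """psi = sum_i W_i * Score(X, i), subject to (2.12) and (2.13)."""
    weights = np.asarray(weights, dtype=float)
    scores = np.asarray(scores, dtype=float)
    assert np.all((weights >= 0.0) & (weights <= 1.0))  # constraint (2.12)
    assert 0.0 <= weights.sum() <= 1.0                  # constraint (2.13)
    return float(np.dot(weights, scores))

# Hypothetical example: three indicators scored with the five levels above
psi = classification([0.4, 0.3, 0.3], [1.0, -0.5, 0.5])  # psi = 0.4
```

The resulting ψ would then be compared against the BuyLimit and CloseBuy thresholds described below.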

Periodically, ψ is calculated for every company and compared with two thresholds called BuyLimit and CloseBuy. If a company has a classification ψ higher than BuyLimit, its shares are added to the portfolio. At the same time, if a company that is already part of the portfolio gets a ψ lower than CloseBuy, the position is closed. If the company was not part of the portfolio, the system performs a short sell. The authors do not state explicitly whether BuyLimit must be larger than CloseBuy. However, if that is not the case, the system would sell stocks with a higher rank, i.e., better
