Abstract: The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning. It can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered a potential safeguard for the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies that incorporate rarity factors associated with the Gini-Simpson and Shannon measures, respectively. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions for the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the best combination is likely achieved by the solution using Shannon weighted entropy and a square-root utility function, corresponding to a risk-averse behavior associated with the precautionary principle of safeguarding landscape diversity as an anchor for the provision of ecosystem services and other externalities. Further developments are suggested, mainly regarding the hypothesis that the decision models outlined here could be used to revisit the stability-complexity debate in ecological studies.
Based on the assumptions that people are risk averse and fully rational, that all information is effectively processed by decision-making subjects, and that markets are efficient, decisions are made in order to maximize expected utility. However, criticism of this paradigm by several studies led to the emergence of a new financial theory, Behavioral Finance, based on the premise that decision-makers do not behave in a strictly rational way, but make judgments and choices under the influence of emotional aspects, using mental shortcuts or simplifying rules called heuristics, which can lead to systematic errors and deviations known as cognitive biases.
For instance, there are generalized expected utility models that relied on specific properties of expected utility functions and indifference curves to build a model to predict choice anomalies, such as the common ratio and the common consequence effects. These reformed models are extensions of the expected utility approach that account for empirical deviations from expected utility theory by resorting to mathematical devices, such as indifference curves that fan out (Starmer, 2000). There are also expected utility models that incorporate psychological variables rather explicitly. Two examples are the models of regret and disappointment developed by Robert Sugden, David Bell and Graham Loomes (Muramatsu, 2006). Prediction of economically important choice anomalies seems to be the most important driving force behind the revival of the interdisciplinary field of psychological economics. One may wonder why behavioral economists opted for an incremental reformist strategy. This may be because the profession prefers progress
this case, it is easy to see that player 2 may, for example, keep σ(L) fixed (just changing the values of σ(M) and σ(R)). In such a situation, player 1's expected utility is reduced by σ(L)/2. Since player 2 has a range of values (σ(L) ∈ [1/2, 3/4]) over which he can manipulate σ(L), in general it is impossible to say how he will react to any change in payoffs made by player 1. Formally, this problem arises whenever we are unable to establish a well-defined functional form for how the mixed strategy chosen by player j depends on player i's payoffs. In fact, this can also happen in 2 × 2 games. For example, based on Figure 2, suppose that player 1 has a strongly dominant strategy, say strategy U. Moreover, for player 2, assume that e = f. Under this condition, the game has two pure equilibria, (U, L) and (U, R), and infinitely many mixed equilibria of the form (M, N), where M = (1, 0) and N = (q*, 1 − q*) for q* ∈ [0, 1]. Thus, the expected utility of player 1 is EU₁ = aq* + b(1 − q*). Now suppose
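The indeterminacy described above can be made concrete with a small sketch. The payoff values a and b below are illustrative assumptions, not taken from Figure 2; the point is only that EU₁ = aq* + b(1 − q*) sweeps an entire interval as q* ranges over [0, 1], so player 1's expected utility is not pinned down by the equilibrium set.

```python
# Sketch of the indeterminacy: with (U, L) and (U, R) both equilibria, player 2
# can mix with any q* in [0, 1], and player 1's expected utility
# EU1 = a*q* + b*(1 - q*) varies over the whole interval [min(a, b), max(a, b)].
# The payoffs a and b are hypothetical values chosen for illustration.

def expected_utility_p1(a: float, b: float, q_star: float) -> float:
    """Player 1's expected utility when player 2 plays L with probability q*."""
    return a * q_star + b * (1.0 - q_star)

a, b = 3.0, 1.0  # hypothetical payoffs at (U, L) and (U, R)
eus = [expected_utility_p1(a, b, q / 10) for q in range(11)]
print(min(eus), max(eus))  # EU1 ranges from b to a as q* sweeps [0, 1]
```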
This study aims to analyze risk preferences in Brazil based on prospect theory by estimating the risk aversion parameter of expected utility theory (EUT) for a select sample, in addition to the value and probability function parameters, assuming various functional forms, and a newly proposed value function, the modified log. This is the first such study in Brazil, and the parameter results are slightly different from studies in other countries, indicating that subjects are more risk averse and exhibit smaller loss aversion. Probability distortion is the only common factor. As expected, the study finds that behavioral models are superior to EUT, and that models based on prospect theory, the TK and Prelec weighting functions, and the power value function show superior performance to the others. Finally, the modified log function proposed in the study fits the data well and can thus be used in future studies in Brazil.
This paper begins by observing that any reflexive binary (preference) relation (over risky prospects) which satisfies the Independence Axiom admits a form of expected utility representation. We refer to this representation notion as coalitional minmax expected utility representation. By adding the remaining properties of the expected utility theorem, namely continuity, completeness and transitivity, one by one, we find how this representation gets sharper and sharper, thereby deducing the versions of this classical theorem in which any combination of these properties is dropped from its statement. This approach also allows us to weaken transitivity in this theorem, rather than eliminating it entirely, say, to quasitransitivity or acyclicity. Apart from providing a unified dissection of the expected utility theorem, these results are relevant for the growing literature on boundedly rational choice in which revealed preference relations often lack the properties of completeness and/or transitivity (but often satisfy the Independence Axiom). They are also especially suitable for the (yet overlooked) case in which the decision maker is made up of distinct individuals and, consequently, transitivity is routinely violated. Finally, and perhaps more importantly, we show that our representation theorems allow us to answer many economic questions that are posed in terms of nontransitive/incomplete preferences, say, about the maximization of preferences, existence of Nash equilibrium, preference for portfolio diversification, and possibility of the preference reversal phenomenon.
Recently, Kajii and Ui (2009) proposed to characterize interim efficient allocations in an exchange economy under asymmetric information when uncertainty is represented by multiple posteriors. When agents have Bewley's incomplete preferences, Kajii and Ui (2009) proposed a necessary and sufficient condition on the set of posteriors. However, when agents have Gilboa–Schmeidler's MaxMin expected utility preferences, Kajii and Ui (2009) proposed only a sufficient condition.
To summarize, our rule guarantees, for any expert who maximizes expected utility, that the reported interval contains only those events that the expert thinks are most likely to occur, with a minimal confidence of γ. The actual degree of confidence may be larger than γ, depending on the degree of risk aversion of the expert. More risk-averse experts will tend to submit larger intervals to guarantee a positive payoff. In principle, one could try to counteract this tendency, with the aim of eliciting an interval with confidence close to γ, by designing a different rule for each expert. In most applications there is not enough information about the expert to do so. Therefore, our rule is designed to capture at least confidence γ for any risk-averse or risk-neutral expert. Experimental evidence indicates that the large majority of people are either risk averse or risk neutral (e.g. Holt & Laury, 2002).
Previous studies in behavioral economics have shown that risk attitudes can be distinguished experimentally from ambiguity attitudes. Risk attitudes are usually modeled by the curvature of the utility function [34, 35]. This model of risk is also included in Eq (3). The ambiguity attitude in the free energy model is expressed by an additional temperature parameter that quantifies deviations from a Bayesian model. The same variational principle can also be applied to the actions of boundedly rational decision-makers. In this case, the temperature parameter β can be interpreted in terms of the degree of control a decision-maker has as a result of the available computational resources. Accordingly, one could interpret Eq (3) equivalently as anticipating the choice of a boundedly rational opponent with boundedness parameter β. Therefore, our results encourage a more general investigation of free energy variational principles for perception and action. One such avenue might be the study of decision-makers' perceived degree of control, for example in the context of illusions of control [36, 37]. In the case where utilities are restricted to informational surprise or absorbed into prior distributions, such free energy variational principles have, for example, recently been investigated by Friston and colleagues [38, 39]. In the economic literature there has been an extensive effort to develop models that formalize decision-making under ambiguity, from the first models, where decisions are evaluated by looking exclusively at their worst possible outcome, to models that take into account both the worst and the best possible outcomes. There are also more mathematically elaborate models such as the Choquet Expected Utility (CEU) model, where beliefs are represented not by subjective probabilities but by capacities, which can be non-additive. Extensions of CEU include Cumulative Prospect Theory, which uses two capacities, one for gains and another for losses.
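Since Eq (3) is not reproduced here, the following is only an illustrative sketch, under the standard assumption that a temperature-like parameter enters through a softmax (Boltzmann) choice rule: a small β yields near-indifferent choices (little control or few computational resources), while a large β approaches strict expected utility maximization.

```python
# Illustrative sketch (not the paper's Eq (3)) of how an inverse-temperature
# parameter beta interpolates between an indifferent chooser (beta -> 0) and a
# fully rational utility maximizer (large beta) via a softmax choice rule.
import math

def softmax_choice_probs(utilities, beta):
    """Boltzmann/softmax choice probabilities with inverse temperature beta."""
    weights = [math.exp(beta * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

probs_bounded = softmax_choice_probs([1.0, 2.0], beta=0.1)   # near-uniform
probs_rational = softmax_choice_probs([1.0, 2.0], beta=10.0)  # near-deterministic
print(probs_bounded, probs_rational)
```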
There are other popular models, such as the Maxmin expected utility model of Gilboa and Schmeidler, which uses multiple priors to define the beliefs of decision-makers with built-in ambiguity aversion, and also a variation of it that drops the axiom of ambiguity aversion. The smooth ambiguity aversion model can be viewed as an extension of the maxmin model. It regards the maxmin criterion as too extreme and opts for modeling second-order beliefs and introducing a convex function to model ambiguity aversion, in the same way that the curvature of the utility function models risk aversion.
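The maxmin criterion mentioned above can be sketched in a few lines: an act is evaluated by its worst-case expected utility over the set of priors. The acts, priors and utility numbers below are illustrative assumptions, not data from any of the cited models.

```python
# Minimal sketch of the Gilboa-Schmeidler maxmin criterion: evaluate an act by
# the minimum, over a set of priors, of its expected utility. The two-state
# example below is hypothetical and only illustrates built-in ambiguity aversion.

def maxmin_eu(utilities, priors):
    """Worst-case expected utility of one act over multiple priors."""
    return min(sum(p * u for p, u in zip(prior, utilities)) for prior in priors)

priors = [[0.5, 0.5], [0.2, 0.8]]       # multiple posteriors over two states
act_safe = [1.0, 1.0]                   # constant act
act_risky = [3.0, 0.0]                  # pays only in the first state

print(maxmin_eu(act_safe, priors))      # 1.0 under every prior
print(maxmin_eu(act_risky, priors))     # worst case 0.2*3 = 0.6, so safe wins
```

Under the first prior the risky act looks better (1.5 vs 1.0), but the maxmin decision-maker ranks acts by the worst prior, which is the sense in which ambiguity aversion is built in.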
The details of our scheduling are described in Algorithm 3. There are five main parts in the scheduling: preemption checking, feasibility checking, task selecting, scheduling point checking and critical point checking. When new tasks are added to the ready queue, whether or not there is preemption, the feasibility checking verifies that the new ready queue is feasible. If any task cannot meet its requirement, it is removed from the ready queue. Scheduling point checking makes sure that, when the server is idle, the task with the highest expected accrued utility density among the remaining tasks in the ready queue is selected to run. The critical point checking continuously monitors the current running task's state to prevent the server from wasting time on a non-profitable running task. The preemption checking works when a prospective task wants to preempt the current task. The combination of these parts ensures that tasks are judiciously scheduled to achieve high accumulated total utility. It is worth discussing the preemption checking part in more detail, because improper aggressive preemption will worsen the scheduling performance. From Algorithm 4 we can see that if a task can be finished successfully before its deadline even in its worst case, the scheduler protects the current running task from being preempted by any other task. Otherwise, if a prospective task has an expected accrued utility density that exceeds the current running task's conditional expected utility density by at least the pre-set preemption threshold, the preemption is permitted.
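Since Algorithm 4 is not reproduced in the text, the following is only a hedged sketch of the preemption check as described: the task fields, the `allow_preemption` helper and the threshold value are assumptions introduced for illustration.

```python
# Sketch of the preemption-checking rule described above (field names and the
# helper are hypothetical, not the paper's Algorithm 4): a running task that can
# still finish by its deadline even in the worst case is protected; otherwise a
# candidate may preempt only if its expected accrued utility density exceeds the
# running task's by at least a pre-set threshold.
from dataclasses import dataclass

@dataclass
class Task:
    utility_density: float        # expected accrued utility density
    worst_case_remaining: float   # worst-case remaining execution time
    deadline: float               # absolute deadline

def allow_preemption(running: Task, candidate: Task,
                     now: float, threshold: float) -> bool:
    if now + running.worst_case_remaining <= running.deadline:
        return False  # running task finishes in time even in the worst case
    gap = candidate.utility_density - running.utility_density
    return gap >= threshold  # preempt only for a sufficiently better task

running = Task(utility_density=2.0, worst_case_remaining=5.0, deadline=4.0)
candidate = Task(utility_density=3.5, worst_case_remaining=1.0, deadline=10.0)
print(allow_preemption(running, candidate, now=0.0, threshold=1.0))  # True
```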
We start by building an economy where initially the level of government expenditures is growing through time, providing utility to citizens, and is covered only partially by taxes, generating an increase in the level of debt. In this setup, a stabilization is a set of actions undertaken by the government at a fiscal level in order to cut the growth of current expenditures and to eliminate all deficits in the economy. That is, as the stabilization is postponed successively, public expenditures continue to grow larger, and so do public deficits; but when an economic reform is implemented, current expenditures stabilize and taxes increase, in order to bring the level of deficit back to zero. The government is a populist one, at least in the short run, in the sense that its actions reflect the median voter's will. Our objective is to compare the outcome of this process with the optimal one, i.e., the one that would result if the stabilization date were chosen by a powerful and benevolent social planner, who would not seek to adopt populist measures, but instead would undertake only policies that maximize the intertemporal expected utility of society.
Up to now, we have shown how cumulative prospect theory describes the risk preferences of a Portuguese sample and some demographic differences between those preferences, which by itself is an important step toward improving our knowledge of the risk decisions made by the Portuguese. However, if we want this theory to have a bigger role in measuring the risk preferences of an individual, it is extremely important to simplify the calculations presented in Section 3.2. Thus, in partnership with Silva (2012), we try to find a relation between the loss aversion coefficient and the DOSPERT scale. This linkage between CPT and the DOSPERT scale is not enough to apply the theory to the financial markets. The theory commonly used to choose the efficient portfolio, better known as "Portfolio Selection" (Markowitz, 1952), is based on expected utility theory, so it is necessary to adapt it to cumulative prospect theory.
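For readers unfamiliar with the CPT machinery being adapted, the standard Tversky–Kahneman building blocks can be sketched as below. The parameter values are the original 1992 TK estimates, used here only for illustration; the Portuguese-sample estimates discussed in the text differ from them.

```python
# Sketch of the standard CPT components: a power value function with loss
# aversion and an inverse-S probability weighting function. Parameters are the
# Tversky-Kahneman (1992) estimates, shown for illustration only.

def tk_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def tk_weight(p, gamma=0.61):
    """Inverse-S probability weighting (gains): overweights small p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

print(tk_value(100), tk_value(-100))  # losses loom ~2.25x larger than gains
print(tk_weight(0.01), tk_weight(0.99))
```

It is this non-linear weighting of probabilities and the kink at the reference point that break the expected utility assumptions underlying Markowitz's mean-variance portfolio selection, which is why the adaptation mentioned above is needed.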
The main question of this paper is how companies should customize their products and services in settings such as the Internet. The analysis unambiguously shows that the predominant paradigm of maximizing the expected utility of the current transaction is not the best strategy. Optimal policies must take into account future profits and balance the value of learning against the risk of losing the customer. This paper describes two ways of obtaining good recommendation policies: (i) applying POMDP algorithms to compute the exact optimal solution, and (ii) approximating the optimal value function. The heuristic we provide worked well on the tests we performed, but we do not claim this heuristic to be the best one for all possible applications. Indeed, there are better heuristics for the solution of POMDPs with certain characteristics, and the development of such heuristics for specific managerial settings is a topic of great interest to current researchers. The bounds analyzed in the previous section make explicit why agents acting according to a myopic policy perform so poorly. These agents fail to recommend products that can be highly rewarding for some segments because they fear the negative impact that these recommendations could have on other segments. This fear is legitimate – companies can lose sales and customers can defect – but it can be quantified through a matrix 3(X) that captures the risk associated with each possible recommendation for each segment. Optimally behaving agents make riskier suggestions than myopic agents, learn faster, and earn higher payoffs in the long run.
Virtually all conceptual models of risky choice, including Expected Utility Theory (EUT) and behavioral alternatives such as prospect theory, are deterministic. The deterministic nature of the theories presents a challenge for applied economists attempting to econometrically estimate risk preferences in a sample of individuals. In essence, the analyst must make assumptions about the decision-making process that go above and beyond the content of the theory, making it difficult to conduct clean tests of the underlying theory itself and to confidently identify underlying structural parameters. The literature on stochastic error specifications is not negligible, but it is by no means large. While a few previous studies have analyzed the extent to which different stochastic error specifications influence estimates of risk preferences [1,2], there have been new developments in the field that have not been thoroughly addressed in previous model comparisons, and in recent years there has been an almost exclusive focus on the ability of models to fit the data in-sample (with few exceptions).
Aiming at the maximization of the satisfaction of users of NRT and RT services, two scheduling algorithms are proposed: Modified Throughput-based Satisfaction Maximization (MTSM) and Modified Delay-based Satisfaction Maximization (MDSM), respectively. The modification of the parameters of the shifted log-logistic utility function enables different resource-distribution strategies. Seeking to track satisfaction levels of users of NRT services, two adaptive scheduling algorithms are proposed: Adaptive Throughput-based Efficiency-Satisfaction Trade-Off (ATES) and Adaptive Satisfaction Control (ASC). The ATES algorithm performs average satisfaction control by adaptively changing the scale parameter, using a feedback control loop that tracks the overall satisfaction of the users and keeps it around the desired target value, enabling a stable strategy for dealing with the trade-off between satisfaction and capacity. The ASC algorithm ensures a dynamic variation of the shape parameter, guaranteeing strict control of user satisfaction levels.
In this thesis, we consider that a recommendation was useful if the associated user's feedback was positive – e.g., the user purchased the recommendation, gave it a high rating, or clicked on it. We then formalize the concept of co-utility, stated as the property any two items have of being useful to a user, and exploit it to improve recommendations. We then present different ways of estimating co-utility probabilities, all of them independent of content information, and compare them with each other. We embed these probabilities, as well as normalized predicted ratings, in an instance of an NP-hard problem named the Max-Sum Dispersion Problem. A solution to this problem corresponds to a set of items for recommendation. We study two heuristics and one exact solution to the Max-Sum Dispersion Problem and perform comparisons among them. According to our experiments, the three solutions have similar performance in practice. We also contrast our method with different baselines by comparing the ratings users give to different recommendations. We obtain substantial gains in the utility of recommendations, up to 106%, and our method also recommends higher-rated items to the majority of users. Finally, we show that our method is scalable in practice and does not seem to affect the diversity of recommendations.
The aim of this study was to describe concepts of health economics in order to update and provide the orthopedic practitioner with preference-based decision-making parameters. Four basic types of economic evaluation studies were presented (cost-minimization analysis, cost-benefit, cost-effectiveness and cost-utility), as well as the origin, concept, advantages and disadvantages of using QALY and utility. Also discussed were the importance of costs and the SF-6D, an instrument able to get