by evaluation of typed terms. Thanks to the connection between linear logic and type theory known as the Curry-Howard correspondence, we can regard types as propositions and proofs as programs. A program can then be viewed as a logical deduction within a linear logical system. More precisely, the reduction of terms corresponding to proofs in intuitionistic linear logic can be regarded as the computation of programs. Thus the computation of any resource-oriented program is a form of goal-oriented proof search in linear logic. One of the main goals of this approach is to avoid correctness errors introduced by the implementation of a programming language. It also keeps us away from potential problems in the verification of programs.
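The "resource" reading of linear logic is that every hypothesis must be used exactly once. A minimal runtime sketch in Python (a real linear type system rejects a second use at compile time; the class and method names here are purely illustrative):

```python
class LinearToken:
    """Toy 'linear' resource: must be consumed exactly once.

    A genuine linear type system enforces this statically; this runtime
    check only illustrates the use-exactly-once discipline.
    """

    def __init__(self, value):
        self._value = value
        self._consumed = False

    def consume(self):
        if self._consumed:
            raise RuntimeError("linear resource used more than once")
        self._consumed = True
        return self._value

token = LinearToken("file-handle")
print(token.consume())       # first (and only permitted) use
try:
    token.consume()          # a second use violates linearity
except RuntimeError as err:
    print(err)
```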
proposed, improvement): fabric consumption (54.50%, 100%, 45.50%), thread consumption (49.89%, 99.96%, 50.08%), labor cost (58.55%, 99.72%, 41.18%), material cost (42.51%, 100%, 57.49%), cutting time (40.20%, 99.12%, 58.92%), sewing time (37.54%, 98.36%, 60.82%) and finishing time (41.70%, 92.84%, 51.14%). From this point of view, it can be concluded that with the existing system the company uses its resources inefficiently. It can therefore be argued that the global competitiveness of the company can be improved significantly by implementing the proposed solution. Currently the company produces five types of products with the following production volumes (pieces) per month: Men's Polo-shirt, 15886; Men's s/s Basic T-shirt, 23916; Short pant, 12864; Singlet, 25667; Men's V-neck T-shirt, 13319. However, these production volumes yield a total profit of only 365,699 birr per month for the company. Based on this study, the company can significantly increase its profitability and global competitiveness by producing only two types of products (Men's s/s Basic T-shirt, 8774; Short pant, 128315) and none of the other three. In this case the profit of the company can be improved by 145.5% (i.e., from 365,699 birr per month to 897,844 birr per month). In general, this study shows that the new solution provides a very significant improvement in organizational resource utilization and profitability. Finally, we conclude that this remarkable profit increment can certainly enhance the company's global competitiveness.
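The product-mix decision described above is a linear programming problem. The study's actual constraint coefficients are not reproduced here, so the following sketch uses entirely hypothetical profit and resource figures; it simply grid-searches feasible integer volumes of two products and keeps the most profitable mix:

```python
from itertools import product

# All figures are hypothetical (the study's real coefficients are not
# given here): profit per piece and resource use per piece for two products.
profit = {"tshirt": 14, "short_pant": 6}      # birr per piece
fabric = {"tshirt": 2, "short_pant": 1}       # m^2 per piece
sewing = {"tshirt": 12, "short_pant": 5}      # minutes per piece
FABRIC_CAP = 20000                            # m^2 available per month
SEWING_CAP = 120000                           # minutes available per month

best = None
for x, y in product(range(0, 10001, 100), repeat=2):   # coarse volume grid
    if (fabric["tshirt"] * x + fabric["short_pant"] * y <= FABRIC_CAP
            and sewing["tshirt"] * x + sewing["short_pant"] * y <= SEWING_CAP):
        p = profit["tshirt"] * x + profit["short_pant"] * y
        if best is None or p > best[0]:
            best = (p, x, y)

print(best)   # most profitable feasible mix on the grid
```

A real instance would be solved with an LP/MILP solver rather than a grid, but the structure — maximize profit subject to resource capacities — is the same.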
Aspect-oriented programming (AOP) is a programming paradigm based on the idea that we can specify certain concerns (properties or areas of interest) of a system separately from the business logic, and then rely on mechanisms in the underlying AOP environment to weave or compose them together into a coherent program [EFB01]. One of the main advantages of AOP is that it allows us to achieve modularity through the separation of concerns. In order to separate concerns, AOP introduces the concept of aspects, which are "mechanisms beyond subroutines and inheritance for localizing the expression of a crosscutting concern" [EFB01]. Once defined, a particular aspect will contain several join points, the instructions at which the aspect code interacts with the rest of the program, allowing us to perform tasks such as manipulating code, intercepting function calls, logging, and unit testing.
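In Python, a decorator can serve as a minimal stand-in for an AOP weaver: the logging concern lives in one place and is woven around the business logic at each call (all names here are illustrative, not part of any particular AOP framework):

```python
import functools

log = []   # the crosscutting "logging" concern lives outside the business logic

def logging_aspect(fn):
    """Weave logging advice around every call to fn (each call is a join point)."""
    @functools.wraps(fn)
    def woven(*args, **kwargs):
        log.append(f"before {fn.__name__}{args}")
        result = fn(*args, **kwargs)
        log.append(f"after {fn.__name__} -> {result}")
        return result
    return woven

@logging_aspect
def transfer(amount):
    """Pure business logic: no logging code in sight."""
    return amount * 2   # hypothetical domain operation

transfer(100)
print(log)   # the aspect recorded the call without touching transfer's body
```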
A vast number of tools provide static program verifiers for object-oriented languages, relying on various external theorem provers to discharge verification conditions. Boogie [Barnett et al., 2006] is one such tool, aimed at Spec# programs. In the same spirit, Why3 [Filliâtre and Paskevich, 2013] uses WhyML, a first-order specification and programming language, as an intermediate language for the verification of C, Java and Ada programs. This is somewhat different from other verification systems, where a general-purpose language is usually equipped with a specification language. These systems can statically verify invariants and track mutable references, aliases and side effects. While formal verification is more expressive than dependent types, it is also more complex, and still too costly for mainstream adoption. Language-based verification methods, such as the one we propose in DOL via types, are closer to existing programming methodologies. The main benefit of our approach is to provide lightweight verification without requiring prior training in logic or theorem proving.
The Brazilian and worldwide electrical sectors have been undergoing several transformations. The change from the monopoly model to the competitive model demands new operational and planning philosophies for electrical systems, involving generation, transmission and distribution. Furthermore, in most of the system, rapidly growing demand for energy has forced systems to operate at the limit of their capacity while, on the other hand, attempts to expand have faced environmental and social constraints, as well as financial crises that have reduced investment in this sector.
Our mapping allows generating answer sets capturing errors and justifications for (intended) models. As expected, they are exponential. One direction to explore is to obtain prime implicants by optimising these models using reification followed by subset-inclusion preference ordering Gebser et al. (2007a, 2011b) via a saturation technique Eiter & Gottlob (1995b). Note that deciding if an answer set is optimal for some disjunctive logic program is a Π^p_2-complete problem. Alternative offline justifications Pontelli et al. (2009) (which are also exponential) can be extracted from models of J(Π) by adding extra constraints to the transformed program guaranteeing that: only one rule is kept for true atoms (providing support); literals assumed false (those undefined in the WFM) have all their rules removed; false literals keep all their rules; and the dependency graph is acyclic. The major difference from Pontelli's approach is that we provide justifications for the full model, from which their justifications may be obtained; our approach subsumes theirs, since we are capable of finding more justifications as well as errors in the program.
Obviously, the above problem cannot be solved by any standard linear programming method, because it is nonlinear. However, it can be solved if μ is predetermined. That is, for each specified value of μ, one can obtain an optimal solution to the original problem. Therefore, one may choose n experiments (n different μ
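One classic instance of this μ-parameterisation (offered here only as an illustrative guess at the structure, since the original problem is not shown) is linear-fractional programming: fixing μ turns max f(x)/g(x) into the linear problem max f(x) − μ·g(x), and a Dinkelbach-style update of μ converges to the optimal ratio. A toy sketch over a box, with each inner linear problem solved by vertex enumeration:

```python
# Hypothetical toy instance: maximize (3x + y) / (x + y + 1) over the box
# 0 <= x, y <= 2.  Fixing mu makes the inner problem linear, so its optimum
# lies at a vertex of the box.
vertices = [(0, 0), (2, 0), (0, 2), (2, 2)]

def num(p):   # numerator  3x + y
    return 3 * p[0] + p[1]

def den(p):   # denominator x + y + 1  (always positive on the box)
    return p[0] + p[1] + 1

mu, best = 0.0, None
for _ in range(50):                      # Dinkelbach-style update of mu
    best = max(vertices, key=lambda p: num(p) - mu * den(p))
    if abs(num(best) - mu * den(best)) < 1e-12:
        break                            # F(mu) = 0  =>  mu is the optimal ratio
    mu = num(best) / den(best)

print(mu, best)   # optimal ratio and the vertex attaining it
```

Each pass is one of the "experiments" the text mentions: solve the linear problem for a fixed μ, then update μ from the solution.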
This work presents a comparison between a family of simple algorithms for linear programming and the optimal pair adjustment algorithm (OPAA). This family originated from generalizing the idea presented by Gonçalves, Storer and Gondzio in  to develop the OPAA. Hence, the optimal adjustment algorithm for p coordinates was developed. Indeed, each value of p defines a different algorithm, with p limited by the order of the problem, thus resulting in a family of algorithms. This family of simple algorithms retains the ability to exploit the sparsity of the original problem and achieves fast initial convergence. Significant improvements over the OPAA are demonstrated through numerical experiments on a set of linear programming problems. The paper is organized as follows. Section 2 contains a description of von Neumann's algorithm. Section 3 presents both the weight reduction algorithm and the OPAA. Section 4 discusses the family of simple algorithms, theoretical convergence properties of the optimal adjustment algorithm for p coordinates, and a sufficient condition for it to produce better iterations than those of von Neumann's algorithm. Section 5 describes the computational experiments comparing the family with the OPAA. Conclusions and perspectives for future work are presented in the last section.
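Von Neumann's algorithm (the subject of Section 2) can be sketched in its usual textbook form: given points p_j of unit norm, find convex weights x such that b = Σ_j x_j p_j approaches the origin. The implementation below is a hedged toy, not the paper's code:

```python
import math

def von_neumann(points, iters=2000):
    """Toy sketch of von Neumann's algorithm: find convex weights x over
    unit-norm points p_j so that b = sum_j x_j * p_j approaches the origin
    (the feasibility form of a linear program)."""
    n = len(points)
    x = [1.0] + [0.0] * (n - 1)           # start with all weight on p_0
    b = list(points[0])
    for _ in range(iters):
        if sum(bi * bi for bi in b) < 1e-18:
            break                          # already (numerically) at the origin
        # pick the point making the largest angle with the current residual
        s = min(range(n), key=lambda j: sum(bi * pi for bi, pi in zip(b, points[j])))
        if sum(bi * pi for bi, pi in zip(b, points[s])) > 0:
            break                          # separating hyperplane: infeasible
        # exact line search for the step minimising |(1-t) b + t p_s|
        diff = [bi - pi for bi, pi in zip(b, points[s])]
        t = sum(bi * di for bi, di in zip(b, diff)) / sum(d * d for d in diff)
        t = max(0.0, min(1.0, t))
        b = [(1 - t) * bi + t * pi for bi, pi in zip(b, points[s])]
        x = [(1 - t) * xi for xi in x]
        x[s] += t
    return x, b

# the origin lies in the convex hull of these four unit vectors
x, b = von_neumann([(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)])
print(math.hypot(*b))   # residual shrinks like 1/sqrt(iterations)
```

The OPAA and the p-coordinate generalization improve on this scheme by adjusting the weights of several chosen columns per iteration instead of one.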
Information in this document is subject to change without notice and does not represent a commitment on the part of Paragon Decision Technology B.V. The software described in this document is furnished under a license agreement and may only be used and copied in accordance with the terms of the agreement. The documentation may not, in whole or in part, be copied, photocopied, reproduced, translated, or reduced to any electronic medium or machine-readable form without prior consent, in writing, from Paragon Decision Technology B.V.
In the Agentcities project , FIPA ACL  is used as the agent communication language; FIPA SL  and KIF  may be used as message content languages; and DAML+OIL  is used to represent ontologies. These choices were driven by a set of well-founded reasons, including project management considerations, current industrial and standardisation trends, and existing technological support. In spite of being well justified, these choices are not free of problems. Namely, they imply the harmonisation of the logic-based agent communication framework and the object-oriented ontology representation framework. Whereas FIPA ACL,
Hence the problem consists of determining the set of DCs needed to supply the demand for products in the customer zones at the lowest operational cost. For this it is necessary to consider the quantities of each raw material that must be acquired from each supplier, as well as the amount of product to be produced at each plant. However, since it is a design project, the company has not defined the distribution strategy for supplying the customer zones. Thus, in the problem at hand, it has not been defined whether a customer zone can be supplied by more than one DC, or whether this supply will be single-source. Therefore, in this study, the problem is approached by presenting two mathematical models for the logistics design of the supply chain. The first approach, called "Single-source", presupposes that each customer zone receives products from only one DC. In the second approach, here called "Arc-based", the cost structure of transportation between plants, DCs and customers is represented by arcs connecting each of these players, and no customer zone is served exclusively by a single DC. 3.2 Formulations and mathematical
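The single-source restriction can be illustrated with a toy instance (all cost, demand and capacity figures below are hypothetical): because each zone must be assigned to exactly one DC, the tiny model can be solved by brute-force enumeration, while the arc-based model relaxes this by allowing flow on any DC-customer arc and therefore can only match or lower the cost:

```python
from itertools import product

# Hypothetical instance: unit shipping cost from each DC to each customer
# zone, per-zone demand, and per-DC capacity (all figures made up).
cost = {("DC1", "Z1"): 4, ("DC1", "Z2"): 6, ("DC1", "Z3"): 9,
        ("DC2", "Z1"): 7, ("DC2", "Z2"): 3, ("DC2", "Z3"): 4}
demand = {"Z1": 10, "Z2": 8, "Z3": 6}
capacity = {"DC1": 15, "DC2": 15}

def single_source_best():
    """Single-source model: each zone is served by exactly one DC, so this
    tiny instance can be solved by enumerating every assignment."""
    best = None
    for assign in product(capacity, repeat=len(demand)):
        load = {dc: 0.0 for dc in capacity}
        total = 0.0
        for dc, zone in zip(assign, demand):
            load[dc] += demand[zone]
            total += cost[dc, zone] * demand[zone]
        if all(load[dc] <= capacity[dc] for dc in capacity):
            if best is None or total < best[0]:
                best = (total, assign)
    return best

print(single_source_best())   # cheapest feasible single-source assignment
```

Every single-source assignment is also a feasible arc-based flow, which is why the arc-based formulation never costs more; in practice both models would be written as (mixed-integer) linear programs rather than enumerated.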
Finally, (3) procedural techniques are those in which the terrains are generated programmatically. This category can further be divided into physical, spectral synthesis and fractal techniques. The physical approach aims to simulate real phenomena such as erosion , or plate tectonics movements. Physically-based techniques generate highly realistic terrains, but require an in-depth knowledge of physical laws to be properly implemented and used. Another procedural approach is spectral synthesis. Random frequency data is generated in the frequency domain and then converted into altitudes, in the space domain, by applying the inverse Fast Fourier Transform (FFT). The problem with using this technique to simulate real-world terrain is that the result is statistically homogeneous and isotropic, two properties that real terrain does not share . Furthermore, it does not allow much control over the resulting terrain features. Fractal techniques are based on the self-similarity concept. An object is said to be self-similar when magnified subsets of the object look like the whole and like each other . This allows the use of fractals to generate terrain which still looks like terrain, regardless of the LOD at which it is displayed . This is one of the reasons why fractal techniques are popular among game designers, besides their speed and ease of implementation. Several tools exist that are predominantly based on fractal algorithms (e.g. Terragen 4 and GenSurf 5 ). However, not all terrain types present the self-similarity characteristic. Furthermore, terrains generated by this technique are easily recognised because of the self-similarity pattern, and the designer has little control over the resulting terrain features.
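A minimal sketch of the fractal family is 1-D midpoint displacement: each refinement level halves the interval and perturbs the midpoints with an amplitude that shrinks geometrically, which is what produces the self-similar look at every LOD (parameter names are illustrative):

```python
import random

def midpoint_displacement(levels, roughness=0.5, seed=42):
    """1-D fractal terrain via midpoint displacement: each pass halves the
    intervals and perturbs the new midpoints, scaling the perturbation
    amplitude down per level to obtain statistical self-similarity."""
    random.seed(seed)
    heights = [0.0, 0.0]          # endpoints of the profile
    amplitude = 1.0
    for _ in range(levels):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + random.uniform(-amplitude, amplitude)
            refined += [a, mid]
        refined.append(heights[-1])
        heights = refined
        amplitude *= roughness    # smaller bumps at finer scales
    return heights

terrain = midpoint_displacement(6)
print(len(terrain))   # 2**6 + 1 = 65 samples
```

The 2-D analogue (diamond-square) follows the same amplitude-halving pattern over a grid.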
Compared with traditional software products, current software products exhibit ever higher degrees of integration, modularization and complexity, which has made the testing of large-scale distributed software a research hotspot. There are two reasons. First, the development of the Internet has increased the complexity and scale of Web application systems; traditional centralized software testing methods have certain limitations, and realistic virtual testing environments and data are difficult to obtain. Second, global software development has become a trend, moving software development from a centralized to a distributed model in order to realize continuous integration and testing anywhere, anytime. To address these problems, scholars at home and abroad have proposed a series of testing tools and frameworks. According to their realization, software testing platforms can be divided mainly into centralized and distributed testing. Building on local testing, centralized software testing frameworks usually target only a certain type or structure of software, and are characterized by higher testing cost and lower efficiency. Li proposed a testing framework for model transformation; Stocks and Carrington put forward a testing framework based on specification introduction; Hartman proposed the model-based testing tool AGEDIS; and Marinov proposed a new automated testing framework for Java programs, realizing automatic generation and execution of test cases .
The resource-based view (RBV) argues that valuable, rare, inimitable resources and organization (VRIO) lead to competitive advantage. Dynamic capabilities (DC) are a comparatively new field and the related literature is mainly conceptual. Capabilities can be considered as the firm’s routines and processes. We argue that the “O” in VRIO refers to DC. DCs are the “organization” needed to transform bundles of resources into competitive advantage. Consequently, does competitive advantage stem from VRIO resources or from VRI capabilities? Through a case study we analyzed the development of one capability in a medium-sized Portuguese footwear manufacturer. After reviewing the process of development of the capability, we performed a VRIO test for each of the resources it exploits and a VRI test of the capability. We can conclude that none of the resources contributing to the capability are VRIO, but the capability is VRI.
In the last few decades, many researchers have been devoted to query processing in grid environments [1, 2, 3, 4, 5]. In this context, the design and implementation of an efficient query optimization technique for the grid environment is of utmost importance. Taking into account the constraints of the grid, a cost model for calculating the query execution cost was introduced in . In order to optimize the cost of query processing under the constraints of the grid environment, a linear programming optimization problem (LPP) is formulated based on the cost model, and a constraint-based query optimization technique using the LPP is also presented. In , another cost model is defined for a dynamic grid database environment, together with a dynamic query optimization algorithm that lets the query plan evolve adaptively with fluctuations of the grid environment. The authors of  propose a new model for distributed query optimization that integrates three distinct phases, namely: (1) creation of a single-node plan, (2) generation of a parallel plan, and (3) optimal site selection for plan execution. They also present different heuristic approaches for solving the proposed integrated distributed query processing problem. In , a semantic query optimizer for a grid environment is proposed; it mainly implements optimization in the following three modules: semantic extension of the user query, resource selection, and parallel processing. In , the Hameurlain team defined an execution model based on mobile agents for distributed dynamic query optimization in large-scale systems. The idea is to execute each relational operator using a mobile agent, which allows decentralizing the decisions taken by the optimizer and adapting dynamically to estimation errors in the profiles of relations.
Lee-Kelley et al. (2003) provided some evidence of how to improve planning for customer management by presenting and testing a conceptual model of the process by which the implementation of e-CRM can enhance loyalty. Luck and Lancaster (2003) explored the degree to which UK-based hotel groups had exploited the medium of e-CRM. They reported that the majority of the hotel groups had only embraced a few elements of e-CRM, and some even reported that they had no intention of being led online by the concept. Although the results of their questionnaire implied that hotel groups were generally aware of the potential of Web technologies and strategies, their results also demonstrated that firms were not putting this knowledge into practice when it came to implementing e-CRM. They reported that hotel groups based in the UK were failing to take advantage of the many opportunities identified through the secondary research.
Abstract: This study presents the performance of call admission control and resource reservation schemes based on the mobility of users in WCDMA cellular systems. In order to guarantee the handoff dropping probability, the mobility of the user is predicted based on a realistic mobility model. The mobility prediction scheme used in this study estimates the set of candidate cells into which the mobile may move in the near future and calculates a likeliness value for each candidate cell. It also estimates the time slot at which the mobile may enter each candidate cell, based on the distance between the current location of the mobile and the candidate cell center. Based on the mobility prediction, resource is reserved in terms of an Interference Guard Margin (IGM) to guarantee a target handoff dropping probability. The admission threshold is adaptively controlled to achieve a better balance between guaranteeing the handoff dropping probability and maximizing resource utilization. Simulation results show that the call admission control scheme with mobility-based resource reservation outperforms the fixed reservation scheme. Key words: Blocking probability, dropping probability, most likely cell time
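A hedged toy of the prediction step (the study's actual mobility model and likeliness formula are not reproduced here): rank neighbouring cells by how well they align with the mobile's heading, and estimate the entry time as distance over speed:

```python
import math

def predict_candidates(pos, velocity, cells):
    """Hypothetical sketch of the prediction step: score each candidate cell
    with a 'likeliness' in [0, 1] (cosine of the angle between the mobile's
    heading and the direction to the cell center) and estimate the entry
    time as distance / speed.  The formula is illustrative, not the paper's."""
    speed = math.hypot(*velocity)
    ranked = []
    for name, center in cells.items():
        dx, dy = center[0] - pos[0], center[1] - pos[1]
        dist = math.hypot(dx, dy)
        align = max(0.0, (dx * velocity[0] + dy * velocity[1]) / (dist * speed))
        ranked.append((name, align, dist / speed))   # (cell, likeliness, ETA)
    return sorted(ranked, key=lambda t: -t[1])

cells = {"A": (100.0, 0.0), "B": (0.0, 100.0), "C": (-100.0, 0.0)}
ranked = predict_candidates((0.0, 0.0), (10.0, 0.0), cells)
print(ranked[0])   # cell A lies straight ahead: likeliness 1.0, ETA 10.0 s
```

In the scheme described above, the guard margin (IGM) reserved in each candidate cell would then be scaled by this likeliness value.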
In this paper, we extend keyword programming and present the idea of comment-based keyword programming, which provides a tool to create an executable code fragment from a single-line comment entered by the programmer, thereby further reducing the programmer's burden. Programmers generally have to write comments explaining what a piece of executable code means in order to make it easier to understand. Comment-based keyword programming leverages this comment-writing effort to create candidate executable code fragments.
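A minimal sketch of the matching idea, with a made-up three-function API (the real tool's ranking model is not specified here): score each known function by the keyword overlap between the comment and the function's name and description, and return the best-scoring signatures as candidates:

```python
# Toy sketch of comment-based keyword matching: the API table and scoring
# rule below are hypothetical stand-ins for the tool's actual model.
api = {
    "read_lines(path)":    "read all lines from a text file",
    "write_text(path, s)": "write a string to a file",
    "count_words(s)":      "count the words in a string",
}

def candidates(comment):
    words = set(comment.lower().split())
    scored = []
    for signature, description in api.items():
        vocab = set(description.split()) | set(
            signature.replace("_", " ").replace("(", " ")
                     .replace(")", " ").replace(",", " ").split())
        scored.append((len(words & vocab), signature))
    # best-scoring signatures first, offered as completion candidates
    return [sig for score, sig in sorted(scored, reverse=True) if score > 0]

print(candidates("read the lines of the input file"))
```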
Most efforts focus on partially automating the processes of digital forensics to save time and resources. According to , computational intelligence has a strong chance of being applied successfully to a steganalytic system. Three major computational intelligence methods have been identified as useful in steganalysis: Bayesian methods, neural networks, and genetic algorithms. These three techniques are applied in image, audio, and video steganalysis. The results of  and other studies discussed in this paper show that the application of computational intelligence methods has substantially improved the performance of steganalysis.
However, when it comes to change detection for high-resolution images, the methods mentioned above have some drawbacks, because traditional pixel-based change detection methods are built on the assumption that neighbouring pixels are relatively independent of each other, whereas in high-resolution images several adjacent pixels combine to make up a significant geographical object. A wide variety of experiments have shown that, for images with a resolution higher than 10 meters, object-oriented change detection methods perform better than the traditional ones. Moreover, traditional remote sensing image change detection methods that work at the pixel level are mainly based on the analysis of spectral information and hardly analyse the shape and structure features of ground objects. High-resolution remote sensing imagery, however, has brought significant changes to remote sensing technology: it can very clearly show the structure, texture and detail of the landscape. In addition to spectral features, object-based methods can also exploit the structure, shape and texture of surface objects, making it easier to solve problems in high-resolution remote sensing image change detection. In general, it is of great
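The contrast between the two approaches can be sketched with a toy example in which fixed 2x2 blocks stand in for segmented objects (real object-based methods use image segmentation, not fixed blocks): averaging over an "object" suppresses isolated pixel noise while still flagging a genuinely changed region:

```python
# Toy contrast between pixel-based and object-based change detection
# (illustrative only: fixed 2x2 blocks stand in for segmented objects).
def pixel_changes(img1, img2, thresh):
    """Flag each pixel independently."""
    return [[abs(a - b) > thresh for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]

def block_mean(img, r, c, size):
    vals = [img[r + i][c + j] for i in range(size) for j in range(size)]
    return sum(vals) / len(vals)

def object_changes(img1, img2, thresh, size=2):
    """Flag whole blocks by mean difference, suppressing per-pixel noise."""
    rows, cols = len(img1), len(img1[0])
    return [[abs(block_mean(img1, r, c, size)
                 - block_mean(img2, r, c, size)) > thresh
             for c in range(0, cols, size)]
            for r in range(0, rows, size)]

before = [[10, 10, 10, 10],
          [10, 10, 10, 10],
          [10, 10, 10, 10],
          [10, 10, 10, 10]]
after  = [[10, 50, 10, 10],   # a single noisy pixel (top-left block)
          [10, 10, 10, 10],
          [10, 10, 60, 60],   # a genuinely changed 2x2 object (bottom-right)
          [10, 10, 60, 60]]

print(pixel_changes(before, after, 20))
print(object_changes(before, after, 20))
```

The pixel-based pass flags the noisy pixel along with the real change; the object-based pass flags only the bottom-right block.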