
Design based Object-Oriented Metrics to Measure Coupling and Cohesion

Every estimation model makes assumptions about the scope covered by its estimates. The Object Oriented development model differs in significant ways from structured development models, and misunderstanding the capabilities of Object Oriented methods leads to infeasible plans. Aggressive claims have been made for the levels of productivity, quality, and reuse, and compiled data often reflects a failure to recognize and address the uncertainty inherent in software estimates. This increased uncertainty makes understanding the confidence associated with an estimate all the more important for Object Oriented development. Specific factors to consider in selecting an estimation approach include: the range of activities covered by the model or approach (does it match the activities to be estimated for the project under study? documentation, requirements analysis, and final qualification testing are activities that models often fail to include); the dimensions the approach addresses (does it estimate size, effort, schedule, quality, and reuse, and does it provide guidance for periodic re-estimation of these dimensions?); its track record (is the model a theoretical exercise, or has it been proven in software engineering situations? consider the depth of data, number of users and evaluators, and robustness of the evaluation methods); the availability of appropriate local historical data to calibrate the model or approach (is data available from past projects for the specific parameters used in the model? relevant data from several sources can be summarized); the ability to capture the necessary data from the current project (does the Object Oriented development method used produce the artifacts, such as "key classes," that get counted to drive the model?); and the ability to approximate input parameters (is there a reasonable method for approximating the values of parameters for which no historical data is available?).
A sample of software might be examined in detail, or data from a commercial database could be used to establish a starting point. A final factor is the availability, price, documentation, and support of estimation tools that implement the model, if any.

Passage-Based Bibliographic Coupling: An Inter-Article Similarity Measure for Biomedical Articles.

Several previous hybrid techniques employed out-link citations as well. Some studies reported that the hybrid techniques did not necessarily perform significantly better than the techniques based on bibliographic coupling, even when full-text articles and text classifiers were employed [16, 23]. Other studies reported that the hybrid techniques could perform better: bibliographic coupling similarity and full-text similarity were integrated to cluster articles [36]. However, the performance heavily depended on the evaluation criteria: link-based evaluation criteria tended to favor link-based techniques, and hybrid criteria tended to favor hybrid techniques. Therefore, to measure the contributions of a hybrid technique more properly, the evaluation criterion should be independent of texts and citations [37]. Our evaluation criterion is based on semantic relatedness: it checks whether the systems can retrieve highly related articles as judged by biomedical experts. A more recent work also employed the relatedness between biomedical articles as an evaluation criterion [37], without relying on co-words and co-citations to design the criterion. It integrated bibliographic coupling similarity with text-based similarity by treating a co-word in two articles as a citation co-cited by the two articles. This integrated similarity measure (named HybridK50) was also tested in the biomedical domain, and performed better than bibliographic coupling [37]. We thus implement HybridK50 as a baseline, and will show that PBC performs significantly better than HybridK50.
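Bibliographic coupling itself reduces to a set computation over reference lists. A minimal sketch, normalizing the shared-reference count by the geometric mean of the two list sizes (the normalization and the reference IDs are illustrative, not the HybridK50 or PBC formulation):

```python
def bibliographic_coupling(refs_a, refs_b):
    """Bibliographic coupling similarity: the number of references two
    articles share, normalized by the geometric mean of their reference
    counts (a cosine over binary citation vectors)."""
    a, b = set(refs_a), set(refs_b)
    if not a or not b:
        return 0.0
    return len(a & b) / (len(a) * len(b)) ** 0.5

# Illustrative reference IDs, not real article identifiers
print(round(bibliographic_coupling(["r1", "r2", "r3", "r4"],
                                   ["r2", "r3", "r5"]), 3))  # 0.577
```

A hybrid measure along the lines described above would extend the reference sets with co-word tokens before computing the same overlap.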

Development of Dynamic Coupling Measurement of Distributed Object Oriented Software Based on Trace Events

Software metrics are increasingly playing a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. Existing literature on software metrics is mainly focused on centralized systems, while work in the area of distributed systems, particularly service-oriented systems, is scarce. Distributed systems with service-oriented components also operate in a more heterogeneous networking and execution environment. Traditional coupling measures take into account only "static" couplings. They do not account for "dynamic" couplings due to polymorphism, and may significantly underestimate the complexity of software and misjudge the need for code inspection, testing, and debugging. This can result in poor predictive accuracy for quality models of distributed Object Oriented systems that rely on static coupling measurements. To overcome these issues, we propose a hybrid model for measuring coupling dynamically in Distributed Object Oriented Software. The proposed method has three steps: instrumentation, post-processing, and coupling measurement. In the instrumentation step, a JVM modified to trace method calls is used, and three trace files are created: .prf, .clp, and .svp. In the second step, the information in these files is merged; at the end of this step, the merged detailed trace of each JVM contains pointers to the merged trace files of the other JVMs, so that the path of every remote call from the client to the server can be uniquely identified. Finally, the coupling metrics are measured dynamically. The implementation results show that the proposed system effectively measures the coupling metrics dynamically.
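Once the traces are merged, a dynamic coupling count can be derived from the observed call events. A minimal sketch, assuming the merged trace has already been reduced to (caller class, callee class) pairs; the trace shape and class names are illustrative, not the paper's file formats:

```python
from collections import defaultdict

def dynamic_coupling(call_trace):
    """Per class, count the distinct other classes it invokes at run
    time, a simplified dynamic (import) coupling measure."""
    coupled = defaultdict(set)
    for caller, callee in call_trace:
        if caller != callee:          # self-calls are not coupling
            coupled[caller].add(callee)
    return {cls: len(targets) for cls, targets in coupled.items()}

# Events as they might appear after merging the per-JVM traces
trace = [("Client", "Stub"), ("Stub", "Server"),
         ("Client", "Logger"), ("Client", "Stub")]
print(dynamic_coupling(trace))  # {'Client': 2, 'Stub': 1}
```

Note how the repeated Client-to-Stub call counts once: the measure captures which classes interact at run time, not how often.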

Cohesion Metrics for Ontology Design and Application

This study proposes a set of ontology cohesion metrics to measure the modular relatedness of OWL ontologies: Number of Root Classes (NoR), Number of Leaf Classes (NoL), and Average Depth of Inheritance Tree of all Leaf Nodes (ADIT-LN). The metrics are collected by a standard XML DOM parser that parses the XML-based OWL ontology syntactically but computes the cohesion metrics conceptually, based on predefined OWL primitives that explicitly define tree-based semantic hierarchies in OWL ontologies. The metrics are theoretically validated using standard metrics-validation frameworks, and then empirically validated by comparing them statistically to assessments performed by a human team of evaluators.
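Given a class hierarchy extracted from the ontology, the three metrics are straightforward to compute. A sketch under the assumption that ADIT-LN counts edges from each leaf up to its root (the paper's exact depth convention may differ, e.g. counting the root itself as depth 1); the hierarchy is illustrative:

```python
def ontology_cohesion(parents, children):
    """NoR, NoL, and ADIT-LN for a class hierarchy.
    parents:  class -> parent class (None for roots)
    children: class -> list of subclasses"""
    roots = [c for c, p in parents.items() if p is None]
    leaves = [c for c in parents if not children.get(c)]

    def depth(c):  # edges from c up to its root
        d = 0
        while parents[c] is not None:
            c = parents[c]
            d += 1
        return d

    adit_ln = sum(depth(leaf) for leaf in leaves) / len(leaves)
    return len(roots), len(leaves), adit_ln

parents = {"Thing": None, "Animal": "Thing", "Plant": "Thing",
           "Dog": "Animal", "Cat": "Animal"}
children = {"Thing": ["Animal", "Plant"], "Animal": ["Dog", "Cat"]}
nor, nol, adit = ontology_cohesion(parents, children)
print(nor, nol, round(adit, 2))  # 1 3 1.67
```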

Towards A Dynamic Object-Oriented Design Metric Plug-in Framework

Measurement is recognized as a key element of any engineering process. We use measures to assess the quality of an engineered product; examples are analysis models, design models, and other software artifacts. The IEEE Computer Society, with the support of a consortium of industrial sponsors, has published a guide to the Software Engineering Body of Knowledge, and throughout this guide measurement is pervasive as a fundamental engineering tool [1]. Researchers such as Harrison, Counsell, and Nithi; Chidamber and Kemerer; and Lorenz and Kidd have proposed a number of design metrics for object-oriented systems. Examples are weighted methods per class, depth of inheritance tree, number of children, coupling between object classes, response for a class, lack of cohesion in methods, class size, number of operations overridden by a subclass, number of operations added by a subclass, and method inheritance factor [3,4,7,11,14,16]. Most of the software metric tools such as Resource Standard Metrics,

Detecting Bad Smells in Object Oriented Design Using Design Change Propagation Probability Matrix

Considering one widely accepted suite of metrics [5], CBO (Coupling Between Object classes) is defined as a count of the number of other classes to which a class under consideration is coupled. This definition counts the classes with which a particular class has some sort of interaction; it does not measure the amount (strength) of coupling between any two classes. Considering the number of discrete messages exchanged between classes, god classes are identified using a link analysis method [6]; god classes in the system imply a poorly designed model. Paper [7] describes the ripple effect metric and considers its applicability as a software complexity measure for object-oriented software; it notes that this approach has the potential to improve the stability and efficiency of object-oriented software and cut the cost of software maintenance. A list of metric-based detection strategies for capturing flaws of object-oriented design is defined in paper [8]. Papers [7] and [8] do not cover how the strength of dependency (coupling) between artifacts that are connected through intermediate artifacts along more than one path is calculated.
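The CBO count described above can be sketched directly from a class-dependency map. Per the definition cited, coupling is counted in both directions and without regard to strength (the class names and dependency map here are illustrative):

```python
def cbo(deps, cls):
    """CBO for `cls`: distinct other classes it uses or is used by,
    each counted once regardless of how many interactions exist."""
    uses = deps.get(cls, set())
    used_by = {c for c, targets in deps.items() if cls in targets}
    return len((uses | used_by) - {cls})

# Hypothetical dependency map: class -> classes it interacts with
deps = {"Order": {"Customer", "Invoice"},
        "Invoice": {"Order"},
        "Report": {"Order"}}
print(cbo(deps, "Order"))  # 3 (Customer, Invoice, Report)
```

The Order/Invoice pair illustrates the criticism in the passage: a mutual, heavily used dependency contributes exactly as much to CBO as the one-way Report link.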

Impact of Software Metrics on Object-Oriented Software Development Life Cycle

The Empirical model for the Application Generator and System Integrator sectors is based on the Application Composition model (for the early prototyping phase), on the Early Design model (analysis of different architectures), and finally on the Post-Architecture model (development and maintenance phase). In the Empirical metric, effort is expressed in Person-Months (PM). A Person-Month (also known as a man-month) is a measure of the amount of time one person spends working on the software development project in a month. The number of PM is different from the time the project will take to complete; for example, a project may be estimated at 10 PM but have a schedule of two months. Equation 15 defines the Empirical effort metric.
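The distinction between effort and schedule is just the arithmetic of average staffing:

```python
def average_staffing(person_months, schedule_months):
    """Average head count implied by an effort estimate and a schedule."""
    return person_months / schedule_months

# The example from the text: 10 PM of effort on a two-month schedule
print(average_staffing(10, 2))  # 5.0 people working in parallel
```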

Identification of Nominated Classes for Software Refactoring Using Object-Oriented Cohesion Metrics

Class cohesion is defined as the degree of relatedness of the members of a class [2], [3]. Various metrics have been developed to measure the similarity between class elements. Many cohesion measurements are based on Low-Level Design (LLD) information: LLD class cohesion metrics require analyzing the algorithms used in the class methods, or the code itself (if available), in order to measure class cohesion [1], [2]. Another approach to class cohesion measurement is based on High-Level Design (HLD) information: HLD class cohesion metrics rely on information related to class and method interfaces [2]. Many of the proposed LLD cohesion metrics focus on measuring the correlation between pairs of methods in the class, such as Chidamber and Kemerer's Lack of COhesion in Methods metrics (LCOM1 and LCOM2) [6], [7], Bieman and Kang's Tight Class Cohesion (TCC) and Loose Class Cohesion (LCC) metrics [4], and Badri et al.'s Lack of Cohesion in the Class-Direct (LCCD) and Lack of
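The pairwise-method idea behind LCOM1 and LCOM2 can be sketched from a map of each method to the instance attributes it uses. This is one common formulation; the published variants differ in details, and the class here is illustrative:

```python
from itertools import combinations

def lcom1(method_attrs):
    """LCOM1: number of method pairs sharing no instance attributes."""
    pairs = combinations(method_attrs.values(), 2)
    return sum(1 for a, b in pairs if not set(a) & set(b))

def lcom2(method_attrs):
    """LCOM2: max(P - Q, 0), where P = pairs sharing no attributes
    and Q = pairs sharing at least one."""
    pairs = list(combinations(method_attrs.values(), 2))
    p = sum(1 for a, b in pairs if not set(a) & set(b))
    return max(p - (len(pairs) - p), 0)

# Hypothetical class: method -> instance attributes it reads or writes
m = {"deposit": {"balance"}, "withdraw": {"balance"}, "audit": {"log"}}
print(lcom1(m), lcom2(m))  # 2 1
```

Higher values flag methods that operate on disjoint state, i.e. candidates for the refactoring the paper nominates.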

A Framework for Validation of Object-Oriented Design Metrics

Analyzing object-oriented software in order to evaluate its quality is becoming increasingly important as the paradigm continues to grow in popularity. A large number of software product metrics have been proposed in software engineering. While many of these metrics are based on good ideas about what is important to measure in software to capture its complexity, it is still necessary to validate them systematically. Recent software engineering literature has shown a concern for the quality of methods used to validate software product metrics (e.g., see [1][2][3]). This concern is due to the fact that: (i) common practices for the validation of software engineering metrics are not acceptable on scientific grounds, and (ii) valid measures are essential for effective software project management and sound empirical research. For example, Kitchenham et al. [2] write: "Unless the software measurement community can agree on a valid, consistent, and comprehensive theory of measurement validation, we have no scientific basis for the discipline of software measurement, a situation potentially disastrous for both practice and research." Therefore, to have confidence in the utility of the many metrics proposed by research labs, it is crucial that they be validated.

Program and aspect metrics for MATLAB: design and implementation

Software metrics have a long history. Since the very beginning of the software industry, companies were concerned with the quality of their products and (mainly) with their cost. So although the first dedicated book on software metrics was not published until 1976 [16], the history of active software metrics dates back to the late 1960s. In those early times, the Lines of Code measure (LOC, or KLOC for thousands of lines of code) was used routinely as the basis for measuring both programmer productivity (LOC per programmer-month) and program quality (defects per KLOC). In other words, LOC was being used as a surrogate measure for different notions of program size. The early resource prediction models (such as those of [27] and [5]) also used LOC or related metrics, like delivered source instructions, as the key size variable. In 1971, Akiyama [32] published what we believe was the first attempt to use metrics for software quality prediction, proposing a crude regression-based model for module defect density (number of defects per KLOC) in terms of the module size measured in KLOC. In other words, he was using KLOC as a surrogate measure for program complexity.
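A regression model of the shape attributed to Akiyama is a one-variable least-squares fit of defect counts against module size. A sketch with illustrative numbers (not Akiyama's data):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Illustrative module data (NOT Akiyama's): size in KLOC vs. defects
kloc = [1.0, 2.0, 4.0, 8.0]
defects = [5, 9, 18, 33]
a, b = fit_line(kloc, defects)
print(f"defects ~ {a:.2f} + {b:.2f} * KLOC")
```

The fitted slope b is the predicted defects per KLOC, which is exactly the "surrogate for complexity" role the passage describes.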

BenchXtend: a tool to measure the elasticity of cloud database systems

Cassandra versions prior to 1.2.x require the initial_token to be set evenly across all nodes in the ring to achieve a perfect data distribution. It is also possible to let Cassandra decide how to split the data range by setting auto_bootstrap to true. The latter option carries the risk of hot spots, i.e. nodes receiving more queries than others, since there is no guarantee that the range will be evenly partitioned. In our experiments we adopted the first option, because it evenly divides the data range: an initial_token was set for each node. For new nodes, the token is set to the midpoint between two existing tokens, to avoid move operations, which are known to be resource-expensive. We do not execute cleanup operations during the workload to remove keys that no longer belong to the nodes that streamed data to the new one; cleanup may be safely postponed to low-usage hours [51]. From versions 1.2.x, Cassandra provides the vnodes feature, which removes the need to set initial_token and may reduce recovery and bootstrap times. After preliminary tests with version 1.2.6, we faced recurrent hang problems that led us to keep using version 1.1.12 in our experiments.
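The even token assignment and the midpoint placement for new nodes amount to integer arithmetic over the partitioner's range. A sketch assuming the RandomPartitioner's 0..2^127 token range used by Cassandra 1.1.x:

```python
def initial_tokens(n_nodes, ring_size=2 ** 127):
    """Evenly spaced initial_token values for an n-node ring
    (RandomPartitioner's 0..2**127 range in Cassandra 1.1.x)."""
    return [i * ring_size // n_nodes for i in range(n_nodes)]

def token_for_new_node(left, right):
    """Place a new node midway between two neighbouring tokens so
    that no existing token has to move (moves are resource-expensive)."""
    return (left + right) // 2

tokens = initial_tokens(4)
new_token = token_for_new_node(tokens[0], tokens[1])
```

Each value is then set as `initial_token` in the node's cassandra.yaml; vnodes in 1.2.x make this manual assignment unnecessary.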

Lat. Am. j. solids struct. vol. 12, no. 10

At conceptual phases of designing a vehicle, engineers need simplified models to examine the structural and functional characteristics and apply custom modifications for achieving the best vehicle design. Using a detailed finite-element (FE) model of the vehicle at early steps can be very conducive; however, the drawbacks of being excessively time-consuming and expensive are encountered. This leads engineers to utilize trade-off simplified models of the body-in-white (BIW), composed of only the most decisive structural elements, that do not employ extensive prior knowledge of the vehicle dimensions and constitutive materials. However, the extent and type of simplification remain ambiguous. In fact, during the procedure of simplification, one will be in a quandary over which kind of approach and which body elements should be regarded for simplification to optimize costs and time while providing acceptable accuracy. Although different approaches for optimization of timeframe and achieving optimal designs of the BIW are proposed in the literature, a comparison between different simplification methods and accordingly introducing the best models, which is the main focus of this research, has not yet been done. In this paper, an industrial sedan vehicle has been simplified through four different simplified FE models, each of which examines the validity of the extent of simplification from different points of view. Bending and torsional stiffness are obtained for all models considering boundary conditions similar to experimental tests. The acquired values are then compared to the target values from experimental tests for validation of the FE modeling. Finally, the results are examined and, taking efficacy and accuracy into account, the best trade-off simplified model is presented.

Radiol Bras vol. 40, no. 4

Figure 4 shows the results of the execution of the first module, with mean values of precision-recall curves of the Euclidean distance between characteristic vectors of the reference images in relation to the images of the database. This result allowed the evaluation of the CBIR effectiveness utilizing texture in the classification of the most similar images for the second module. Although the mean precision obtained in the experiments is 0.54 (sagittal knee) and 0.40 (axial head), it is sufficient for filtering the images to be submitted to the second module. In the second module, the images are processed with the similarity measurement algorithm on the computational grid. CBIR with the similarity measurement algorithm resulted in a satisfactory precision for both anatomical regions, 0.95 (sagittal knee) and 0.92 (axial head), according to the mean precision-recall curves between the reference images and those classified by the first module (Figure 5).
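The first-module filter is a nearest-neighbour ranking by Euclidean distance over feature vectors, scored by precision. A minimal sketch (the vectors and relevance judgments are toy values, not the paper's texture features):

```python
def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def precision_at_k(query_vec, candidates, relevant, k):
    """Rank candidate images by feature-vector distance to the query
    and return the fraction of the top k that are relevant."""
    ranked = sorted(candidates,
                    key=lambda cid: euclidean(query_vec, candidates[cid]))
    return sum(1 for cid in ranked[:k] if cid in relevant) / k

# Toy texture vectors and expert relevance judgments
candidates = {"img_a": (1.0, 0.0), "img_b": (5.0, 5.0), "img_c": (0.0, 2.0)}
print(precision_at_k((0.0, 0.0), candidates, {"img_a", "img_c"}, k=2))  # 1.0
```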

A Methodology to Evaluate Object oriented Software Systems Using Change Requirement Traceability Based on Impact Analysis

quality by assessing the probability that each class will change in a future generation. Peter Zielczynski [12] explained an approach, applied to software written in an object-oriented language, for tracing object-oriented code to functional requirements. It addresses the problem of establishing traceability links between the free-text documentation associated with the development and maintenance cycle of a software system and its code. However, vector space models for comparing different models and for assessing the relative influence of the affecting factors are not considered.

Maude Object-Oriented Action Tool

In this paper we have presented MOOAT, the first implementation of Object-Oriented Action Semantics. Our first idea consisted of developing an isolated OOAS tool from scratch. However, this seemed impracticable given the existence of MAT. The better choice, in our view, was to create an extension of MMT: the idea was to add the object-oriented apparatus to a brand new tool based on MAT and developed using MMT. In MOOAT, the user can create OOAS specifications, and perform the necessary tests, inside the standard Maude environment. This is interesting, considering that the user can create Maude, MMT, and MOOAT specifications using the same tool.

Qualitative Assessments of the Software Architectures of Configuration Management Systems

Here we compare the results of the two methods. The results comprise quantitative ratings of various quality parameters present in three different architectures of CM tools: the "2nd Generation" model, the "CM Services Model," and the "Proposed Architecture Model." As per Table 2, the value of interoperability is significantly higher for the proposed model; this is due to the presence of the special components C10 (interface to other tools) and C11 (configuration data router). The same can be seen for the flexibility of the proposed model, due to its average module sizes and the special component C11 (configuration data router). On other parameters, such as traceability, testability, and portability, the proposed model has also done best. The "CM Services Model" is best in the case of simplicity, because it offers a discrete atomic service for each of the requirements (mostly representing configuration management activities). The proposed model, having an average number of modules and average module size,

Class Cohesion Metrics for Software Engineering: A Critical Review

According to the eighty-twenty rule, 20% of the time is spent creating software and 80% is spent maintaining it; of the maintenance time, 20% is spent changing the code, while 80% is spent just trying to understand it. Therefore, understandability, or program comprehension, is one of the important characteristics of software quality, because it concerns the way software engineers maintain existing source code. In order to maintain the software, programmers need to comprehend the source code, and the understandability of the source code depends upon the cohesion, coupling, and complexity of the program. A class with a single responsibility is easier to understand than a class with multiple responsibilities. Class cohesion metrics evaluate the cohesion of a class to determine its level of understandability: a class with higher cohesion is more understandable than a class with low cohesion. For example, in our designed classes, since class B only does "one thing," its interface usually has a small number of fairly self-explanatory methods. It should also have a small number of member variables, and is thus more understandable than class A.
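The excerpt does not show classes A and B, so the contrast can only be illustrated with a hypothetical reconstruction: a single-responsibility class standing in for B, and a class mixing responsibilities standing in for A.

```python
class InterestCalculator:            # plays the role of "class B"
    """Single responsibility: compute interest. Small interface,
    small state, self-explanatory."""

    def __init__(self, rate):
        self.rate = rate

    def interest(self, principal):
        return principal * self.rate


class Account:                       # plays the role of "class A"
    """Mixes balance keeping, interest computation, and audit
    logging: more members, more to understand."""

    def __init__(self, rate):
        self.rate = rate
        self.balance = 0.0
        self.log = []

    def deposit(self, amount):
        self.balance += amount
        self.log.append(("deposit", amount))  # logging concern leaks in

    def interest(self):
        return self.balance * self.rate

    def audit_trail(self):
        return list(self.log)
```

A cohesion metric such as LCOM would score InterestCalculator better, since all its methods touch the same small set of members.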

Core-TyCO, The Language Definition, Version 0.1

This is the second report on TyCO [3], a (still) experimental, strongly and implicitly typed, concurrent object-oriented programming language based on a predicative polymorphic calculus of objects [4, 5], featuring asynchronous messages, objects, and process declarations, together with a predicative polymorphic type assignment system assigning monomorphic types to variables and polymorphic types to process variables.


Greeks and Trojans Together

One of the advantages of having a stepwise translation process is that it allows the agent designer to use only the steps found appropriate in each situation. In some circumstances it may be useful to use only the third step of the translation. There are messages, such as request-when, request-whenever, cfp, and propose, that cannot be translated to OQL because they are not queries. However, some of the translation steps may still be used to process parts of such a message. The first and second steps can always be used to decompose the message and its contents into their several parts. For instance, steps 1 and 2 can be used to isolate the condition part of action-condition expressions. Once the condition part has been isolated, the agent can then use steps 3 and 4 to determine whether or not the condition is true. It may also be possible, at least in the case of database actions, to use the translator to create the database command that implements the action.

An empirical study of aspect-oriented metrics

We did our best to come up with clear and unambiguous definitions, which are the basis for correctness and consistency, but these properties can only be fully guaranteed through formalisation and proof that the software application used to collect the metrics is derived from the formal specification (which is outside the scope of this paper). We partially assessed the correctness property through test cases. Although the use of test cases does not ensure correctness, it provides a certain degree of confidence in the results. In the aopmetrics tool, there are JUnit test cases covering each kind of module that can be targeted by a metric. The tests for all the selected metrics are described in five test classes and 42 test cases that explore different combinations of modules.
