Component Quality Verification Framework
Fernando Ferreira de Carvalho
Advisor: Silvio Romero de Lemos Meira
Informatics Center - Federal University of Pernambuco ffc@cin.ufpe.br
• High quality, among other goals,
to be more efficient and competitive (Brown, 2000).
Component-Based Development (CBD) with reuse has been a promising direction to reach these objectives.
The Robust Framework for Software Reuse [Almeida et al.,
2004] is a good way to support the CBD approach and component reuse for embedded systems.
However, component reuse without quality assurance can produce catastrophic results [Jezequel et al., 1997].
It is composed of four inter-related modules, based on a set of activities, metrics and guidelines:
• Embedded software component Quality Model (EQM)
• Maturity Level evaluation Techniques
• Metrics Approach
• Component Certification
• ISO/IEC 9126, 2001 - Quality Model for Software Product
• ISO/IEC 14598, 1998 - Software Product Evaluation Process
These two standards converged into:
• ISO/IEC 25000, 2005 - Software product Quality Requirements and Evaluation (SQuaRE)
The framework adapts this quality model and evaluation process, as expected since its initial definition. Nevertheless, other processes can be added in the future:
• Cost Model
• Formal Proof
• Prediction of the component assembly
• Ultra-small devices with simple functionality
• Small systems with sophisticated functions
• Large systems and distributed systems
• Systems produced in large quantities at low production cost
• Systems produced in low volume with important features
A common characteristic across the different areas of the embedded domain is the increasing importance of software [Crnkovic, 2003].
For example, the software cost in embedded systems:
• in industrial robots constitutes about 75% of total costs
• in the car industry it is about 30%
Fifteen years ago it was:
• 25% of total costs in industrial robots
• negligible for cars
• Functional property (component interface)
• Non-functional or extra-functional properties, the so-called quality attributes, for example:
• Timing
• Performance
• Consumption
• Resource Behavior, and others.
These properties can be classified into run-time and life-cycle properties.
In most cases, an embedded system is a real-time system with limited resources. It therefore has specific characteristics that depend on the application domain, with strong implications on the requirements.
The REQUIREMENTS are related to extra-functional properties, or quality attributes, whose priority depends on the application domain.
• Industrial Automation
• Automotive
• Medical
• Consumer electronics
• Other domains
Research has been carried out in order to identify the most important characteristics in the different areas of the embedded domain.
The most important characteristics, according to this research, are presented below.
Industrial Automation was classified by Larsson [Larsson, 2002]:
At low level:
a. Availability b. Timeliness c. Reliability
At high level:
a. Performance b. Usability c. Integrability
1. Safety 2. Reliability 3. Predictability 4. Usability
5. Extendibility 6. Maintainability 7. Efficiency 8. Testability
The resulting list of characteristics in the development of the medical imaging family is presented below.
1. Reliability
2. Safety
3. Functionality
4. Portability
5. Modifiability
   a. Configurability
   b. Extensibility and Evolvability
6. Security
7. Serviceability
The table shows the results: the main characteristics and sub-characteristics of the CBD approach applied to embedded systems in this research.
Real-time properties: response time (latency), execution time, worst-case execution time, deadline
Dependability: reliability, availability, integrity, confidentiality, safety
Resource consumption: power consumption, computation (CPU)
So, embedded software component quality verification must differ from that of general-purpose components, because the evaluation focuses on domain-specific requirements.
We divided quality verification into two groups:
• General-purpose software component quality process
o desktops, servers, x86 architecture
• Specific-purpose software component quality process
o embedded systems
The relevant research explores the theory of component quality and certification in academic scenarios, but is not rich in reports of practical experience.
The pioneering works focused on mathematical and test models, while recent researchers have focused on techniques and models based on predicting quality requirements.
Timeline of research in the embedded software component quality and certification area:
X = the work failed
→ = the work was extended by another proposal or standard
In 1993, Poore [Poore et al., 1993] developed an approach based on three mathematical models (sampling, component and certification models), using test cases to report failures and reach a reliability index.
Poore estimated the reliability of a complete system, not of individual software units, although they did consider how each component affected the system's reliability.
Wohlin [Wohlin et al., 1994] presented the first method of component certification using modeling techniques, making it possible to certify not only components but also the system.
• It is composed of the usage model and the usage profile.
• The failure statistics from the usage test form the input of a certification model.
• An interesting point of this approach is that the usage and profile models can be reused in subsequent certifications.
However, even reusing those models, the considerable amount of effort and time needed makes the certification process a hard task.
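The pipeline above (usage profile → usage test → failure statistics → certification model) can be illustrated with a small sketch. This is not Wohlin's actual statistical model; the simple failure-ratio estimate, the profile, and all names are illustrative assumptions:

```python
import random

def usage_test(component, usage_profile, n_runs, seed=42):
    """Run a component against inputs drawn from a usage profile
    and collect failure statistics (the core of usage-based testing)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        case = rng.choice(usage_profile)  # inputs weighted by expected usage
        if not component(case):           # False = observed failure
            failures += 1
    return failures

def certify(failures, n_runs, required_reliability):
    """Toy certification model: estimate reliability as the observed
    success ratio and compare it against the required index."""
    reliability = 1 - failures / n_runs
    return reliability, reliability >= required_reliability

# Hypothetical component that fails on one specific input class.
profile = ["read", "read", "read", "write", "reset"]  # usage frequencies
component = lambda op: op != "reset"                  # fails on 'reset'

f = usage_test(component, profile, n_runs=1000)
rel, passed = certify(f, 1000, required_reliability=0.95)
```

Because the profile models usage frequencies, the reliability estimate is weighted toward the operations users actually perform, which is the key idea of usage-based certification.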
In 1994, Merrit (Merrit, 1994) presented an interesting suggestion: the use of component certification levels. These levels depend on the component's nature, frequency of reuse and importance, as follows:
• Level 1: No tests are performed; the degree of completeness is unknown;
• Level 2: A source code component must be compiled and metrics are determined;
• Level 3: Testing, test data, and test results are added; and
• Level 4: A reuse manual is added.
However, this is just a suggestion of certification levels and no practical evaluation was carried out. These levels represent an initial component maturity model.
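Merrit's four levels can be read as cumulative evidence requirements. A minimal sketch of that reading (the evidence labels are illustrative assumptions, not Merrit's terminology):

```python
# Hypothetical encoding of Merrit's four certification levels as the
# cumulative evidence a component carries.
MERRIT_LEVELS = {
    1: set(),                                             # nothing is verified
    2: {"compiles", "metrics"},                           # compiled, metrics determined
    3: {"compiles", "metrics", "tests", "test_results"},  # testing added
    4: {"compiles", "metrics", "tests", "test_results", "reuse_manual"},
}

def certification_level(evidence):
    """Return the highest level whose requirements the evidence satisfies."""
    return max(lvl for lvl, req in MERRIT_LEVELS.items() if req <= evidence)

# A component with tests but no reuse manual reaches level 3.
level = certification_level({"compiles", "metrics", "tests", "test_results"})
```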
In 1996, Rohde (Rohde et al., 1996) provided reuse and certification of embedded software components at the Rome Laboratory of the US Air Force, through a Certification Framework (CF) that included:
• defining the elements of the reuse context that apply to certification;
• defining the underlying models and methods of certification;
• defining a decision-support technique to construct a context-sensitive process for selecting and applying the techniques and tools to certify components; and
• a Cost/Benefit plan that describes a systematic approach for evaluating the costs and benefits.
Rohde et al. considered only testing techniques for obtaining defect results in order to certify software components. This is only one of the important techniques that should be applied to component certification.
Voas [Voas, 1998] defined a certification methodology using automated technologies, such as black-box testing and fault injection to determine if a component fits into a specific scenario.
This methodology uses three quality assessment techniques:
(i) Black-box component testing determines whether the component quality is high enough;
(ii) System-level fault injection determines how well a system will tolerate a faulty component;
(iii) Operational system testing determines how well the system will tolerate a properly functioning component.
According to Voas, this approach is not foolproof and perhaps not well suited to all situations. The methodology does not certify that a component can be used in all systems: it certifies a component within a specific system and environment.
Wohlin and Regnell [Wohlin and Regnell, 1998] extended their previous research (Wohlin et al., 1994), now focusing on techniques for certifying both components and systems.
Thus, the certification process includes:
(i) usage specification (consisting of a usage model and profiles), and
(ii) a certification procedure, using a reliability model.
The main contribution of that work is the division of components into classes for certification and the identification of three different ways of certifying software systems:
i. Certification process: the functional requirements are validated during usage-based testing;
ii. Reliability certification of components and systems: the component models that were built are revised and integrated to certify the system that they form; and
iii. Certify or derive system reliability.
However, the proposed methods are theoretical, without experimental studies. According to Wohlin et al., “both experiments in a laboratory environment and industrial case studies are needed to facilitate the understanding of component reliability, its relationship to system reliability and to validate the methods that were used only in laboratory case studies” (p. 9).
Until now, no progress in those directions has been achieved.
In 2000, Jahnke, Niere and Wadsack [Jahnke, Niere and Wadsack, 2000] developed a methodology for semi-automatic analysis of embedded software component quality.
This approach evaluates the component's data memory (RAM) utilization in Java technology.
The work is restricted because:
- it verifies component quality from only one point of view, the use of data memory in a specific language; and
- Java is widely used for the development of desktop systems and is of limited use for embedded development.
Stafford (Stafford et al., 2001) developed a model for component marketplaces that supports prediction of system properties prior to component selection.
The model uses functional verification and quality-related values associated with a component, called credentials.
This work introduced notable changes in the area.
It uses a specific notation: <property, value, credibility>.
Through credentials, the developer chooses the best components to use in application development based on the “credibility” level.
Stafford also introduced the notion of the active component dossier, an abstract component that defines credentials.
Stafford et al. finished their work with some open questions, such as:
• How can measurement techniques be certified?
• What level of trust is required under different circumstances?
• Are there other mechanisms that might be used to support trust?
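The <property, value, credibility> notation can be sketched as a small data model used for component selection. The 0-3 credibility scale and the catalog entries are illustrative assumptions, not part of Stafford et al.'s work:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    """Stafford et al.'s credential triple <property, value, credibility>."""
    prop: str         # quality property, e.g. a latency attribute
    value: float      # measured or claimed value
    credibility: int  # how the value was obtained (0 = vendor claim ... 3 = third-party certified)

def best_component(candidates, prop, min_credibility):
    """Among components offering the property at an acceptable
    credibility level, pick the one with the best (lowest) value."""
    usable = [
        (name, c)
        for name, creds in candidates.items()
        for c in creds
        if c.prop == prop and c.credibility >= min_credibility
    ]
    return min(usable, key=lambda nc: nc[1].value)[0] if usable else None

# Hypothetical marketplace: a fast vendor-claimed FFT vs. a slower certified one.
catalog = {
    "fft_a": [Credential("latency_ms", 2.0, 1)],
    "fft_b": [Credential("latency_ms", 3.5, 3)],
}
```

With a high required credibility the developer ends up with `fft_b` even though `fft_a` claims better latency, which is exactly the trust trade-off the credential mechanism makes explicit.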
Another approach proposed a COTS software product evaluation process. The process contains four activities, as follows:
i. Planning the evaluation: evaluation team, stakeholders, required resources, and the basic characteristics of the evaluation;
ii. Establishing the criteria: evaluation requirements and evaluation criteria;
iii. Collecting the data: the component data are collected, the evaluation plan is done and the evaluation is executed; and
iv. Analyzing the data: the results of the evaluation are analyzed and some recommendations are given.
The proposed process is ongoing work and no real case study has been accomplished, so its real efficiency remains unknown.
Another method was defined to measure quality characteristics of COTS components, based on the international standards for software product quality (ISO/IEC 9126, ISO/IEC 12119 and ISO/IEC 14598). The method is composed of four steps:
i. Establish the evaluation requirements: specifying the purpose, scope and requirements of the evaluation;
ii. Specify the evaluation: selecting the metrics and the evaluation methods;
iii. Design the evaluation: considering the component documentation, development tools, evaluation costs and expertise required in order to make the evaluation plan; and
iv. Execute the evaluation: the execution of the evaluation methods and the analysis of the results.
In 2003, Hissam (Hissam et al., 2003) introduced Prediction-Enabled Component Technology (PECT) as a means of packaging predictable assembly.
This work, an evolution of Stafford et al.'s work (Stafford et al., 2001), attempts to validate the PECT and its components, giving credibility to the model.
Also in 2003, in a CMU/SEI report, Wallnau extended Hissam's work (Hissam et al., 2003) in order to achieve Predictable Assembly from Certifiable Components (PACC).
This novel model requires further maturation by the software engineering community before it can be trusted.
In 2004, Magnus Larsson (Larsson, 2004) defined a predictability approach for quality attributes, where one of the main objectives is to enable the integration of components as black boxes.
According to composition principles, the following types of attributes result:
• Directly composable attributes: a function of only the same attribute of the involved components.
• Architecture-related attributes: a function of the same attribute and of the software architecture.
• Derived attributes: depend on several different attributes.
• Usage-dependent attributes: determined by the usage profile.
This work is very useful, but the component quality must be known beforehand.
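Directly composable attributes admit a one-line sketch: the assembly value is computed from the same attribute of each component. Static memory is the usual example; treating execution time as directly composable assumes a purely sequential architecture, otherwise it becomes architecture-related. The component data below is an illustrative assumption:

```python
def compose_memory(components):
    """Directly composable: assembly memory = sum of component memory."""
    return sum(c["memory_kb"] for c in components)

def compose_wcet_sequential(components):
    """Directly composable only under a sequential architecture; with
    parallelism or preemption this becomes an architecture-related attribute."""
    return sum(c["wcet_ms"] for c in components)

# Hypothetical assembly of three black-box components.
assembly = [
    {"name": "sensor_driver", "memory_kb": 4,  "wcet_ms": 0.5},
    {"name": "filter",        "memory_kb": 12, "wcet_ms": 2.0},
    {"name": "actuator",      "memory_kb": 6,  "wcet_ms": 0.8},
]
```

This also illustrates Larsson's closing caveat: the composition functions are only as good as the per-component values fed into them, so the component quality must be known first.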
Finally, in 2006, Daniel Karlson (Karlson et al., 2006) presented the verification of component-based embedded system designs. The technique is a formal-methods-based modeling approach (Petri nets), called PRES+.
Two problems are addressed:
• component verification and
• integration verification.
This approach verifies the component from only one perspective: functionality.
Formal verification is used in only a few cases, when it is mandatory.
Failures in Software Component Certification
Two failure cases can be found in the literature.
The first failure occurred in the US government, when trying to establish criteria for certifying components (NIAP). From 1993 until 1996, the NSA and NIST used the Trusted Computer Security Evaluation Criteria (TCSEC), the “Orange Book”.
It defined no means of evaluating features across classes of components, but only a restricted set of behavioral assembly properties (Hissam et al., 2003).
The second failure happened with an IEEE committee, in an attempt to obtain a component certification standard.
The initiative was suspended in the same year.
The committee came to a consensus that they were still far from the point where the document would be a strong candidate for a standard (Goulao et al., 2002a).
One of the main objectives of software engineering is to:
• improve the quality of software products, and
• establish methods and technologies to build software products.
The quality area could be basically divided into two main topics (Pressman, 2005):
• Software Product Quality: aiming to assure the quality of the generated product; and
• Software Processes Quality: looking for the definition, evaluation and improvement of software development processes.
Software Product Quality:
• ISO/IEC 9126 (ISO/IEC 9126, 2001),
• ISO/IEC 12119 (ISO/IEC 12119, 1994),
• ISO/IEC 14598 (ISO/IEC 14598, 1998),
• SQuaRE project (ISO/IEC 25000, 2005) (McCall et al., 1977), (Boehm et al., 1978), among others
Software Processes Quality:
• Capability Maturity Model (CMM) (Paulk et al., 1993),
• Capability Maturity Model Integrated (CMMI) (CMMI, 2000),
• Software Process Improvement and Capability dEtermination (SPICE)
There are standards to properly evaluate the quality and the development processes of a software product in different domains.
The table shows a set of national and international standards.
RTCA DO-178B: Guidelines for the development of aviation software
IEC 61508: Safety life cycle for industrial software
ISO/IEC 9126: Software product quality characteristics
ISO/IEC 14598: Guides to evaluate software products, based on practical usage of the ISO/IEC 9126 standard
ISO/IEC 12119: Quality requirements and testing for software packages
SQuaRE project (ISO/IEC 25000): Software product Quality Requirements and Evaluation
IEEE P1061: Standard for a Software Quality Metrics Methodology
ISO/IEC 12207: Software life cycle processes
NBR ISO 8402: Quality management and assurance
NBR ISO 9000-1-2: Model for quality assurance in design, development, test, installation and servicing
NBR ISO 9000-3: Quality management and assurance; application of the ISO 9000 standard to the software development process (evolution of NBR ISO 8402)
CMMI (Capability Maturity Model Integration): SEI's model for judging the maturity of an organization's software processes and for identifying the key practices required to increase their maturity
SQuaRE (Software product Quality Requirements and Evaluation) was created specifically to make two standards converge:
• ISO/IEC 14598, 1998 - defines a software product evaluation process, based on ISO/IEC 9126.
• ISO/IEC 9126, 2001 - defines a quality model for software products.
It tries to eliminate the gaps, conflicts, and ambiguities that these standards present.
The objective is:
• to respond to the evolving needs of users through an improved and unified set of normative documents covering three complementary quality processes:
• requirements specification,
• measurement, and
• evaluation.
The motivation is to supply those developing and acquiring software products with quality engineering instruments supporting both the specification and the evaluation of quality requirements.
SQuaRE provides:
• criteria for the specification of quality requirements,
• criteria for the evaluation of quality requirements, and
• recommended measures of software product quality attributes,
which can be used by:
• developers,
• acquirers, and
• evaluators.
Quality Requirements Division (ISO/IEC 2503n)
Quality requirements and guide: to enable software product quality to be specified in terms of quality requirements;
ISO/IEC 25030 - 2007, a standard supporting the specification of quality requirements, either during software product quality requirements elicitation or as an input for an evaluation process.
Quality Model Division (ISO/IEC 2501n)
Quality model and guide: describes the model for software product internal and external quality, and quality in use. The document presents the characteristics and sub-characteristics for internal and external quality.
ISO/IEC 25010 – 2005, contains the detailed quality model and its specific characteristics and sub-characteristics for internal quality, external quality and quality in use. This division includes:
Product Quality General Division (ISO/IEC 2500n)
• Guide to SQuaRE: to provide the SQuaRE structure, terminology, document overview, intended users and associated parts of the series, as well as reference models;
• Planning and management: to provide the requirements and guidance for planning and management support functions for software product evaluation.
ISO/IEC 25000 – 2005 contains the unit standards defining all common models, terms and definitions referred to by all other standards in the SQuaRE series.
This division includes two unit standards:
Quality Measures Division (ISO/IEC 2502n)
ISO/IEC 25020 - 2007 was derived from ISO/IEC 9126 and ISO/IEC 14598.
This division covers the mathematical definitions and guidance for practical measurements of internal quality, external quality and quality in use.
It will include the definitions for the measurement primitives and the Evaluation Module to support the documentation of measurements.
Measurement reference model and guide
Measurement primitives
Measures for internal quality
Measures for external quality
Measures for quality in use
Quality Evaluation Division (ISO/IEC 2504n)
Quality evaluation overview and guide
Process for developers
Process for acquirers
Process for evaluators
Documentation for the evaluation module
ISO/IEC - 25040 contains the standards for providing requirements, recommendations and guidelines for software product evaluation, whether performed by evaluators, acquirers or developers:
ISO/IEC 2501n (Quality Model Division)
ISO/IEC 2501n is composed of the ISO/IEC 9126-1 standard, which provides a quality model for software products.
At present, this division contains only one standard, 25010 - Quality Model and guide, which is still under development.
The Quality Model Division does not prescribe specific quality requirements for software; rather, it defines a generic quality model that can be applied to every kind of software.
ISO/IEC 2501n (Quality Model Division): characteristics and sub-characteristics in the SQuaRE project.
Functionality: Suitability, Accuracy, Interoperability, Security, Functionality Compliance
Reliability: Maturity, Fault Tolerance, Recoverability, Reliability Compliance
Usability: Understandability, Learnability, Operability, Attractiveness, Usability Compliance
Efficiency: Time Behavior, Resource Utilization, Efficiency Compliance
Maintainability: Analyzability, Changeability, Stability, Testability, Maintainability Compliance
Portability: Adaptability, Installability, Co-existence, Replaceability, Portability Compliance
ISO/IEC 25010 defines a quality model that comprises six characteristics and 27 sub-characteristics, complemented by the quality in use characteristics:
• effectiveness,
• productivity,
• security, and
• satisfaction.
The main drawback of ISO/IEC 25010 is that it provides very generic quality models and guidelines, which are difficult to apply to specific domains such as embedded components and CBSD.
ISO/IEC 2504n (Quality Evaluation Division)
The ISO/IEC 2504n is composed of the ISO/IEC 14598 standard, which provides a generic model of an evaluation process, supported by the quality measurements from ISO/IEC 9126. This process is specified in four major sets of activities for an evaluation:
The ISO/IEC 2504n is divided into five standards:
• ISO/IEC 25040 - Evaluation reference model and guide;
• ISO/IEC 25041 - Evaluation modules;
• ISO/IEC 25042 - Evaluation process for developers;
• ISO/IEC 25043 - Evaluation process for acquirers; and
• ISO/IEC 25044 - Evaluation process for evaluators.
ISO/IEC 2502n (Quality Measurement Division)
The ISO/IEC 2502n - 2007 improves the quality measurements provided by ISO/IEC 9126-2 (external metrics), 9126-3 (internal metrics) and 9126-4 (quality in use metrics).
The most significant change is the adoption of the Goal-Question-Metric (GQM) paradigm (Basili et al., 1994); thus, the metrics definition becomes more flexible and adaptable to the software product evaluation context.
The ISO/IEC 2502n is divided into five standards:
• ISO/IEC 25020 - Measurement reference model and guide;
• ISO/IEC 25021 - Measurement primitives;
• ISO/IEC 25022 - Measurement of internal quality;
• ISO/IEC 25023 - Measurement of external quality; and
• ISO/IEC 25024 - Measurement of quality in use.
These standards contain examples of how to define metrics for different perspectives, such as internal quality, external quality and quality in use.
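The Goal-Question-Metric refinement adopted by ISO/IEC 25020 (each goal is refined into questions, each question into concrete metrics) can be sketched as a small plan. The goal, questions and metric names below are illustrative assumptions, not taken from the standard:

```python
# Illustrative GQM plan: goal -> questions -> metrics.
gqm_plan = {
    "goal": "Evaluate the external quality of a component from the acquirer's viewpoint",
    "questions": [
        {
            "question": "How reliable is the component under its usage profile?",
            "metrics": ["failures_per_1000_runs", "mean_time_to_failure_h"],
        },
        {
            "question": "How efficiently does it use constrained resources?",
            "metrics": ["peak_ram_kb", "energy_per_operation_mj"],
        },
    ],
}

def metrics_for(plan):
    """Flatten the plan: the set of metrics that must be collected."""
    return {m for q in plan["questions"] for m in q["metrics"]}
```

Working top-down from goals keeps the metric set tied to the evaluation context, which is exactly the flexibility the 2502n revision gained over the fixed metric lists of ISO/IEC 9126-2/3/4.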
“certification, in general, is the process of verifying a property value associated with something, and providing a certificate to be used as proof of validity”. (Stafford et al., 2001)
“Third-party certification is a method to ensure that software components conform to well-defined standards; based on this certification, trusted assemblies of components can be constructed.” (Councill, 2001)
Third party certification is often viewed as a good way of bringing trust in software components.
Components can be obtained from existing systems through reengineering, designed and built from scratch, or purchased.
After that, the components are certified, in order to achieve some trust level, and stored in a repository system.
The CBSE community is still far from reaching a consensus on:
• how certification should be carried out,
• what its requirements are, and
• who should perform it.
Some difficulties were found due to the relative novelty of this area (Goulao et al., 2002a).
A survey of the state-of-the-art showed a lack of processes, methods, techniques and tools available for evaluating component quality; for embedded components, such material is even scarcer.
This necessity is pointed out by different researchers (Voas, 1998), (Morris et al., 2001), (Wallnau, 2003), (Alvaro et al., 2005), (Bass et al., 2003), (Softex, 2007) and (Lucrédio et al., 2007).
Most researchers agree that component quality is an essential aspect of the CBSE adoption and software reuse success.
The idea is to remedy the lack of consistency between the available standards for software product quality (ISO/IEC 9126), (ISO/IEC 14598), (ISO/IEC 25000), to include the software component quality context, and to extend it to the embedded domain.
These standards provide a high-level definition of characteristics and metrics for software products but do not provide effective ways of using them, making them very difficult to apply without acquiring more knowledge from supplementary sources.
In the robust framework for software reuse context, the framework will allow the embedded components produced in a Software Reuse Environment to be certified before being stored in a Repository System.
The Embedded Software Component Quality Verification Framework is composed of four modules:
• an Embedded software component Quality Model,
• a Maturity Level Evaluation Techniques,
• a Metrics Approach, and
• a Component Certification Process.
The framework covers two of the three perspectives considered in the SQuaRE project: acquirers and evaluators.
The acquirer's perspective is used to define which component best fits the customer's needs and the application/domain context.
The evaluator's perspective should be considered for the evaluations required by companies in order to achieve trust in their components.
The developer's perspective is not contemplated, because it is very hard for a single developer to execute all the activities, regardless of his knowledge.
The evaluation occurs through models that measure quality.
These models describe and organize the quality characteristics that will be considered during the evaluation.
To measure quality, it is necessary to develop a Quality Model.
The proposed EQM is based on the SQuaRE project (ISO/IEC 25000, 2005), with adaptations for components in the embedded domain.
Some definitions:
A quality characteristic is a set of properties by which quality can be described and evaluated; it can be refined into sub-characteristics.
An attribute is a quality property to which a metric can be assigned.
A metric is a procedure for examining a component.
A quality model is the set of characteristics and sub-characteristics that provides the basis for specifying quality requirements and for evaluating quality (Bertoa et al., 2002).
The important quality characteristics were identified and classified according to different criteria:
i. Local or global characteristics:
a. individual components (local characteristics)
b. software architecture level (global characteristics)
ii. The moment at which they can be measured (Preiss et al., 2001):
a. characteristics at runtime (e.g. performance)
b. characteristics at life-cycle (e.g. maintainability)
iii. Applicable metrics:
a. internal metrics (white-box)
b. external metrics (black-box)
Functionality: Real-time, Accuracy, Security, Suitability, Interoperability, Compliance, Self-contained
Reliability: Recoverability, Fault Tolerance, Safety, Maturity
Usability: Configurability, Understandability, Learnability, Operability
Efficiency: Time Behavior, Resource Behavior, Scalability, Energy Consumption, Memory Utilization
Maintainability: Analyzability, Stability, Changeability, Testability
Portability: Deployability, Replaceability, Flexibility, Reusability
Marketability: Development Time, Compatible Architectures, Cost
The EQM follows ISO/IEC 25010; some changes were made to adapt it to software components in the embedded context.
The characteristics:
• relevant to this context were maintained;
• not interesting were eliminated;
• had their names changed to fit the new context; or
• new important characteristics were added.
Attributes and metrics are used to determine whether a component fulfills the characteristics and sub-characteristics. The EQM consists of four elements:
• Characteristics,
• Sub-characteristics,
• Attributes and
• Metrics.
A quality characteristic is a set of properties through which quality can be described and evaluated.
Quality attributes are observable at runtime and during the life-cycle.
The table groups the attributes by characteristics and sub-characteristics (runtime and life-cycle), and indicates the metrics used for evaluating each attribute.
Functionality / Real-time: Response time (latency), with Throughput (“out”) and Processing capacity (“in”); Execution time; Worst-case execution time; Deadline
Functionality / Accuracy: Correctness
Functionality / Security: Data encryption; Controllability; Auditability
Reliability / Recoverability: Error handling
Reliability / Fault tolerance: Mechanism availability; Mechanism efficiency
Reliability / Safety: Environment analysis; Integrity
Usability / Configurability: Effort to configure; Understandability
Efficiency / Resource behavior: Peripheral utilization; Mechanism
Maintainability / Stability: Modifiability
Maintainability / Changeability: Extensibility; Customizability; Modularity
Maintainability / Testability: Test suite provided; Extensive component test cases; Component tests in a specific environment; Proofs of the component tests
Portability / Deployability: Complexity level
Portability / Replaceability: Backward compatibility
Portability / Flexibility: Mobility; Configuration capacity
Portability / Reusability: Domain abstraction level; Architecture compatibility; Modularity
The model is complemented with the Quality in Use characteristics (ISO/IEC 25000, 2005), composed of:
• Productivity,
• Satisfaction,
• Security, and
• Effectiveness.
Quality in Use characteristics are useful to show the component's behavior in different environments. Their measurement:
• brings relevant information to new customers;
• represents the user's view of the component;
• is obtained when the component runs in an execution environment; and
• analyzes the results according to the users' expectations.
The Additional Information characteristics complement the model and are composed of:
• Technical Information: important for developers to analyze the actual state of the component; and
• Organization Information: important to know who is responsible for the component.
Technical Information includes:
• Component Version
• Programming Language
• Patterns Usage
• Compatible Architectures
• Program Memory Used
• Technical Support
(E.g. the evaluation of a component used in a railway system differs from one used in a game.)
Different evaluation levels must be used in order to provide degrees of confidence for different domains and risk levels.
The detail of an evaluation reflects the evaluation techniques used.
So, an Embedded software component Maturity Model (EMM) was defined. It is based on CMMI (CMMI, 2000) and on a model for general-purpose components (Alvaro et al., 2007a).
The depth of the evaluation gives different degrees of confidence.
Each company/customer decides which level is best for evaluating its components, analyzing the cost/benefit of each level.
The evaluation levels can be chosen independently for each characteristic (e.g. functionality → EMM I, reliability → EMM III, usability → EMM IV).
Level | Environment | Safety/Security | Economic | Domain
EMM I | No damage; few material damages | No specific risk | Negligible economic loss | Entertainment
EMM II | Small/medium property damage | Few people disabled | Little economic loss | Household
EMM III | Property damage | Large number of people disabled | Significant economic loss | Security, control systems
EMM IV | Recoverable environmental damage | Threat to human lives | Large economic loss | Medical, financial
EMM V | Unrecoverable environmental damage | Many people killed | Financial disaster | Transportation, nuclear systems
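Choosing an EMM level per characteristic, as described above, can be sketched as a default-plus-override policy: the application domain suggests a baseline risk level, and the company overrides individual characteristics after a cost/benefit analysis. The mapping mirrors the domains in the table; the function and names are illustrative assumptions:

```python
# Baseline EMM level suggested by the application domain's risk (from the table).
DOMAIN_RISK_LEVEL = {
    "entertainment": 1,
    "household": 2,
    "control_systems": 3,
    "medical": 4,
    "nuclear": 5,
}

def required_levels(domain, characteristics, overrides=None):
    """Default every characteristic to the domain's risk level, then
    let the company override individual characteristics."""
    base = DOMAIN_RISK_LEVEL[domain]
    levels = {c: base for c in characteristics}
    levels.update(overrides or {})
    return levels

# A medical component evaluated at EMM IV, except usability at EMM II
# where the lower risk justifies a cheaper evaluation.
plan = required_levels(
    "medical",
    ["functionality", "reliability", "usability"],
    overrides={"usability": 2},
)
```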
A relation between the EQM quality attributes and the EMM evaluation techniques is necessary.
The objective is not to propose a large number of isolated techniques, but a set of techniques that are essential for measuring each quality attribute, complementing each other and thus composing the Maturity Level Evaluation Techniques.
• Accuracy analysis
• Test, Regression Test (if possible)
• Inspection
Reliability:
• Dependability analysis
• Suitability analysis
• Programming Language Facilities (Best Practices)
• Error Manipulation analysis
• Fault tolerance analysis
• Error Injection analysis
• Error recovery
• Reliability growth model
• Formal Proof
Usability:
• Effort to Configure analysis
• Documentation analysis (User Guide, architectural analysis, etc.)
• Inspection of provided and required interfaces
• Code and component interface inspection (correctness and completeness)
• Analysis of the pre- and post-conditions of the component
• User mental model
Efficiency:
• Constraint analyses
• Accuracy analysis
• Evaluation measurement (memory, power and resource)
• Memory analysis
• Power consumption analysis
• Resource analysis
• Performance tests (memory, power and resource)
• Algorithmic complexity
• Performance optimization (memory, power and resource)
• Performance profiling analysis
• Formal Proof
Maintainability:
• Customizability analysis
• Extensibility analysis
• Inspection of documents
• Analysis of the provided test suite (if it exists)
• Code metrics and programming rules
• Static analysis
• Analysis of the component development process
• Traceability evaluation
• Component test
• Formal Proof
Portability:
• Component execution in a specific environment and architectural analysis
• Cohesion, coupling, modularity and simplicity analyses
• Cohesion of the documentation with the source code analysis
• Deployment analysis
• Backward compatibility
• Mobility analysis
• Configurability analysis
• Hardware/software analysis
• Conformity to programming rules
• Environment and architectural constraints evaluation
• Domain abstraction analysis
• Analysis of the component's architecture
• Formal Proof
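The characteristic-to-techniques mapping above can likewise be held as a simple lookup, so an evaluation plan can enumerate which techniques apply to a chosen characteristic. A sketch with abbreviated entries (the names follow the table; the code itself is illustrative):

```python
# Fragment of the characteristic → evaluation-techniques table as a
# lookup (entries abbreviated; full lists are in the table above).
TECHNIQUES = {
    "reliability": [
        "Dependability analysis",
        "Suitability analysis",
        "Error injection analysis",
        "Reliability growth model",
        "Formal proof",
    ],
    "efficiency": [
        "Constraint analyses",
        "Evaluation measurement (memory, power and resource)",
        "Performance profiling analysis",
    ],
}

def techniques_for(characteristic: str) -> list:
    """Techniques that measure the given quality characteristic."""
    return TECHNIQUES.get(characteristic.lower(), [])
```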
Execution time:
• Evaluation measurement
Worst-case execution time:
• Evaluation measurement
• System Test
Deadline:
• Evaluation measurement
• System Test
Accuracy - Correctness:
• Requirements and Documentation Analysis
• Accuracy analysis
• Functional Testing (black-box), Unit Test, Regression Test (if possible)
• Functional Tests (white-box) with coverage criteria
Security - Data Encryption:
• System Test
• Code Inspection
Security - Controllability:
• System Test
• Code Inspection
Security - Auditability:
• System Test
• Code Inspection
• Programming Language Facilities (Best Practices)
• Error Manipulation analysis
• Error Injection analysis
• Error recovery
• Reliability growth model
• Formal Proof
Fault Tolerance - Mechanism available:
• Suitability analysis
• Dependability analysis
Fault Tolerance - Mechanism efficiency:
• Error injection analysis
• Programming Language Facilities (Best Practices)
• Fault tolerance analysis
• Reliability growth model
• Formal Proof
Safety - Environment analysis:
• Dependability analysis
• Environment analyses
• System analyses
Safety - Integrity:
• System analyses
Usability - Configurability - Effort to configure:
• Effort to Configure analysis
• Inspection of provided and required interfaces
• Code and component interface inspection (correctness and completeness)
• Analysis of the pre- and post-conditions of the component
• User mental model
Usability - Understandability:
• Documentation analysis (User Guide, architectural analysis, etc.)
Efficiency:
• Resource analysis
• Tests of performance
• Performance optimization
• Performance profiling analysis
• Formal Proof
Efficiency - Energy consumption - Mechanism available:
• Constraint analyses
• Evaluation measurement
• Power consumption analysis
• Tests of performance
• Performance optimization
• Performance profiling analysis
• Formal Proof
Efficiency - Data Memory Utilization - Mechanism available:
• Constraint analyses
• Evaluation measurement
• Memory analysis
• Tests of performance
• Performance optimization
• Performance profiling analysis
• Formal Proof
Efficiency - Program Memory Utilization - Mechanism available:
• Evaluation measurement
• Constraint analyses
Maintainability:
• Code metrics and programming rules
• Inspection of documents
• Static analysis
Changeability - Extensibility:
• Effort for operating
• Extensibility analysis
Changeability - Customizability:
• Customizability analysis
Changeability - Modularity:
• Code metrics and programming rules
Testability - Test suite provided:
• Analysis of the test suite provided (if it exists)
Testability - Extensive component test cases:
• Analysis of the component development process
Testability - Component tests in a specific environment:
• Traceability evaluation
Testability - Proofs of the component tests:
• Component Test Formal Proof
Portability:
• Component execution in specific environments and architectural analysis
• Deployment analyses
• Environment and architectural constraints evaluation
Replaceability - Backward Compatibility:
• Backward compatibility analysis
Flexibility - Mobility:
• Mobility analyses
Flexibility - Configuration capacity:
• Configuration analyses
Reusability - Domain abstraction level:
• Cohesion of the documentation with the source code analysis
• Domain abstraction analysis
Reusability - Architecture compatibility:
• Conformity to programming rules
• Analysis of the component's architecture
• Hardware/Software analysis
Reusability - Modularity:
• Modularity analyses
Reusability - Cohesion:
• Cohesion analyses
Reusability - Coupling:
• Coupling analyses
Reusability - Simplicity:
• Simplicity analyses
improvement of the software process (Basili et al., 1994).
Measurement objectives:
• to assess a project's progress,
• to take corrective action based on this assessment, and
• to evaluate the impact of such action.
Benefits of measurement:
• helps support project planning,
• allows determining the strengths and weaknesses of processes/products, and
• provides a rationale for adopting/refining techniques.
• The Quality Function Deployment approach (Kogure & Akao, 1983),
• The Goal Question Metric approach (Basili et al., 1994), (Basili, 1992), (Basili & Rombach, 1988), (Basili & Selby, 1984), (Basili & Weiss, 1984), and
• The Software Quality Metrics approach (Boehm et al., 1976), (McCall et al., 1977).
In this framework the Goal-Question-Metric (GQM) approach was adopted; it is the same technique proposed for use in ISO/IEC 25000 to track software product properties.
1. Specify the goals for itself and its projects,
2. trace those goals to the data that are intended to define them operationally, and
3. provide a framework for interpreting the data with respect to the goals.

1. Conceptual level - GOAL: a goal is defined for an object (product, process or resource).
2. Operational level - QUESTION: a set of questions is used to characterize the way the assessment/achievement of a specific goal is performed.
3. Quantitative level - METRIC: a set of data is associated with every question in order to answer it in a quantitative way (objective/subjective).

In summary, a GQM model starts with a goal, the goal is refined into several questions, and each question is then refined into metrics, some objective and some subjective.
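The three levels can be sketched as a small data structure (class and field names are ours, not the framework's), using the configurability example from this framework:

```python
# Minimal sketch of the GQM hierarchy: one goal refined into questions,
# each answered by metrics that are either objective or subjective.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    objective: bool  # True = objective, False = subjective

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: List[Question] = field(default_factory=list)

# The Effort-to-Configure example from this framework, encoded in this shape:
goal = Goal(
    purpose="Evaluate the time necessary to configure the component",
    questions=[
        Question(
            text="How much time is needed to configure the component?",
            metrics=[Metric("Time spent to configure correctly", objective=True)],
        )
    ],
)
```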
ii. Evaluation techniques of the EMM, and
iii. Certification process.

(i) Metrics to track the EQM properties
Characteristic: Functionality
Sub-characteristic: Accuracy
Quality attribute: Correctness
Goal: evaluate the percentage of the results that were obtained with precision.
Question: based on the amount of tests executed, how many test results return with precision?

Characteristic: Usability
Sub-characteristic: Configurability
Quality attribute: Effort to Configure
Goal: evaluate the time necessary to configure the component.
Question: how much time is needed to configure the component in order to work correctly in a system?
Metric: time spent to configure correctly (objective metric).
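The Correctness metric above can be computed as a simple percentage. A hedged sketch; the function name and the sample counts are illustrative, not taken from the framework:

```python
# Percentage of executed tests whose results were obtained with
# precision, per the Correctness GQM entry above.
def correctness_percentage(precise_results: int, tests_executed: int) -> float:
    if tests_executed <= 0:
        raise ValueError("no tests executed")
    if not 0 <= precise_results <= tests_executed:
        raise ValueError("precise result count out of range")
    return 100.0 * precise_results / tests_executed

pct = correctness_percentage(47, 50)  # e.g. 47 of 50 tests were precise
```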
Each technique can be measured in different ways and at different levels of complexity, using different tools, methods and processes.
EMM level I
Technique: Coupling, Cohesion, Simplicity, Reusability and Modularity analyses using the Checkstyle tool [2].
Goal: evaluate the internal source code of the component.
Question: is the Checkstyle tool efficient enough to measure those attributes?
Metric: analysis of the results and coverage of the tool.
Interpretation: if the tool can mine these kinds of information from the source code and present them for analysis, it is adequate for evaluating the component. On the other hand, if it is not enough to evaluate some attributes, another tool should be used to complement or substitute it. If the tool is adequate, an analysis of the metrics it collects can be used to characterize those attributes of the component. The idea is that the component should have low coupling, high cohesion and high modularity.

[2] Checkstyle – http://checkstyle.sourceforge.net
The idea is to obtain feedback from those metrics in order to improve the activities and steps, assuring the efficiency and efficacy of the process.
Goal: adequately evaluate the embedded software component.
Question: could the evaluation team evaluate everything they planned to execute using the documents developed during the process activities?
Metric: total documented functionalities / total component functionalities (or total measurements accomplished).
Interpretation: 0 ≤ x ≤ 1; the closer to 1, the better.

Component Certification Process
Goal: analyze the usability of the templates provided.
Question: has the template helped during the certification development?
The evaluation team should define as many metrics as they find interesting in
framework for software reuse (Almeida et al., 2004).
It defines a set of activities to guide the evaluation team during the component evaluation, and
it is repeatable and reproducible: each activity contains a well-detailed description of
• inputs and outputs,
• mechanisms to execute, and
• controls.
A set of works from the literature, including processes for software product evaluation and processes for software component assessment, aided the definition of this process (McCall et al., 1977), (Boegh et al., 1993), (Beus-Dukic et al., 2003), (Comella-Dorda et al., 2002), (Ross, 1997).
The evaluation team is mainly responsible for executing this process and should be carefully defined. This information is important in the case that the component is approved, and even more so if the component is rejected.
• short time to market, and
• high quality.
Assessment and evaluation of software components has become a compulsory and crucial part of any CBSD lifecycle.
To properly enable the evaluation of embedded software components, supplying the real necessities of embedded system design (building systems fast, cheap and with high quality), an Embedded Software Component Quality verification framework was proposed, addressing the concerns raised by the CBD approach.
The experimental study follows five activities:
• Definition (problem, objective and goals),
• Planning (design, instrumentation and threats),
• Operation (measurements are collected),
• Analysis and Interpretation (data are analyzed and evaluated), and
• Presentation and Packaging (results are presented and packaged).
Here, the Definition and Planning activities are presented; the complete experimental study will be accomplished and described next year.
Analyze the capacity to evaluate the quality of embedded software components, for the purpose of evaluating the embedded software component quality verification framework, with respect to its efficiency, from the point of view of researchers and software/quality engineers (customers, evaluators), in the context of the embedded software component quality area.
Training - the training of the subjects in the process will be conducted in a classroom at the university.
Pilot Project - before the study, a pilot project will be conducted, aiming to detect problems and improve the planned material.
Selection of Subjects - ten post-graduate students at UFPE were selected by convenience sampling.
Subjects - chosen according to their skills and technical knowledge to evaluate embedded software components.
Instrumentation - the subjects will receive questionnaires about their education, experience and satisfaction using the framework, and