The Role of Software Engineering in Evolution

The practical use of software metrics is hampered by the lack of proper thresholds; in practice, thresholds are derived from expert opinion and a number of empirical observations. Software metrics have been around for as long as software engineering itself. Well-known source code metrics include lines of code, the McCabe complexity metric, and the Chidamber-Kemerer suite of object-oriented metrics. Metrics are regarded as a tool that can help develop and maintain software. For example, metrics have been proposed to identify places in the source code where problems are likely to appear, so that maintenance resources can be applied there effectively. Tracking metric values helps us assess progress during development and control the quality of the software during maintenance. Other applications of metrics include comparing or rating the quality of software and reaching an acceptance agreement between the consumer and the producer. [1]

Barakli and Kilic (2015) address a different domain: due to the rapid increase in electricity demand and the deregulation of electricity markets, energy networks are usually run close to their maximum capacity to transmit the needed power. Furthermore, operators have to run the system so that its security and transient stability constraints are satisfied under credible contingencies. The security and transient stability constrained optimal power flow (STSCOPF) problem can be viewed as an extended OPF problem with additional line loading and rotor angle inequality constraints. Their paper presents a new approach to the STSCOPF solution using a chaotic artificial bee colony (CABC) algorithm based on chaos theory.
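
To make the source code metrics named at the start of this section concrete, the sketch below counts non-blank lines of code and approximates McCabe's cyclomatic complexity by counting decision points in a Python abstract syntax tree. The choice of AST node types is an illustrative assumption, not a reference implementation of the metric.

```python
import ast

# Node types treated as decision points (an assumption; extended McCabe also
# counts boolean operators, so ast.BoolOp is included here).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler, ast.BoolOp)

def lines_of_code(source: str) -> int:
    """Count non-blank, non-comment physical lines."""
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

def cyclomatic_complexity(source: str) -> int:
    """McCabe complexity = number of decision points + 1."""
    tree = ast.parse(source)
    return sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree)) + 1

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if 10 < x < 100:
            return "medium"
    return "other"
'''
print(lines_of_code(sample), cyclomatic_complexity(sample))  # 7 lines, complexity 4
```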

The proposed algorithm is tested on the IEEE 30-bus and New England 39-bus test systems. The results are compared with those of previous studies in the literature, and the comparison demonstrates the validity and effectiveness of the proposed method. [2] Another paper proposes an improved hierarchical model for assessing high-level design quality attributes in object-oriented designs. In this model, structural and behavioral design properties of classes, objects, and their relationships are evaluated using a suite of object-oriented metrics. The model relates the design properties to the high-level quality attributes using empirical and anecdotal information, weighting each property-to-attribute link according to its significance. The model is validated by comparing its results against expert evaluations on several large commercial object-oriented systems. An important feature of the model is that it can easily be modified to include different links and weights, providing a practical quality assessment tool that can be adapted to a variety of demands. [3]
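
As a hedged illustration of such a hierarchical model, the snippet below computes high-level quality attributes as weighted sums of lower-level design-property scores. The property names, weights, and links are invented for the example and are not the ones defined in [3].

```python
# Illustrative hierarchical quality model: attributes = weighted sums of
# design properties. All names and numbers below are hypothetical.
design_properties = {        # normalized scores measured on a design (assumed)
    "coupling": 0.7,
    "cohesion": 0.6,
    "abstraction": 0.4,
    "messaging": 0.5,
    "design_size": 0.8,
}

# Each quality attribute is a set of (property, weight) links; the weights can
# be re-tuned, which is the adaptability such a model advertises.
quality_links = {
    "reusability": {"coupling": -0.25, "cohesion": 0.25,
                    "messaging": 0.25, "design_size": 0.25},
    "flexibility": {"coupling": -0.25, "abstraction": 0.5, "messaging": 0.25},
}

def assess(properties, links):
    return {attr: round(sum(w * properties[p] for p, w in weights.items()), 3)
            for attr, weights in links.items()}

print(assess(design_properties, quality_links))
```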

A large amount of Java software has been written since the language entered widespread use more than ten years ago, yet little is known about the structure of Java programs: how methods are grouped into classes and then into packages, how packages are linked to one another, or how inheritance and composition are used to put these programs together. An in-depth study was conducted to characterize the structure of Java programs: a number of Java programs were collected and their structural attributes were measured. The evidence indicated that some relationships follow power laws while others do not, and some of the observed variation appeared to be related to the characteristics of the application itself. The study helps researchers investigate how and why the observed structural relationships arise, what they predict, and how they can be managed. [4]
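
One way such power-law claims are checked is by fitting a straight line to a log-log plot of a structural attribute's frequency distribution. The sketch below does this for a made-up sample of class in-degrees; the data and the simple least-squares fit are illustrative only, not the study's methodology.

```python
# Hypothetical check of a power-law-like relationship: fit log(frequency)
# against log(in-degree) and read off the exponent from the slope.
import math
from collections import Counter

in_degrees = [1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 8, 13, 21, 34]  # assumed sample

freq = Counter(in_degrees)                       # degree -> number of classes
xs = [math.log(k) for k in sorted(freq)]
ys = [math.log(freq[k]) for k in sorted(freq)]

# Ordinary least squares for log(freq) = slope * log(degree) + intercept.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(f"fitted exponent ~ {slope:.2f}, intercept {intercept:.2f}")
```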

This paper presents a method for quantitative risk assessment in epidemiological studies investigating threshold effects. A simple logistic regression model is used to describe the relationship between a binary response variable and a continuous risk factor. Benchmark values of the risk factor can be calculated as nonlinear functions of the logistic regression coefficients, and standard errors and confidence intervals of these benchmark values are derived by means of the multivariate delta method. The proposed approach is compared with the threshold model of Ulm (1991) for assessing threshold values in epidemiological studies. [5] Many papers have investigated the links between design metrics and fault detection in object-oriented software, and some of these studies have shown that such models can help predict faults in a software product. In practice, however, prediction models are built on certain products in order to be used on subsequent software development projects. The paper introduces a novel technique for building fault-proneness models whose functional form is not known in advance: MARS (multivariate adaptive regression splines). The advantage of the model is that a model built on one system can accurately rank classes within another system by their fault proneness. The disadvantage is that, because of differences between systems, the predicted fault probabilities are not representative of the system being predicted. However, the cost-benefit model described in the paper shows that the MARS fault-proneness model is viable from an economic point of view. [6]
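
A hedged sketch of that idea: in a fitted logistic model, the benchmark value of the risk factor at an acceptable risk level p0 is x* = (logit(p0) − β0)/β1, and the delta method turns the coefficient covariance matrix into a standard error for x*. The coefficients and covariance below are made-up numbers, not results from [5].

```python
import numpy as np

b0, b1 = -4.0, 0.05                 # fitted intercept and slope (assumed)
cov = np.array([[0.25, -0.002],     # covariance of (b0, b1) (assumed)
                [-0.002, 0.0001]])
p0 = 0.10                           # acceptable risk level

logit_p0 = np.log(p0 / (1 - p0))
x_star = (logit_p0 - b0) / b1       # benchmark value of the risk factor

# Gradient of x_star with respect to (b0, b1), used by the delta method.
grad = np.array([-1.0 / b1, -(logit_p0 - b0) / b1 ** 2])
se = float(np.sqrt(grad @ cov @ grad))

ci = (x_star - 1.96 * se, x_star + 1.96 * se)
print(f"benchmark value {x_star:.1f}, 95% CI ({ci[0]:.1f}, {ci[1]:.1f})")
```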

Cross-project defect prediction is of great interest because (i) it allows defects to be predicted in projects for which little historical data is available, and (ii) it encourages the production of generalizable prediction models. However, ongoing research suggests that cross-project prediction is challenging and, because projects differ so widely, prediction accuracy is often poor. The paper proposes a novel method for cross-project defect prediction based on a multi-objective logistic regression model built using a genetic algorithm. Rather than a single predictor, it provides the software engineer with a set of predictors that trade off the number of likely defect-prone artifacts against the amount of code (LOC) to be analyzed. Results on 10 datasets indicate that the multi-objective approach is superior to the single-objective approach. [7] Efficient prediction of defect-prone software modules helps developers focus quality assurance activities and allocate resources to other project tasks. Support vector machines (SVMs) have been used successfully to solve both classification and regression problems in many applications. The paper evaluates the ability of SVMs to predict defect-prone software modules and compares their prediction performance against other models on four NASA datasets. The results suggest that the prediction performance of the SVM is better than that of the compared models. [8]
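
A minimal sketch of such an SVM-based defect predictor, assuming scikit-learn and synthetic module metrics in place of the NASA data; the pipeline scales the metrics and reports a cross-validated F1 score.

```python
# Hedged sketch, not the paper's exact setup: SVM on static code metrics.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # per-module metrics (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 1).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print("mean F1 across folds:", scores.mean().round(3))
```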

Identifying fault-prone modules early in the software development process enables effective detection of defects. Such predictions are especially beneficial for large-scale systems, where testing experts need to focus their attention and resources on problem areas of the system under development. This paper presents a novel methodology for predicting fault-prone modules based on random forests, an extension of decision tree learning. The methodology generates hundreds of decision trees, each built from a subset of the training data, thereby increasing the accuracy of fault prediction. The proposed methodology predicted fault-prone modules more accurately than logistic regression, discriminant analysis, and the algorithms in two machine learning software packages, and the improvement in performance was statistically significant. Classification accuracy was also noticeably better on the larger datasets. [9] Another paper investigates a general method for the automatic generation of decision trees. The main objective is to provide insights through in-depth experimental characterization and evaluation of decision trees for a specific problem domain, namely software resource data analysis. The purpose of the decision trees is to identify the classes of objects (modules) that had high development effort. In a study of data from a NASA production environment, the decision trees correctly identified 79.3% of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4% of such modules. Sample decision trees are also included. [10]
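
In the same spirit, here is a hedged sketch of a random-forest fault predictor on synthetic per-module metrics; the data, features, and hyperparameters are placeholders, not the study's setup.

```python
# A random forest votes over many trees grown on bootstrap samples of the
# training metrics; everything below is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                      # per-module static metrics
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.7, size=500) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=1, stratify=y)
forest = RandomForestClassifier(n_estimators=300, random_state=1)
forest.fit(X_train, y_train)
print(classification_report(y_test, forest.predict(X_test)))
```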

The paper describes a graph-theoretic complexity measure and demonstrates how it can be used to manage and control program complexity. It first explains how graph theory is applied and then interprets the graph concepts in programming terms. Control graphs of several programs are presented to illustrate the correlation between intuitive complexity and the graph-theoretic measure. Several properties of the measure are proved, showing that complexity is independent of physical size and depends only on the decision structure of the program. [11] Selecting software metrics for building software quality prediction models is a search-based software engineering problem, and searching exhaustively for the best metric subset is not feasible with limited project resources. Defect prediction models help project managers improve software quality, but the intended usefulness of a fault-proneness prediction model is only as good as the quality of the underlying software measurement data. This study addresses the selection of metrics for defect prediction models in the context of software quality estimation. A hybrid selection approach is used in which feature ranking is first applied to reduce the search space, followed by feature subset selection. Seven different feature ranking techniques were evaluated, while four different feature subset selection approaches were considered. The case study involves software metrics and defect data collected from a large real-world software system. The results show that several feature ranking techniques performed similarly, and the automatic hybrid search algorithm performed best among the feature subset selection methods. [12]
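
A small sketch of such a two-stage hybrid selection, assuming scikit-learn: a filter-style ranking first keeps the top-scoring metrics, then a greedy wrapper search chooses a subset from the survivors. The specific ranker, wrapper, and data are illustrative stand-ins for the techniques evaluated in the study.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))                     # 20 candidate software metrics
y = (X[:, 2] - X[:, 7] + rng.normal(scale=0.8, size=300) > 0).astype(int)

# Stage 1: filter-style ranking keeps the 8 highest-scoring metrics.
ranker = SelectKBest(f_classif, k=8).fit(X, y)
X_ranked = ranker.transform(X)

# Stage 2: greedy wrapper search picks a small subset from the survivors.
wrapper = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                    n_features_to_select=3, direction="forward",
                                    cv=5).fit(X_ranked, y)
print("kept columns (within the ranked set):", wrapper.get_support(indices=True))
```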

Maintenance of an object-oriented software system must be supported by adequate means of quantification. Although metrics are now used extensively, when used in isolation they are often too fine-grained to comprehensively quantify a design. To help developers and maintainers detect local design problems, a novel mechanism called a detection strategy has been proposed; it formulates metrics-based rules that capture deviations from good design principles and heuristics. Using such strategies, an engineer can directly locate a design flaw rather than having to infer the real design problem from a large set of abnormal metric values. The paper provides ten detection strategies and validates the approach experimentally on multiple large-scale case studies. [13] A novel algorithmic method is also introduced for calculating thresholds for a metric set, drawing on machine learning and data mining concepts. A data-driven methodology is used to calculate thresholds that simplify complex classification models and optimize their efficiency; it does not depend on the metric set, the programming language, the paradigm, or the abstraction level. Four case studies were conducted on open source software metric sets for C functions, C++ methods, C# methods, and Java classes. [14]
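
Below is a hypothetical example of a detection strategy in this style, combining several metrics and thresholds into a single rule that flags a suspect class directly; the metric names and cutoff values are assumptions for illustration, not the rules validated in [13].

```python
# Hypothetical "god class" style detection strategy: flag classes that
# centralize behavior and reach into other classes' data. Thresholds are
# illustrative assumptions only.
def looks_like_god_class(metrics: dict) -> bool:
    return (metrics["weighted_method_count"] > 47        # very high complexity
            and metrics["access_to_foreign_data"] > 5    # uses many foreign attributes
            and metrics["tight_class_cohesion"] < 0.33)  # low cohesion

suspect = {"weighted_method_count": 61, "access_to_foreign_data": 9,
           "tight_class_cohesion": 0.2}
print(looks_like_god_class(suspect))   # True -> worth a design review
```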

Defective software modules cause software failures and create substantial maintenance effort. Efficient and effective defect prediction models can help reduce the number of defective modules and allow developers to focus on quality assurance by using resources more efficiently. Some of these models use static measures obtained from source code. [15] Software product quality can be enhanced significantly if we have a good knowledge and understanding of the potential faults therein. This paper describes a study to build predictive models that identify the parts of the software with a high probability of faults. We consider the effect of thresholds of object-oriented metrics on fault proneness and build predictive models based on the threshold values of the metrics used. Predicting fault-prone classes in the earlier phases of the software development life cycle helps software developers allocate resources efficiently. In this paper, we use a statistical model derived from logistic regression to calculate threshold values of the Chidamber and Kemerer object-oriented metrics.

Thresholds help alert developers to classes that fall outside a specified risk level. In this way, threshold values can be used to divide classes into two levels of risk, low and high. We show threshold effects at various risk levels and validate the use of these thresholds on a public-domain proprietary dataset, KC1, obtained from NASA, and on two open source Promise datasets, IVY and JEdit, using various machine learning and data mining classifiers. Inter-project validation has also been carried out on three different open source datasets, Ant, Tomcat, and Sakura. This provides practitioners and researchers with well-formed theories and generalizable results. The results show that the proposed threshold methodology works well for projects of a similar nature or with similar characteristics. [16] A novel search-based approach to software quality modeling with multiple software project repositories is also presented. Training a software quality model with only one software measurement and defect dataset may not effectively capture the quality trends of the development organization; including additional software projects during training provides a cross-project perspective on software quality modeling and prediction. The genetic-programming-based approach includes three strategies for modeling with multiple software projects: the Baseline Classifier, the Validation Classifier, and the Validation-and-Voting Classifier. The latter is shown to provide better generalization and more robust software quality models, based on a case study of software metrics and defect data from seven real-world systems.
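
A hedged sketch of how such derived thresholds might be applied to split classes into low- and high-risk groups; the metric names and cutoffs below are assumed placeholders, not the values reported in [16].

```python
# A class is flagged high-risk if any metric exceeds its threshold value
# computed at the chosen risk level. Values are hypothetical.
thresholds = {"CBO": 9, "WMC": 20, "RFC": 40}

classes = {
    "OrderService": {"CBO": 12, "WMC": 25, "RFC": 38},
    "MoneyFormatter": {"CBO": 3, "WMC": 6, "RFC": 11},
}

def risk_level(metrics, cutoffs):
    return "high" if any(metrics[m] > t for m, t in cutoffs.items()) else "low"

for name, metrics in classes.items():
    print(name, "->", risk_level(metrics, thresholds))
```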

A second case study considers 17 different (non-evolutionary) machine learners for modeling with multiple software datasets. Both case studies use a similar majority-voting approach for predicting the fault-proneness class of program modules. It is shown that the total cost of misclassification of the search-based software quality models is consistently lower than that of the non-search-based models. This study provides clear guidance to practitioners interested in exploiting their organization's software measurement data repositories for improved software quality modeling. [17] This paper provides a systematic review of previous software fault prediction studies with a specific focus on metrics, methods, and datasets. The review covers 74 software fault prediction papers in 11 journals and several conference proceedings. According to the results, the usage of public datasets has increased significantly and the usage of machine learning algorithms has increased slightly since 2005. Method-level metrics are still the dominant metrics in the fault prediction research area, and machine learning algorithms are still the most popular prediction methods. Researchers working on software fault prediction should continue to use public datasets and machine learning algorithms to build better fault predictors. The usage of class-level metrics, however, remains below acceptable levels; they should be used much more than they are now in order to predict faults earlier, in the design phase of the software life cycle. [18]
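
A minimal sketch of a validation-and-voting style setup, assuming scikit-learn: one classifier is trained per project repository, and a module from a new project is labeled fault-prone only if a majority of the per-project models agree. Data and model choices are synthetic placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

def make_project():
    """Synthetic stand-in for one project's metrics and fault labels."""
    X = rng.normal(size=(150, 6))
    y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.6, size=150) > 1).astype(int)
    return X, y

projects = [make_project() for _ in range(7)]            # seven training systems
models = [DecisionTreeClassifier(max_depth=4, random_state=i).fit(X, y)
          for i, (X, y) in enumerate(projects)]

X_new = rng.normal(size=(5, 6))                          # modules from a new project
votes = np.stack([m.predict(X_new) for m in models])     # shape (n_models, n_modules)
majority = (votes.mean(axis=0) > 0.5).astype(int)        # fault-prone if most agree
print("fault-prone predictions:", majority)
```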

Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed, and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles.

An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement. [19] The history of software metrics is almost as old as the history of software engineering. Yet, the extensive research and literature on the subject have had little impact on industrial practice. This is worrying given that the major rationale for using metrics is to improve the software engineering decision making process from a managerial and technical perspective. Industrial metrics activity is invariably based around metrics that have been around for nearly 30 years (notably Lines of Code or similar size counts, and defect counts). While such metrics can be considered as massively successful given their popularity, their limitations are well known, and misapplications are still common. The major problem is in using such metrics in isolation. We argue that it is possible to provide genuinely improved management decision support systems based on such simplistic metrics, but only by adopting a less isolationist approach. Specifically, we feel it is important to explicitly model: (a) cause and effect relationships and (b) uncertainty and combination of evidence. Our approach uses Bayesian Belief nets, which are increasingly seen as the best means of handling decision-making under uncertainty. The approach is already having an impact in Europe. [26]
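
Below is a toy example of the kind of evidence combination a Bayesian belief net supports, with an invented three-node structure (complexity influences defects introduced; defects and testing quality determine residual defects) and made-up probability tables. It is not one of the published models, only an illustration of inference by enumeration.

```python
# Toy Bayesian belief net with binary nodes; inference by brute-force enumeration.
from itertools import product

p_complex = {True: 0.3, False: 0.7}                       # P(Complexity)
p_defects = {True: {True: 0.8, False: 0.2},               # P(ManyDefects | Complexity)
             False: {True: 0.3, False: 0.7}}
# P(HighResidualDefects | ManyDefects, GoodTesting)
p_residual = {(True, True): 0.3, (True, False): 0.8,
              (False, True): 0.05, (False, False): 0.2}

def posterior_residual_high(good_testing: bool) -> float:
    num = den = 0.0
    for cx, many in product([True, False], repeat=2):
        joint = p_complex[cx] * p_defects[cx][many]
        num += joint * p_residual[(many, good_testing)]
        den += joint
    return num / den

print("P(high residual defects | poor testing) =", round(posterior_residual_high(False), 3))
print("P(high residual defects | good testing) =", round(posterior_residual_high(True), 3))
```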

Software testing is a well-defined phase of the software development life cycle. Functional ('black box') testing and structural ('white box') testing are two methods of test case design commonly used by software developers. A lesser-known method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object-oriented programs, a methodology is proposed for identifying risk-prone classes. Risk-based testing is a highly effective technique for finding and fixing the most important problems as quickly as possible. Risk can be characterized by a combination of two factors: the severity of a potential failure event and the probability of its occurrence. Risk-based testing focuses on analyzing the software and deriving a test plan weighted toward the areas most likely to experience a problem with the highest impact [McMahon]. This may look like a daunting task, but once it is broken down into its parts, a systematic approach can be employed to make it manageable.
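
A simple sketch of that breakdown: risk exposure is the product of failure probability (which could come from a complexity-based model) and failure severity, and classes are ordered by it to weight the test plan. All names and numbers are invented for illustration.

```python
# Rank classes by risk exposure = P(failure) * severity.
classes = [
    {"name": "PaymentGateway", "p_failure": 0.30, "severity": 9},
    {"name": "ReportFormatter", "p_failure": 0.40, "severity": 3},
    {"name": "AuthManager", "p_failure": 0.15, "severity": 10},
    {"name": "ColorTheme", "p_failure": 0.50, "severity": 1},
]

for c in classes:
    c["risk"] = c["p_failure"] * c["severity"]

for c in sorted(classes, key=lambda c: c["risk"], reverse=True):
    print(f'{c["name"]:16} risk={c["risk"]:.2f}')
```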

As software becomes more and more sophisticated, industry has begun to place a premium on software reliability. The telecommunications industry is no exception. Consequently, software reliability is a strategic business weapon in an increasingly competitive marketplace. In response to these concerns, BNR, Nortel, and Bell Canada developed the Enhanced Measurement for Early Risk Assessment of Latent Defects (Emerald), a decision support system designed to improve telecommunications software reliability. Emerald efficiently integrates software measurements, quality models, and delivery of results to the desktop of software developers. We have found that Emerald not only improves software reliability, but also facilitates the accurate correction of field problems. Our experiences developing Emerald have also taught us some valuable lessons about the implementation and adoption of this type of software tool.

Predicting the fault-proneness of program modules when fault labels are unavailable is a practical problem frequently encountered in the software industry. When fault data for a previous software version is not available, supervised learning approaches cannot be applied, creating a need for new methods, tools, or techniques. This study proposes a software fault prediction approach for this challenging problem based on clustering and metrics thresholds, and explores it on three datasets collected from a Turkish white-goods manufacturer developing embedded controller software. Experiments reveal that unsupervised software fault prediction can be automated and that reasonable results can be produced with techniques based on metrics thresholds and clustering. The results demonstrate the effectiveness of metrics thresholds and show that the standalone application of thresholds (one-stage) is currently easier than the clustering-plus-thresholds (two-stage) approach, because the number of clusters must be chosen heuristically in the clustering-based method.
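
A hedged sketch of the two-stage idea, assuming scikit-learn: modules are clustered by their metrics, and a whole cluster is labeled fault-prone if its centroid violates any metric threshold. The metrics, thresholds, cluster count, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Columns: lines of code, cyclomatic complexity, unique operands (assumed metrics).
X = np.vstack([rng.normal([80, 6, 20], [20, 2, 5], size=(60, 3)),
               rng.normal([300, 18, 70], [60, 4, 15], size=(20, 3))])

thresholds = np.array([200, 10, 50])                 # assumed metric cutoffs

kmeans = KMeans(n_clusters=3, n_init=10, random_state=4).fit(X)
cluster_is_risky = (kmeans.cluster_centers_ > thresholds).any(axis=1)
module_labels = cluster_is_risky[kmeans.labels_]     # fault-prone flag per module
print("predicted fault-prone modules:", int(module_labels.sum()), "of", len(X))
```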

Despite the importance of software metrics and the large number of proposed metrics, they have not yet been widely applied in industry. One reason might be that, for most metrics, the range of expected values, i.e., reference values, is not known. This paper presents the results of a study on the structure of a large collection of open-source programs developed in Java, of varying sizes and from different application domains. The aim of the work is the definition of thresholds for a set of object-oriented software metrics, namely LCOM, DIT, coupling factor, afferent couplings, number of public methods, and number of public fields. An experiment was carried out to evaluate the practical use of the proposed thresholds, and its results indicate that the thresholds can support the identification both of classes which violate design principles and of well-designed classes. The method used to derive the thresholds can be applied to other software metrics in order to find their reference values. [23] A comprehensive metrics validation methodology is proposed that has six validity criteria, which support the quality functions of assessment, control, and prediction, where quality functions are activities conducted by software organizations for the purpose of achieving project quality goals. Six criteria are defined and illustrated: association, consistency, discriminative power, tracking, predictability, and repeatability. The author shows that nonparametric statistical methods such as contingency tables play an important role in evaluating metrics against the validity criteria. Examples emphasizing the discriminative power criterion are presented. A metrics validation process is defined that integrates quality factors, metrics, and quality functions. [24]
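
Reference values like those derived in [23] are often summarized from the distribution of a metric over a large corpus of classes. The sketch below bands a synthetic coupling distribution at assumed percentiles; the percentile choices are illustrative and not the statistical method used in the paper.

```python
# Derive "good / regular / bad" reference bands from a corpus distribution.
import numpy as np

rng = np.random.default_rng(5)
coupling_between_objects = rng.lognormal(mean=1.2, sigma=0.8, size=5000)  # fake corpus

good, regular = np.percentile(coupling_between_objects, [70, 90])
print(f"CBO reference values: good <= {good:.1f}, regular <= {regular:.1f}, "
      f"bad > {regular:.1f}")
```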

With the increasing use of object-oriented methods in new software development, there is a growing need to both document and improve current practice in object-oriented design and development. In response to this need, a number of researchers have developed various metrics for object-oriented systems as proposed aids to the management of these systems. In this research, an analysis of a set of metrics proposed by Chidamber and Kemerer (1994) is performed in order to assess their usefulness for practicing managers. First, an informal introduction to the metrics is provided by way of an extended example of their managerial use. Second, exploratory analyses of empirical data relating the metrics to productivity, rework effort and design effort on three commercial object-oriented systems are provided. The empirical results suggest that the metrics provide significant explanatory power for variations in these economic variables, over and above that provided by traditional measures, such as size in lines of code, and after controlling for the effects of individual developers.
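
A sketch of the kind of exploratory analysis described above, using synthetic data: regress an economic variable (rework effort) on a size measure alone and then on size plus object-oriented metrics, and compare the explained variance. The data and coefficients are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 120
loc = rng.normal(500, 150, n)                       # size in lines of code
cbo = rng.normal(8, 3, n)                           # coupling between objects
wmc = rng.normal(15, 5, n)                          # weighted methods per class
effort = 0.02 * loc + 3.0 * cbo + 1.5 * wmc + rng.normal(0, 10, n)

def r_squared(X, y):
    """Ordinary least squares fit with intercept; return R^2."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("size only:         R^2 =", round(r_squared(loc.reshape(-1, 1), effort), 3))
print("size + OO metrics: R^2 =", round(r_squared(np.column_stack([loc, cbo, wmc]), effort), 3))
```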
