SCG News

The Moldable Editor

Aliaksei Syrel. The Moldable Editor. Bachelor’s thesis, University of Bern, February 2018. Details.

Abstract

We present a scalable and moldable text editor modelled as a single composition tree of visual elements. It allows us to mix text and graphical components in a transparent way and treat them uniformly. As a result, we are able to augment code with views by using special adornment attributes that are applied to text by stylers as part of the syntax highlighting process. We validate the model by implementing a code editor capable of manipulating large pieces of text; the Transcript, a logging tool able to display non-textual messages; the Connector, an example dependencies browser; and the Documenter, an interactive and live editor of Pillar markup documents.
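To make the idea of adornment attributes concrete, here is a minimal sketch in Python (the thesis's editor itself is built in Pharo, and all names below are invented for illustration): a styler walks the text during highlighting and attaches attributes whose visual element is constructed lazily.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Adornment:
        # A text attribute spanning [start, end) that can build a visual
        # element lazily, so views are only created when the span is shown.
        start: int
        end: int
        element: Callable[[], object]

    @dataclass
    class Text:
        string: str
        adornments: List[Adornment] = field(default_factory=list)

    def style(text: Text) -> None:
        # A toy "styler": while highlighting, augment every TODO marker
        # with a (hypothetical) embedded widget, mixing text and graphics.
        index = text.string.find("TODO")
        while index >= 0:
            text.adornments.append(
                Adornment(index, index + 4, element=lambda: "<todo-icon widget>"))
            index = text.string.find("TODO", index + 1)

    code = Text('x := 42. "TODO: check overflow"')
    style(code)
    print([(a.start, a.end) for a in code.adornments])  # [(10, 14)]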

Posted by scg at 13 February 2018, 2:15 pm

Issue Report Assessment — Assessment of Issue Report Quality and Class through Natural Language Processing

Simon Curty. Issue Report Assessment — Assessment of Issue Report Quality and Class through Natural Language Processing. Bachelor’s thesis, University of Bern, February 2018. Details.

Abstract

Issue reports are a crucial aspect of software development. They enable collaboration and the communication of faults and tasks. However, issue reports are only helpful if they provide meaningful information, which is often not the case. Such low-quality issue reports induce a multitude of problems: developers lose time figuring out the issue, both the time needed to complete the issue and its priority are harder to estimate, and assigning issues to the right developer is more difficult. To alleviate these negative impacts, a triager commonly takes on the responsibility of assessing issue reports. However, this process is time-consuming and cannot fill in missing information. Since issue reports provide insight into a project's changes and faults over time, they are an often-used data source for bug prediction, that is, the inference of knowledge about past and future bugs. But the quality of such predictions depends on the quality of the source data. We address two factors that strongly affect the quality of issue reports: (1) Issue reports have an assigned type. When the assigned type does not reflect the true nature of the report, it is said to be misclassified; an issue report classified as a bug may in reality be a feature request. This misclassification introduces bias in bug prediction and makes it harder to assign an issue to the right developer. (2) Bug reports are commonly written by hand, and the reporter does not always include important information. The quality of the content of a bug report impacts the time a developer needs to spend identifying the actual issue. To tackle the misclassification problem, we propose an approach to categorize issue reports. Our classifier achieves an accuracy of 82.9% when classifying issue reports into bugs and non-bugs. For multi-class classification of issue reports by type, our model achieves an accuracy of 74.4%. Thus, our model can reliably validate datasets used for bug prediction and distinguish between issue types. To address the second problem, we propose an approach that estimates the quality of bug reports and assigns them a score. This Quality Estimator is capable of giving improvement suggestions, potentially helping reporters write more helpful bug reports. To showcase real-world applications of our models, we integrated both the classifier and the Quality Estimator into a tool for issue report assessment. The tool is implemented as a browser extension and lets the user get feedback about the type of an issue report with one click. It can also provide suggestions on how to improve a bug report based on the information already provided.
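As a rough illustration of the bug/non-bug task described above, here is a hedged sketch using a generic TF-IDF plus logistic-regression baseline in Python; the thesis's actual model, features, and training data are not given in this abstract, so everything below is an assumed stand-in on toy data.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reports = [
        "App crashes with NullPointerException when saving",  # bug
        "Please add dark mode to the settings page",          # feature request
        "Login fails with 500 error after password reset",    # bug
        "Improve the wording of the onboarding tutorial",     # enhancement
    ]
    labels = ["bug", "non-bug", "bug", "non-bug"]

    # Vectorize the report text and fit a linear classifier in one pipeline.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(reports, labels)

    print(clf.predict(["Crash on startup when cache is corrupted"]))  # likely 'bug'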

Posted by scg at 8 February 2018, 6:15 pm

Bug Prediction with Neural Nets using Regression- and Classification-based Approaches

Sébastien O. Broggi. Bug Prediction with Neural Nets using Regression- and Classification-based Approaches. Bachelor’s thesis, University of Bern, January 2018. Details.

Abstract

Bugs can often be hard to identify, and developers spend a large amount of time locating and fixing them. Bug prediction strives to identify code defects using machine learning and statistical analysis, thereby decreasing the time spent on bug localization. With bug prediction, awareness of bugs can be increased and software quality can be improved in significantly less time. Machine learning models are used in most modern bug prediction tools, and with recent advances in machine learning, new models have arisen that further improve bug prediction performance. In our studies, we test the performance of Doc2Vec, a current model used to vectorize plain text, on source code to perform classification. Instead of relying on code metrics, we analyze and vectorize plain-text source code and try to identify bugs based on similarity to learned paragraph vectors. Testing two different implementations of the Doc2Vec model, we find that no usable results can be achieved by using plain-text classification models for bug prediction. Even after abstracting the code and applying parameter tuning to our model, all experiments deliver a constant 50% accuracy, so none of the models achieves any learning. The experiments clearly show that code should not be treated as plain text; models should instead be given more code-specific information, such as metrics about the code. Our second set of experiments consists of a three-layer feed-forward neural network that performs classification- and regression-based approaches on code metrics, using datasets that contain a discrete number of bugs as the response variable. There have already been many successful experiments performing classification based on code metrics. In our studies we compare the performance of a standard regression model and a standard multi-class classification model to the models "classification by regression" (CbR) and "regression by classification" (RbC). In the RbC model, we use the output of classification to predict a number of bugs and then calculate the root-mean-square error (RMSE). In the CbR model, we use the output of regression to perform binary classification and calculate the area under the receiver operating characteristic curve (ROC AUC) to compare the results. In our experiments we find that a neural network delivers better results when using the CbR model on an estimated defect count compared to standard multi-class classification. We also suggest that the RMSE can be significantly decreased by using the RbC model compared to standard regression.
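The two hybrid scoring schemes can be made concrete with a small sketch. This is not the thesis's network; only the CbR and RbC evaluation idea is shown, with invented predictions and standard scikit-learn metrics.

    import numpy as np
    from sklearn.metrics import roc_auc_score, mean_squared_error

    true_counts = np.array([0, 0, 1, 3, 0, 2])  # defects per file (toy data)

    # Classification by regression (CbR): threshold a regressor's predicted
    # defect count into a binary 'buggy?' label and score it with ROC AUC.
    regressed = np.array([0.1, 0.4, 0.9, 2.7, 0.2, 1.5])
    auc = roc_auc_score(true_counts > 0, regressed)

    # Regression by classification (RbC): treat a classifier's predicted
    # class (a discrete bug count) as a numeric prediction, score with RMSE.
    classified = np.array([0, 1, 1, 3, 0, 2])
    rmse = np.sqrt(mean_squared_error(true_counts, classified))

    print(f"CbR ROC AUC: {auc:.2f}, RbC RMSE: {rmse:.2f}")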

Posted by scg at 6 February 2018, 7:15 pm

Polymorphism in the Spotlight: Studying its Prevalence in Java and Smalltalk

Nevena Milojković, Andrea Caracciolo, Mircea Lungu, Oscar Nierstrasz, David Röthlisberger, and Romain Robbes. Polymorphism in the Spotlight: Studying its Prevalence in Java and Smalltalk. In Proceedings of the 2015 IEEE 23rd International Conference on Program Comprehension, p. 186–195, IEEE Press, 2015. Details.

Abstract

Subtype polymorphism is a cornerstone of object-oriented programming. By hiding variability in behavior behind a uniform interface, polymorphism decouples clients from providers and thus enables genericity, modularity and extensibility. At the same time, however, it scatters the implementation of the behavior over multiple classes, thus potentially hampering program comprehension. The extent to which polymorphism is used in real programs and the impact of polymorphism on program comprehension are not very well understood. We report on a preliminary study of the prevalence of polymorphism in several hundred open-source software systems written in Smalltalk, one of the oldest object-oriented programming languages, and in Java, one of the most widespread ones. Although a large portion of the call sites in these systems are polymorphic, a majority have a small number of potential candidates. Smalltalk uses polymorphism to a much greater extent than Java. We discuss how these findings can be used as input for more detailed studies in program comprehension and for better developer support in the IDE.
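For readers unfamiliar with the terminology, a polymorphic call site is one whose receiver may be an instance of several classes. The toy example below is in Python rather than the studied Java or Smalltalk, but it shows a single call site with exactly two potential candidates.

    class Circle:
        def __init__(self, radius):
            self.radius = radius
        def area(self):
            return 3.14159 * self.radius ** 2

    class Square:
        def __init__(self, side):
            self.side = side
        def area(self):
            return self.side ** 2

    shapes = [Circle(1.0), Square(2.0)]
    for shape in shapes:
        # One polymorphic call site; its candidate set has two members,
        # Circle.area and Square.area.
        print(shape.area())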

Posted by scg at 25 January 2018, 11:15 am

Big Commit Analysis — Towards an Infrastructure for Commit Analysis

Andreas Hohler. Big Commit Analysis — Towards an Infrastructure for Commit Analysis. Master's thesis, University of Bern, January 2018. Details.

Abstract

Developers commit changes to the code base of a project in order to, for instance, fix bugs, add features, or refactor the code. In empirical studies, researchers often need to link commits with issues in issue trackers to audit the purpose of code changes. Unfortunately, no general-purpose tool exists that can fulfill this need for different studies. For instance, while in theory each commit should serve one purpose, in practice developers may include several goals in one commit. Also, issues in issue trackers are often miscategorized. We present BICO (BIg COmmit analyzer), a tool that links the source code management system with the issue tracker. BICO presents information in a navigable form to make it easier to analyze and reason about the evolution of a project. To link the two systems, it takes advantage of the fact that developers include issue IDs in commit messages. BICO also provides dedicated analytics to detect big commits, i.e., multi-purpose and miscategorized commits, using statistical outlier detection. In an initial evaluation, we use BICO to analyze bug-fix commits in Apache Kafka, where our tool reports 9.6% of the bug-fixing commits as miscategorized or multi-purpose commits, with a precision of 85%. This high precision demonstrates the applicability of the outlier detection method implemented in BICO. A further case study with Apache Storm shows that the precision of detecting multi-purpose commits can vary between projects. In addition, BICO comes with a built-in metric suite extractor for calculating change metrics, source code metrics, and defect counts.
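Two ingredients mentioned in the abstract can be sketched in a few lines of Python. BICO's actual linking rules and outlier test are not specified here, so the issue-ID regex and the interquartile-range rule below are assumed stand-ins, run on invented commit data.

    import re
    import statistics

    commits = [  # (commit message, lines changed) -- invented data
        ("KAFKA-1234: fix consumer offset reset", 42),
        ("KAFKA-1240: guard against null topic name", 15),
        ("KAFKA-1251: correct retry backoff arithmetic", 33),
        ("KAFKA-1255: fix leak in fetcher thread", 8),
        ("KAFKA-1260: handle stale metadata on rebalance", 51),
        ("KAFKA-1266: fix off-by-one in log index", 67),
        ("KAFKA-1270: fix lag metric and refactor config parsing", 980),
    ]

    # Link each commit to its issue via the issue ID embedded in the message.
    for message, _ in commits:
        match = re.search(r"\b[A-Z]+-\d+\b", message)
        print(match.group(0) if match else "unlinked", "<-", message)

    # Flag unusually large commits as "big commit" candidates (IQR rule).
    q1, _, q3 = statistics.quantiles([size for _, size in commits], n=4)
    threshold = q3 + 1.5 * (q3 - q1)
    big = [message for message, size in commits if size > threshold]
    print("candidate big commits:", big)

On this toy data only the 980-line commit exceeds the threshold, matching the intuition that multi-purpose commits tend to be statistical outliers in churn.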

Posted by scg at 15 January 2018, 2:15 pm