SCG News

Categorising Test Smells

Maudlin Kummer. Categorising Test Smells. Bachelor’s thesis, University of Bern, March 2015. Details.

Abstract

The aim of this investigation was to find out how familiar developers are with test smells, and how frequently and severely these smells occur in industry. As a basis for the study, a taxonomy of test smells was first created and grouped according to programming principles. Several interviews were then conducted to decide which test smells to include in the subsequent survey, in which 20 people with different levels of industrial experience participated. It was hypothesised that test smells are not identified as such and that their names are unknown. The survey results supported this hypothesis: test smells are not well known, even though some of them occur rather frequently and pose severe problems.

Posted by scg at 23 March 2015, 7:15 pm

Towards Faster Method Search Through Static Ecosystem Analysis

Boris Spasojević, Mircea Lungu, and Oscar Nierstrasz. Towards Faster Method Search Through Static Ecosystem Analysis. In Proceedings of the 2014 European Conference on Software Architecture Workshops, ECSAW ’14, p. 11:1–11:6, ACM, New York, NY, USA, August 2014. Details.

Abstract

Software developers are often unsure of the exact name of the method they need to use to invoke the desired behavior in a given context. This results in a process of searching for the correct method name in documentation, which can be lengthy and distracting to the developer. We can decrease the method search time by enhancing the documentation of a class with its most frequently used methods. Usage frequency data for methods is gathered by analyzing other projects from the same ecosystem, i.e., projects written in the same language and sharing dependencies. We implemented a proof of concept of the approach for Pharo Smalltalk and Java. In Pharo Smalltalk, methods are commonly searched for using a code browser tool called "Nautilus", and in Java using a web browser displaying HTML-based documentation (Javadoc). We developed plugins for both browsers and gathered method usage data from open source projects in order to increase developer productivity by reducing method search time. A small initial evaluation has been conducted, showing promising results in improving developer productivity.
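
At its core, the approach counts how often each method of a class is invoked across the projects of an ecosystem and uses that ranking to highlight the most frequently called methods in the class documentation. The following is only a minimal sketch of the counting step, written in Python for brevity; the call-site tuples and the toy data are hypothetical stand-ins for what the Pharo and Java plugins would extract.

from collections import Counter, defaultdict

def rank_methods(call_sites):
    # call_sites: iterable of (receiver_class, method_name) pairs, assumed
    # to be extracted from ecosystem projects by some static analysis.
    usage = defaultdict(Counter)
    for receiver_class, method_name in call_sites:
        usage[receiver_class][method_name] += 1
    # For each class, order its methods by how often the rest of the
    # ecosystem calls them; the top entries are the candidates to
    # highlight in the class documentation.
    return {cls: [m for m, _ in counts.most_common()]
            for cls, counts in usage.items()}

# Toy ecosystem of call sites, invented for illustration.
calls = [("OrderedCollection", "add:"), ("OrderedCollection", "add:"),
         ("OrderedCollection", "do:"), ("String", "asSymbol")]
print(rank_methods(calls))
# {'OrderedCollection': ['add:', 'do:'], 'String': ['asSymbol']}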

Posted by scg at 13 March 2015, 9:15 pm

Mining the Ecosystem to Improve Type Inference For Dynamically Typed Languages

Boris Spasojević, Mircea Lungu, and Oscar Nierstrasz. Mining the Ecosystem to Improve Type Inference For Dynamically Typed Languages. In Proceedings of the 2014 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, Onward! ’14, p. 133–142, ACM, New York, NY, USA, 2014. Details.

Abstract

Dynamically typed languages lack information about the types of variables in the source code. Developers care about this information as it supports program comprehension. Basic type inference techniques are helpful, but may yield many false positives or negatives. We propose to mine information from the software ecosystem on how frequently given types are inferred unambiguously to improve the quality of type inference for a single system. This paper presents an approach to augment existing type inference techniques by supplementing the information available in the source code of a project with data from other projects written in the same language. For all available projects, we track how often messages are sent to instance variables throughout the source code. Predictions for the type of a variable are made based on the messages sent to it. The evaluation of a proof-of-concept prototype shows that this approach works well for types that are sufficiently popular, like those from the standard library, and tends to create false positives for unpopular or domain-specific types. The false positives are, in most cases, fairly easily identifiable. Also, the evaluation data shows a substantial increase in the number of correctly inferred types when compared to the non-augmented type inference.
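
The augmentation step can be pictured as follows: candidate types are the classes that understand every message sent to a variable, and ecosystem frequency data is used to rank those candidates. This is only an illustrative Python sketch with invented interface and popularity data; the actual prototype works on Pharo Smalltalk source code.

def infer_type(messages_sent, class_interfaces, ecosystem_counts):
    # messages_sent:    selectors sent to the variable in this project
    # class_interfaces: {class: set of selectors it understands}
    # ecosystem_counts: {class: how often it was inferred unambiguously
    #                    in other ecosystem projects}
    # Basic inference: candidates are the classes that understand every
    # message sent to the variable.
    candidates = [cls for cls, iface in class_interfaces.items()
                  if messages_sent <= iface]
    if not candidates:
        return None
    # Ecosystem augmentation: prefer the candidate that the rest of the
    # ecosystem uses most often.
    return max(candidates, key=lambda cls: ecosystem_counts.get(cls, 0))

interfaces = {"OrderedCollection": {"add:", "do:", "size"},
              "Set": {"add:", "do:", "size", "includes:"}}
popularity = {"OrderedCollection": 120, "Set": 35}
print(infer_type({"add:", "size"}, interfaces, popularity))
# OrderedCollection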

Posted by scg at 13 March 2015, 9:15 pm

A Unified Approach to Architecture Conformance Checking

Andrea Caracciolo, Mircea Filip Lungu, and Oscar Nierstrasz. A Unified Approach to Architecture Conformance Checking. In Proceedings of the 12th Working IEEE/IFIP Conference on Software Architecture (WICSA), ACM Press, 2015. To appear. Details.

Abstract

Software erosion can be controlled by periodically checking for consistency between the de facto architecture and its theoretical counterpart. Studies show that this process is often not automated and that developers still rely heavily on manual reviews, despite the availability of a large number of tools. This is partially due to the high cost involved in setting up and maintaining tool-specific and incompatible test specifications that replicate otherwise documented invariants. To reduce this cost, our approach unifies the functionality provided by existing tools under the umbrella of a common business-readable DSL. By using a declarative language, we are able to write tool-agnostic rules that are simple enough to be understood by non-technical stakeholders and, at the same time, can be interpreted as a rigorous specification for checking architecture conformance.
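
The abstract does not show the DSL itself. As a purely hypothetical illustration of the underlying idea, a declarative "cannot depend on" invariant could be evaluated against an extracted dependency graph along the lines of the Python sketch below; the rule shape, layer names, and input format are all invented here and are not the paper's actual language.

def check_cannot_depend(rule, dependencies):
    # rule: (source_layer, forbidden_target_layer)
    # dependencies: (from_module, to_module, from_layer, to_layer) tuples
    # extracted from the system under analysis (format invented here).
    source_layer, forbidden_layer = rule
    return [(a, b) for a, b, la, lb in dependencies
            if la == source_layer and lb == forbidden_layer]

deps = [("OrderDao", "OrderForm", "Persistence", "UI"),   # violation
        ("OrderForm", "OrderService", "UI", "Domain")]    # allowed
print(check_cannot_depend(("Persistence", "UI"), deps))
# [('OrderDao', 'OrderForm')]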

Posted by scg at 26 February 2015, 3:15 pm

Explora: Tackling Corpus Analysis with a Distributed Architecture

Leonel Merino. Explora: Tackling Corpus Analysis with a Distributed Architecture. In SATToSE’14: Post-Proceedings of the 7th International Seminar Series on Advanced Techniques & Tools for Software Evolution, 2015. Details.

Abstract

When analysing a corpus of software, researchers often ask questions that entail exploration and navigation, such as: which packages contain fat interfaces in open source systems? How consistently is the code being commented? Are naming conventions being followed? The answers to these questions can impact software maintainability and evolution. Software visualisation can aid in understanding and exploring the answers to such questions, but corpus visualisations are time consuming and difficult to achieve since they require large amounts of data to be processed. We tackle this constraint by using a distributed architecture. In this paper we propose an environment where researchers can build queries for their questions and afterwards rapidly visualise the results. We elaborate on a proof-of-concept tool named Explora and report early results from visualising the Qualitas Corpus. This paper uses colours in the figures.
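
The paper's abstract gives no implementation details, but the general pattern of distributing a per-project query over several workers and merging the partial results for visualisation can be sketched as follows. The "fat interface" query, the toy corpus data, and the use of Python multiprocessing are assumptions for illustration only, not the authors' implementation.

from multiprocessing import Pool

# Invented per-project data: number of methods per class. A real query
# would parse each project's sources instead.
TOY_CORPUS = {"projectA": [3, 25, 8, 41], "projectB": [5, 7, 12]}

def count_fat_interfaces(project, threshold=20):
    # Per-project query: how many classes expose more than `threshold`
    # methods ("fat interfaces")?
    sizes = TOY_CORPUS[project]
    return project, sum(1 for n in sizes if n > threshold)

def run_query_over_corpus(projects, workers=2):
    # Distribute the query across worker processes and merge the partial
    # results, which a visualisation front end could then render.
    with Pool(processes=workers) as pool:
        return dict(pool.map(count_fat_interfaces, projects))

if __name__ == "__main__":
    print(run_query_over_corpus(list(TOY_CORPUS)))
    # {'projectA': 2, 'projectB': 0}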

Posted by scg at 18 February 2015, 3:15 pm