SCG News

Stopping DNS Rebinding Attacks in the Browser

Mohammadreza Hazhirpasand, Arash Ale Ebrahim, and Oscar Nierstrasz. Stopping DNS Rebinding Attacks in the Browser. In Proceedings of the 7th International Conference on Information Systems Security and Privacy - ICISSP, 2021. Details.


DNS rebinding attacks circumvent the same-origin policy of browsers and severely jeopardize user privacy. Although recent studies have shown that DNS rebinding attacks pose severe security threats to users, up to now little effort has been spent on assessing the effectiveness of known solutions to prevent such attacks. We have carried out such a study to assess the protective measures proposed in prior studies. We found that none of the recommended techniques can entirely stop this attack, due to various factors, e.g., network-layer encryption renders packet inspection infeasible. Examining these problematic factors, we conclude that a protective measure must be implemented at the browser level. We therefore propose a defensive measure, a browser plug-in called Fail-rebind, that can detect, inform, and protect users in the event of an attack. We then discuss the merits and limitations of our method compared to prior methods. Our findings suggest that Fail-rebind does not necessitate expert knowledge, works on different OSes and smart devices, and is independent of network and location.
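The core symptom a browser-level defense can watch for is a hostname whose resolution "flips" from a public address to an internal one mid-session. The sketch below illustrates that check in Python; it is not the Fail-rebind plug-in itself, and the function names are illustrative.

```python
import ipaddress

def is_private(ip: str) -> bool:
    """True for private, loopback, or link-local addresses --
    the internal targets a rebinding attack tries to reach."""
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_loopback or addr.is_link_local

def looks_like_rebinding(first_ip: str, current_ip: str) -> bool:
    """Flag the tell-tale pattern: a hostname that first resolved
    to a public address later resolves to an internal one."""
    return not is_private(first_ip) and is_private(current_ip)
```

A real defense would pin the first resolved address per origin and block requests when a later resolution matches this pattern.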

Posted by scg at 23 February 2021, 3:34 pm

Biomimicry-based Algorithms and Their Lack of Generalization

Dean Klopsch. Biomimicry-based Algorithms and Their Lack of Generalization. Bachelor’s thesis, University of Bern, February 2021. Details.


Biomimicry has received much attention in engineering, and many breakthrough discoveries have been guided by a solution found in nature. However, many biomimicry-based proposals apply to a specific problem, provide limited context, and lack implementation details. That makes it unnecessarily hard for practitioners to find relevant literature for their problems. To investigate this problem, we performed a literature review of 111 publications related to biomimicry and extracted several characteristics, e.g., meta-data, the solution, and the investigated species. In particular, we were interested in whether the proposed algorithms could be used for other use cases. Our results indicate a structural issue: publications related to new or adapted algorithms prominently emphasize a specific use case instead of the generalized problem category, e.g., clustering. We found that 38% lack generalization in at least one of the introductory elements (i.e., title, abstract, and introduction), and that 53% of them lack generalization entirely. Moreover, 40% of the proposed algorithms lack at least one major characteristic, e.g., code samples or benchmarks against state-of-the-art algorithms. We illustrate this generalization problem with our adapted implementation of an algorithm originally proposed for load scheduling. Moreover, the artifacts of this study can support practitioners in finding existing solutions across research domains more efficiently.

Posted by scg at 17 February 2021, 4:15 pm

Interactive Visualizations for Software Duplication

Jonas Richner. Interactive Visualizations for Software Duplication. Master's thesis, University of Bern, January 2021. Details.


In large software systems, usually about 5%-20% of the code is duplicated. Duplicated code can increase maintenance costs because it has to be maintained in multiple locations. There is a significant amount of research on visualizing software duplication to help reduce these costs, but in practice mostly basic visualizations are used, and the more advanced visualizations proposed by researchers have not been adopted by the software industry. We believe the reason for this is that visualizations from academic research rely on single stand-alone views that only support simple analysis tasks. To support more complex tasks, we propose a set of connected multi-view visualizations for inspecting software duplication. We follow Bret Victor's systematic approach for building interactive visualizations to gain insight into a system. Results from our user study indicate that our prototype is easy to use in various clone analysis tasks and helps users reason about the code at multiple levels of abstraction.

Posted by scg at 26 January 2021, 10:15 am

Moldable requirements

Nitish Patkar. Moldable requirements. In Benevol'20, 2020. Details.


Separate tools are employed to carry out individual requirements engineering (RE) activities. The lack of integration among these tools scatters the domain knowledge, making collaboration between technical and non-technical stakeholders difficult and the management of requirements a tedious task. In this Ph.D. research proposal, we argue that an integrated development environment (IDE) should support the various RE activities. For that, distinct stakeholders must be able to effortlessly create and manage requirements as first-class citizens within an IDE. With "moldable requirements," developers create custom hierarchies of requirements and build tailored interfaces that enable other stakeholders to create requirements and navigate between them. Similarly, they create custom representations of requirements and of the involved domain objects to reflect various levels of detail. Such custom, domain-specific representations assist non-technical stakeholders in accomplishing their respective RE-related tasks. The custom interfaces make the IDE usable for non-technical stakeholders and help to keep requirements in one place, closer to the implementation.

Posted by scg at 12 December 2020, 12:07 pm

A Sampling Profiler for a JIT Compiler

Andreas Wälchli. A Sampling Profiler for a JIT Compiler. Master's thesis, University of Bern, September 2020. Details.


For efficient execution of dynamically typed languages, many implementations use a two-tier architecture. The first tier is used for low-latency startup and collects dynamic profiles, e.g., the types of all program variables. The second tier provides high throughput through the use of an optimizing compiler. This compiler specializes the code for the type information recorded in the first tier. If a program suddenly changes its behavior and presents the compiled code with types that have not been seen before and that are incompatible with the compiled version, that specialization becomes invalid. It is deoptimized and control is transferred back to the first tier, where new profiles are gathered and specialization can start anew. But if the program behavior becomes more specific, for instance, if a variable suddenly becomes monomorphic (i.e., only takes on one single type), this will not trigger a deoptimization, as it is still compatible with the compiled version. If the program were recompiled with that monomorphic variable in mind, performance could be improved. Once the program is running in an optimized form, there are no means to notice such optimization opportunities. We propose the use of a sampling profiler to monitor native code without instrumentation. In the absence of instrumentation we incur no overhead when the profiler is inactive and can control the active profiler's overhead by limiting the sampling rate. It also allows sampling at random points in the program, not just at predefined locations. We implement our profiler in the context of Ř, an optimizing JIT compiler for the R language. Based on the collected profiles, we are able to detect when the native code produced by Ř is specialized for stale type information and to trigger recompilation for more specific type information. We show that sampling with our profiler adds an overhead of less than 3% in most cases, and up to 9% in some cases, when active. We also show that it reliably detects stale type information within milliseconds.
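The key idea, sampling a running program's stack from the outside instead of instrumenting it, can be sketched in a few lines. The thesis targets native code in Ř, not Python; the `SamplingProfiler` class below is an illustrative assumption that periodically inspects another thread's current frame, so the profiled code pays nothing while the profiler is inactive and the sampling interval bounds the active overhead.

```python
import collections
import sys
import threading
import time

class SamplingProfiler:
    """Minimal sampling profiler: a background thread periodically
    reads the target thread's current stack frame, so no
    instrumentation is inserted into the profiled code."""

    def __init__(self, target_thread_id, interval=0.001):
        self.target = target_thread_id
        self.interval = interval  # sampling rate controls active overhead
        self.samples = collections.Counter()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            # Snapshot the target thread's topmost frame, if any.
            frame = sys._current_frames().get(self.target)
            if frame is not None:
                self.samples[frame.f_code.co_name] += 1
            time.sleep(self.interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

def busy_work(n=3_000_000):
    total = 0
    for i in range(n):
        total += i
    return total

# Profile the main thread while it runs busy_work.
profiler = SamplingProfiler(threading.get_ident(), interval=0.0005)
profiler.start()
busy_work()
profiler.stop()
```

After the run, `profiler.samples` is a histogram of function names weighted by time spent, which is the kind of evidence a JIT could use to spot code worth recompiling.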

Posted by scg at 4 September 2020, 8:15 pm
Last changed by admin on 21 April 2009