Corporate Collaboration In Software Testing
The Center for Software Technology
seeks to collaborate with companies, for example to help them identify parts of their testing cycle that can be structurally improved, and to transfer academic ideas into their solutions.
Research capacity: 1 senior researcher, plus 3-5 master students per year working on thesis projects in the area of software testing. From September 2010 we will be further strengthened by 1 PhD student and 1 dedicated software engineer (EU-FP7 funded).
Our current research focus areas are:
Unit-level automated testing, in particular in an object-oriented setting.
We use reflection to infer the API of a given target class, and use this information to generate sequences of API calls to test the class. The sequences are generated, in principle, randomly; but instrumentation is used to measure the coverage of the tests, and we experiment with approaches to steer the random generation towards uncovered areas.
Tool: T2, website: t2framework.googlecode.com
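As an illustration, a minimal sketch of the reflection-based random driver in Python (T2 itself is Java-based and uses Java's reflection facilities); the `Stack` class below is a hypothetical target, not part of T2:

```python
import inspect
import random

class Stack:
    """Hypothetical target class; T2 itself targets Java classes."""
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()
    def size(self):
        return len(self.items)

def random_call_sequence(cls, length, rng):
    """Use reflection to discover the class's public API, then fire one
    random sequence of calls at a fresh instance. Calls that raise
    (e.g. pop on an empty stack) are recorded rather than aborting."""
    api = [m for name, m in inspect.getmembers(cls, inspect.isfunction)
           if not name.startswith("_")]
    target = cls()
    trace = []
    for _ in range(length):
        method = rng.choice(api)
        # supply a random int for every parameter besides 'self'
        n_args = len(inspect.signature(method).parameters) - 1
        args = [rng.randint(0, 100) for _ in range(n_args)]
        try:
            method(target, *args)
            trace.append((method.__name__, args, "ok"))
        except IndexError:
            trace.append((method.__name__, args, "raised"))
    return trace

trace = random_call_sequence(Stack, 10, random.Random(42))
```

A coverage-directed variant would additionally instrument the target and bias `rng.choice` towards methods whose branches are still uncovered.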
Useful patterns for automated testing.
Ultimately, the power of an automated tool like T2 is limited. Without knowledge of the semantics, it would be very difficult for a tool to generate, e.g., a valid zip code. Semantic information can come in the form of specifications (e.g. class invariants), models (e.g. automata), and custom data domains. We investigate patterns that allow this kind of information to be tightly integrated with T2, so that we can still capitalize on T2's brute-force automation. By treating the SUT as a black box, these patterns can also be made suitable for, e.g., testing the business logic behind web applications.
Presentation at the Dutch Testing day 2009
Specification-based Testing of Object-oriented Programs with T2
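One such pattern, sketched below in Python with hypothetical names (this is not T2's actual API), attaches a class invariant to the target so that the random driver can check it after every call and report the offending call sequence as a counterexample:

```python
import random

class BankAccount:
    """Hypothetical SUT with a deliberate bug: withdraw does not
    guard against overdrawing."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount  # bug: the balance may go negative

def invariant(account):
    """Semantic knowledge the tool cannot guess on its own:
    the balance must never become negative."""
    return account.balance >= 0

def random_test(cls, invariant, steps, rng):
    """Drive the SUT with random calls; return the first call
    sequence that breaks the invariant, or None if none was found."""
    target = cls()
    trace = []
    for _ in range(steps):
        op = rng.choice([cls.deposit, cls.withdraw])
        amount = rng.randint(1, 50)
        op(target, amount)
        trace.append((op.__name__, amount))
        if not invariant(target):
            return trace  # counterexample found
    return None

# if a counterexample is found, its trace necessarily ends with the
# offending withdraw
counterexample = random_test(BankAccount, invariant, 100, random.Random(0))
```

Because the invariant only inspects observable state, the same pattern applies when the SUT is a black box, such as the business logic behind a web application.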
Other themes that could lead to interesting collaborations:
Automated testing of PHP applications, with Amplixs BV (running).
Testing a web application is somewhat similar to testing a GUI-based application. It is more challenging because the set of possible interactions changes dynamically, and the overall space of interactions is, in principle, huge. An approach like T2 cannot handle this. Model-based testing could make the problem more workable, but only if we can avoid having to hand-craft the models all the way. One idea is to have a tester propose a single test sequence. An algorithm will then exhaustively explore all branches of the sequence, analyze the runs, and apply a heuristic to infer a model, which is then used for subsequent regression runs.
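The branch-exploration step can be sketched as follows; the web application is stood in for by a toy page-transition model (all names here are illustrative, not an actual tool):

```python
class FakeWebApp:
    """Hypothetical stand-in for a web application: states are page
    names, actions are the links/buttons available on that page."""
    PAGES = {
        "home":   ["login", "search"],
        "login":  ["submit", "home"],
        "search": ["home"],
        "submit": [],
    }
    def actions(self, state):
        return self.PAGES[state]
    def step(self, state, action):
        # in this toy model, clicking an action navigates to the
        # page of the same name
        return action

def replay(app, start, actions):
    """Re-run a sequence of interactions from the start page."""
    state = start
    for a in actions:
        state = app.step(state, a)
    return state

def explore_branches(app, start, seed):
    """Exhaustively branch off the tester's seed sequence: at every
    prefix of the seed, try each action the current page offers."""
    runs = []
    for i in range(len(seed) + 1):
        prefix = seed[:i]
        state = replay(app, start, prefix)
        for alt in app.actions(state):
            runs.append(prefix + [alt])
    return runs

# the tester proposes one sequence; the algorithm derives the variants
runs = explore_branches(FakeWebApp(), "home", ["login", "submit"])
```

The collected runs would then be fed to a heuristic that abstracts them into a model for subsequent regressions.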
Selective regression testing
Since an application can accumulate many test cases over its lifetime, a regression run can take quite long. This is not only annoying, but also wasted time. Selective regression tries to select only the potentially fault-revealing test cases with respect to the changes just made to the SUT; this should run much faster. We use instrumentation to do this, and want to experiment with several known selection algorithms. We would want to see how the approach actually performs on a real case study.
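The core selection step can be sketched as follows, assuming instrumentation has already recorded which functions each test touches (all names below are illustrative):

```python
# coverage map recorded via instrumentation in an earlier full run:
# for each test, the set of functions it executed (names illustrative)
coverage = {
    "test_login":    {"auth.check", "session.open"},
    "test_checkout": {"cart.total", "pay.charge"},
    "test_profile":  {"auth.check", "profile.render"},
}

def select_tests(coverage, changed):
    """Select only the tests whose recorded coverage intersects the
    set of functions changed since the last run; the rest cannot be
    affected by the change and are safely skipped."""
    return sorted(t for t, covered in coverage.items() if covered & changed)

# only test_login and test_profile touch the changed function
selected = select_tests(coverage, {"auth.check"})
```

Real selection algorithms differ mainly in the granularity of the recorded entities (statements, branches, functions) and in how conservatively they approximate the affected set.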
Combinatorial testing over complex business domains
Testing a function over a complex business domain (e.g. tax or insurance) is challenging because of the combinatorial explosion in the ways its parameters can be combined. Pragmatically, this means we have to select a not-too-large subset of the combinations to cover, but preferably we do not want to enumerate the combinations manually, since as humans we are likely to miss some important ones. We want a way to declaratively express the coverage (over the combinations) that we want, and let a tool generate the combinations instead. The approach would work well for black-box testing.
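One well-known instance of such a declarative coverage criterion is pairwise (all-pairs) coverage: every pair of parameter values must occur in at least one generated test case. A minimal greedy sketch, with an illustrative insurance-like domain:

```python
from itertools import combinations, product

def all_pairs(domains):
    """All (param, value) pairs a pairwise-adequate suite must cover."""
    pairs = set()
    for p1, p2 in combinations(sorted(domains), 2):
        for v1 in domains[p1]:
            for v2 in domains[p2]:
                pairs.add(((p1, v1), (p2, v2)))
    return pairs

def pairwise_suite(domains):
    """Greedy all-pairs generation: repeatedly pick, from the full
    cartesian product, the test case covering the most still-uncovered
    value pairs. Usually far smaller than the full product."""
    params = sorted(domains)
    uncovered = all_pairs(domains)
    candidates = [dict(zip(params, vs))
                  for vs in product(*(domains[p] for p in params))]
    suite = []
    while uncovered:
        def gain(c):
            return len(set(combinations(sorted(c.items()), 2)) & uncovered)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break  # defensive only; every pair occurs in some candidate
        uncovered -= set(combinations(sorted(best.items()), 2))
        suite.append(best)
    return suite

# illustrative business domain (values made up for the example)
domains = {
    "status": ["single", "married"],
    "age":    ["minor", "adult"],
    "region": ["NL", "BE"],
}
suite = pairwise_suite(domains)
```

The tester only declares the domains and the coverage criterion; the tool enumerates the concrete combinations, which fits black-box testing well.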
Log-based diagnosis of Internet applications
This is part of the European FITTEST project, which should start in September 2010, funded by EU-FP7. Our part in the project is to develop a technique to identify fragile points in an Internet application, relying on static analysis. These fragile points will then be monitored; if strange behavior is observed, it is logged. Subsequently, by analyzing other information in the logs, we try to construct an abstracted view of the execution leading to the strange behavior.
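The monitor-log-abstract pipeline can be sketched as follows; the fragile point, the wrapper, and the abstraction are all illustrative stand-ins, not the project's actual tooling:

```python
import functools

LOG = []

def monitor(fragile_point):
    """Wrap a function that static analysis flagged as fragile; append
    a log record whenever strange behavior (here: an exception) is
    observed, then let the exception propagate."""
    def deco(f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception as e:
                LOG.append({"point": fragile_point,
                            "args": args,
                            "error": type(e).__name__})
                raise
        return wrapped
    return deco

@monitor("parse_amount")
def parse_amount(text):
    # fragile: parses free-form user input
    return int(text)

def abstract_view(log):
    """Reduce the raw log to an abstracted execution view: which
    fragile points failed, and with which kind of error."""
    return [(rec["point"], rec["error"]) for rec in log]

try:
    parse_amount("12,50")   # a comma is not valid for int()
except ValueError:
    pass
view = abstract_view(LOG)
```

In the project, the equivalent of `abstract_view` would combine such records with other log information to reconstruct the execution leading up to the anomaly.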