Open and On-going Master Projects
If you want further information or wish to apply, please contact Wishnu Prasetya.
For projects in companies, you can contact them directly. I, or another staff member, can act as your university-side supervisor.
Topics in Automated Software Testing
• DSL for Scripting Automated Testing
T2 is our homegrown automated tool for testing Java classes. It is in principle purely random; on the other hand it is also lightweight and runs fast, and hence suitable for short test runs during development. The random-based approach does mean that it often requires some configuration and customization work up front (e.g. to set the testing scope, to customize the base domains, or to configure the randomness). Furthermore, once it runs there is little we can do to control its choices.
We want to design and develop a DSL to configure and to strategically control the runs of T2. This will require some modifications to T2's engine; but assume we can define a "T2 action" that takes a set of test sequences and randomly (using T2's random engine) generates new test sequences. Combinators can be defined over such actions, e.g. to filter the generated sequences according to some criterion, to compose actions, or to apply transformations to the sequences (map). Similarly, combinators can be defined to configure T2, and to assign different configurations to different actions.
Finally, some experiments need to be conducted to test/demonstrate the usefulness of the approach.
- T2 site, downloads and pointers to some earlier research theses around it.
- T2 benchmarking in SBST 2013.
- Evosuite, another automated testing tool for Java.
- DSC, another automated testing tool for Java. Both Evosuite and DSC ran the same SBST benchmark as T2, so you can compare.
- LUA, the embeddable scripting language in which we want to write the DSL.
- LUAJava to facilitate the embedding of LUA in Java.
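To give an idea of the intended combinator style, here is a minimal sketch in Java. The `Action` type, the `TestSeq` record, and all combinator names are hypothetical illustrations; T2's actual engine and types will differ, and the eventual DSL would be written in LUA.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class ActionCombinators {

    // Stand-in for a T2 test sequence; in reality this would wrap T2's own type.
    record TestSeq(String steps) {}

    // A "T2 action": takes a set of test sequences and produces new ones.
    interface Action extends Function<List<TestSeq>, List<TestSeq>> {}

    // Keep only the generated sequences that satisfy a criterion.
    static Action filter(Action a, Predicate<TestSeq> p) {
        return in -> a.apply(in).stream().filter(p).collect(Collectors.toList());
    }

    // Apply a transformation to every generated sequence (map).
    static Action map(Action a, Function<TestSeq, TestSeq> f) {
        return in -> a.apply(in).stream().map(f).collect(Collectors.toList());
    }

    // Sequential composition: feed one action's output into the next.
    static Action andThen(Action a, Action b) {
        return in -> b.apply(a.apply(in));
    }

    static List<TestSeq> runDemo() {
        // A toy action that extends each sequence with one extra step.
        Action extend = in -> in.stream()
                .map(s -> new TestSeq(s.steps() + ";step"))
                .collect(Collectors.toList());
        // Extend twice, then keep only the sequences that stay short.
        Action pipeline = filter(andThen(extend, extend), s -> s.steps().length() < 20);
        return pipeline.apply(List.of(new TestSeq("init")));
    }

    public static void main(String[] args) {
        System.out.println(runDemo());
    }
}
```

The point of the sketch is that once actions are first-class values, strategies such as "generate, transform, then prune" become ordinary expressions rather than fixed engine behavior.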
• Automated Inference of Control-flow Predicates for Test-Oracles
Automated test-case generation is often suggested as a way to complement costly manual testing. A test case basically describes a sequence of steps to perform on the target program, the inputs to give to it, and what we expect to get back from it. The last part is usually called the "oracle", but conceptually it is the same as a specification. Most automated test-generation techniques only address the generation of the first two parts (the steps and the inputs). This works fine if the program has a specification: at least in theory, a specification can be turned into a code fragment, thus giving us free oracles. However, in practice most programs do not have well-documented specifications, let alone formal ones.
What we want to do is to generate at least fragments of specifications by inferring them from logs, which is a very challenging problem.
To make the target program produce logs, we either insert logging statements manually, or inject them automatically. A very useful tool to infer specifications from logs is Daikon. It is either used directly, or used as a back-end by other tools to do dynamic inference. Despite still being the state of the art, inference with Daikon suffers from two problems. First, its vocabulary is limited. Second, it cannot infer disjunctive specifications (expressions of the form a || b || c ...), because doing so leads to exponential explosion. The research we want to pursue is how to improve this.
One approach is to split the behavior of the target program into several scenarios, each of which can be expressed easily, and then to derive the specifications for each scenario separately. This circumvents the disjunction problem above. We already have tools to do that; they use simple visit patterns over the nodes of the program's control flow graph to specify scenarios. However, the tools cannot handle complex visit patterns, nor can they exploit other kinds of deep logging. Another challenging problem is that the solution is sensitive to modifications of the target program. If the program is modified such that its control flow graph changes, the previously inferred oracles are in principle no longer "relevant". However, modifications often turn out to be local, so most of the oracles should actually still be relevant. The problem lies in defining what "relevant" means, and in finding which oracles are still relevant.
We need ideas on how to solve these problems, a prototype implementation, and an experiment demonstrating the effectiveness of your solution.
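To make the splitting idea concrete, here is a small hypothetical sketch in Java. The postcondition of `abs` is inherently disjunctive ((x >= 0 && r == x) || (x < 0 && r == -x)), but after partitioning the log by which control-flow branch was visited, each scenario needs only a simple conjunctive oracle. The log format and all names are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ScenarioSplit {

    // A toy log entry: the input, the result, and the control-flow branch visited.
    record Entry(int x, int result, String branch) {}

    // The target program; its full postcondition is disjunctive:
    //   (x >= 0 && r == x) || (x < 0 && r == -x)
    static int abs(int x) { return x >= 0 ? x : -x; }

    // Run the program on some inputs, logging which branch each run visits.
    static List<Entry> makeLog(int... inputs) {
        List<Entry> log = new ArrayList<>();
        for (int x : inputs)
            log.add(new Entry(x, abs(x), x >= 0 ? "then" : "else"));
        return log;
    }

    // Split the log by visited branch; per partition a purely conjunctive
    // oracle suffices (r == x in the "then" scenario, r == -x in "else").
    static Map<String, List<Entry>> split(List<Entry> log) {
        return log.stream().collect(Collectors.groupingBy(Entry::branch));
    }

    public static void main(String[] args) {
        Map<String, List<Entry>> parts = split(makeLog(3, -4, 0, -7));
        boolean thenOk = parts.get("then").stream().allMatch(e -> e.result() == e.x());
        boolean elseOk = parts.get("else").stream().allMatch(e -> e.result() == -e.x());
        System.out.println("then-scenario oracle holds: " + thenOk);
        System.out.println("else-scenario oracle holds: " + elseOk);
    }
}
```

A visit pattern over control-flow nodes plays the role of the `branch` tag here; the open problems above concern richer patterns than this single-node one, and keeping the per-scenario oracles relevant when the graph changes.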
• A DSL for Building a Dynamic Inference System
Programs can be instrumented to produce logs, which can provide a lot of information about their executions. Various things can be inferred from logs, ranging from usage statistics to finite state models. Daikon is a prime example of a 'dynamic inference tool'. It infers specifications such as pre/post-conditions and class invariants from logs. Daikon has been very useful in much research, typically used as a back-end module. Although its basic use is pretty simple, using it effectively may require quite complex parameterization to tune Daikon to the problem domain. One such parameterization is called splitting, which is in effect an operation that splits the logs into several partitions and infers the specifications for each partition separately. People use this to get around Daikon's inability to deal with disjunction (which leads to exponential explosion). Splitting is actually just one instance of log transformation. Daikon could be elevated even further if we had a more powerful way to express such transformations, for example by providing more powerful splitters, and by providing values to be combined and abstracted.
We would like to have a Haskell-embedded DSL as an alternative front-end to Daikon. You can still use Daikon at the back-end, but the DSL can provide a powerful way to express log transformations, and perhaps other things too. Moreover, Haskell's type system can be exploited to specify log structures in a type-correct way.
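As a rough illustration of treating splitting and abstraction as composable log transformations (sketched here in Java for concreteness, although the project itself targets a Haskell-embedded DSL; all names below are hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LogTransforms {

    // A log is modeled as a plain list of records; the Haskell version would
    // instead use the type system to describe the log structure precisely.
    interface LogTransform<A, B> extends Function<List<A>, List<B>> {
        // Composition: run this transformation, then the next one.
        default <C> LogTransform<A, C> then(LogTransform<B, C> next) {
            return log -> next.apply(this.apply(log));
        }
    }

    // Lift a per-record abstraction (e.g. value -> its sign) to whole logs.
    static <A, B> LogTransform<A, B> mapLog(Function<A, B> f) {
        return log -> log.stream().map(f).collect(Collectors.toList());
    }

    // A splitter: partition a log by a key; each partition would then be
    // handed to Daikon separately, circumventing disjunction.
    static <A, K> Map<K, List<A>> split(List<A> log, Function<A, K> key) {
        return log.stream().collect(Collectors.groupingBy(key));
    }

    static Map<Integer, List<Integer>> demo() {
        List<Integer> log = List.of(5, -3, 0, 8, -1);
        // First scale the logged values, then abstract each to its sign...
        LogTransform<Integer, Integer> scale = mapLog(x -> x * 2);
        LogTransform<Integer, Integer> toSign = mapLog(x -> Integer.compare(x, 0));
        // ...and split the transformed log by that abstraction.
        return split(scale.then(toSign).apply(log), x -> x);
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The design point is that splitters, abstractions, and their compositions are all values of one transformation type, so the DSL user can build arbitrarily elaborate preprocessing pipelines in front of Daikon.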
• Searching for Optimal State Abstraction
Software Testing in Companies
• Automated Testing of Business Logic
This project seeks to investigate how the software company AllSolutions can increase its productivity and/or quality by automating the testing of its business logic.
The Company's Background
For 25 years AllSolutions has been developing and implementing business software for various organizations. Since 2001 we have made the whole suite of our functionality accessible through Internet browsers, and now also through mobile devices. This includes functionality such as ERP, CMS, DMS, CRM, and social interaction.
Our web-application uses a number of technologies:
- The framework and the business logic (incl. the data layer) are programmed in Progress ABL.
- ASP.NET (C#) is used to exchange data between the business logic and the client.
The Problem to Solve
The Business Logic (BL) forms by far the largest part of the application. Given its role, it is also a mission-critical part. Furthermore, it is the part where most changes are made. However, it is tested by manually interacting with the application, a process that is very labour intensive. While this has worked so far, it is now becoming an inhibiting factor for the further growth of the application.
The company wants to have a solution for automating the tests. Towards
this end, this project is set to do a preliminary research. The
research question we want to answer is:
Which automated testing approach will work best for AllSolutions' Business Logic?
For example, should we do it black box or white box? Will random testing be good enough, or do we have to turn to model-based testing? Is applying the classification-tree approach useful and feasible? How about the oracles? How do we get them? Is it feasible to do property-based testing? Or algebraic testing (e.g. CRUD)? Or perhaps to use the application's previous version as the oracle?
And of course, we will need a convincing proof of your answer. We need you to construct a prototype of your approach with which you demonstrate the effectiveness of your solution. You will have to defend why this solution is feasible for the company. Furthermore, we also need the prototype to be designed in such a way that it can be scaled up to a mature automated testing tool for AllSolutions.
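To illustrate one of the candidate oracles above: using the application's previous version as the oracle amounts to back-to-back (differential) random testing. The sketch below is a hypothetical Java illustration; the real business logic lives in Progress ABL, so `discountV1`/`discountV2` are merely stand-ins for calls into the old and new releases.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class BackToBackTest {

    // Hypothetical stand-ins for one business-logic operation in the previous
    // release and in the new release; in reality these would call the ABL code.
    static int discountV1(int amount) { return amount >= 100 ? amount / 10 : 0; }
    static int discountV2(int amount) { return amount >= 100 ? amount / 10 : 0; }

    // Back-to-back testing: feed the same random inputs to both versions and
    // use the previous version's output as the oracle for the new one.
    static List<Integer> findDisagreements(int runs, long seed) {
        Random rnd = new Random(seed);          // fixed seed: reproducible runs
        List<Integer> failures = new ArrayList<>();
        for (int i = 0; i < runs; i++) {
            int amount = rnd.nextInt(1000);
            if (discountV1(amount) != discountV2(amount)) failures.add(amount);
        }
        return failures;
    }

    public static void main(String[] args) {
        // An empty list means the new version agrees with the old oracle.
        System.out.println("disagreements: " + findDisagreements(10_000, 42L));
    }
}
```

The attraction for unchanged functionality is that the oracle comes for free; its limitation, of course, is that it cannot judge behavior that was intentionally changed in the new release.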
• Test framework for Amplixs' PHP/Smarty application framework
Amplixs is a young and growing Dutch software company with national and international customers. They specialize in applications for collecting and reporting various kinds of online information; their applications are used by a variety of large and small companies.
In this project they want to design and implement a test framework for testing applications built with their PHP/Smarty application framework. In the current approach the tests are completely hand crafted. Such a framework should enable the testing to be done a lot more efficiently and with as much automation as possible.
One of the challenges here is that, in their view, there is currently no adequate test framework for PHP; so they challenge you to invent one.
Furthermore, whenever developers change the code somewhere in an application, they need support for determining the extent of the effects of the change; this is not trivial considering the size of typical applications. Once the extent is determined, it can be used to infer how to test the changes.
The above is just a short description of the project. Many more details, and information on how to apply, can be found in this pdf file; unfortunately it is in Dutch. Let me know if this is a problem; I suppose I can ask Amplixs for an English version.
Peter Scheer, peter dot scheer at amplixs dot com
Amplixs Interaction Management
Tel : 035 - 588 58 23
• I want to do a software testing project at company X
You can. But first we need to agree on the project. Talk to me, or another ST staff member, so that we can first discuss whether the project is interesting, has sufficient depth, and is realistic. If we agree to proceed, you of course still need to submit your research proposal.