Arifulina S, Soltenborn C and Engels G (2012), "Coverage Criteria for Testing DMM Specifications", In Proceedings of the 11th International Workshop on Graph Transformation and Visual Modeling Techniques (GT-VMT 2012), Tallinn (Estonia). Vol. 47. European Association of Software Science and Technology (EASST).
Abstract: Behavioral modeling languages are most useful if their behavior is specified formally such that it can e.g. be analyzed and executed automatically. Obviously, the quality of such behavior specifications is crucial. The rule-based semantics specification technique Dynamic Meta Modeling (DMM) honors this by using the approach of Test-driven Semantics Specification (TDSS), which makes sure that the specification at hand at least describes the correct behavior for a suite of test models. However, in its current state TDSS does not provide any means to measure the quality of such a test suite. In this paper, we describe how we have applied the idea of test coverage to TDSS. Similar to common approaches of defining test coverage criteria, we describe a data structure called invocation graph containing possible orders of applications of DMM rules. Then we define different coverage criteria based on that data structure, taking the rule applications caused by the test suite's models into account. Our implementation of the described approach gives the language engineer using DMM a means to reason about the quality of the language's test suite, and also provides hints on how to improve that quality by adding dedicated test models to the test suite.
BibTeX:
@inproceedings{Arifulina2012a,
  author = {Svetlana Arifulina AND Christian Soltenborn AND Gregor Engels},
  editor = {A. Fish and L. Lambers},
  title = {Coverage Criteria for Testing DMM Specifications},
  booktitle = {Proceedings of the 11th International Workshop on Graph Transformation and Visual Modeling Techniques (GT-VMT 2012), Tallinn (Estonia)},
  publisher = {European Association of Software Science and Technology (EASST)},
  year = {2012},
  volume = {47},
  url = {http://journal.ub.tu-berlin.de/eceasst/article/view/718/724}
}
Güldali B, Sauer S, Winkelhane P, Jahnich M and Funke H (2010), "Pattern-based Generation of Test Plans for Open Distributed Processing Systems", In Proceedings of 5th International Workshop on Automation of Software Test (AST 2010), ICSE Workshop. , pp. 119-126. ACM Press.
BibTeX:
@inproceedings{ast2010,
  author = {Baris Güldali AND Stefan Sauer AND Peter Winkelhane AND Michael Jahnich AND Holger Funke},
  title = {Pattern-based Generation of Test Plans for Open Distributed Processing Systems},
  booktitle = {Proceedings of 5th International Workshop on Automation of Software Test (AST 2010), ICSE Workshop},
  publisher = {ACM Press},
  year = {2010},
  pages = {119-126}
}
Beulen D, Güldali B and Mlynarski M (2010), "Tabellarischer Vergleich der Prozessmodelle für modellbasiertes Testen aus Managementsicht", Softwaretechnik-Trends., May, 2010. Vol. 30(2), pp. 6-9.
Abstract: In this publication we show how the different process models can be compared with one another from a management perspective. For this purpose, we establish comparison criteria based on the literature. Our goal is to enable the comparability of MBT process models by means of objective criteria. We provide test managers with a tool that allows them to estimate the effort to expect when selecting a process model. Since the introduction of new methods depends on the maturity of a process, our comparison also addresses the maturity level of the test process required by the process models, according to Test Process Improvement (TPI), as well as the modeling skills required of the test team, which can be measured using Modeling Maturity Levels (MML).
BibTeX:
@article{bgm2010,
  author = {Dominik Beulen AND Baris Güldali AND Michael Mlynarski},
  title = {Tabellarischer Vergleich der Prozessmodelle für modellbasiertes Testen aus Managementsicht},
  journal = {Softwaretechnik-Trends},
  year = {2010},
  volume = {30},
  number = {2},
  pages = {6-9},
  url = {http://www.gm.fh-koeln.de/~winter/tav/html/tav29/TAV29P02Gueldali.pdf}
}
Brüseke F, Becker S and Engels G (2011), "Palladio-based performance blame analysis", In Proceedings of the 16th International Workshop on Component-Oriented Programming (WCOP; satellite event of the CompArch 2011), Boulder Colorado, CO (USA). New York, NY (USA) , pp. 25-32. ACM.
Abstract: Performance is an important quality attribute for business information systems. When a tester has spotted a performance error, the error is passed to the software developers to fix it. However, in component-based software development the tester has to do blame analysis first, i.e. the tester has to decide which party is responsible to fix the error. If the error is a design or deployment issue, it can be assigned to the software architect or the system deployer. If the error is specific to a component, it needs to be assigned to the corresponding component developer. An accurate blame analysis is important, because wrong assignments of errors will cause a loss of time and money. Our approach aims at doing blame analysis for performance errors by comparing performance metrics obtained in performance testing and performance prediction. We use performance prediction values as expected values for individual components. For performance prediction we use the Palladio approach. By this means, our approach evaluates each component's performance in a certain test case. If the component performs poorly, its component developer needs to fix the component or the architect replaces the component with a faster one. If no component performs poorly, we can deduce that there is a design or deployment issue and the architecture needs to be changed. In this paper, we present an exemplary blame analysis based on a web shop system. The example shows the feasibility of our approach.
BibTeX:
@inproceedings{brueseke2011a,
  author = {Frank Brüseke AND Steffen Becker AND Gregor Engels},
  editor = {R. Reussner and C. Szyperski and W. Weck},
  title = {Palladio-based performance blame analysis},
  booktitle = {Proceedings of the 16th International Workshop on Component-Oriented Programming (WCOP; satellite event of the CompArch 2011), Boulder Colorado, CO (USA)},
  publisher = {ACM},
  year = {2011},
  pages = {25-32},
  doi = {10.1145/2000292.2000298}
}
Güldali B, Sauer S and Engels G (2008), "Formalisierung der funktionalen Anforderungen mit visuellen Kontrakten und deren Einsatz für modellbasiertes Testen", Softwaretechnik-Trends., August, 2008. Vol. 28(3), pp. 12-16.
Abstract: In this contribution we have presented an approach for formalizing UML use case descriptions in order to use use cases effectively for testing purposes. The textual descriptions of the pre- and postconditions are formalized with visual contracts. The visual contracts describe the changes to the business data after execution of the use case. With visual contracts, test inputs can be generated during test case specification and test outputs can be checked during test execution. Tools have been developed for visual contracts that enable their integration into the development and test process.
BibTeX:
@article{egs08,
  author = {Baris Güldali AND Stefan Sauer AND Gregor Engels},
  title = {Formalisierung der funktionalen Anforderungen mit visuellen Kontrakten und deren Einsatz für modellbasiertes Testen},
  journal = {Softwaretechnik-Trends},
  year = {2008},
  volume = {28},
  number = {3},
  pages = {12-16},
  url = {http://pi.informatik.uni-siegen.de/stt/28_3/01_Fachgruppenberichte/TAV/07_TAV27P6Gueldali.pdf}
}
Engels G, Güldali B and Lohmann M (2007), "Towards Model-Driven Unit Testing", In Proceedings of the 2006 International Conference on Models in Software Engineering (MoDELS 2006). Berlin/Heidelberg, October, 2007. Vol. 4364, pp. 182-192. Springer.
Abstract: The Model-Driven Architecture (MDA) approach for constructing software systems advocates a stepwise refinement and transformation process starting from high-level models to concrete program code. In contrast to numerous research efforts that try to generate executable function code from models, we propose a novel approach termed model-driven monitoring. On the model level the behavior of an operation is specified with a pair of UML composite structure diagrams (visual contract), a visual notation for pre- and post-conditions. The specified behavior is implemented by a programmer manually. An automatic translation from our visual contracts to JML assertions allows for monitoring the hand-coded programs during their execution. In this paper we present an approach to extend our model-driven monitoring approach to allow for model-driven unit testing. In this approach we utilize the generated JML assertions as test oracles. Further, we present an idea how to generate sufficient test cases from our visual contracts with the help of model-checking techniques.
BibTeX:
@inproceedings{Engels2006a,
  author = {Gregor Engels AND Baris Güldali AND Marc Lohmann},
  editor = {T. Kühne},
  title = {Towards Model-Driven Unit Testing},
  booktitle = {Proceedings of the 2006 International Conference on Models in Software Engineering (MoDELS 2006)},
  publisher = {Springer},
  year = {2007},
  volume = {4364},
  pages = {182--192},
  note = {Models in Software Engineering, Workshops and Symposia at MoDELS 2006, Genoa, Italy, October 1-6, 2006, Reports and Revised Selected Papers, Genua (Italy)},
  doi = {10.1007/978-3-540-69489-2_23}
}
Grieger M, Güldali B, Sauer S and Mlynarski M (2013), "Testen bei Migrationsprojekten", OBJEKTspektrum (Online Themenspecials)., September, 2013. , pp. 1-4.
Abstract: Software systems age over time. This process is characterized by an ever-widening gap between the growing requirements placed on the systems and their actual capabilities. Systems are often maintained and evolved to counteract this aging. Over time, however, this is accompanied by decreasing software quality and increasing maintenance complexity. The scope of changes achievable through further development is also limited, because restrictions of the underlying technology can mean that not all requirements can be implemented. One way out is to migrate the system to a new environment. However, this is only successful if the migrated system still fulfills the functional and non-functional requirements, which must be verified by testing. In this article we explain what matters in this respect, depending on the type of migration.
BibTeX:
@article{ggsmg13,
  author = {Marvin Grieger AND Baris Güldali AND Stefan Sauer AND Michael Mlynarski},
  title = {Testen bei Migrationsprojekten},
  journal = {OBJEKTspektrum (Online Themenspecials)},
  year = {2013},
  pages = {1-4},
  url = {http://www.sigs.de/publications/os/2013/Testing/gueldali_et_al_OS_Testing_2013.pdf}
}
Faragó D, Törsel A-M, Mlynarski M, Weißleder S, Güldali B and Brandes C (2013), "Wirtschaftlichkeitsberechnung für MBT: Wann sich modellbasiertes Testen lohnt", OBJEKTspektrum., June, 2013. (4), pp. 32-38.
Abstract: Model-based testing potentially promises higher efficiency and effectiveness in the test process. However, whether its use is economical in one's own context is often unclear. This article systematically analyzes cost and benefit factors and presents a method for estimating the economic viability of model-based testing. The procedure is illustrated by an example.
BibTeX:
@article{ftmwgb13,
  author = {David Faragó AND Arne-Michael Törsel AND Michael Mlynarski AND Stephan Weißleder AND Baris Güldali AND Christian Brandes},
  title = {Wirtschaftlichkeitsberechnung für MBT: Wann sich modellbasiertes Testen lohnt},
  journal = {OBJEKTspektrum},
  year = {2013},
  number = {4},
  pages = {32-38},
  url = {http://www.sigs-datacom.de/fachzeitschriften/objektspektrum/archiv/artikelansicht.html?tx_mwjournals_pi1[pointer]}
}
Güldali B, Jungmayr S, Mlynarski M, Neumann S and Winter M (2010), "Starthilfe für modellbasiertes Testen", OBJEKTspektrum., April, 2010. (3), pp. 63-69.
Abstract: Model-based testing is a technique that supports certain manual activities, such as test design, through the use of abstract models and suitable algorithms. Introducing model-based testing has the potential to increase test coverage through the automatic generation of test cases and thus to increase confidence in the software. However, the savings in manual test activities are offset by the additional effort of creating the models. Project and test managers therefore face the question of whether model-based testing is a worthwhile investment for their specific test organization. This article explains the essential concepts of model-based testing and provides decision-makers with a heuristic decision aid.
BibTeX:
@article{gjmnw10,
  author = {Baris Güldali AND Stefan Jungmayr AND Michael Mlynarski AND Stefan Neumann AND Mario Winter},
  title = {Starthilfe für modellbasiertes Testen},
  journal = {OBJEKTspektrum},
  year = {2010},
  number = {3},
  pages = {63-69},
  url = {http://www.sigs-datacom.de/fileadmin/user_upload/zeitschriften/os/2010/03/gueldali_OS_03_10.pdf}
}
Weißleder S, Güldali B, Mlynarski M, Törsel A-M, Faragó D, Prester F and Winter M (2011), "Modellbasiertes Testen: Hype oder Realität?", OBJEKTspektrum., October, 2011. (6), pp. 59-65.
Abstract: Manual test creation causes high costs. In comparison, model-based testing offers great advantages with regard to test automation, early defect detection, increased test coverage, efficient test design, and better traceability. However, introducing model-based testing involves investments whose return often appears unclear. Yet the literature already contains numerous experience reports on the successful introduction of model-based testing in various application domains. In this article we present an overview of some of these experience reports.
BibTeX:
@article{wgmtfpw11,
  author = {Stephan Weißleder AND Baris Güldali AND Michael Mlynarski AND Arne-Michael Törsel AND David Faragó AND Florian Prester AND Mario Winter},
  title = {Modellbasiertes Testen: Hype oder Realität?},
  journal = {OBJEKTspektrum},
  year = {2011},
  number = {6},
  pages = {59-65},
  url = {http://www.sigs-datacom.de/fachzeitschriften/objektspektrum/archiv/artikelansicht.html?tx_mwjournals_pi1[pointer]}
}
Güldali B, Mlynarski M and Sancar Y (2010), "Effort Comparison of Model-based Testing Scenarios", In Proceedings of the 3rd International Conference on Software Testing, Verification, and Validation Workshops. , pp. 28-36. IEEE Computer Society.
BibTeX:
@inproceedings{gms2010,
  author = {Baris Güldali AND Michael Mlynarski AND Yavuz Sancar},
  title = {Effort Comparison of Model-based Testing Scenarios},
  booktitle = {Proceedings of the 3rd International Conference on Software Testing, Verification, and Validation Workshops},
  publisher = {IEEE Computer Society},
  year = {2010},
  pages = {28-36},
  url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber}
}
Güldali B and Sauer S (2010), "Transfer of Testing Research from University to Industry: An Experience Report", In online Proc. of International TestIstanbul Conference 2010 (URL: www.testistanbul.org/presentations.html)., May, 2010. Turkish Testing Board.
Abstract: Software Quality Lab (s-lab) is an open multi-private-public partnership institute for knowledge and technology transfer. In s-lab, partners from industrial software development closely cooperate with research groups of the University of Paderborn. Together with partners from industry, s-lab develops and evaluates constructive and analytical methods and tools of software engineering for obtaining high-quality software. Testing plays an important role in analytical quality assurance within s-lab's activities. Thereby our main focus lies on the development of testing methods, test automation tools and test management concepts for the individual needs of the industrial partners. Because of the different requirements of the university and the industry, the cooperation involves some challenges, e.g. defining projects aiming at the commercial interests of the industry and addressing interesting research questions. In this paper, we give an overview of testing activities in s-lab and address targets and challenges of the cooperative work between industry and university. We also summarize the lessons learned during the numerous testing projects, especially in the domain of business information systems.
BibTeX:
@inproceedings{gs10,
  author = {Baris Güldali AND Stefan Sauer},
  title = {Transfer of Testing Research from University to Industry: An Experience Report},
  booktitle = {online Proc. of International TestIstanbul Conference 2010 (URL: www.testistanbul.org/presentations.html)},
  publisher = {Turkish Testing Board},
  year = {2010},
  url = {http://www.testistanbul.org/TestIstanbul%202010/Bar%C4%B1s%20Guldali.pdf}
}
Güldali B (2005), "Model Testing -- Combining Model Checking and Coverage Testing". Thesis at: University of Paderborn., May, 2005.
Abstract: Combining software testing with model checking has several advantages. There are a lot of approaches that combine these two techniques in a different manner. This master thesis extends a new combined approach, introduced in [8] and [9], that applies the specification-based test case generation concept from [4] to model checking, and proposes a concept for automation. In this approach, two models are assumed to be available: a specification model that describes the user requirements on the system behavior and a system model that describes the actual system behavior. The second model is model checked in order to verify the temporal logic properties generated from the specification model. The automation concept includes the generation of the temporal logic properties and their verification using a model checker. The thesis also describes how to apply the coverage-based test termination criterion to model checking as a completeness criterion.
BibTeX:
@mastersthesis{Gueldali2005,
  author = {Baris Güldali},
  title = {Model Testing -- Combining Model Checking and Coverage Testing},
  school = {University of Paderborn},
  year = {2005}
}
Güldali B, Rose M, Teetz A, Flake S and Rust C (2015), "Modellbasiertes Testen bei der Entwicklung einer IKT-Infrastruktur für Elektromobilität", Softwaretechnik-Trends., June, 2015. Vol. 35(1), pp. 1-5.
BibTeX:
@article{Guel15,
  author = {Baris Güldali AND Mirko Rose AND Alexander Teetz AND Stephan Flake AND Carsten Rust},
  title = {Modellbasiertes Testen bei der Entwicklung einer IKT-Infrastruktur für Elektromobilität},
  journal = {Softwaretechnik-Trends},
  year = {2015},
  volume = {35},
  number = {1},
  pages = {1-5},
  url = {http://is.uni-paderborn.de/pi.informatik.uni-siegen.de/stt/35_1/01_Fachgruppenberichte/TAV/5_SmartEM-Testen_Final.pdf}
}
Jovanovikj I, Grieger M, Güldali B and Teetz A (2016), "Reengineering of Legacy Test Cases: Problem Domain & Scenarios (to appear)", Softwaretechnik-Trends.
BibTeX:
@article{JGGTMMSM2016,
  author = {Ivan Jovanovikj AND Marvin Grieger AND Baris Güldali AND Alexander Teetz},
  title = {Reengineering of Legacy Test Cases: Problem Domain \& Scenarios (to appear)},
  journal = {Softwaretechnik-Trends},
  year = {2016}
}
Löffler R, Güldali B and Geisen S (2010), "Towards Model-based Acceptance Testing for Scrum", Softwaretechnik-Trends., August, 2010. Vol. 30(3), pp. 9-12.
Abstract: In agile processes like Scrum, strong customer involvement requires techniques to support requirements analysis and acceptance testing. Additionally, test automation is crucial, as incremental development and continuous integration need high efforts for testing. To cope with these challenges, we propose a model-based technique for documenting customer's requirements using test models. These can be used by the developers as requirements specification and by the testers for acceptance testing. We use light-weight and easy-to-learn modeling languages. Based on the test models, we generate test scripts for FitNesse and Selenium, which are well-known test tools in the agile community.
BibTeX:
@article{lgg2010,
  author = {Renate Löffler AND Baris Güldali AND Silke Geisen},
  title = {Towards Model-based Acceptance Testing for Scrum},
  journal = {Softwaretechnik-Trends},
  year = {2010},
  volume = {30},
  number = {3},
  pages = {9--12},
  url = {http://pi.informatik.uni-siegen.de/stt/30_3/01_Fachgruppenberichte/TAV/03_TAV30PapierLoeffler.pdf}
}
Mlynarski M (2010), "Holistic Model-Based Testing for Business Information Systems", In Proceedings of 3rd International Conference on Software Testing, Verification and Validation., April, 2010. , pp. 327-330. IEEE Computer Society.
Abstract: Growing complexity of today's software development requires new and better techniques in software testing. A promising one seems to be model-based testing. The goal is to automatically generate test artefacts from models, improve test coverage and guarantee traceability. Typical problems are missing reuse of design models and test case explosion. Our research work aims to find a solution for the mentioned problems in the area of UML and Business Information Systems. We use model transformations to automatically generate test models from manually annotated design models using a holistic view. In this paper we define and justify the research problem and present first results.
BibTeX:
@inproceedings{Mlynarski2010a,
  author = {Michael Mlynarski},
  title = {Holistic Model-Based Testing for Business Information Systems},
  booktitle = {Proceedings of 3rd International Conference on Software Testing, Verification and Validation},
  publisher = {IEEE Computer Society},
  year = {2010},
  pages = {327--330}
}
Mlynarski M, Güldali B, Späth M and Engels G (2009), "From Design Models to Test Models by Means of Test Ideas", In MoDeVVa '09: Proceedings of the 6th International Workshop on Model-Driven Engineering, Verification and Validation. New York, NY, USA , pp. 1-10. ACM.
Abstract: Model-Based Testing is slowly becoming the next level of software testing. It promises higher quality, better coverage and efficient change management. MBT shows two main problems of modeling the test behavior. While modeling test cases test designers rewrite most of the system specification. Further, the number of test cases generated by modern tools is often not feasible. In practice, both problems are not solved. Assuming that the functional design is based on models, we show how to use them for software testing. With so-called test ideas, we propose a way to manually select and automatically transform the relevant parts of the design model into a basic test model that can be used for test case generation. We give an example and discuss the potentials for tool support.
BibTeX:
@inproceedings{modevva09,
  author = {Michael Mlynarski AND Baris Güldali AND Melanie Späth AND Gregor Engels},
  editor = {L. Lúcio and S. Weißleder},
  title = {From Design Models to Test Models by Means of Test Ideas},
  booktitle = {MoDeVVa '09: Proceedings of the 6th International Workshop on Model-Driven Engineering, Verification and Validation},
  publisher = {ACM},
  year = {2009},
  pages = {1-10},
  doi = {10.1145/1656485.1656492}
}
Ellerweg J, Engels G and Güldali B (2008), "Modellbasierter Komponententest mit visuellen Kontrakten", In INFORMATIK 2008, Beherrschbare Systeme - dank Informatik, Band 1, Beiträge der 38. Jahrestagung der Gesellschaft für Informatik e.V. (GI). Bonn Vol. 133, pp. 211-214. Gesellschaft für Informatik (GI).
BibTeX:
@inproceedings{MOTES08,
  author = {Jens Ellerweg AND Gregor Engels AND Baris Güldali},
  editor = {H.-G. Hegering and A. Lehmann and H. J. Ohlbach and C. Scheideler},
  title = {Modellbasierter Komponententest mit visuellen Kontrakten},
  booktitle = {INFORMATIK 2008, Beherrschbare Systeme - dank Informatik, Band 1, Beiträge der 38. Jahrestagung der Gesellschaft für Informatik e.V. (GI)},
  publisher = {Gesellschaft für Informatik (GI)},
  year = {2008},
  volume = {133},
  pages = {211--214}
}
Schnelte M and Güldali B (2010), "Test Case Generation for Visual Contracts Using AI Planning", In INFORMATIK 2010, Beiträge der 40. Jahrestagung der Gesellschaft für Informatik e.V. (GI). Bonn , pp. (accepted for publication). Gesellschaft für Informatik (GI).
BibTeX:
@inproceedings{motes2010,
  author = {Matthias Schnelte AND Baris Güldali},
  title = {Test Case Generation for Visual Contracts Using AI Planning},
  booktitle = {INFORMATIK 2010, Beiträge der 40. Jahrestagung der Gesellschaft für Informatik e.V. (GI)},
  publisher = {Gesellschaft für Informatik (GI)},
  year = {2010},
  pages = {(accepted for publication)}
}
Oster S, Wübbeke A, Engels G and Schürr A (2010), "Model-Based Software Product Lines Testing Survey", In Model-Based Testing For Embedded Systems., July, 2010. , pp. 339-381. CRC Press.
BibTeX:
@incollection{oswuensc2010,
  author = {Sebastian Oster AND Andreas Wübbeke AND Gregor Engels AND Andy Schürr},
  editor = {P. Mosterman and I. Schieferdecker and J. Zander},
  title = {Model-Based Software Product Lines Testing Survey},
  booktitle = {Model-Based Testing For Embedded Systems},
  publisher = {CRC Press},
  year = {2010},
  pages = {339--381}
}
Mlynarski M, Güldali B, Weißleder S and Engels G (2012), "Model-Based Testing: Achievements and Future Challenges", In Advances in Computers., September, 2012. , pp. 1-39. Elsevier.
Abstract: Software systems are part of our everyday life and they become more complex day by day. The ever-growing complexity of software and high quality requirements pose tough challenges to quality assurance. The quality of a software system can be measured by software testing. However, if manually done, testing is a time-consuming and error-prone task. Especially test case design and test execution are the most cost-intensive activities in testing. In the previous 20 years, many automation tools have been introduced for automating test execution by using test scripts. However, the effort for creating and maintaining test scripts remains. Model-based testing (MBT) aims at improving this part by systematizing and automating the test case design. Thereby, test cases or automatable test scripts can be generated systematically from test models. MBT is already known for several years, but it currently gains a great momentum due to advanced tool support and innovative methodological approaches. This chapter aims at giving an overview of MBT and summarizes recent achievements in MBT. Experiences with using the MBT approach are illustrated by reporting on some success stories. Finally, open issues and future research challenges are discussed.
BibTeX:
@incollection{mgwe2012,
  author = {Michael Mlynarski AND Baris Güldali AND Stephan Weißleder AND Gregor Engels},
  editor = {Ali Hurson and Atif Memon},
  title = {Model-Based Testing: Achievements and Future Challenges},
  booktitle = {Advances in Computers},
  publisher = {Elsevier},
  year = {2012},
  pages = {1--39},
  url = {http://www.sciencedirect.com/science/article/pii/B9780123965356000016},
  doi = {10.1016/B978-0-12-396535-6.00001-6}
}
Güldali B, Mlynarski M, Wübbeke A and Engels G (2009), "Model-Based System Testing Using Visual Contracts", In Proceedings of Euromicro SEAA Conference 2009, Special Session on ``Model Driven Engineering''. Washington, DC, USA , pp. 121-124. IEEE Computer Society.
Abstract: In system testing the system under test (SUT) is tested against high-level requirements which are captured at early phases of the development process. Logical test cases developed from these requirements must be translated to executable test cases by augmenting them with implementation details. If manually done these activities are error-prone and tedious. In this paper we introduce a model-based approach for system testing where we generate first logical test cases from use case diagrams which are partially formalized by visual contracts, and then we transform these to executable test cases using model transformation. We derive model transformation rules from the design decisions of developers.
BibTeX:
@inproceedings{seaa09/mde,
  author = {Baris Güldali AND Michael Mlynarski AND Andreas Wübbeke AND Gregor Engels},
  title = {Model-Based System Testing Using Visual Contracts},
  booktitle = {Proceedings of Euromicro SEAA Conference 2009, Special Session on ``Model Driven Engineering''},
  publisher = {IEEE Computer Society},
  year = {2009},
  pages = {121-124},
  url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber}
}
Schumacher C, Güldali B, Engels G, Niehammer M and Hamburg M (2013), "Modellbasierte Bewertung von Testprozessen nach TPI NEXT® mit Geschäftsprozess-Mustern", In Software Engineering 2013., March, 2013. Vol. P-213, pp. 331-344.
Abstract: The quality of a software product under development is decisively influenced by the quality of the associated test process. The TPI® model is a reference model for assessing the quality of the test process; it determines the maturity level of test activities by means of checkpoints.
BibTeX:
@inproceedings{SGENH2013,
  author = {Claudia Schumacher AND Baris Güldali AND Gregor Engels AND Markus Niehammer AND Matthias Hamburg},
  editor = {Stefan Kowalewski and Bernhard Rumpe},
  title = {Modellbasierte Bewertung von Testprozessen nach TPI NEXT® mit Geschäftsprozess-Mustern},
  booktitle = {Software Engineering 2013},
  year = {2013},
  volume = {P-213},
  pages = {331--344},
  note = {Fachtagung des GI-Fachbereichs Softwaretechnik, 26. Februar - 1. März 2013, Aachen, Proceedings},
  url = {http://www.gi.de/service/publikationen/lni}
}
Soltenborn C and Engels G (2009), "Towards Test-Driven Semantics Specification", In Proceedings of the 12th International Conference on Model Driven Engineering Languages and Systems (MODELS 2009), Denver, Colorado (USA). Berlin/Heidelberg Vol. 5795, pp. 378-392. Springer.
Abstract: Behavioral models are getting more and more important within the software development cycle. To get the most use out of them, their behavior should be defined formally. As a result, many approaches exist which aim at specifying formal semantics for behavioral languages (e.g., Dynamic Meta Modeling (DMM), Semantic Anchoring). Most of these approaches give rise to a formal semantics which can e.g. be used to check the quality of a particular language instance, for instance using model checking techniques.However, if the semantics specification itself contains errors, it is more or less useless, since one cannot rely on the analysis results. Therefore, the language engineer must make sure that the semantics he develops is of the highest quality possible. To help the language engineer to achieve that goal, we propose a test-driven semantics specification process: the semantics of the language under consideration is first informally demonstrated using example models, which will then be used as test cases during the actual semantics specification process. In this paper, we present this approach using the already mentioned specification language DMM.
BibTeX:
@inproceedings{Soltenborn2009a,
  author = {Christian Soltenborn AND Gregor Engels},
  editor = {A. Schürr and B. Selic},
  title = {Towards Test-Driven Semantics Specification},
  booktitle = {Proceedings of the 12th International Conference on Model Driven Engineering Languages and Systems (MODELS 2009), Denver, Colorado (USA)},
  publisher = {Springer},
  year = {2009},
  volume = {5795},
  pages = {378--392},
  doi = {10.1007/978-3-642-04425-0_30}
}
von der Maßen T and Wübbeke A (2010), "Verteiltes Testen heterogener Systemlandschaften", In Proceedings of Software Engineering 2010 (SE2010). Bonn Vol. P-159, pp. 17-18. Gesellschaft für Informatik (GI).
BibTeX:
@inproceedings{vdmWuebb2010,
  author = {Thomas von der Maßen AND Andreas Wübbeke},
  editor = {G. Engels, M. Luckey, W. Schäfer},
  title = {Verteiltes Testen heterogener Systemlandschaften},
  booktitle = {Proceedings of Software Engineering 2010 (SE2010)},
  publisher = {Gesellschaft für Informatik (GI)},
  year = {2010},
  volume = {P-159},
  pages = {17--18}
}
Voigt H, Güldali B and Engels G (2008), "Quality Plans for Measuring the Testability of Models", In Proceedings of the 11th International Conference on Quality Engineering in Software Technology (CONQUEST 2008), Potsdam (Germany), pp. 353-370. dpunkt.verlag.
Abstract: For models used in model-based testing, the evaluation of their testability is an important issue. Existing approaches lack some relevant aspects for a systematic and comprehensive evaluation. Either they do (1) not consider the context of software models, (2) not offer a systematic process for selecting and developing the right measurements, (3) not define a consistent and common quality understanding, or (4) not distinguish between objective and subjective measurements. We present a novel quality management approach for the evaluation of software models in general that considers all these aspects in an integrated way. Our approach is based on a combination of the Goal Question Metric (GQM) and quality models. We demonstrate our approach by systematically developing a short quality plan for measuring the testability of software models.
BibTeX:
@inproceedings{VGE08,
  author = {Hendrik Voigt AND Baris Güldali AND Gregor Engels},
  editor = {I. Schieferdecker, S. Goericke},
  title = {Quality Plans for Measuring the Testability of Models},
  booktitle = {Proceedings of the 11th International Conference on Quality Engineering in Software Technology (CONQUEST 2008), Potsdam (Germany)},
  publisher = {dpunkt.verlag},
  year = {2008},
  pages = {353--370}
}
Wübbeke A (2008), "Towards an Efficient Reuse of Test Cases for Software Product Lines", In Proceedings of the 12th International Software Product Line Conference (SPLC 2008), Limerick (Ireland). Limerick, September, 2008. Vol. 2, pp. 361-368. Lero.
Abstract: Testing is a creative, complex and often time-consuming task within the development process of a software system. If this process is based on the Software Product Line (SPL) development paradigm, its complexity is increased by the additional dimension of variability. This variability is the basic principle for effective and efficient reuse in all disciplines and dimensions of the development process. To support this reuse in an optimal way, product-line-specific concepts and approaches are necessary. This contribution presents the state of the art in testing Software Product Lines and derives current challenges for the efficient design of executable test cases in this context.
BibTeX:
@inproceedings{Wuebbeke2008,
  author = {Andreas Wübbeke},
  editor = {S. Thiel, K. Pohl},
  title = {Towards an Efficient Reuse of Test Cases for Software Product Lines},
  booktitle = {Proceedings of the 12th International Software Product Line Conference (SPLC 2008), Limerick (Ireland)},
  publisher = {Lero},
  year = {2008},
  volume = {2},
  pages = {361--368}
}
Wübbeke A and Oster S (2010), "Verknüpfung von kombinatorischem Plattform- und individuellem Produkt-Test für Software-Produktlinien", In Proceedings of Produktlinien im Kontext (PIK2010). , pp. to appear.
Abstract: Through organized reuse of development artifacts, the Software Product Line paradigm promises fast, cost-efficient and high-quality development of similar products on the basis of a common product-line platform. This gives rise to new challenges for testing software product lines: on the one hand, the question of how the reusable, variable artifacts of the product-line platform should be tested, and on the other hand, how product-specific requirements can be taken into account during testing. Both questions must also be examined with regard to the effective specification and reuse of test cases with variability. To address these questions, this contribution outlines a combination of combinatorial testing of the product-line platform and the reuse of test cases for testing individual product requirements. Linking platform and product testing can increase the efficiency of the overall SPL test; this is achieved because requirements already covered by the platform test need only be considered in the product test under certain circumstances.
BibTeX:
@inproceedings{wuos2010,
  author = {Andreas Wübbeke AND Sebastian Oster},
  editor = {A. Birk, K. Schmid, M. Völter},
  title = {Verknüpfung von kombinatorischem Plattform- und individuellem Produkt-Test für Software-Produktlinien},
  booktitle = {Proceedings of Produktlinien im Kontext (PIK2010)},
  year = {2010},
  pages = {to appear}
}