Tutorials
The ICSE 2004 program includes fifteen tutorials, covering a wide range of important topics from research and practice. Tutorials will be held on Monday 24th and Tuesday 25th May, and will be either half a day or a full day in length.

T5: Concept analysis is a very general method for analyzing a binary relationship between arbitrary objects and attributes. Its output is a lattice of so-called concepts, which offers non-trivial insights into the structure underlying the original relationship. Each lattice node (concept) contains a maximal set of objects sharing common attributes. The hierarchy of concepts in the lattice can be read as the possibilities for generalizing or specializing a concept. Since many relationships emerge among the entities composing a software system, concept analysis has found a very productive application area in software engineering. Static and dynamic relationships among software components can be subjected to concept analysis to obtain information useful during maintenance, for program comprehension, and in the execution of reengineering tasks. The objective of this tutorial is to provide background and methodological knowledge on concept analysis and its usage in software engineering. This will be achieved by describing three recent, representative applications of concept analysis in detail: the reorganization of a legacy system into cohesive units, the inference of design patterns without any a priori information, and the decomposition of a software system into computational units (decomposition slices), which may be strongly dependent, weakly dependent, or independent of each other. Other applications, presented more succinctly, include the reengineering of class hierarchies, feature location by means of dynamic analysis, and the derivation of a software configuration structure.
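The core idea is compact enough to sketch in code. The following Python fragment is an illustrative sketch, not material from the tutorial: the object/attribute relation is invented, and the enumeration is deliberately naive. It finds all formal concepts of a small relation by closing candidate object sets under the two derivation operators (objects-to-shared-attributes and attributes-to-possessing-objects):

```python
from itertools import combinations

# Hypothetical incidence relation for illustration: which program
# entities (objects) use which global variables (attributes).
relation = {
    "f1": {"a", "b"},
    "f2": {"a", "b", "c"},
    "f3": {"c", "d"},
}

objects = set(relation)
attributes = set().union(*relation.values())

def common_attrs(objs):
    """Attributes shared by every object in objs (the 'intent')."""
    return set.intersection(*(relation[o] for o in objs)) if objs else set(attributes)

def common_objs(attrs):
    """Objects possessing every attribute in attrs (the 'extent')."""
    return {o for o in objects if attrs <= relation[o]}

# A concept is a maximal pair (extent, intent): enumerate candidate
# object sets and keep the pairs closed under the two operators.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(sorted(objects), r):
        intent = common_attrs(set(objs))
        extent = common_objs(intent)   # closure yields the maximal object set
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), "<->", sorted(intent))
```

The printed pairs, ordered by extent size, form the concept lattice described above; real tools use far more efficient lattice-construction algorithms, but this brute-force closure makes the definition of "maximal sets of objects sharing common attributes" concrete.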
Participants in this tutorial will acquire a basic background and methodological knowledge on concept analysis and its usage in software engineering. They will learn how to use concept analysis to tackle problems such as code restructuring, design pattern recovery, and impact analysis. Moreover, they will learn the core competencies needed to generalize the approach and apply it in new contexts. Biography: Paolo Tonella has published over 17 journal papers, 2 book chapters, and 28 conference papers. He serves on the Organizing Committees of events such as SCAM (Source Code Analysis and Manipulation), WSE (Web Site Evolution), and IWPC (International Workshop on Program Comprehension). He has edited a special issue of the Journal of Software Maintenance and Evolution on Web Site Evolution and is editing a special issue of the Software Quality Journal. He regularly reviews papers for journals such as the IEEE Transactions on Software Engineering and the Journal of Software Maintenance and Evolution. In 2004, Springer-Verlag will publish a book by P. Tonella and A. Potrich entitled "Reverse Engineering of Object Oriented Code".
Since its adoption as an industry standard in 1997, the Unified Modeling Language (UML) has seen widespread use in both industry and academia. This extensive experience has naturally led to demands for improvements and new capabilities. In September 2000, the Object Management Group (the industrial consortium that controls the UML standard) issued a request for proposals for the first major revision of UML, UML 2.0. This new version was conceived as the basis for the coming generation of model-based development methods, such as OMG's Model-Driven Architecture (MDA). The distinguishing characteristic of these methods is that their primary focus is on the definition and evolution of models rather than programs, with programs being automatically generated from such models. The combination of higher-level abstractions defined in UML and the use of automation provides the potential for a dramatic improvement in productivity and software reliability. Attendees of this half-day tutorial will learn the salient aspects of UML 2.0 from the perspective of one of its primary authors. The following specific topics will be covered:
- a general and critical discussion of the effectiveness of modeling techniques in software engineering;
- a critical review of the original UML standard based on the lessons learned from its application as well as from theoretical studies;
- the formal and informal requirements that drove development of the new version of UML (including an introduction to model-driven development methods);
- the overall structure and design philosophy behind UML 2.0;
- the conceptual foundation, or "infrastructure", used to define the UML modeling concepts and their semantics;
- new modeling capabilities for rendering software structures, such as those required for specifying software architectures;
- new modeling capabilities for describing complex behavior of individual objects and groups of collaborating objects;
- changes to existing UML concepts.
Biography:
In a variety of approaches to software development, software artifacts are used in multiple contexts or for various purposes. The differences lead to so-called variation points in the software artifact. In recent years, the amount of variability supported by a software artifact has grown considerably, and its management is emerging as a major challenge in the development, usage, and evolution of software artifacts. Examples of approaches where the management of variability is becoming a challenge include software product families, component-based software development, object-oriented frameworks, and configurable software products such as enterprise resource planning systems. The tutorial presents insights gained, techniques developed, and lessons learned in the European IST project ConIPF (Configuration in Industrial Product Families) and in other research performed by the software engineering research group at the University of Groningen. The tutorial first establishes the importance of software variability management and defines the concept of variability; it then discusses notational and visualization aspects, assessment of software artifacts for variability, design of architectures and components for variability, usage of variation points while configuring instantiated software artifacts, and, finally, some advanced issues, including variation versus composition. Biography:
Nowadays, practitioners recognize that describing the software architecture is a key enabler for a successful and long-lived software system. The description of the software architecture should document the essential design decisions that the designers and programmers made in the past to fulfill the requirements of the system. However, keeping the description up to date, or recovering it from an existing system, is not a simple task. Often there is a considerable gap between the actual implementation and the mental models of the designers. Answering basic questions becomes a challenge: Who is the owner of a particular component? What are the logical dependencies of a certain part of the system? What types of components are used in the system? Architecture reconstruction is the reverse engineering activity that aims at recovering high-level abstract views of a software system from the existing assets. The reconstruction is performed by examining the available artifacts (documentation, source code, experts), simulating the system behavior with dynamic analysis techniques, and inferring new architectural information that is not immediately evident. This tutorial covers software architecture reconstruction. It addresses, among others, the following questions: How do we identify architecturally significant information? How can we extract, analyze, and present it? What are the critical issues that have to be considered? How do we manage the reconstruction process in a product family? What tools and methods are available? The tutorial will address these and other questions relevant to the development of large and complex software systems. Biography:
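As a tiny illustration of the static-extraction step (a hypothetical sketch, not material from the tutorial), the following Python fragment recovers a module-dependency view, one elementary architectural abstraction, from the import statements in a piece of source code:

```python
import ast

def extract_imports(source):
    """Return the set of top-level module names imported by `source`.

    This is one elementary fact extractor: aggregated over a whole code
    base, such per-file dependency sets yield a module dependency graph,
    a common starting point for reconstructing architectural views.
    """
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return deps

# Hypothetical snippet under analysis.
code = "import os\nimport xml.dom\nfrom collections import deque\n"
print(sorted(extract_imports(code)))  # ['collections', 'os', 'xml']
```

Static extraction of this kind only reveals declared dependencies; as the abstract notes, it is typically complemented by dynamic analysis and by information from documentation and experts.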
Following the success of XML, the W3C envisions the Semantic Web (SW) as the next generation of the Web, in which data are given well-defined, machine-understandable semantics so that they can be processed by intelligent software agents. The SW can be regarded as an area emerging from the Knowledge Representation and Web communities. The Software Engineering community can also play an important role in SW development: modeling and verification techniques can be useful at many stages during the design, maintenance, and deployment of SW ontologies. We believe the SW will be a new research and application domain for software modeling techniques and tools. For example, recent research results have shown that UML, Z, and Alloy can provide modeling, reasoning, and consistency checking services for the SW. On the other hand, the diversity of software specification techniques, and the need for their effective combination, requires an extensible and integrated supporting environment. The success of the Semantic Web may have a profound impact on the web environment for software design methods, especially for extending and integrating different software modeling techniques. This full-day tutorial (with no specific prerequisites) is aimed at both industrial and academic participants. The tutorial will include:
Biography:
Software architects have techniques to deal with many quality attributes, such as performance, reliability, and maintainability. Usability, however, has traditionally been concerned primarily with presentation, and has not been a concern of software architects beyond separating the user interface from the remainder of the application. A usability-supporting architectural pattern (USAP) describes a usability concern that is not supported by separation alone. For each concern, a USAP captures the forces arising from the characteristics of the task and environment, the human, and the state of the software, to motivate an implementation-independent solution cast in terms of the responsibilities that must be fulfilled to satisfy those forces. Furthermore, each pattern includes a sample solution implemented in the context of an overriding separation-based pattern such as the J2EE Model-View-Controller. During the tutorial, the instructors will present the concept of a USAP and several examples. The instructors will also facilitate an exercise in which attendees develop their own USAP. Biographies: Bonnie John is an engineer who has worked both in industry and academia. She is an Associate Professor in the Human-Computer Interaction Institute and the Director of the Masters Program in HCI. Natalia Juristo is a professor of Software Engineering at the Computer Science School of the Universidad Politecnica de Madrid. She is director of the MSc in Software Engineering. Dr. Juristo is widely published in software engineering and has been a member of several editorial boards. Maribel Sanchez-Segura is on the faculty of the Computer Science Department at Carlos III University of Madrid. She has researched the relationship between usability and software engineering.
Object-oriented software requires reconsidering and adapting approaches to software test and analysis. Some traditional test and analysis techniques are easily adjusted to object-oriented software, but others require substantial revision, and yet others need to be introduced to cope with new problems of object-oriented software. This tutorial brings together process and technical aspects of testing object-oriented software in an overall coherent framework that considers what can be simply adapted from conventional test practices and what new and extended techniques are required. Topics include test planning, test design from specification and design documentation, adapting design and code inspection to object-oriented software development, intra- and inter-class structural testing, testing programs with exception handling and threading, test oracles for object-oriented programs, regression testing, and process improvement. Attendees will gain an understanding of the challenges of testing object-oriented software and of several techniques that have been devised for dealing with them. They will learn how to formulate a systematic approach to testing object-oriented software, considering both process and technical issues. This tutorial is designed for practitioners who may have some experience with object-oriented design and quality assurance, but who desire a broader view of testing and analysis of object-oriented software. It is also suitable for teachers, students, and researchers in software engineering who desire a coherent overview of problems and techniques for analysis and test of object-oriented software. A general knowledge of software engineering and some familiarity with software development practice will be assumed. Experience with one or more software quality assurance techniques will be helpful but not essential. Biographies: Dr. Young's papers have appeared in ACM Transactions on Software Engineering and Methodology, IEEE Transactions on Software Engineering, and the journal Software Testing, Verification and Reliability. Mauro Pezzè has been a professor of computer science at the University of Milano-Bicocca since 2000. Prior to that, he was on the faculty of the Politecnico di Milano. He received his degree in Computer Science (summa cum laude) from the University of Pisa and his PhD in Computer Engineering from the Politecnico di Milano. Dr. Pezzè's research interests include system and software engineering; software engineering environments; specification, analysis, testing, and validation of concurrent, real-time, and object-oriented systems; and legacy and hybrid systems. He serves on the Steering Committee of the European Joint Conferences on Theory and Practice of Software (ETAPS). He was general chair of ICECCS'97, chair of FASE 2003 and TACoS 2003, and program co-chair of ICECCS'96 and GT-VMT '01. His publications have appeared in ACM Transactions on Software Engineering and Methodology, IEEE Transactions on Software Engineering, IEEE Transactions on Control Systems Technology, the Journal of Real-Time Systems, the Journal of High-Integrity Systems, and Programming Languages.
The purpose of this full-day tutorial is to delineate and illustrate the correct use and interpretation of case studies. It will help software engineers identify and avoid common mistakes by giving them a solid grounding in the fundamentals of case studies as a research method. Using an equal blend of lecture and discussion, it aims to provide software engineers with a foundation for conducting, reviewing, and reading case studies. For researchers, this tutorial will provide a starting point for learning how to conduct case studies; they will be able to find, assess, and apply appropriate resources at their home institution. For reviewers, the tutorial will provide guidance on how to judge the quality and validity of reported case studies; they will be able to use the criteria presented in this tutorial to assess whether research papers based on case studies are suitable for publication, allowing them to raise the quality of publications and give appropriate feedback to authors. For practitioners, the tutorial will provide a better awareness of how to interpret the claims made by researchers about new software engineering methods and tools. Practitioners will also gain deeper insights into the roles they can play in designing and conducting case studies in collaborative research projects, will read case studies more effectively, and will be better able to identify results suitable for use in their workplace. Biographies:
Rapid change and increasing software criticality are driving successful development and acquisition organizations to balance the agility and discipline of their key processes. The emergence of agile methods in the software community is raising the expectations of customers and management, but the methods have shortfalls, and their compatibility with traditional plan-driven methods such as those represented by CMMI, ISO 15288, and UK Def Stan 00-55 is largely unexplored. Multiple sources of perplexity (inconsistent definitions and interpretations, overgeneralization of successes and failures, confusion of methods' use and misuse) complicate the search for clarity of understanding. This tutorial pragmatically examines aspects of agile and plan-driven methods through examples and case studies. We characterize the "home grounds" where each approach is most likely to succeed, identifying five critical dimensions that describe the agile/plan-driven spectrum. We present a risk-based method for developing balanced strategies that take advantage of the strengths and mitigate the weaknesses of both agile and plan-driven approaches, and that fit the objectives, constraints, and priorities of a particular project or organization. Step-by-step walkthroughs of several example projects show how the method is applied. Finally, we involve participants in a hands-on exercise: evaluating their current organizational balance of agility and discipline, identifying their likely directions of change, and developing strategies for evolving that balance to meet their future objectives and challenges. Biographies: Richard Turner is a research professor in engineering management and systems engineering at the George Washington University. In support of the U.S. Department of Defense, he is responsible for identifying and transitioning new software technology into software-intensive defense systems.
He is a co-author of "CMMI(r) Distilled" and of "Balancing Agility and Discipline: A Guide for the Perplexed," Addison-Wesley, 2004.