James D. Herbsleb (Carnegie Mellon University)
Beyond Computer Science
19 May @ 11:00 AM
St. Louis Ballroom A & B
Session Chair: Anthony Finkelstein
Biography: James D. Herbsleb is the A. Nico Habermann Associate Professor of Computer Science at Carnegie Mellon University. His research interests lie primarily at the intersection of software engineering and computer-supported cooperative work, focusing on such areas as geographically distributed development teams, open source software development, and, more generally, coordination in software engineering. He holds a JD (1980) and a PhD (1984) in psychology from the University of Nebraska, and an MS (1991) in computer science from the University of Michigan.
After completing a post-doctoral fellowship at the University of Michigan, he moved to Carnegie Mellon's Software Engineering Institute, where he led an effort to empirically validate the CMM for Software. He then joined the Software Production Research Department at Lucent Technologies, where he initiated and led the Bell Labs Collaboratory Project, which conducted empirical studies and designed collaborative technologies and practices for global software development.
Abstract: Computer science is necessary but not
sufficient to understand and overcome the problems we face in software
engineering. We need to understand not only the properties of the software
itself, but also the limitations and competences humans bring to the engineering
task. Rather than rely on commonsense notions, we need a deep and nuanced
view of human capabilities in order to determine how to enhance them. I
discuss what I regard as promising examples of cognitive and organizational
theories and propose research directions to develop new ways of representing
run-time behavior and ways of thinking about project coordination. I conclude
with observations on creating an interdisciplinary culture.
Stephen Fickas (University of Oregon)
Clinical Requirements Engineering
19 May @ 11:00 AM
St. Louis Ballroom A & B
Session Chair: Anthony Finkelstein
Biography: Stephen Fickas is a Full Professor in the Computer Science Department at the University of Oregon. His interest in software engineering began while he was a research assistant at USC/ISI in the early 1980s. Over the last ten years, he has focused on requirements engineering. During that time he helped establish the RE conference series and chaired IFIP Working Group 2.9 on Requirements Engineering. Beyond the work reported in this paper, Fickas has an interest in GORE (goal-oriented RE), requirements monitoring, and the place of formal methods in the RE process.
Abstract: In this paper, I make a case for the integration of requirements engineering (RE) with clinical disciplines. To back my case, I look at two examples that employ a clinical RE approach: first, introducing email into the life of a brain-injured individual, and second, introducing digital darkroom tools into my own life. The former uses a Brownfield approach, starting with an existing clinical process, cognitive rehabilitation, and then defining an RE process that fits it. The latter uses a Greenfield approach that postulates a new clinical RE process focused on the problems some of us have using digital darkroom tools.
Michael Twidale (U. of Illinois at Urbana-Champaign)
Silver Bullet or Fool's Gold: Supporting usability in open source software development
19 May @ 2:00 PM
St. Louis Ballroom A & B
Session Chair: Hausi Müller
Biography: Michael Twidale is an Associate Professor in the Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Before that he was a faculty member in the Computing Department at Lancaster University, UK. His research interests include computer-supported cooperative work, computer-supported collaborative learning, user interface design and evaluation, information visualization, museum informatics, how people cope with computers, scenario-based design, and the application of ethnographic methods to computer systems design and evaluation. All of these involve interdisciplinary techniques for better understanding the needs of end users and their difficulties with existing computer applications, as part of the process of designing more effective systems. Current projects include over-the-shoulder learning, an investigation into collaborative techniques for improving data quality in databases, and the usability of open source software.
Abstract: At first glance it can look like Open Source Software development violates many, if not all, of the precepts of decades of careful research and teaching in Software Engineering. One could take a classic SE textbook and compare the activities elaborated and advocated in its chapters with what is actually done in plain sight in the public logs of an OSS project on, say, SourceForge. For a Professor of Software Engineering this might make for rather depressing reading. Are the principles of SE being rendered obsolete? Has OSS really discovered Brooks' Silver Bullet? Or is it just a flash in the pan, or Fool's Gold?
In this talk I will mainly look at one aspect of Open Source development: the 'problem' of creating usable interfaces, particularly for non-technical end-users. Any approach involves the challenge of coordinating distributed, collaborative interface analysis and design, given that in conventional software development this is usually done in small teams and almost always face to face. Indeed, the methods in any HCI text assume same-time, same-place work and do not map to distributed work, let alone the looser mechanisms of OSS development. Instead, what is needed is a form of participatory usability involving the coordination of end users and developers in a constantly evolving redesign process.
Peter Ayton (City University, London)
How Software Can Help or Hinder Human Decision Making (and vice versa)
19 May @ 2:00 PM
St. Louis Ballroom A & B
Session Chair: Hausi Müller
Biography: Peter Ayton is a Professor of Psychology in the Department of Psychology at City University, London, which he joined in 1992. He holds a PhD in Psychology from University College London (1988).
His
research has been concerned with judgmental forecasting, human
judgement of uncertainty and human choice. Applied research on decision
making has been a particular interest and he has been a collaborator
on multidisciplinary research projects funded to
investigate expert reasoning with toxicological risks, public perceptions
of food risk, convicted prisoners' perceptions of
recidivism risks and software reliability.
He was a contributing author to the 2001 Assessment Report of the
Intergovernmental Panel on Climate Change. He has published numerous
papers in international journals and is a member of the International
Institute of Forecasters, the Society for Judgment and Decision Making,
the European Association for Decision Making and the Experimental Psychology
Society.
Abstract: Developments in computing offer experts in many fields specialised support for decision making under uncertainty. However, the impact of these technologies remains controversial. In particular, it is not clear how advice of variable quality from a computer may affect human decision makers.
Here I review research showing strikingly diverse effects of computer support on expert decision making. Decision support can systematically improve or damage the performance of decision makers in subtle ways, depending on the decision maker's skills, variation in the difficulty of individual decisions, and the reliability of advice from the support tool.
In clinical trials, decision support technologies are often assessed in terms of their average effects. However, this methodology overlooks the possibility of differential effects across decisions of varying difficulty, decision makers of varying competence, and computer advice of varying accuracy, as well as possible interactions among these variables. Research that has teased apart aggregated clinical trial data to investigate these possibilities has found that computer support was less useful for, and sometimes hindered, professional experts who were relatively good at difficult decisions without support; at the same time, the same support tool helped experts who were less good at relatively easy decisions without support. Moreover, inappropriate advice from the support tool could bias decision makers' decisions and, depending on the type of case, predictably improve or harm them.