
Proceedings

of the
6th European Conference on
Information Management
and Evaluation
University College Cork
Ireland
13-14 September 2012

Edited by
Dr Tadhg Nagle
University College Cork
Ireland












Copyright © The Authors, 2012. All Rights Reserved.
No reproduction, copy or transmission may be made without written permission from the individual authors.
Papers have been double-blind peer reviewed before final submission to the conference. Initially, paper abstracts
were read and selected by the conference panel for submission as possible papers for the conference.
Many thanks to the reviewers who helped ensure the quality of the full papers.
These Conference Proceedings have been submitted to Thomson ISI for indexing.
Further copies of this book and previous years' proceedings can be purchased from
http://academic-bookshop.com
CD version ISBN: 978-1-908272-66-9
CD version ISSN: 2048-979X
Book version ISBN: 978-1-908272-65-2
Book version ISSN: 2048-8912

Published by Academic Publishing International Limited
Reading
UK
+44-118-972-4148
www.academic-publishing.org

Contents
Paper Title | Author(s) | Page No.

Preface | iv
Committee | v
Biographies | vii

Proposal of Adaptability Indexes to Support Management of Engineering and Marketing Systems | Oswaldo Luiz Agostinho | 1
Knowledge Gaps in Post-Merger Integration of Software Maintenance Processes: A Case Study | Maria Alaranta and Eero Martela | 9
A Proposed Framework for Guiding the Effective Implementation of an Informal Communication System for Virtual Teams | Garth Alistoun and Christopher Upfold | 17
IS Consultants and SMEs: A Competence Perspective | Adrian Bradshaw, Paul Cragg and Venkat Pulakanam | 25
Developing a Framework for Maturing IT Risk Management Capabilities | Marian Carcary | 33
IS Evaluation in the Fusion View: An Emergence Perspective | Sven Carlsson and Olgerta Tona | 41
Where do Tablets fit in the Organization's Workstation Inventory? | Mitch Cochran and Paul Witman | 47
Classifying IT Investment Evaluation Methods According to Functional Criterion | Jacek Cypryjanski | 55
Academic Group and Forum on Facebook: Social, Serious Studies or Synergy? | Ruth de Villiers and Marco Cobus Pretorius | 63
Evaluating the Process of Delivering Compelling Value Propositions: The Case of Mobile Payments | Denis Dennehy, Frederic Adam and Fergal Carton | 74
Using Bricolage to Facilitate Emergent Collectives in SMEs | Jan Devos, Hendrik Van Landeghem and Dirk Deschoolmeester | 82
Determining the Maturity Level of eCommerce in South African SMEs | David Freeme and Portia Gumede | 91
Advancing GeoMarketing Analyses with Improved Spatio-temporal Distribution of Population at High Resolution | Sérgio Freire and Teresa Santos | 100
Activity Theory: A Useful Evaluation Methodology for the Role of Information Systems in Collaborative Activity | Audrey Grace | 109
Dealing With Uncertainty Through KM: Cases in Four Software SMEs | Ciara Heavin and Frederic Adam | 117
Analyzing Lessons Learned to Identify Potential Risks in new Product Development Projects | Vered Holzmann | 127
Evaluating Determinants for ERP use and Value in Scandinavia: Exploring Differences Between Danish and Swedish SMEs | Björn Johansson, Pedro Ruivo, Tiago Oliveira and Miguel Neto | 135
User Experience in Mobile Phones by Using Semantic Differential Methodology | Kalimullah Khan | 143
Challenges in Building a Community Health Information Exchange in a Complex Environment | Ranjan Kini | 151
Factors Inhibiting Recognition and Reporting of Losses From Cyber-Attacks: The Case of Government Departments in the Western Cape Province of South Africa | Michael Kyobe, Sinka Matengu, Proske Walter and Mzwandile Shongwe | 159
The Overall Process Taken by Enterprises to Manage the IaaS Cloud Services | Alina Mădălina Lonea, Daniela Elena Popescu and Octavian Proștean | 168
Sustainable Enterprise Architecture: A Three-Dimensional Framework for Management of Architectural Change | Thanos Magoulas, Aida Hadzic, Ted Saarikko and Kalevi Pessi | 178
Applying Structural Equation Modelling to Exploring the Relationship Between Organisational Trust and Quality of Work Life | Nico Martins and Yolandi van der Berg | 186
Identification and Governance of Emerging Ethical Issues in Information Systems: Empirical and Theoretical Presuppositions | Laurence Masclet and Philippe Goujon | 195
Breaking Consensus in IS Evaluations: The Agitation Workshop | John McAvoy, Tadhg Nagle and David Sammon | 203
Drivers and Challenges for Biometrics in the Financial Services | Karen Neville et al | 211
Did you get Your Facebook Session Completed? | Markku Nurminen | 219
Infusion of Mobile Health Systems in the NHS: An Empirical Study | Yvonne O'Connor, John O'Donoghue and Phillip O'Reilly | 226
An Exploratory Study of Innovation Intermediation in IS Education | Brian O'Flaherty and Joe Bogue | 234
Bringing Some Order to the Black Art of Innovation Measurement | Paidi O'Raghallaigh, David Sammon and Ciaran Murphy | 243
Using Focus Groups to Evaluate Artefacts in Design Research | Paidi O'Raghallaigh, David Sammon and Ciaran Murphy | 251
Realizing the Business Value of Service-Oriented Architecture: The Construction of a Theoretical Framework | Ronan O'Sullivan, Tom Butler and Philip O'Reilly | 258
The Identification of Service Oriented Architecture-Specific Critical Success Factors | Ian Owens and John Cunningham | 267
Treasure Hunting in the 21st century: A Decade of Geocaching in Portugal | Teresa Santos, Ricardo Mendes, António Rodrigues and Sérgio Freire | 273
Intelligent Decision Support Systems Development Based on Modern Modeling Methods | Elena Serova | 282
Integrating Sustainability Indicators in IT/IS Evaluation | Gilbert Silvius | 291
The art of Shooting the Moving Goal: Explorative Study of EA Pilot | Nestori Syynimaa | 302
Information Interaction in Terms of eCommerce | Kamila Tislerova | 307
Designing High Quality ICT for Altered Environmental Conditions | Daryoush Daniel Vaziri, Dirk Schreiber and Andreas Gadatsch | 314
An Analysis of the Problems Linked to Economic Evaluation of Management Support Information Systems in Poland on the Example of ERP/CRM Class Applications - Problem Analysis | Bartosz Wachnik | 326
Towards an Understanding of Enterprise Architecture Analysis Activities | Haining Wan and Sven Carlsson | 334
Moving Towards a Sensor-Based Patient Monitoring System: Evaluating its Impact on Data and Information Quality | Atieh Zarabzadeh, John O'Donoghue, Frederic Adam, Mervyn O'Connell, Siobhán O'Connor, Simon Woodworth, Joe Gallagher and Tom O'Kane | 342
Using the REA Approach to Modeling of IT Process Evaluation | Ryszard Zygala | 351

Non-Academic Papers | 361
A Process Model to Guarantee Information Quality in Elective Surgery Information Systems | Rita Cristóvão and Pedro Gomes | 363

PhD Papers | 373
Information Risks and Their Connection With Accounting | Marie Černá | 375
A Methodology for Competitive Intelligence Metrics | Rhiannon Gainor and France Bouthillier | 383
The use of Virtual Public Space and eCommunities to Kick-Start eParticipation: Timisoara, Romania | Monica Izvercianu and Ana-Maria Branea | 391
Strategic Management and Information Evaluation Challenges Facing Entrepreneurs of SMEs in ICT | Maroun Jneid and Antoine Tannous | 400
Method Engineering Approach to the Adoption of Information Technology Governance, Risk and Compliance in Swiss Hospitals | Mike Krey, Steven Furnell, Bettina Harriehausen and Matthias Knoll | 408
Collaborative Methodology for Supply Chain Quality Management: Framework and Integration With Strategic Decision Processes in Product Development | Juan Camilo Romero, Thierry Coudert, Laurent Geneste and Aymeric De Valroger | 418

Work in Progress Papers | 429
Recording our Professional Development Just Became Easier: Using a Learning Management System | Mercy Kesiena Clement-Okooboh | 431
Is More Data Better? Experiences From Measuring Academic Performance | Harald Lothaller | 435



Preface

The 6th European Conference on Information Management and Evaluation (ECIME) is hosted this year by University
College Cork in Ireland. The Conference Chair is Dr David Sammon and the Programme Chair is Dr Tadhg Nagle, both
from University College Cork.
ECIME provides an opportunity for individuals researching and working in the broad field of information management, including information technology evaluation, to come together to exchange ideas and discuss current research in the field. We hope that this year's conference will provide you with plenty of opportunities to share your expertise with colleagues from around the world.
The opening keynote address will be delivered by Professor Patrick Finnegan, University of New South Wales, Sydney, Australia, on the topic "ProSIS: An exploration of potential Pro-Social and Pro-Societal impact through Information Systems".
ECIME 2012 received an initial submission of 125 abstracts. After the double-blind peer review process, 43 academic papers, 1 non-academic paper, 6 PhD papers and 2 short work-in-progress papers were accepted for these Conference Proceedings. These papers represent research from around the world, including Austria, Belgium, Brazil, Canada, Czech Republic, Finland, Germany, Ireland, Israel, Kingdom of Saudi Arabia, Lebanon, Netherlands, New Zealand, Poland, Portugal, Romania, Russia, South Africa, Sweden, Switzerland, the UK and the USA.
We wish you a most interesting conference.
Dr Tadhg Nagle
Programme Chair
Dr David Sammon
Conference Chair
September 2012


Conference Executive
Dr Frank Bannister, Trinity College, Dublin
Professor Egon Berghout, Groningen University, Netherlands
Dr Ann Brown, City University Business School, London
Dr Walter Castelnovo, University of Insubria, Como, Italy
Dr Elena Ferrari, University of Insubria, Como, Italy
Mini Track chairs
Dr Maria Alaranta, Aalto University, Finland
Dr Jorge Ferreira, Nova University, Lisbon, Portugal
Dr Ciara Heavin, University College Cork, Ireland
Dr Karen Neville, University College Cork, Ireland
Leona O'Brien, University College Cork, Ireland
Ian Owens, Cranfield University, UK
Isa Santos, University of Porto, Portugal
Dr Elena Serova, St. Petersburg State University of Economics and Finance, Russia
Dr João Manuel R. S. Tavares, University of Porto, Portugal

Conference Committee
The conference programme committee consists of key people in the information systems community. The following
people have confirmed their participation:
Ademola Adesina (University of Western Cape, South Africa); Maria Alaranta (Helsinki University of Technology TKK, Finland); Saheer Al-Jaghoub (Al-Ahliyya Amman University, Jordan); Maria Ceu Alves (University of Beira Interior, Portugal); Hussein Al-Yaseen (Amman University, Jordan); Karen Anderson (Mid Sweden University, Sweden); Joan Ballantine (University of Ulster, UK); Frank Bannister (Trinity College Dublin, Ireland); Ofer Barkai (SCE - Sami Shamoon College of Engineering, Israel); David Barnes (Westminster Business School, University of Westminster, London, UK); Peter Bednar (Department of ISCA, Portsmouth University, UK); Egon Berghout (University of Groningen, The Netherlands); Milena Bobeva (Bournemouth University, Poole, UK); Ann Brown (CASS Business School, London, UK); Giovanni Camponovo (University of Applied Sciences of Southern Switzerland, Switzerland); Sven Carlsson (School of Economics and Management, Lund University, Sweden); Fergal Carton (University College Cork, Ireland); Walter Castelnovo (Università dell'Insubria, Como, Italy); Anna Cavallo (University of Rome "Sapienza", Italy); Sunil Choenni (University of Twente and Ministry of Justice, The Netherlands); Peter Clutterbuck (University of Queensland, Australia); Reet Cronk (Harding University, Texas, USA); Barbara Crump (Massey University, New Zealand); Renata Dameri (University of Genoa, Italy); Paul Davies (University of Glamorgan, UK); Miguel de Castro Neto (ISEGI, Universidade Nova de Lisboa, Portugal); Guillermo de Haro (Instituto de Empresa, Madrid, Spain); Francois Deltour (GET-ENST-Bretagne Engineering School, France); Dirk Deschoolmeester (Ghent University, Belgium); Jan Devos (Ghent University, Belgium); Eduardo Diniz (Escola de Administracao de Empresas de Sao Paulo, Fundacao Getulio Vargas, Brazil); Maria do Rosário Martins (Universidade Cape Verde, Portugal); Romano Dyerson (Royal Holloway University, London, UK); Alea Fairchild (Vesalius College/Vrije Univ Brussels, Belgium); Jorge Ferreira (e-Geo Geography and Regional Planning Research Centre / New University of Lisbon, Portugal); Graham Fletcher (Cranfield University / Defence Academy of the UK, UK); Elisabeth Frisk (Chalmers University of Technology, Göteborg, Sweden); Andreas Gadatsch (Bonn-Rhein-Sieg University of Applied Sciences, Germany); Ken Grant (Ryerson University, Toronto, Canada); Ginevra Gravili (Facolta Di Economia, Lecce, Italy); Paul Griffiths (The Birchman Group, Santiago, Chile); Kerstin Grundén (Trollhattan University, Sweden); Loshma Gunisetti (Sri Vasavi Engineering College, India); Petri Hallikainen (University of Sydney, Business School, Australia); Ciara Heavin (University College Cork, Ireland); Jonas Hedman (Copenhagen Business School, Denmark); Matthew Hinton (Open University Business School, UK); Vered Holzmann (Tel-Aviv University / Holon Institute of Technology, Israel); Grant Royd Howard (University of South Africa (UNISA), South Africa); Björn Johansson (Lund University, Sweden); Paul Jones (University of Glamorgan, UK); Ghassan Kbar (Riyadh Techno Valley, King Saud University, Saudi Arabia); Ranjan Kini (Indiana University Northwest, Gary, USA); Lutz Kirchner (BOC Information Technologies Consulting GmbH, Germany); Juha Kontio (Turku University of Applied Sciences, Finland); Jussi Koskinen (University of Jyvaskyla, Finland); Luigi Lavazza (Università degli Studi dell'Insubria, Italy); Przemysław Lech (University of Gdańsk, Poland); Sam Lubbe (University of South Africa, South Africa); Paolo Magrassi (Polytechnique of Milan, Italy); Ponnusamy Manohar (University of Papua New Guinea, Papua New Guinea); Nenad Markovic (Belgrade Business School, Serbia); Steve Martin (University of East London, UK); Milos Maryska (University of Economics, Prague, Czech Republic); John McAvoy (University College Cork, Ireland); Nor Laila Md Noor (Universiti Teknologi MARA, Malaysia); Annette Mills (University of Canterbury, Christchurch, New Zealand); Maria Mitre (Universidad de Oviedo, Spain); Mahmoud Moradi (University of Guilan, Rasht, Iran); Gunilla Myreteg (Uppsala University, Sweden); Mário Negas (Aberta University, Portugal); Karen Neville (University College Cork, Ireland); Emil Numminen (Blekinge Institute of Technology, Sweden); Tiago Oliveira (Universidade Nova de Lisboa, Portugal); Roslina Othman (International Islamic University Malaysia, Kuala Lumpur, Malaysia); Ian Owens (Cranfield University, Shrivenham, UK); Sevgi Özkan (Middle East Technical University, Ankara, Turkey); Shaun Pather (Cape Peninsula University of Technology, South Africa); Kalevi Pessi (IT University, Gothenburg, Sweden); Danilo Piaggesi (Fondazione Rosselli Americas, USA); Elias Pimenidis (University of East London, UK); Zijad Pita (RMIT University, Melbourne, Australia); Nayem Rahman (Intel Corporation, Aloha, USA); Hugo Rehesaar (NSW, Sydney, Australia); João Manuel Ribeiro da Silva Tavares (Faculdade de Engenharia da Universidade do Porto, Portugal); Dimitris Rigas (De Montfort University, UK); Narcyz Roztocki (State University of New York at New Paltz, USA); Hannu Salmela (Turku School of Economics and Business Administration, Finland); David Sammon (University College Cork, Ireland); Elsje Scott (University of Cape Town, Rondebosch, South Africa); Elena Serova (Graduate School of Management St. Petersburg State University, Russia); Yilun Shang (University of Texas at San Antonio, USA); Hossein Sharif (University of Portsmouth, UK); A.J. Gilbert Silvius (Utrecht University of Professional Education, The Netherlands); Riccardo Spinelli (Universita Di Genova, Italy); Darijus Strasunskas (Norwegian University of Science and Technology, Trondheim, Norway); Reima Suomi (University of Turku, Finland); Lars Svensson (University West, Trollhättan, Sweden); Jarmo Tähkäpää (Turku School of Economics and Business Administration, Finland); Torben Tambo (Aarhus University, Denmark); Llewellyn Tang (University of Reading, UK); Claudine Toffolon (Université du Mans - IUT de Laval, France); Geert-Jan Van Bussel (HvA University of Applied Sciences, Amsterdam, The Netherlands); Minhong Wang (The University of Hong Kong, Hong Kong); Anna Wingkvist (School of Computer Science, Physics and Mathematics, Linnaeus University, Sweden); Les Worrall (University of Coventry, UK); Tuan Yu (Kent Business School, University of Kent, Canterbury, UK).


Biographies

Conference Chair
Dr. David Sammon is a researcher/lecturer in Business Information Systems at University College Cork. His current research interests focus on the areas of conceptual data modeling, data/information management, theory and theory-building, and redesigning organisational routines through mindfulness. David has published extensively in international journals and conferences. He is an Associate Editor of the Journal of Decision Systems and co-author of the book Enterprise Resource Planning Era: Lessons Learned and Issues for the Future (2004).
Programme Chair
Dr. Tadhg Nagle is a lecturer in Business Information Systems at University College Cork. Coming from an industry background in financial services, he became a Business Analyst Lab Leader in the Digital Enterprise Research Institute (DERI). During this time his main focus was on researching innovation and the impact of emerging technologies on the Irish eLearning sector. From this he has continued his research in strategic innovation and the impact of disruptive technologies. Primarily exploring concepts such as ambidexterity, business models and emerging technologies (Web 2.0), he has published in numerous international journals and conferences.
Keynote Speaker
Professor Patrick Finnegan is Professor of Information Systems and Head of the School of Information Systems, Technology and Management at the University of New South Wales, Sydney. He is a Senior Editor of the Information Systems Journal and a Past-President of the Irish Chapter of the Association for Information Systems. He was awarded the 2011 Stafford Beer Medal with Philip O'Reilly for their work on developing a theory of electronic marketplace performance. His research on inter-organisational systems, e-business and open strategies has been published in the proceedings of leading IS conferences and in a variety of journals (including Information Systems Research, the European Journal of Information Systems, the Information Systems Journal, the Journal of Information Technology, the Journal of Strategic Information Systems, Information Technology and People, the International Journal of Electronic Commerce, DATABASE, and Electronic Markets).
Mini Track Chairs
Dr Maria Alaranta, D.Sc. (Econ. & Bus. Adm.), is a Visiting Research Scholar at CEPRIN, Georgia State University, USA, and a Senior Researcher (on research leave) in the Department of Industrial Engineering and Management, Aalto University, Finland. She has published a number of articles in reputable international peer-refereed journals and conferences, including Information Systems Frontiers, the Hawaii International Conference on System Sciences and the Academy of Management Conference. Dr. Alaranta has also carried out several large consulting projects in the area of IS.
Dr Jorge Ferreira is an assistant professor at the Geography and Regional Planning Department, Faculty of Social Sciences and Humanities (FCSH), Nova University of Lisbon. He develops his main research activities in e-Geo, Research Centre for Geography and Regional Planning, an R&D unit with annual Government funding within the University. His main research interests focus on the geography of the knowledge society, geographical information technologies and information diffusion. Regional economy and innovation are also transversal research areas crossed by his research.
Dr Ciara Heavin is a College Lecturer in Business Information Systems at University College Cork, Ireland. She also holds a BSc and MSc in Information Systems from UCC. Her main research interests include the development of the ICT industry, primarily focusing on Ireland's software industry and knowledge management in software SMEs.


Dr Karen Neville is a researcher and lecturer in Business Information Systems (BIS) at University College Cork (UCC), Ireland. Her current research interests focus on the areas of ISS and Compliance, Social Learning and Biometrics. Karen has published in international conferences and journals.

Leona O'Brien is a graduate of UCC, having completed a BCL and an LLM with honours. She also holds a BBus from Cork Institute of Technology. Her research interests are Financial Services Law and Policy, specifically banking law, corporate governance and regulatory frameworks. She has completed research for the Consumer Panel of the Central Bank of Ireland, Bank of Ireland, and ITPS, amongst others.


Ian Owens is a lecturer and researcher at Cranfield University. His research interests include information systems evaluation, information systems development methodologies, sense making and mindfulness, enterprise architecture, and service oriented architecture. He is also researching Enterprise Architecture tools and techniques for the Defence Science and Technology Labs (DSTL).

Isa Santos has a BSc degree (5 years) in Mechanical Engineering and an MSc degree in Industrial Design from the University of Porto. Currently, she is pursuing a PhD through the MIT | Portugal program. Her main area of interest is product development of medical devices.



Dr Elena G. Serova is an Associate Professor in the International School of Economics and Politics and also in the Informatics Department of St Petersburg State University of Economics and Finance, Russia. Her research interests include business models and modelling; information management and information systems; and the economics of innovation and project management.
Dr João Manuel R. S. Tavares graduated in Mechanical Engineering from the University of Porto, Portugal, in 1992. He obtained the MSc and PhD degrees in Electronic and Computer Engineering, in 1995 and 2001 respectively, from the same University. Since 2001 he has been Assistant Professor in the Department of Mechanical Engineering of the Faculty of Engineering of the University of Porto, and senior researcher and project coordinator at the Institute of Mechanical Engineering and Industrial Management. His main research areas include Medical Imaging, Biomechanics, Biomedical Engineering and New Product Development.
Biographies of Presenting Authors
Ana-Maria Branea is a PhD student at the Politehnica University of Timisoara, in the field of urban management, with a background in architecture and urbanism and six years' experience at the Research Centre for Urban Planning Timisoara, Faculty of Architecture.
Marian Carcary is a post-doctoral researcher working on an IT Capability Maturity Framework research project at the Innovation Value Institute, National University of Ireland, Maynooth. Marian previously worked as a member of faculty in the University of Limerick and Limerick Institute of Technology. She has an MSc by research and a PhD in IT evaluation.
Sven Carlsson is Professor of Informatics at Lund University School of Economics and Management. His current re-
search interests include: Business Intelligence, KM, and enterprise 2.0. He has published more than 125 peer-reviewed
journal articles, book chapters, and conference papers. His work has appeared in journals like JMIS, Decision Sciences,
and Information Systems Journal.
Dr Fergal Carton is a College Lecturer at University College Cork. Fergal's research domain is the integration of information technologies into management decision making. With 15 years' experience as a management consultant, Fergal has a primary degree in Computer Science (University College Dublin) and an MSc in Management from the European School of Management Studies (ESCP-EAP) in Paris.
Mercy Kesiena Clement-Okooboh is a Doctoral researcher at the University of Bolton, United Kingdom. Her research is focused on the effectiveness of different types of learning in the organisation, and her research interests include program and technology evaluation, action research methodology and inquiry-based learning. Currently, she is the Head of Learning and Development in Veolia Energy (Dalkia) Ireland.
Mitch Cochran has been the Information Systems Manager for the City of Monrovia for 14 years. He has also worked in the court system and for IBM. He is currently working on his Information Systems PhD at Claremont Graduate University and has completed master's degrees in Administration and Homeland Security. He holds a CISM certification.
Paul Cragg is Professor of Information Systems at the University of Canterbury, New Zealand. He received his PhD from Loughborough University, England. He teaches the management of IS across a range of courses from undergraduate through to PhD. His research is focused on IS in SMEs, and he has published in numerous journals.
Fred Creedon is a PhD candidate in the Department of Business Information Systems at University College Cork. He received his B.Bs in BIS and M.Bs in BIS. His primary areas of research include the introduction of IS-based early warning systems in a clinical environment and their impact on decision-making processes within that environment.
Rita Cristóvão, raised in Lisbon, Portugal, graduated in 2001 from Nova School of Business and Economics and completed a postgraduate degree in 2004 at INDEG Business School. In 2001 she worked as a management consultant at Deloitte in the healthcare sector. In 2004 she started to work in SIGIC and is nowadays the assistant coordinator of SIGIC in the Central Administration of the NHS.
Jacek Cypryjanski, Ph.D., is an associate professor in the Department of IT in Management, Faculty of Economics and Management at the University of Szczecin, Poland. His current research interests include the economic evaluation of information technology investments and multiple-criteria decision analysis methods.
Aymeric de Valroger has an MSc degree in Industrial Engineering from the Georgia Institute of Technology in the US and a BSc degree from the Ecole Centrale de Paris in France. He is an experienced supply chain process consultant and project manager, with a proven track record of successful supply chain process improvement projects over the last 10 years. He is a director of a consulting firm specialising in Supply Chain Management and Industrial Systems.
Ruth de Villiers is a research professor in the School of Computing at the University of South Africa. She has a PhD, and master's degrees in Information Systems and Computer-Integrated Education. Her research areas are Human-Computer Interaction and e-Learning. She conducts research and supervises postgraduate students in the research, development, and usability evaluation of e-learning environments.
Denis Dennehy is currently undertaking a PhD with the Business Information Systems department in UCC. His research explores the process of creating and sharing value in a mobile payment ecosystem by leveraging the business model concept and its associated processes. Prior to this he completed a research master's, motivated by his work in a developing country, and he also holds a BSc in Business Information Systems.
Jan Devos is a professor in Information Systems at Ghent University, Faculty of Engineering and Architecture, campus West. Devos has a PhD in Engineering, Industrial Management. His current research interests are IT governance in SMEs, cloud computing, e-business and IT security. He has published several articles on IT and SMEs and has been a speaker at international academic and business conferences.
Cathal Doyle is currently pursuing a PhD in Business Information Systems at University College of Cork (UCC), Ireland.
His research focuses on the emerging phenomena of social media and the established area of learning, where he is
attempting to develop the learning environments of 2020.
David Freeme is a lecturer at Rhodes University, Grahamstown, South Africa. He lectures IS Theory, eBusiness Strategy, Accounting Information Systems, and IS Management to graduates and undergraduates.
Sérgio Freire is a research assistant at e-GEO Research Centre for Geography and Regional Planning, Universidade Nova de Lisboa, Portugal. With a master's degree in geography from the University of Kansas (USA), he has worked at the National Center for Geographic Information (Portugal) and at the Portuguese Geographic Institute, researching land use and land cover mapping using satellite imagery and developing integrated forest fire risk methods.
Rhiannon Gainor is a PhD candidate at McGill University's School of Information Studies. A McConnell Foundation Fellow in 2010/2011, her research interests are knowledge management, competitive intelligence, and measurement. She has a Master's of Library and Information Studies, and a Master's of Arts in Humanities Computing from the University of Alberta.
Audrey Grace has over twelve years of industry experience and is a lecturer with Business Information Systems at
University College Cork. Her current research interests focus primarily on service innovation and the role of infor-
mation systems in the delivery of complex services; collaborative systems; learning management systems and
knowledge dissemination within an organisational context.
Aida Hadzic is a PhD student at the Department of Applied IT, IT University in Gothenburg. She has a systems science
background and a second level education in the field of IT management. Aida is studying issues related to manage-
ment and architectural design of both existing and future IT investments.
Martin Hill has twenty years' experience building commercial information systems, from Kenyan camel farmers to satellite operators. Five years developing these for disaster and frontline military operations have evolved into academic research into knowledge distribution across poorly connected communities.
Dr Matthew Hinton is Senior Lecturer in Information Management at the Open University Business School. His re-
search covers the impact of e-commerce on operations and the evaluation of ICT investments. He has published more
than 60 academic articles and two undergraduate teaching texts Introducing Information Management: the Business
Approach and Information Management in Context (2009).
Dr. Vered Holzmann, MBA, is an experienced practicing project manager with a distinguished track record in manag-
ing computer software development teams, implementation of quality assurance programs and management of fast
track construction projects. She is a faculty member in Holon Institute of Technology - H.I.T. and lectures at Tel-Aviv
University.
Björn Johansson holds a PhD in Information Systems Development from the Department of Management & Engineering at Linköping University. Currently he works as Associate Senior Lecturer at the Department of Informatics, Lund University. Previously he worked as a Post Doc at the Center for Applied ICT at Copenhagen Business School. He is a member of the IFIP Working Groups IFIP 8.6 and IFIP 8.9.
Maroun Jneid is a PhD candidate at Université Paris 8 with 13 years of professional experience in software project management and engineering process activities, and 11 years of experience lecturing on software engineering activities in the Antonine University's Faculty of Engineering, where he has also been North Campus director for the last 4 years.
Kalimullah Khan holds an MIT degree from Pakistan and an additional MSc (thesis result awaited) from the Blekinge Institute of Technology, Sweden. He has experience in teaching at university level and intends to pursue a research career (PhD).
Ranjan Kini, Ph.D., is a Professor of Information Systems. He is an active member of IACIS, ACM, DSI, and AIS. His current research interests include Electronic and Mobile Commerce, Ethics in Information Technology, and Health Information Technology. He is a Senior Editor of Information Systems Management Journal and is also on several editorial boards.
Mike Krey is currently a lecturer at Zurich University. Besides his lecturing he is involved in research projects in the field of Business Integration. His previous positions include Business Development Manager for IT Solutions in the Health Care Sector. Mike is currently doing his PhD at Plymouth University in the research field of IT Governance.
Michael Kyobe is Associate Professor of Information Systems. He holds a PhD in Computer Information Systems and an MBA. Michael worked as a project manager and IT manager for several years and has consulted extensively with the public sector and SMEs. His research interests include business-IT alignment, governance, computer security, ethics, knowledge management and SMEs.

Alina Madalina Lonea is a PhD student at Politehnica University of Timisoara (Romania), where she received her BSc in Systems Engineering and Computers Engineering. She holds a BSc in Computer Science from the University of the Highlands and Islands (UK). Her research interests include Cloud Management, Identity Access Management and Intrusion Detection Systems.
Harald Lothaller is employed at the University of Music and Performing Arts Graz (Austria) in two areas: he is head of the study centre, and he is responsible for statistics and data acquisition and is the core developer of the performance recording tool presented here. In addition, he is involved in teaching at other HEIs and in research projects.
Carolanne Mahony is a PhD candidate in Business Information Systems at University College Cork, Ireland. Her research interests include e-health, information behaviour, decision making and supply chain management. She previously worked in the electronics industry.
Nico Martins holds a PhD in Industrial Psychology and is with the Department of Industrial and Organisational Psychology at the University of South Africa, where he specialises in the field of Organisational Psychology. He has published several articles and presented papers at national and international conferences on organisational culture/climate, organisational trust, organisational diagnoses and research.
Laurence Masclet is doing doctoral research in the field of ethics and regulation of Information and Communication Technologies. Her research takes place in the LEGIT laboratory at Namur University (FUNDP), Belgium. She is a researcher for the project IDEGOV (Identification and governance of emerging ethical issues in IS), funded by the CIGREF foundation.
John McAvoy is a lecturer in the Department of Accountancy Finance and Information Systems at University College
Cork, Ireland. Prior to lecturing, John had a variety of roles in Information Systems, ranging from systems administra-
tion to managing software development teams. John has published in a variety of journals and conferences in the In-
formation Systems field, concentrating on project management, Agile Software Development, and small ISD teams.
Markku Nurminen is Professor Emeritus in Information Systems at the University of Turku, Finland. He was responsible for the introduction of the Master's Programme in Work Informatics. He has also worked for the Universities of Jyväskylä (Finland) and of Bergen and Oslo (Norway). Nurminen is also an active member of the IRIS (Information Systems Research in Scandinavia) Association.
Yvonne O'Connor is a PhD candidate in Business Information Systems at University College Cork, Ireland. Having a keen interest in the role Information Systems play in the healthcare domain, Yvonne's research explores individual infusion of mobile health systems in a healthcare environment.
Dr. Brian O'Flaherty has lectured in Information Systems in UCC for over 20 years and holds a PhD in Management Science from the University of Strathclyde, Glasgow. He currently focuses on teaching technology entrepreneurship with information systems. In 2010, Dr O'Flaherty received an award from Enterprise Ireland and Invest NI in recognition of his contribution to Technology Entrepreneurship education.
Paidi O'Raghallaigh is a researcher and part-time lecturer at University College Cork. His academic research primarily focuses on mapping the innovation models of organisations. He has over 15 years' experience as a business and information systems consultant and trainer.
Ronan O'Sullivan is an assistant lecturer in the Business Information Systems Department at University College Cork. He is currently working on his Ph.D., and his current research interests include service-oriented architecture, web services, strategic management and the business value of IT.
Marco Pretorius is the Usability Team Leader at e-Government for Citizens (Western Cape Government). Marco completed a master's degree in Computer Science and Information Technology at the Nelson Mandela Metropolitan University and is a Certified Usability Analyst. He has published in refereed journals and conference publications, and his current PhD research includes usability in e-Government.
Juan Camilo Romero is a PhD candidate in Industrial Systems at the University of Toulouse in France. He obtained his MSc degree from the same university and has a BSc degree in Industrial Engineering from the National University of Colombia. His research interests are focused on problem solving and knowledge management within the frame of collaborative supply chains. In parallel to his research project, Mr. Romero works as a supply chain consultant in the aeronautical industry.
Ted Saarikko is a PhD student at the Department of Applied IT, IT University in Gothenburg. He has a background in computer science as well as language studies prior to earning his master's degree in informatics. Ted is currently studying issues relating to enterprise architecture and sustainable development.
Teresa Santos is a post-doctoral researcher at e-GEO, Centre of Geographical and Regional Planning Studies, New University of Lisbon, Portugal. She has a master's in GIS from the Technical University of Lisbon (Portugal). Her current research is on 3D modelling of geographic data, such as very high resolution satellite imagery and LiDAR data, for studying urban sustainability and land planning.
Gilbert Silvius is a professor at HU University of Applied Sciences Utrecht in the Netherlands, where he is programme director of the Master of Informatics and Master of Project Management programmes. These innovative programmes link IT and project management to organizational change and the concepts of sustainability. In research, too, Gilbert focuses on sustainability in IT, projects and project management.
Aonghus Sugrue is a PhD candidate in the Accounting, Finance, & Information Systems department in the University
College of Cork (UCC), Ireland. His research interest focuses upon contemporary forms of computer-mediated com-
munication with particular emphasis on the forms of connectivity affecting individuals both within organisations and
society in general.
Nestori Syynimaa is the CIO of Anvia Plc, Finland. He holds a BBA from Seinäjoki UAS, Finland, and an MSc in Economics and Business Administration (Computer Science) from the University of Vaasa, Finland. He is also a part-time PhD student in the Informatics Research Centre, Henley Business School, University of Reading, UK.
Kamila Tislerova is a lecturer in Marketing at the Faculty of Economics, Technical University of Liberec, Czech Republic. Her fields of research are Customer Relationship Management, the value of/for the customer, and e-commerce, all also in an international dimension. She has been a visiting lecturer at universities in Scotland and China, has 15 years' experience in the corporate sector, and is interested in establishing cooperation between the business and academic spheres.
Olgerta Tona is a PhD Student at Lund University School of Economics and Management, department of Informatics.
Her current research area is Business Intelligence.
Stephen Treacy graduated from UCC with a Masters in Information Systems for Business Performance in 2010. Cur-
rently in year two of his PhD course, Stephen is investigating the factors affecting the potential business value of social
media, along with a proposal of a decision support framework to identify the key criteria involved therein.
Chris Upfold is a senior lecturer in the Department of Information Systems at Rhodes University, South Africa. He also
lectures in the Rhodes Business School and for Ernst and Young in South Africa and Mauritius. His areas of interest
and research are Information Security, Radio Frequency Identification (RFID), Project Management, Virtual Teams and
Corporate Communications.
Daryoush Daniel Vaziri received his Bachelor's degree from the Bonn-Rhein-Sieg University of Applied Sciences. Afterwards he worked for Telekom AG. Since 2010 he has been employed at the Bonn-Rhein-Sieg University as a research assistant, while simultaneously participating in the Master's programme in Information and Innovation Management. His research fields cover the accessibility of ICT.
Bartosz Wachnik specializes in MIS implementation. He is a member of senior management in Alna Business Solutions in Poland, a branch of a Lithuanian company that is one of the largest IT companies in the Baltic area. He has published more than 20 articles in professional and academic journals. He has cooperated with the University of Technology in Warsaw, where he obtained his PhD.
Haining Wan is a PhD student at the School of Information Systems and Management, National University of Defense Technology, and is now a visiting PhD student at Lund University School of Economics and Management. His current research interests include Enterprise Architecture and Business/IT alignment. He received a master's degree in Operations Research (Dec. 2008) and a bachelor's degree in Bridge Engineering (Jul. 2006).

Dr. Atieh Zarabzadeh is a post-doctoral researcher in the HISRC, UCC. She holds a PhD in health informatics from TCD. She trained as a Software Engineer at Azzahra University, Tehran, after transferring her IT/Mechanical Engineering BSc from the Australian National University. Her research interests include applications of novel technologies in healthcare.
Dr Ryszard Zygala is an assistant professor at the Wroclaw University of Economics, Poland. His research interests include information management, information systems economics, and information systems modeling and architecture. He has authored a book, Essentials of Business Information Management (in Polish), and contributed chapters to several books.
Proposal of Adaptability Indexes to Support Management
of Engineering and Marketing Systems
Oswaldo Luiz Agostinho
Department of Production Engineering, São Carlos Engineering School,
University of São Paulo, Brazil
agostinh@sc.usp.br

Abstract: The Responsiveness of Productive Systems has become a prerequisite to meet the demands of high
diversification of products being manufactured simultaneously, coupled with the reduction of lot sizes and the
reduction of life cycle of those products. Organizations that have portfolios with large numbers of products and
the ability to absorb shorter product life cycles become competitive against their competitors in the consumer
market. This article proposes Adaptability Indexes to support the management of the Systems and Processes of
Engineering in organizations, taking into account the need to reduce design time and the deployment of new
products to meet market demand for these products with increased diversification and reduction of their useful
life. The indexes of adaptability are defined by the relationship between time of development of a new product
and the development time of the product immediately subsequent, taking into account a sequence
of development of new products. The time of development of a new product comprises product design,
development of the means of manufacture and construction of the initial pilot batches. From the definition of the
adaptability index, engineering organizations can be classified by the following characteristics: a) regressive
adaptability, when the engineering organization needs successively greater times in the development of
successive products; b) neutral adaptability, when the engineering organization requires equivalent development
times for the development of successive products; c) progressive adaptability, when the engineering
organization successively needs shorter times in the development of successive products. Starting from these
adaptability characteristics of engineering organizations, the paper discusses the methodologies and
technologies to be applied to products and means of manufacturing development to increase the progressive
indexes of adaptability and reverse engineering organizations from a regressive adaptability index to a
progressive one. As a conclusion, it will propose that engineering organizations should have progressive
adaptability indexes in order to increase responsiveness of organizations that face competitiveness challenges
in environments of high diversification of products and constant reduction of product life cycle in global markets
in the 21st century. Paper relevance: Responsiveness becomes indispensable for organizations that compete in
environments of products with high diversification and reduction of life cycles. This paper presents adaptability
indexes to manage Engineering organizations, to increase their capacity of product and means of manufacturing
design, providing competiveness to attend the external conditions mentioned above.

Keywords: engineering systems, responsiveness, adaptability, products diversification
1. Introduction
Enterprises of the 21st century are facing competition on various fronts, such as reduction of product life time, increase of diversification, reduction of customer response time, and international competition. Organizations that have portfolios with large numbers of products and the ability to absorb shorter product life cycles become competitive against their competitors in the consumer market (Schonberger 2002).

The competitiveness conditions of an organization are dependent on external influences. One can
classify the external influences as coming from the market, scientific and technological developments,
society and environment regulations. These factors, acting simultaneously, induce a state of external
competitiveness. The external competitiveness is a reference for measuring the internal state of
competitiveness, supported by organizational, methodological and technological attributes applied
through the activities of the business processes of the organization. The application of technologies
on the various business processes, such as shop floor, engineering, support, planning, sales,
purchasing and others, constitute the technological strategy subordinated to the attributes and aligned
to the respective business strategies. Due to those factors, the Responsiveness of Productive
Systems has become a prerequisite to meet the demands of high diversification of products being
manufactured simultaneously, coupled with the reduction of lot sizes and the reduction of life cycle of
those products. Responsiveness becomes indispensable for organizations that compete in
environments of products with high diversification and reduction of life cycles. Given these needs, the Engineering and Marketing organizations carry the key responsibility of meeting the management challenge of paying attention to the outside market (Porter 2002).

2. Objective
This paper focuses on the responsiveness of Engineering and Marketing Systems, here named Adaptability, which provides the conditions for detecting external market needs for physical products or services and for carrying out their design and manufacture.

It also proposes adaptability indexes to manage Marketing and Engineering systems in order to increase their capacity for product design and means of manufacturing, thus providing the competitiveness to face the external conditions mentioned above.
3. Enterprise competitiveness and competitiveness attributes
Enterprise competitiveness can be understood as the capacity to continuously review its competition strategies, obtaining a favorable position in the markets where it operates (Agostinho 2012). Consequently, it provides the conditions to generate higher profits than the average of companies in its market, operating in a sustainable way, with quality, speed and flexibility. It must also satisfy stakeholders and comply with environmental requirements. One can divide competitiveness into two aspects: a) external competitiveness, where the enterprise must have the capacity to provoke in consumers the desire to switch from the company where they traditionally buy a product to the new one; b) internal competitiveness, which can be understood as the set of harmonic and synergic methodological and technological factors that induce the external market perception to change products from the previous organization to the new one. This state of competitiveness is obtained as a consequence of the organization's organizational and technological behaviour, defined as competitiveness attributes. These characteristics of the management system are expressed by a continuous, comprehensive and integrated practice of methodologies, as part of organizational and technological management. The competitiveness attributes and sub-attributes are classified according to their range of application:

Market driven attributes: Proper characteristics of the management system of the organization, expressed by the continuous, comprehensive and integrated use of methodologies driven to provide the conditions for attending to the needs of the consumer markets. The market driven attributes are deployed in the following sub-attributes:

a) Innovation: The process of technological innovation comprises a complex set of activities that
transform ideas and scientific knowledge into physical reality and real world applications. It also
integrates existing technology and inventions to create a new product, service or process. Innovative
companies look for ideas of any kind, and provide an organizational culture that supports their
development into viable business programmes. The innovation sub-attribute can be split into: a) focus on consumer needs; b) enterprise-wide quality; c) enterprise-wide planning; d) utilization of core knowledge; e) continuous upgrading of products and processes.

b) Agility: Enterprise agility can be reached when the technological and administrative infrastructure is
flexible, and can be rapidly created, configured and rearranged, meeting external business needs.
Among others, this attribute facilitates a superior time to market for new business initiatives. The agility sub-attribute can be split into: a) independence from technology; b) re-use of solutions;
c) adequate business infrastructure; d) response capacity of the whole system, not just isolated
processes; e) existence and application of strategies; f) extended enterprise business architecture
that includes supply chain and marketing processes.

c) Responsiveness: The characteristic of the organization to respond to the external needs in an
adequate time, also called time to market. These characteristics include pro-active search of
feedback from customers and suppliers, the ability to be a rapid cycle organization, flexible in thinking
and doing things in short periods of time. The responsiveness sub-attributes can be split into: a) business process optimization, with elimination of the activities that do not add value to the product or to the business; b) adoption of proven technology; c) business strategies integration; d) response time of the Engineering and Marketing processes; e) optimization of the supply chain network.

Organization driven attributes: Proper characteristics of the organization management system
expressed by continuous, comprehensive and integrated methodologies driven to provide conditions
for organizational structures to take note of the external needs that determine the external
competitiveness. The organization driven attributes determine the way that the administrative and management business processes are applied throughout the organization itself. The organization driven attributes can be split into: a) coexistence between the hierarchical and business process organization models; b) information infrastructure; c) knowledge management; d) organization controls.

Human capital driven attributes: Proper characteristics of the management system of the organization
expressed by continuous, comprehensive and integrated methodologies for the development of
programmes of selection, education, and training of the human capital that supports the organization
to reach an organizational level that leads to competitiveness status. The human capital driven attributes can be split into: a) teamwork; b) project clusters; c) human networking; d) hierarchy versus participation.
4. Organization responsiveness
The objective of this paper is to show that the development of the responsiveness of the organization
can be split into two characteristics, the so-called adaptability of the Engineering and Marketing
Systems, and the flexibility of the Shop Floor systems.

Organization Responsiveness can be defined as the Manufacturing system capacity to implement
changes in adequate times that meet the needs of the consumer market, mainly in the development of a
new product or simultaneously manufacturing different parts or quantities in the shop floor environment
(Agostinho 2011). This paper deals with the Responsiveness of the Marketing and Engineering systems in meeting the external market needs of product diversification, reduction of product life cycles, and internationalization of competition.
5. Adaptability of marketing and engineering systems
The management approach to the Marketing and Engineering Systems must take into consideration the need to achieve competitiveness attributes to meet the external demands of reduced useful life of products, increased product diversification, internationalization of markets, and reduced time to reach markets with new products, mainly since the 2000s (El Maraghy 2007). Following the approach above, this paper focuses on the responsiveness responsible for detecting external market needs for physical products or services and for their product design, manufacturing specifications development and pre-production. The Adaptability of the Marketing and Engineering Systems is thus defined as the capacity of the Marketing and Engineering Systems to introduce new products simultaneously and successively in adequate times, to meet the needs of the consumer market (Agostinho 2011).

To better develop the adaptability concept, it will be necessary to define the activities related to the
introduction of new products by the organization. Table 1 details the main activities and sub-activities
related to the implementation of a new product, for the functions of Marketing, Engineering and
Production (Chang, 2005).
Table 1: Main activities of product development (Chang 2005)

Function | Activity | Activity detail
Marketing | Product acceptance research | Consumer market needs; definition of product concept; implementation time frame
Engineering | Product design | Conceptual design; dimensioning; parts detail; functional tests; reliability tests
Engineering | Manufacturing specifications development | Manufacturing routing; manufacturing process detail; tooling and machine tools; manufacturing times
Production/Engineering | Pre-production | Tooling try-out; manufacturing processes try-out; pilot run
For each activity there is an associated execution time and a corresponding angle (Figure 1).
[Figure 1 plots the activities of a new product development (product acceptance research, product design, manufacturing specifications development and pre-production) against total product development time, with activity times Tpa, Tp, Tmf and Tpr, total time Tf, and corresponding angles αpa, αp, αmf, αpr and αf over the total set of activities Af.]
Figure 1: Activities of a new product development (Agostinho 2011)
Tpa: time of product acceptance research
Tp: time of product design
Tmf: time of manufacturing specifications development
Tpr: time of pre-production
Tf: total time of product development
αpa: angle of product acceptance research
αp: angle of product design
αmf: angle of development of manufacturing specifications
αpr: angle of pre-production
αf: angle of total product development

It can be observed that the angles
pa
,
p
,
mp
,
pr
, and consequently,
f
, are functions of the
relationship activities x time per activity. Each activity and respective time can have a different behaviour,
depending on the degree of organization of Marketing, Product Engineering and Manufacturing
Engineering Systems.

The total product implementation time Tf is expressed as:

Tf = Tpa + Tp + Tmf + Tpr
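As a minimal numeric illustration (with hypothetical activity times, in months), the total time is simply the sum of the four activity times:

```python
# Hypothetical activity times for one product development cycle, in months.
activity_times = {
    "product_acceptance_research": 3.0,   # Tpa
    "product_design": 6.0,                # Tp
    "manufacturing_specifications": 4.0,  # Tmf
    "pre_production": 2.0,                # Tpr
}

# Total product implementation time: Tf = Tpa + Tp + Tmf + Tpr
t_f = sum(activity_times.values())
print(f"Tf = {t_f} months")  # Tf = 15.0 months
```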

5.1 Adaptability indexes
In order to provide management conditions for the Marketing and Engineering organizations, and to
drive them to meet the organization's need to be aware of external market demands, this work defines
numerical indexes of adaptability. Suppose that an organization needs to launch new products, from
product 1 up to product n, in a defined period of time. After the development of a product i, taken as
a reference, products i+1 and i+2 will be developed successively, with the same activities Af. This can
be observed in Figure 2:
[Figure: total development time versus activities Af for products i, i+1 and i+2, with development times Tfi, Tfi+1 and Tfi+2.]
Figure 2: Adaptability of engineering and marketing systems (Agostinho, 2011)
In Figure 2 the following can be seen:

Tfi - time to develop product i
Tfi+1 - time to develop product i+1
Tfi+2 - time to develop product i+2

where Tfi+1 < Tfi and Tfi+2 > Tfi.

The adaptability of the Engineering and Marketing systems to introduce product i+1 after the
introduction of product i, or to introduce product i+2 after the introduction of product i, is expressed by:

ai,i+1 = Tfi / Tfi+1 > 1
ai,i+2 = Tfi / Tfi+2 < 1

Generalizing, supposing that an Engineering and Marketing organization needs to introduce a
sequence of products, from product 1 to product n, the adaptability of the Marketing and Engineering
systems to introduce a product i+1 after the introduction of product i is defined as:

ai,i+1 = Tfi / Tfi+1, for i = 1, ..., n-1
The relationship above numerically defines the adaptability indexes of the Marketing and Engineering
systems, which vary in the range:

∞ > ai,i+1 ≥ 0
5.2 Analysis of marketing and engineering systems using adaptability indexes
Analyzing the relationship above, one can classify:
Neutral adaptability: neutral adaptability occurs when ai,i+1 = 1, i.e. the Engineering and
Marketing systems introduce a new product in times equivalent to those of the previous product, or
Tfi+1 = Tfi. This situation occurs predominantly in organizations that have products with very stable
designs and long useful lives, low diversification, and low external demands from the consumer
market. Normally these organizations operate in monopolistic markets, or with very stable products in
their portfolios.

Regressive adaptability: regressive adaptability occurs when ai,i+1 < 1, i.e. the Engineering and
Marketing systems introduce a new product with times Tfi+1 higher than the times of the previous
product (Tfi). The limit value ai,i+1 = 0 occurs when the Marketing and Engineering systems need a
time tending to infinity to introduce a new product. This limit situation shows complete inadequacy of
the Marketing and Engineering systems to face the challenges of introducing new products. It occurs
in organizations where the Engineering and Marketing systems are poorly organized, with very poor
product design criteria and respective databases, processes with activities that do not add value, lack
of grouping of parts into families, lack of methodologies for activity superposition such as Concurrent
Engineering, and random use of automation such as CAD, CAPP, CAM, local networks, etc. This
level of Marketing and Engineering systems will not allow the organization to compete in aggressive
markets, but only in niche markets, where competitiveness is less necessary and the product brand
guarantees sales, even under poor conditions.

Progressive adaptability: progressive adaptability occurs when ai,i+1 > 1, i.e. the Marketing and
Engineering systems introduce a new product with times Tfi+1 smaller than the times of the previous
product (Tfi). The limit value occurs when ai,i+1 = ∞, i.e. the Marketing and Engineering systems
need times close to zero to introduce a new product after the introduction of the previous one. In this
limit situation, new products are introduced almost instantaneously, demonstrating that the Marketing
and Engineering systems are adequate and able to respond to any challenge coming from the
consumer market. This situation occurs in organizations where the Engineering and Marketing
systems are very well organized, both in their product design criteria and in the type of organization
that is implemented. The next section details the main criteria for providing a constant increase of the
adaptability indexes. Progressive adaptability provides conditions to keep the organization at a
competitiveness level adequate to face the demand challenges of the aggressive external market
conditions of the 21st century.
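To make the classification concrete, the following minimal sketch (with hypothetical development times; the index definition follows the reconstruction ai,i+1 = Tfi/Tfi+1 given above) computes the adaptability index for a sequence of products and classifies each transition:

```python
def adaptability_index(t_prev: float, t_next: float) -> float:
    """Adaptability index a_{i,i+1} = Tfi / Tfi+1."""
    return t_prev / t_next

def classify(a: float, tol: float = 1e-9) -> str:
    """Classify an index as neutral (=1), progressive (>1) or regressive (<1)."""
    if abs(a - 1.0) < tol:
        return "neutral"
    return "progressive" if a > 1.0 else "regressive"

# Hypothetical total development times Tf (months) for products 1..4.
times = [18.0, 15.0, 15.0, 20.0]

for i in range(len(times) - 1):
    a = adaptability_index(times[i], times[i + 1])
    print(f"a_{i+1},{i+2} = {a:.2f} -> {classify(a)}")
# a_1,2 = 1.20 -> progressive
# a_2,3 = 1.00 -> neutral
# a_3,4 = 0.75 -> regressive
```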
6. Criteria to increase adaptability
Current consumer-market conditions, in which products change constantly and have increasingly
shorter life times, are very demanding with respect to maintaining competitiveness. To meet this
demand, the Marketing and Engineering systems must maintain progressive adaptability indexes. So,
in order to keep ai,i+1 > 1, the following management and technological conditions must be
accomplished in product design and manufacturing specifications development.
6.1 Reduction of complexity
The reduction of complexity of designed products and manufacturing specifications contributes to
increased adaptability because simpler projects can be optimized and reproduced, simplifying the
product database of standard parts. The reduction of product complexity can be obtained through a
reduction in the number of parts, the simplification of part forms to facilitate the manufacturing routing
and manufacturing process, and careful evaluation of the specifications of dimensional, geometric
and surface roughness tolerances, materials, and surface and heat treatments. On the manufacturing
side, the reduction of complexity is achieved through the standardization of manufacturing routes, the
reduction of operations derived from the evaluation of part specifications, the standardization of
tooling design, and the reduction or elimination of unnecessary operations caused by choosing
inadequate machine tools that do not meet the specifications of dimensional, geometric or
metallurgical deviations (Suh, 2008).
6.2 Project parts standardization
The standardization of components across several projects, with re-utilization of existing parts, greatly
reduces the development time of a new project (Tarek, 2003). The application of the Group
Technology methodology, supported by an efficient database of similar parts, makes possible:
Grouping the parts into families, to increase project rationalization;
Retrieving information related to existing projects, to be applied to new projects;
Standardizing specifications, characteristics and materials;
Upgrading products through the elimination of duplicate drawings;
Developing standard routings and manufacturing processes derived from part families;
Standardizing tooling derived from the standard manufacturing processes.
A minimal sketch of such part-family grouping is given below.
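As an illustration of the Group Technology idea, the following sketch (with hypothetical part numbers and attributes) groups parts into families by shared form and material, the kind of lookup an efficient parts database would support:

```python
from collections import defaultdict

# Hypothetical parts catalogue: (part number, basic form, material).
parts = [
    ("P-101", "rotational", "steel"),
    ("P-102", "rotational", "steel"),
    ("P-103", "prismatic", "aluminium"),
    ("P-104", "rotational", "aluminium"),
    ("P-105", "prismatic", "aluminium"),
]

# Group Technology: parts sharing form and material belong to one family,
# so existing designs, routings and tooling can be reused for new projects.
families = defaultdict(list)
for number, form, material in parts:
    families[(form, material)].append(number)

for family, members in families.items():
    print(family, "->", members)
# ('rotational', 'steel') -> ['P-101', 'P-102'], etc.
```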
6.3 Optimization of the marketing and engineering systems
Looking at the Marketing and Engineering systems, the reduction in the number of activities
necessary to develop a new product can be obtained in two different ways:

a) Reduction of the sub-activities that comprise the Engineering and Marketing business processes,
through critical analysis of their real need. The applied methodology is Business Process
Reengineering, with the elimination of activities that do not add value to the product or business. As
an example: is the activity of checking drawings really necessary? Are there ways to increase project
robustness by avoiding unnecessary checking of drawings? To achieve these objectives,
methodologies for the standardization and simplification of product designs and manufacturing
process sheets must be applied, such as Group Technology and Design for Manufacturing, among
others.

b) Superposition or elimination of activities: the elimination of activities can be effective after the
application of item 6.1. The other alternative is the superposition of activities, changing from serial
activities to parallel or simultaneous activities by applying the Concurrent Engineering methodology.

It can be observed that, even though the individual times for product design and manufacturing
specifications development may be greater when Concurrent Engineering is applied, the total product
development time will be shorter, due to the superposition of the activities (Chang, 2005).
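A minimal numeric sketch of this effect (the durations, overhead and overlap figures are hypothetical): overlapping the two activities lengthens each one slightly but shortens the total:

```python
# Hypothetical serial durations, in months.
t_design, t_mfg_spec = 6.0, 4.0
serial_total = t_design + t_mfg_spec  # 10.0 months

# With Concurrent Engineering (assumed figures): each activity takes 20%
# longer due to coordination overhead, but manufacturing-specification work
# starts when design is 50% complete (superposition of activities).
overhead, overlap_start = 1.2, 0.5
t_design_ce = t_design * overhead      # 7.2 months
t_mfg_spec_ce = t_mfg_spec * overhead  # 4.8 months
concurrent_total = max(t_design_ce, overlap_start * t_design_ce + t_mfg_spec_ce)

print(serial_total, concurrent_total)  # 10.0 vs 8.4 months
```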
6.4 Reduction of time per activity
The reduction of the time per activity is achieved through the utilization of information technology
resources such as CAD, CAE, CAPP, CAM, stress-modelling software, etc. These computational
resources should be applied after the previously described criteria for increasing adaptability have
already been applied: reduction of complexity, project parts standardization, and optimization of the
Marketing and Engineering organizations through Business Process Reengineering and Concurrent
Engineering.
7. Conclusions
Enterprises in the 21st century face competition on various fronts, such as reduction of product life
times, increased diversification, reduction of customer response times, and the internationalization of
competition. Organizations that hold portfolios with large numbers of products and the ability to
absorb shorter product life cycles gain an advantage over their competitors in the consumer market.
Marketing and Engineering systems are of great importance in maintaining the conditions to compete
in these constantly changing environments, enabling organizations to achieve the necessary time to
market for new products (Sluga et al., 2008). In order to provide management conditions for the
Marketing and Engineering organizations, and to drive them to meet the organization's need to be
aware of external market needs, this work proposes numerical indexes of adaptability.
In order to maintain the conditions of competitiveness, mainly with regard to attention to external
market needs such as reduction of time to market, increased diversification, and reduction of the
useful life of products, adaptability must be kept progressive, i.e. ai,i+1 > 1. To keep it in this condition,
the several methodologies described above must be applied in the business processes of both the
Marketing and Engineering systems. Monitoring the adaptability indexes with the purpose of
maintaining progressive adaptability is a pre-condition for competitiveness, continually providing the
consumer market with new products with the necessary diversification.
References
Agostinho, O.L. (2011) Manufacturing Systems (in Portuguese), Faculdade de Engenharia Mecânica,
Universidade Estadual de Campinas, São Paulo, Brasil.
Agostinho, O.L., Batocchio, A. and Silva (2012) Proposal of Methodology to Balance, Correlate and Align
Technology and Business Strategies to Competitiveness Organization Attributes, PMA 2012 Conference,
Cambridge, UK.
Agostinho, O.L. (2008) Technology and Business Strategies: Methodology for Correlation Using Knowledge
Management, International Congress on Management of Technology (IAMOT), Nice, France.
Chang, C.M. (2005) Engineering Management: Challenges in the New Millennium, Pearson/Prentice Hall, NJ,
USA.
El Maraghy, W.H. and Urbanic, R.J. (2007) Modelling of Manufacturing Systems Complexity, Annals of the CIRP.
European Institute for Technology and Innovation Management (2004) Bringing Technology and Innovation into
the Boardroom, Palgrave Macmillan, London, UK.
Fine, C.H. (2001) Clockspeed: Winning Industry Control in the Age of Temporary Advantage, Perseus Books,
Boston, USA.
Madapusi, A. and Miles, G. (2011) Routines in Enterprise Application Systems, Management Research Review,
Vol. 34, No. 1, pp. 75-97.
Porter, M.E. and Van der Linde, C. (2002) Toward a New Conception of the Environment-Competitiveness
Relationship, in Stavins, R. (Ed.), Economics of the Environment: Selected Readings, W. W. Norton &
Company, New York.
Schonberger, R.J. (2002) World Class Manufacturing: The Next Decade, The Free Press, NY, USA.
Sluga, A., Butala, P. and Peklenik, J. (2008) A Conceptual Framework for Collaborative Design and Operations
of Manufacturing Work Systems, Annals of the CIRP.
Suh, N.P. (2008) Complexity in Engineering, Annals of the CIRP.
Tarek, K.M. (2003) Management of Technology, McGraw Hill, NY, USA.

Knowledge Gaps in Post-Merger Integration of Software
Maintenance Processes: A Case Study
Maria Alaranta and Eero Martela
Department of Industrial Engineering and Management, Aalto University,
Espoo, Finland
maria.alaranta@aalto.fi
emartela@gmail.com

Abstract: The integration activities after a merger or an acquisition result in, and are troubled by, loss of
knowledge that is critical for running the business. However, we know little of such knowledge gaps. We adapt
the IS Body of Knowledge by Iivari et al. (2004) to form a framework for knowledge gaps in post-merger
integration of software processes. The key knowledge types are: context-of-use knowledge, software-
engineering knowledge, organizational knowledge, transformation knowledge and operative knowledge. Our
empirical data comes from the integration of the software maintenance processes after a megamerger in the ICT
field. We explore what types of knowledge loss trouble the integration and when such losses are most prominent.
In the case studied, all five types of knowledge gaps are present. Key findings include the variety and persistence
of knowledge problems and their evolutionary nature. There are gaps in several types of knowledge at all
integration phases, and in all types of knowledge during ramp-up. Gaps in transformation knowledge persist
throughout the integration process. Knowledge gaps evolve over time, both in terms of which gaps are prominent
at each phase of the change and in terms of the specific content of the gaps. The framework makes it possible to
systematically integrate insights provided by prior literature, and it can help researchers of knowledge issues
during turbulent times to focus their studies. Our analysis also reveals the emergent nature of knowledge-related
problems in post-merger process integration. Researchers would benefit from including this evolutionary
perspective in their studies, and thus being able to build more descriptive theories. Practitioners can use our
results to identify, understand and prepare for specific knowledge-loss problems, thus facilitating overcoming
these problems and managing risks. They may also benefit from engaging in ongoing creation and transfer of
critical knowledge during all phases of integration.

Keywords: knowledge gap, M&A, merger, post-merger integration, software maintenance
1. Introduction
Mergers and acquisitions (M&As) are dramatic events in a company's life cycle that have fundamental
influences on its business processes. They are among the key strategy tools. Yet, 50-80 percent of the
deals fail to meet their targets (Alaranta & Kautz, forthcoming). Among the main causes of these high
failure rates are integration problems, including poor integration of knowledge resources (Yoo et al.,
2007). As a result, much of the knowledge that is critical for running the business processes is lost.

This loss of knowledge disrupts the sequence of related business processes that together result in a
product or service. These processes include both operational activities (e.g., research, analysis,
design, coding and support testing) and managerial activities (coordinating and managing) (cf. Grant,
1996; Cha et al., 2008). The capability to perform such activities depends largely on a firm's prior
experience with such tasks, i.e., learning-by-doing knowledge (Cha et al., 2008).

Knowledge gaps refer to situations in which an organization, or one part of it, needs some knowledge
but is for some reason lacking it (Cha et al., 2008). Any of a company's knowledge processes
(knowledge creation, knowledge storage/retrieval, knowledge transfer and knowledge application;
Alavi & Leidner, 2001) may be disrupted, resulting in a knowledge gap. In a merger, an organization's
knowledge processes are disrupted, i.e. some knowledge may be missing from some parts of the
organization and some knowledge is not transferred due to the sudden loss of employees'
connections. In addition, managers' tendency to stick to old processes and ways of operating may
further amplify the disruption of knowledge chains, as the knowledge configurations are not adapted to
the new situation. For these reasons, a merger causes gaps in knowledge about where to acquire
needed knowledge (Yoo et al., 2007). Furthermore, a common reason for the existence of knowledge
gaps is that some knowledge exists within the organization, but the part of the organization that would
need it does not know about its availability (Alavi & Leidner, 2001; Szulanski, 2000) or where to
acquire it (Alavi & Leidner, 2001; Yoo et al., 2007).

The empirical context for this study is the corrective software maintenance process. Software
maintenance refers to the modification of a software product after delivery, to correct faults, to
improve performance or other attributes, or to adapt the product to a modified environment (IEEE,
1998). Software maintenance is an interesting context for this study because of its economic
importance (the cost of software maintenance has been estimated to have risen from approximately
35% of IS costs in the 1960s to up to 90% in the 1990s; Polo et al., 2003) and its knowledge-intensive
nature (Anquetil et al., 2006). Software maintenance is plagued by inadequate information and
uncertainties (Zmud, 1980). In corrective software maintenance, some key sources of knowledge
gaps include poor documentation, schedule problems, and interdependencies with other components
in the product in question (Lientz et al., 1978), as well as problems related to communication,
interaction and interfaces, and user involvement (Kajko-Mattsson, 2004).

In this research, we examine and illustrate the knowledge gaps in an empirical case of integrating
software maintenance processes after a merger. The empirical research question is: What
knowledge gaps occur in post-merger integration of software maintenance processes and how do
knowledge gaps change over time?
2. Background
Prior research on knowledge processes in M&As provides useful initial insights regarding the nature
of knowledge gaps. Yoo et al. (2007) explored the fracturing of knowledge configurations after a
merger. They discuss the process of creating a mutual stock of knowledge among groups or
individuals, focusing on the operative knowledge needed to carry out daily activities. Empson (2001)
discusses the significance of organizations' knowledge bases for the success of post-merger
knowledge transfer. Her focus is on the role that individuals play in the transfer of technical,
organizational and client context knowledge. Bresman et al. (1999) highlighted the significance of
personal contacts in the transfer of technological know-how and patents after a merger.

In order to analyze the knowledge gaps in post-merger integration, we need a framework for
organizing the relevant knowledge. As the focus of our empirical study is on software maintenance
processes, we adopt the systematic body of knowledge (BoK) for IS development (consisting of
system development, system operation and system maintenance) by Iivari et al. (2004). The IS BoK
is a suitable base for our analysis, as such process knowledge allows us to organize the practically
relevant IS knowledge in an action-oriented way. We build our analysis on its five knowledge areas:
technical knowledge, application domain knowledge, organizational knowledge, IS application
knowledge, and IS development process knowledge (Iivari et al., 2004).

Technical knowledge encompasses the knowledge related to the types of hardware and software as
well as their application (Iivari et al., 2004). In the context of post-merger integration of software
maintenance processes, this translates into the technical knowledge needed by an engineer to carry
out daily software maintenance tasks, which we term engineering knowledge. Application domain
knowledge covers the knowledge about the application domain (such as accounting concepts and
principles) for which an information system is built (Iivari et al., 2004). The application of this in the
context of post-merger integration is operative knowledge, i.e., the non-technical knowledge needed
to carry out the daily activities. This includes the knowledge that is needed for daily tasks, such as
common working habits and employee networks. According to Iivari et al. (2004), systems
development process knowledge refers to the tools, techniques, methods, approaches and principles
used in developing IS. In post-merger integration, what is developed is the new organizational
configuration. Thus, we need transformation knowledge, i.e., knowledge about how the organizations
merge, such as planning and ramping up the integration or catalyzing the cooperation of employees
from different backgrounds. Organizational knowledge refers to the knowledge on the organization,
including its work processes, structure, culture, role distribution, cause-effect relationships, flow of
information, etc. (cf. Iivari et al., 2004). Finally, IS application knowledge includes knowledge on how
an IS may support the activities in its intra- and inter-organizational context (Iivari et al., 2004). We
broaden this concept to include all relevant knowledge related to the context in which the software
maintenance process takes place, such as knowledge about customers' expectations or customers'
business, and call it context-of-use knowledge.

Table 1 below summarizes the contributions of existing literature in relation to the five knowledge
types relevant to post-merger integration of software maintenance processes.



Table 1: Framework for understanding knowledge types and examples in prior literature
Knowledge type | Examples in prior literature on post-merger integration | Examples of knowledge gaps in the empirical data
Context-of-use knowledge | Client knowledge: general understanding of a particular industry; detailed knowledge of a specific client firm; and personal knowledge of key individuals within the client firm (Empson, 2001) | "What the customer wants to know definitely after an outage or in an emergency [is] what the actual root cause was. How can I avoid [it] in the future? We are very bad in communicating root causes to our customers."
Engineering knowledge | Individual technical knowledge (Empson, 2001); engineering know-how and know-what (Bresman et al., 1999); information systems and databases consisting of technical data (Yoo et al., 2007) | "... we haven't succeeded in transferring the work from the front-end into the middle level according to the original plans."
Operative knowledge | Knowledge-sharing resources and practices (Yoo et al., 2007) | "So it was not answering phone calls, not answering email, not cooperating, not really understanding, I guess, why they should be doing this."
Organizational knowledge | Sectoral knowledge (Empson, 2001); firm-specific organizational knowledge (Empson, 2001); connections between people (Yoo et al., 2007) | "... we had two different people from two different organizations, and they weren't that close, you know, just with how they were working."
Transformation knowledge | Prior post-acquisition IS integration experience; post-acquisition IS integration skills (Alaranta & Kautz, forthcoming) | "Sure, I guess there was resistance, because they were uncomfortable doing it, didn't understand the value, the benefits, and hadn't been told they had to do it."
The body of practically relevant knowledge on post-merger integration is highly dispersed. Prior
literature addresses what knowledge is transferred, transformed or recreated. How the transformation
is carried out is discussed only in the context of IS integration. In addition, none of the prior
contributions touches all five areas of knowledge. We complement these studies by also addressing
transformation knowledge at the organizational level and by integrating the five knowledge areas.
3. Methodological choices
Post-merger integration as a phenomenon has several features that limit the choice of effective data
collection methods, such as the unavailability of information on future mergers prior to closing the
deal, as well as the high uncertainty and sensitive nature of mergers. These unique pressures often
make, for example, surveys and ethnographies inconceivable (Yoo et al., 2007).

Case research was chosen for this study because it allows in-depth understanding of real-life
situations. The qualitative method allowed us to access and understand fine-grained issues related to
both merging parties and all relevant actors. A study of a single case is an appropriate strategy for
revelatory studies (Yin, 1984). Megamergers such as the one studied in this paper are rare but
dramatic events in companies' life cycles. They offer unique opportunities for observing major
knowledge-process changes and disruptions.

The chosen merger case, Alpha-Beta (pseudonym), is interesting for this study because, first, the
merger deal had been signed two years prior to the beginning of this research project; i.e., we could
retrospectively access the life cycle of the post-merger integration while the events of the integration
project were still fresh in the employees' memories. Second, the chosen case is a merger of equals
in which full integration of the software maintenance processes was desired. We sought such a case
to ensure that we could observe a range of problems related to the disruption of existing knowledge
processes and the construction of new ones. Third, to further amplify the visibility of knowledge issues
in post-merger integration, we chose a knowledge-intensive process for our study.

We chose semi-structured interviews because of their ability to provide sufficiently versatile data about
the phenomenon while leaving enough flexibility for each stakeholder to describe their experiences (cf.
Yin, 1984). The initial list of interviewees was composed in collaboration with the case organization to
cover both merging organizations (14 interviewees from ex-Alpha and 9 interviewees from ex-Beta)
as well as all relevant organizational units and levels. A few new interviewees were added based on
suggestions received during the interviews, and follow-up interviews were carried out with two key
actors one year after conducting this study. Each interview lasted 1-2 hours, and the interviews were
tape-recorded and later transcribed.

The interview themes included:
Respondent background
The post-merger integration of the corrective software maintenance process, phase by phase: What?
Who? How? Why?
Lessons learned?
To confirm, compare and contrast the data collected via the interviews, we also used documents
received from the case company. These included process models, project plans, and organization
charts (cf. Yin, 1984).

Data analysis: In this study, data collection and analysis were intertwined. Two to three interviewers
were present at each interview, and the interviewers discussed emergent themes after each interview.
Additional questions were included in the interview guide to either confirm or refute emergent themes.
This paper presents the results related to one of the themes that emerged during this study, namely
knowledge gaps. These policies served to improve the validity of this research (cf. Yin, 1984).

As we explored the theme of knowledge gaps empirically, we also simultaneously compared our
findings to relevant prior research (Strauss & Corbin, 1998) and developed the coding scheme. Our
analysis proceeded in three phases. First, a narrative including key actions and events was
composed (see Section 4). Second, we coded the data for the five categories of knowledge gaps. We
operated on the ontological assumption that knowledge gaps are evident directly from problems,
rework, delays, etc., as well as indirectly when organizational actors employed various techniques for
overcoming knowledge gaps. These included, e.g., virtual organizations, recruiting, communications
and training. We also attributed a phase on the post-merger timeline to each knowledge gap. Third,
we re-interpreted the case data from the perspective of the knowledge gaps. This allowed us to gain a
rich understanding of the nature and timing of knowledge gaps in post-merger integration of software
maintenance processes.
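As a minimal sketch of the coding step (the actual coding was done manually on interview transcripts; the excerpts and tags below are hypothetical), each piece of evidence is tagged with an integration phase and a knowledge type, and tallying the tags yields a summary in the spirit of Table 2:

```python
from collections import Counter

PHASES = ("planning", "ramp-up", "refining")
KNOWLEDGE_TYPES = ("context-of-use", "engineering", "operative",
                   "organizational", "transformation")

# Hypothetical coded evidence: (phase, knowledge type, interview excerpt).
coded_gaps = [
    ("planning", "transformation", "no consistent instructions, just guessing case volumes"),
    ("planning", "organizational", "confusion over cultures and principles of operation"),
    ("ramp-up", "engineering", "new sites lacked head count and engineering skills"),
    ("ramp-up", "operative", "cases not forwarded between process levels"),
    ("refining", "context-of-use", "poor communication of root causes to customers"),
]

# Tally gaps per (phase, knowledge type), as summarized in Table 2.
tally = Counter((phase, ktype) for phase, ktype, _ in coded_gaps)
for phase in PHASES:
    for ktype in KNOWLEDGE_TYPES:
        if tally[(phase, ktype)]:
            print(f"{phase}: {ktype} ({tally[(phase, ktype)]} gap(s))")
```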
4. The empirical case
4.1 Case setting
This case study concerns the integration of two very distinct software maintenance processes as a
result of a megamerger of two multinational ICT companies. The newly formed Alpha-Beta
(pseudonym) operates in business-to-business markets, and its products consist of hardware and
software components. The motive for this merger was to achieve economies of scale and synergy
savings in highly competitive markets.

Both companies had headquarters in Europe and employed over 40,000 people. Beta's organizational
culture was very hierarchical, while Alpha had a more informal culture that relied on informal
relationships.

The focus of this study is the post-merger integration of the corrective software maintenance
processes. This process starts when a customer informs the company of a problem and ends when a
solution is provided to the customer. Before the merger, Alpha ran this process as a geographically
scattered but globally managed network, and it had centralized global IS support for the process.
Beta, in turn, had organized its regionally managed process into a hierarchy around its centralized
back-end units, and it ran a scattered IS landscape with various IS in different geographical locations.

Alpha-Beta's process integration proceeded through three distinct phases:
Planning: integration-related actions executed before day one of the merger
Ramp-up: building common operations after day one of the merger
Refining: implementing corrective actions after reviewing the outcomes of the ramp-up
4.2 Findings
Planning: The initial knowledge gaps were related to what the principles and culture of the new
organization would be and how to proceed with merging the companies. In addition, the employees
were largely ignorant of the future of their positions.

The management's lack of knowledge about how to integrate was due, first, to their unfamiliarity with
the counterpart organizations. One software maintenance manager told us: "the challenge was
always to [know] how many [maintenance] cases for which technology coming from which
geographical area. ... we had data from ex-Alpha, data from ex-Beta, there were not, let's say,
consistent instructions or there was a lot of anticipation, and just guessing how will the number of
cases develop." Without knowing the structures and figures of the other organization, the
management couldn't plan an optimal model for integration. To overcome this, a project structure was
established for integrating the two former corrective software-maintenance processes into one. The
initial task of the project teams was to collect and combine knowledge from the two organizations. The
management also lacked experience and knowledge about how these kinds of integrations should be
executed, and about ideal models for these kinds of processes. They studied literature and hired
consultants to overcome this gap.

Due to the companies' different backgrounds, there was a huge gap between the old and new cultures
and principles of operation. This caused confusion and slowed down decision-making. An ex-Alpha
manager complained: "When I try to reach my supervisor, his assistant answers me. And then I'm
confused because I'd like to call my supervisor directly. Luckily, for years, nobody [in ex-Alpha] has had
to reach anybody through an assistant. In ex-Alpha we had this joke about management by SMS. But
then I got these colleagues who hadn't sent any SMS in their life." The management attempted to
attenuate these tensions by communicating and openly discussing the differences in culture between
the merging organizations, as well as by communicating the values of the new firm.

Due to this lack of knowledge, many employees felt that their positions and future in the company
were threatened, causing employee turnover as well as general uncertainty, as a customer engagement
manager stated: "The company [should have expressed] a clear commitment to certain areas of the
business. Not general messages, like we did. ... There should be some areas where engineers could
feel completely safe. We have an extreme amount of engineers that have left the company. They
were afraid that they would be fired. Often these engineers are the most competent resources that
we have." As a solution, the management aimed to make fast decisions at the managerial level and to
limit the amount of changes made. The key outcome of the planning stage was a blueprint of the
new process.

Ramp-up: One year after the signing of the deal, the merger was closed and Alpha-Beta set out to
execute the integration. The execution team travelled to the regions in order to communicate the
changes and jointly define their details. The largest sub-projects were the relocation of personnel to
the new, centralized service centers and the implementation of a new, integrated information system.
The implementation of centralized service centers caused a temporary gap in engineering knowledge
in the new locations. In the beginning, the new sites didn't have enough human resources, in terms of
both head count and engineering skills. As a result, the maintenance process couldn't run as planned.
The execution team needed to identify "what gaps there were in training requirements or even skills
development, but mostly on what [the engineers] were doing today compared to what they will need
to do tomorrow, and how do you fill that competence gap to enable this", as one team member
explained. The gaps were slowly overcome through internal recruiting and training, as well as by
using virtual teams during the transition period.

The merger-related structural changes caused gaps in operative knowledge, organizational
knowledge, and context-of-use knowledge. Operative knowledge didn't flow between the process
levels because engineers at various levels resisted the change. A member of the execution team
complained: "I sensed very clearly from day one direct resistance from ex-Beta to move over to ex-
Alpha processes. ... So just very direct: No! We're not going to do it." The resistance manifested itself
in reluctance to forward cases to subsequent process levels and in distrust between them. As one
software maintenance manager complained: "there's a huge gap between the back-end and middle-
level, like the Berlin wall."

In addition, the implementation of the new process-support IS was delayed. This, in turn, delayed the
process integration, as some regions had to continue using their previous IS, and the management
lacked an overall view of the process during the delay. Temporarily, this caused a severe knowledge
gap for the managers: they were not aware of the poor process performance.

The engineers resisted the new IS due to its poor usability. They found creative ways to bypass it,
which caused further gaps in operative knowledge. In addition, the case flow in the new process
took place via the new IS only. This lack of personal interaction between the engineers decreased the
efficiency of knowledge sharing. The management took steps to improve the IS and their
communication in order to alleviate these problems.

One severe problem during this phase was that the customer perception suffered. This was due to
excessive focusing on internal changes and sub-optimizing within process levels. The management
attempted to tackle this problem by encouraging more communication with the customer and between
process levels. Customer feedback was collected to get a better understanding of the problems.

Refining: Over one and a half years after the merger, the managers considered the main integration
effort finished, and they initiated various projects for further developing the process. The manager of
one maintenance unit described this situation: "Once you survive day one and the car is driving more
or less in the right direction, immediately take the next step and try to review what you have now. ...
See what we have, how it works, analyze the weak points, hear the customer expectations, and
take corrective actions as fast as possible. ... we have a lot of good things and lots of gaps, so after
the compromises [in choosing the best practices from ex-Alpha and ex-Beta] we have to correct
the direction." Again, the managers had to overcome the gap of what and how to develop. They
conducted a thorough analysis of the situation and launched projects to improve operative knowledge
and context-of-use knowledge. These projects included: the implementation of formal operative
methodologies, boosting IS capabilities, improving case routing, a service excellence concept, and
planning to reconfigure the process flow.

Table 2 summarizes the key knowledge gaps Alpha-Beta needed to overcome in order to integrate the
software maintenance processes.
Table 2: Knowledge gaps to be overcome in Alpha-Beta's integration of software maintenance

Planning:
- Transformation knowledge - gaps in what and how to integrate; knowing the future configuration of the organization
- Organizational knowledge - gaps in common ways of working and a common culture

Ramp-up:
- Operative knowledge - gaps in knowledge of the new maintenance process
- Engineering knowledge - gaps in engineering skills in the new units
- Organizational knowledge - before the IS integration, gaps in visibility of process performance
- Context-of-use knowledge - gaps in knowledge of customers' problems and needs due to transformation work
- Transformation knowledge - gaps in how to implement the integration-related changes in different units

Refining:
- Operative knowledge - gaps in how to further improve the maintenance process
- Context-of-use knowledge - gaps in knowledge of customers' problems and needs
- Transformation knowledge - gaps in knowledge of how to further develop the process
5. Discussion
Various and persistent knowledge problems: There were gaps in every knowledge area,
especially during ramp-up. This shows how a merger fundamentally disrupts the entire knowledge
configuration of an organization. Yet, contributions in the existing literature cover only one (Bresman et
al., 1999) or a few (Empson, 2001; Yoo et al., 2007) knowledge areas at a time, and transformation
knowledge is accounted for only with regard to IS issues. In addition, there are gaps in transformation
knowledge at every stage of the post-merger integration process. This finding is in line with the notion
that post-merger integrations are complex (Alaranta & Kautz, forthcoming), and it reveals a key
challenge: transformation knowledge must be created for each unique merger situation by combining
knowledge of both organizations and the desired goals.

Whilst the prominence of contextual organizational knowledge in post-merger integration is not new to
the literature (Empson, 2001; Yoo et al., 2007), what is perhaps new is that problems related to it
seem to attenuate towards the end of the integration life cycle. This may be because collaboration
increases mutual understanding of each other's context (cf. Bresman et al., 1999). In addition, some
gaps in organizational knowledge were diminished via the implementation of a common IS (cf.
Alaranta & Kautz, forthcoming).

Alpha-Beta's biggest problems related to performance and customer satisfaction seem to stem
from gaps in transformation knowledge, organizational knowledge and context-of-use knowledge.
Problems in engineering knowledge (the content of the maintenance process) are prominent only
during ramp-up, but all other knowledge types cause major problems in at least two phases. This is in
line with prior literature in the sense that the transfer of explicit engineering knowledge is easier than
the transfer and creation of the other, more tacit types of knowledge (cf. Nonaka, 1994).

Evolutionary knowledge problems: There is an evolution in both the prominence of the different
knowledge gaps and their contents. Whilst prior research acknowledges the emergent nature of post-
merger integration (Alaranta & Kautz, forthcoming), our study is the first to take the knowledge
perspective, providing insights into both the causes and the dynamics of this evolution.

There are also interdependencies between the knowledge gaps. The heavy workloads related to
creating transformation and organizational knowledge caused a loss of context-of-use knowledge
during ramp-up and refining, even though both ex-organizations had efficient processes for creating
and transferring it. This insight is in line with prior post-merger integration research that predicts value
destruction in such change processes (Haspeslagh & Jemison, 1991). What is perhaps new is
showing that knowledge-related problems may directly contribute to this value destruction.
6. Conclusions
This paper addresses knowledge gaps in post-merger integration of software maintenance processes.
Prior research predicts value destruction in post-merger integrations (Haspeslagh & Jemison, 1991),
and our research shows that knowledge-related problems directly contribute to it. We adapt a
framework for understanding the nature of such knowledge loss. We integrate existing contributions
on what knowledge needs to be transferred, transformed or recreated, and add transformation
knowledge, i.e., knowledge on how the integration is carried out. Our empirical study also addresses
when, during the post-merger transformation, each knowledge type is most prone to gaps.

This initial analysis can serve as a resource for researchers and managers. Our findings show that
scrutinizing the different types of knowledge gaps at all phases of the integration process is key to
fruitful analyses and successful management of post-merger situations. The framework can be used
to focus future studies on IT during turbulent times.

Researchers could also benefit from including the evolutionary perspective in their studies, and thus
be able to build more descriptive theories. Future research could also focus on practices for
overcoming knowledge gaps during turbulent times, including transferring knowledge across units
(Szulanski, 2000) and creating new knowledge (Nonaka, 1994).

The framework and the empirical analysis can support practitioners' risk analyses and provide cues
on what knowledge needs to be transferred or created to overcome knowledge gaps in their merger
cases. Practitioners may also take home the notion that merging organizations benefit from engaging
in ongoing creation and transfer of relevant knowledge. This is particularly true for transformation
knowledge.
References
Alaranta, M., & Kautz, K. (forthcoming). A Framework for Understanding Post-Acquisition IS Integration. Journal
of Information Technology Theory and Application.
Alavi, M., & Leidner, D. E. 2001. Review: Knowledge Management and Knowledge Management Systems:
Conceptual Foundations and Research Issues. MIS Quarterly, 25(1): 107-136.
Anquetil, N., Oliveira, K., & Dias, M. 2006. Software maintenance ontology. Ontologies for Software Engineering
and Software Technology: 153-173.
Bresman, H., Birkinshaw, J., & Nobel, R. 1999. Knowledge Transfer in International Acquisitions. Journal of
International Business Studies, 30(3): 439-462.
Cha, H. S., Pingry, D. E., & Thatcher, M. E. 2008. Managing the Knowledge Supply Chain: An Organizational
Learning Model of Information Technology Offshore Outsourcing. MIS Quarterly, 32(2): 281-306.
Empson, L. 2001. Fear of Exploitation and Fear of Contamination: Impediments to Knowledge Transfer in
Mergers between Professional Service Firms. Human Relations, 54(7): 839-862.
Grant, R. M. 1996. Toward a Knowledge-Based Theory of the Firm. Strategic Management Journal, 17(Winter
Special Issue): 109-122.
Haspeslagh, P., & Jemison, D. 1991. Managing Acquisitions: Creating Value through Corporate Renewal. New
York: The Free Press.
IEEE. 1998. IEEE Standard for Software Maintenance. 47 pp.
Iivari, J., Hirschheim, R., & Klein, H. 2004. Towards a distinctive body of knowledge for Information Systems
experts: coding ISD process knowledge in two IS journals. Information Systems Journal, 14(4): 313-342.
Kajko-Mattsson, M. 2004. Problems within front-end support. Journal of Software Maintenance and Evolution:
Research and Practice, 16(4-5): 309-329.
Lientz, B. P., Swanson, E. B., & Tompkins, G. E. 1978. Characteristics of application software maintenance.
Communications of the ACM, 21(6): 466-471.
Nonaka, I. 1994. A Dynamic Theory of Organizational Knowledge Creation. Organization Science, 5(1): 14-37.
Polo, M., Piattini, M., & Ruiz, F. 2003. Advances in Software Maintenance Management: Technologies and
Solutions. IGI Global.
Strauss, A., & Corbin, J. 1998. Basics of Qualitative Research: Techniques and Procedures for Developing
Grounded Theory (2nd ed.). Thousand Oaks, CA: SAGE Publications.
Szulanski, G. 2000. The Process of Knowledge Transfer: A Diachronic Analysis of Stickiness. Organizational
Behavior and Human Decision Processes, 82(1): 9-27.
Yin, R. K. 1984. Case Study Research: Design and Methods. Newbury Park, CA, USA: SAGE.
Yoo, Y., Lyytinen, K., & Heo, D. 2007. Closing the gap: towards a process model of post-merger knowledge
sharing. Information Systems Journal, 17(4): 321-347.
Zmud, R. W. 1980. Management of Large Software Development Efforts. MIS Quarterly, 4(2): 45-55.
A Proposed Framework for Guiding the Effective
Implementation of an Informal Communication System for
Virtual Teams
Garth Alistoun and Christopher Upfold
Rhodes University, Grahamstown, South Africa
Galistoun@gmail.com
c.upfold@ru.ac.za

Abstract: This research provides insight into the nature of informal communication and how it relates to virtual
teams. It is shown that informal communication plays a critical role in the achievement of the production, group
maintenance and member support goals of a team. Given the dispersed nature of virtual teams, it is shown that
the lack of access to face-to-face communication results in challenges to effective teamwork. These challenges
are (1) trust building, (2) information exchange, (3) process gains and losses, (4) feelings of isolation, (5)
participation, (6) coordination and (7) cohesion. In order to overcome these challenges, five functional needs of
a virtual informal communication system are identified. These needs are (1) co-presence, (2) low behavioural
cost, (3) a visual channel, (4) document sharing and (5) multiple complementary systems. The proposed
framework for an effective virtual communication system is intended to provide guidelines against which planned
or existing virtual communication systems can be assessed. Given that the framework is theoretical, further
research will be required to determine its validity.

Keywords: virtual teams, globalisation, communication, distributed teams
1. Introduction
Barriers of distance are shrinking due to ever-improving communication technologies. Humankind is
at any moment connected through telephones, email, Voice over IP and a host of related
technologies. Businesses and other organisations are increasingly taking advantage of these
technologies, which are being used to underpin both local and global collaboration. Project teams are
now often made up of members who are geographically distributed and are forced to use
technological communication media to co-ordinate and complete their work. Numerous studies
have been conducted to develop a model of virtual team communication processes and the means of
improving these interactions through communication technologies (Maruping and Agarwal, 2004;
Kirkman, Rosen, Gibson, Tesluk and McPherson, 2002; Geister, Konradt and Hertel, 2006; Liu and
Burn, 2007). This paper focuses on a relatively new area of virtual collaboration research: the role
informal communication plays in virtual teams. The majority of studies and communication
technologies discount or ignore the value of informal communication in virtual teams in favour of a
more structured, task-based approach to team development and maintenance, at least in the early
stages of a virtual team's lifecycle (Kirkman et al., 2002: 70-71; Whittaker, Frohlich and Daly-Jones,
1994: 131; Liu and Burn, 2007: 48-49; Pinsonneault and Caya, 2005: 10). Given this apparent neglect
of such a key form of communication (Kraut et al., 1990: 15; Isaacs et al., 1997: 2), there is a need to
develop technologies that enable virtual group members to maintain their relationships, co-ordinate
their work and succeed in the most natural way possible.
2. Informal communication
Informal communication is a spontaneous communicative event between random, out-of-role
participants who do not prearrange the topic of conversation (Kraut et al., 1990a: 5). The informal
communication process is also highly interactive, content rich and informal in terms of speech register
and language usage (Kraut et al., 1990a: 5), and is performed synchronously in face-to-face settings
(Whittaker et al., 1994: 131). Kraut et al. (1990a: 15) provide a taxonomy by which conversations can
be classified. These categories are (1) scheduled, (2) intended, (3) opportunistic and (4)
spontaneous. Scheduled conversations have prearranged content and context (Kraut et al., 1990a:
15); intended conversations are those in which the initiating participant specifically goes in search of
the other party (Kraut et al., 1990a: 15); opportunistic conversations are those in which the initiating
party has planned to talk with the other participant at some point and has taken advantage of a
chance encounter to have the conversation (Kraut et al., 1990a: 15); and spontaneous interactions
are in no way pre-planned by the participants. Isaacs et al. (1997: 5) regard intended, opportunistic
and spontaneous interactions as informal in the way they are initiated.

Informal communication plays an important role in supporting effective teamwork through work
execution, group co-ordination, socialization of team members and team-building processes
(Whittaker et al., 1994: 131). Isaacs et al. (1997: 9) state that the restriction of these communication
processes, such as in distributed collaboration, has a negative impact on task performance, despite a
greater emphasis being placed on formal meetings.

Isaacs et al. (1997: 11-15) go further, describing the mechanisms through which informal
communication achieves its supporting role. The proposed functions of informal communication are
(1) tracking people, (2) taking and leaving messages, (3) making meeting arrangements, (4)
delivering documents, (5) giving and getting help and (6) reporting progress and news (Isaacs et al.,
1997: 11-15). Tracking people involves identifying the current location of team members, their current
activities and their future plans (Isaacs et al., 1997: 11). Leaving and taking messages is concerned
with contacting people through a third person, whereas arranging meetings refers to making future
interaction arrangements with a person or a group (Isaacs et al., 1997: 12). Document delivery is the
process of handing documents to a recipient with actions attached, such as signing, proofreading or
general perusal (Isaacs et al., 1997: 12-13). Giving and getting help is a joint problem-solving process
characterised by question-and-answer exchange (Isaacs et al., 1997: 13), while progress and news
reporting is the process of disseminating relevant information to team members (Isaacs et al., 1997:
14-15).

Kraut et al. (1990a: 7) suggest that teams need to satisfy three super-tasks in order to be successful,
which are (1) production, (2) group maintenance and (3) member support. Production refers to
achieving the goals of the project in which the team is engaged, such as writing reports, delivering
presentations, and all other work needs of the project. Group maintenance is the process of recruiting
and socializing new group members, securing external resources and maintaining group
cohesiveness in order to sustain the team over its lifecycle (Kraut et al., 1990a: 7). Member support is
the process of regulating the feelings of individual group members in order to ensure that they are
satisfied with their work, relationships and team membership (Kraut et al., 1990a: 7).

The functions of informal communication proposed by Isaacs et al. (1997: 11-15) contribute to the
achievement of the three super-tasks suggested by Kraut et al. (1990a: 7). Help, document delivery
and reporting functions all aid in the production needs of the team, while tracking, message taking
and arranging meetings are directly supportive of the group maintenance function (Isaacs et al., 1997:
14-15). Member support is achieved through the help, tracking and reporting functions (Isaacs et al.,
1997: 15).
3. Virtual teaming
The international business environment has become increasingly complex and competitive, forcing
businesses to rethink and modify their operating models (Bharadwaj and Saxena, 2006: 63). These
changes have brought about the creation of virtual teams, defined by Geister et al. (2006: 459-460)
as two or more people who work together on a mutual goal, interact from different locations and
communicate by means of information and communication technology. A virtual team can be made
up of a diversely skilled complement of members in order to form the best possible group for a given
project (Pinsonneault and Caya, 2005: 1). Liu and Burn (2007: 41) identify three typical attributes of
virtual teams as being (1) geographically dispersed, (2) lacking in social context and (3) lacking in
face-to-face encounters. A fourth dimension is that virtual teams make significant use of information
and communication technologies in order to facilitate communication and coordination processes and
meet team goals (Maruping and Agarwal, 2004: 975). Maruping and Agarwal (2004: 975) provide an
analysis of virtual communication capabilities in contrast to the face-to-face medium:
Computer Mediated Communication (CMC) is superior to face-to-face in brainstorming and
decision making (Maruping and Agarwal, 2004: 975; Santra and Giri, 2009: 105).
Face-to-face is superior to CMC in conflict management and problem-solving (Maruping and
Agarwal, 2004: 975).
CMC results in lower productivity/performance in virtual teams (Maruping and Agarwal, 2004:
975; Pinsonneault and Caya, 2005: 8; Liu and Burn, 2007: 45; Siebdrat et al., 2008: 3).
Liu and Burn (2007: 43) offer a model with which to compare satisfaction and performance in face-to-
face as well as virtual teams. The model, as illustrated in Figure 1, highlights a distinction between
virtual and face-to-face teams: virtual teams are far more task-oriented and use fewer socio-emotional
processes in order to achieve their outputs (Liu and Burn, 2007: 46). This view is echoed by
Pinsonneault and Caya (2005: 6) and Maruping and Agarwal (2004: 979), who differ, however, in
suggesting that time affects virtual team communication by modifying an initial task focus into a far
more social one as the team develops.

Figure 1: Performance path differences between virtual and face-to-face teams (reproduced from Liu
and Burn, 2007: 44)
3.1 Trust building
Trust is traditionally developed by interacting face-to-face and developing a shared social context
(Kirkman et al., 2002: 69; Pinsonneault and Caya, 2005: 5; Thomas and Bostrom, 2008; 46).
Research has suggested that computer mediated communication leads to lower levels of trust
between team members (Geister et al., 2006: 461). Virtual teams, being dispersed by definition, have
to base their trust on something other than social context. Researchers propose that virtual teams
base their trust on task participation and predictable performance of teammates (Kirkman et al., 2002: 69; Pinsonneault and Caya, 2005: 5; Thomas and Bostrom, 2008: 46). This finding ties in with the Liu and Burn (2007: 46) model, illustrated in Figure 1, where virtual teams make limited use of socio-emotional processes in the performance of their goals.
3.2 Information exchange
As previously mentioned, virtual teams can be made up of a diversely skilled group of people
(Pinsonneault and Caya, 2005: 1). The diversity of the team means that there exists a large pool of
information which the team can potentially make use of. Information exchange has, however, been demonstrated to be surprisingly poor in virtual teams, with one researcher noting that a face-to-face team under study exchanged more unique information in a single meeting than a virtual team exchanged in three weeks of asynchronous communication (Pinsonneault and Caya, 2005: 7).
3.3 Process gains and losses
According to Kirkman et al. (2002: 71), virtual teams find it more difficult to benefit from process gains
than their face-to-face counterparts. This is because virtual team members rarely, if ever, have the
opportunity to interact informally in a hallway or around a water cooler, where many of a team's best ideas are generated (Kirkman et al., 2002: 71).
3.4 Feelings of isolation
Virtual teams, as already stated, have little or no opportunity to interact informally/socially (Kirkman et al., 2002: 72). Without this type of interaction, team members feel isolated from one another, which leads to lower levels of work satisfaction and performance. This reduction in relationship-building opportunities is illustrated in Figure 1, where virtual teams base performance and satisfaction outcomes on task processes alone.
3.5 Participation
While participation in virtual teams has been assumed to be more equal than in co-located teams, Pinsonneault and Caya (2005: 7) suggest there is no evidence supporting this assumption. They also suggest that perceptions of fairness in labour division were found to be lower in studied virtual teams than in their face-to-face counterparts.
3.6 Coordination
Virtual teams cannot rely on physical coordination activities such as direct supervision and proximity of participants (Pinsonneault and Caya, 2005: 8). Hence, coordination has been demonstrated to be more difficult in a virtual environment than in a co-located one. These authors also argue that coordination efficiency is lower in virtual teams and decreases over time (Pinsonneault and Caya, 2005: 8). Finally, they suggest that while coordination interventions designed to manage the content and timing of intra-team communication have been demonstrated to alleviate the decrease in coordination efficiency, such interventions have negatively impacted team trust (Pinsonneault and Caya, 2005: 8).
3.7 Cohesion
Cohesion has been shown to be positively associated with team performance in both virtual and
physical teams (Pinsonneault and Caya, 2005: 6). Virtual teams have to communicate far more
frequently than face-to-face teams in order to achieve the same level of team cohesiveness. This
situation is alleviated over time as team members become familiar with communication technologies
and are able to exchange more personal information with greater effectiveness (Pinsonneault and
Caya, 2005: 6).
4. Functional system requirements
Informal communication plays an important role in supporting effective teamwork through work-
execution, group co-ordination, socialization and team building processes (Whittaker et al., 1994:
131). Restriction of these communicative processes has a negative impact on task-performance and
team satisfaction as evidenced in virtual collaboration (Isaacs et al., 1997: 9). In order to overcome
the challenges faced by different virtual teams, a communication solution will have certain
requirements placed upon it to be effective.

In an attempt to identify and explore some of the requirements believed to be necessary for a
potential virtual team communication solution, a system called the VideoWindow system was
implemented at Bellcore and studied by Fish et al. (1990).

Figure 2: Image of the VideoWindow System (Fish et al., 1990)
The system consisted of two large screens mounted on the wall of two research lounges on different
floors of a building, linked by video cameras and directional microphones. Establishing a connection
was meant to be as simple as looking at the screen for an available partner and initiating a
conversation with that partner. In order to ensure co-presence of potential conversational partners,
free coffee was provided at each end of the system for the duration of the experiment.

The results of the Fish et al. (1990: 6-10) experiments are summarised under seven headings:
Transparency - Defined by Fish et al. (1990: 6) as the degree to which conversations using the system are indistinguishable from those that occur face-to-face. The VideoWindow was found to alter conversations in minor ways: users tended to speak relatively louder and often embedded comments on the system itself in their interactions. The most important finding under this heading was that conversation opportunities over the VideoWindow had a substantially lower conversion rate than face-to-face opportunities, 17% in comparison to 41% (Fish et al., 1990: 7).
Reciprocity - Defined by the Oxford English Dictionary as "a state or relationship in which there is mutual action, influence, giving and taking, correspondence, etc., between two parties" (University of Chicago, 2002). The VideoWindow system was found to have certain reciprocity problems associated with it (Fish et al., 1990: 7). According to Fish et al. (1990: 7), the principle in face-to-face conversation opportunities is that if you can see someone, then they can see you, and if you can hear someone, then they can hear you. This principle was not preserved in the VideoWindow system, where both the camera and the microphones had a limited range of functionality, which resulted in a failure to convert many conversation opportunities into conversations.
Privacy - The VideoWindow system did not support the ability to make a conversation private. This was one of the features most requested by the study group (Fish et al., 1990: 8).
Architectural and Environmental Context - The VideoWindow system was publicly situated in order to allow for spontaneous interactions mimicking hallway or coffee-room run-ins between colleagues. This imposes a behavioural cost on users if they have no reason to be in the public area when they wish to use the system (Fish et al., 1990: 8).
Social and Organizational Context - Fish et al. (1990: 8) propose that the VideoWindow system would only be effective if it links users who already know one another or have an explicit reason to communicate.
Conversational Regulation - The initiation and maintenance of a face-to-face conversation is regulated by a complex set of mechanisms (Fish et al., 1990: 8). A technical failing of the VideoWindow system, due to the placement of its cameras, is that it does not support eye contact, which is suggested to be an important mechanism for regulating social interaction.
Social Relationships - Fish et al. (1990: 9) hypothesised that the VideoWindow system would impact the relationships between the distributed participants over time. This would lead to positive effects of proximity such as greater familiarity and personal liking (Fish et al., 1990: 9; Kraut et al., 1990a: 23-24).
As already suggested, informal communication has certain functional requirements that must be met in order to positively impact virtual team collaboration (Maruping and Agarwal, 2004: 976). From the earlier definition, informal communication is unscheduled, interactive and rich, occurring between random, out-of-role participants (Kraut et al., 1990a: 5). The following are proposed to be the requirements of a virtual informal communication system:
4.1 Co-presence
Isaacs et al. (1997: 10) support a finding of Kraut and Streeter (1995) that spontaneous interaction is
under-utilised relative to its value, whereas formal interaction is over-used. This finding is supported
by many researchers who indicate the lack of support for serendipitous communication in mainstream
communication technologies (von Bismarck et al., 1999: 5-6; Kraut et al., 1990a: 4; Whittaker et al.,
1994: 131). The definition of informal interaction indicates that an effective informal communication
technology should allow for spontaneous interaction by creating co-presence between possible
conversational participants (Isaacs et al., 1997: 21). Co-presence is defined by Kraut et al. (1990a:
33) as the mechanism by which possible conversation participants are brought together and are made
aware of one another's availability. Kraut et al. (1990a: 33) posit that the essence of computer-mediated communication is co-presence without physical proximity. Co-presence is not simply a matter of randomly connecting participants but also of providing a communication context or common
ground in which the possibility of communication is created (Nardi, 2005: 91-92). The possibility of virtual informal communication arises from creating an environment and a social context in which users can spontaneously interact (Isaacs et al., 1997: 21). The context of conversation may take the form of a coffee room, a lounge, a work area or any other public space available to users (Whittaker et al., 1994: 131). The social context created will help to reduce information exchange challenges (Nardi, 2005: 92) by providing a context for trust creation (Kirkman et al., 2002: 69; Pinsonneault and Caya, 2005: 5; Thomas and Bostrom, 2008: 46). An alternate view is presented by Dourish et al. (1996: 34), who state that face-to-face communication in the real world should not be used as a baseline for evaluating virtual communication systems. They propose, echoing the views of Maruping and Agarwal (2004), that teams have different communicative requirements at different stages of their development (Dourish et al., 1996: 34). As teams become more used to interacting with one another and increasingly familiar with the technology they use to interact, they develop a new communicative behaviour that cannot be compared to face-to-face interaction (Dourish et al., 1996: 34). Dourish et al. (1996: 34) propose that a team will evolve with a communication technology to use it effectively and achieve the same performance and satisfaction levels as co-located teams. This view is confirmed by Pinsonneault and Caya (2005: 8-9); however, they identify other challenges which are not necessarily alleviated by time and experience (Pinsonneault and Caya, 2005: 10).
4.2 Concentration of suitable partners
As already discussed, a system placed into a social and organisational context will only be effective if it links users who already know one another or have an explicit reason to communicate (Fish et al., 1990: 8). In co-location, this is achieved by placing people who need to work together or share a common interest close to one another (Kraut et al., 1990a: 33). This is encapsulated in the system's architectural and environmental, as well as social and organizational, contexts.
4.3 Low behavioural cost
Kraut et al. (1990a: 33) define behavioural cost as the amount of effort needed to initiate and conduct
a conversation. The perceived behavioural cost of a communication system is proposed to be an
important determinant of its usefulness (Kraut et al., 1990a: 34). A person will not use a
communication device if they believe that the behavioural cost is too high (Kraut et al., 1990a: 34).
Making contact with another person is often a by-product of another activity and a communication
system which aims to support spontaneous informal encounters should embody this fact (Kraut et al.,
1990a: 33).
4.4 Visual channel
A hypothesis made by Isaacs et al. (1997: 23) is that conversational partners will make use of video
data to establish whether a user is open to interaction. However, both Whittaker et al. (1994: 135) and
Kraut et al. (1990a: 33) agree that frequent interactors will initiate a conversation regardless of the
receiver's apparent openness to interaction. Users simply make use of the system to see whether the
receiver is available rather than ready to converse (Whittaker et al., 1994: 135; Kraut et al., 1990a:
33). The visual channel therefore plays an important role as a stimulus for conversation (Kraut et al.,
1990a: 34).
4.5 Document sharing
Documents play an important role in initiating, re-initiating and sustaining conversations (Whittaker et
al., 1994: 135). An effective informal communication system should employ some form of document
sharing and manipulation (Whittaker et al., 1994: 135).
4.6 Multiple complementary systems
Both Isaacs et al. (1997: 25) and Pinsonneault and Caya (2005: 11) propose that no one system can support all of the functional needs of informal communication. Virtual teams that make use of multiple communication systems have shown a greater level of satisfaction, equality of participation and quality of outputs than those using a single medium (Pinsonneault and Caya, 2005: 11). They propose that multiple systems provide the flexibility that a virtual team requires in order to exchange and process information effectively. A simple illustration of assessing a system against the distilled requirements is sketched below.
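
To make the distilled requirements concrete, the following minimal sketch (in Python) treats them as a simple checklist for scoring a candidate system. This is purely illustrative: the requirement names come from sections 4.1 to 4.6 above, but the scoring function and the example ratings for the VideoWindow system are assumptions made for exposition, not part of the proposed framework itself.

# Hypothetical sketch: scoring a communication system against the
# functional requirements distilled above. The boolean ratings for the
# VideoWindow example are one illustrative reading of Fish et al. (1990),
# not measured results.

REQUIREMENTS = [
    "co-presence",
    "concentration of suitable partners",
    "low behavioural cost",
    "visual channel",
    "document sharing",
    "multiple complementary systems",
]

def assess(system_name, support):
    """Print and return the fraction of requirements a system supports."""
    met = [r for r in REQUIREMENTS if support.get(r, False)]
    missing = [r for r in REQUIREMENTS if r not in met]
    print(f"{system_name}: meets {len(met)}/{len(REQUIREMENTS)}; missing: {missing}")
    return len(met) / len(REQUIREMENTS)

assess("VideoWindow", {
    "co-presence": True,                         # shared public lounge context
    "concentration of suitable partners": True,  # linked two research lounges
    "low behavioural cost": False,               # users had to be in the public area
    "visual channel": True,                      # wall-mounted video link
    "document sharing": False,
    "multiple complementary systems": False,
})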
Capturing the role of informal communication, the identified dysfunctions of virtual teams and the
acknowledged functional system requirements, the proposed framework of an effective informal
communication system is illustrated in Figure 3.


Figure 3: A proposed framework of an effective informal communication system
5. Conclusion
Existing research suggests that virtual teaming technologies do not take into account the informal
communication needs of teams. Informal communication has been shown to play an important role in
the production, group maintenance and member support goals of teams. Virtual teaming is prone to a
considerable list of challenges due to the dispersed nature of the team. In the course of the literature
review, these challenges were described as (1) trust building, (2) information exchange, (3) process
gains and losses, (4) feelings of isolation, (5) participation, (6) coordination and (7) cohesion. In order
to limit the impact of the identified challenges and help virtual teams meet their production, group
maintenance and member support needs, five functional needs of an effective informal
communication system are distilled from the literature. The identified needs are given as (1) co-
presence, (2) low behavioural cost, (3) visual channel, (4) document sharing and (5) multiple
complementary systems.

The proposed framework can be used to assess existing and in-development virtual communication
systems in terms of their likely impact on virtual team effectiveness.
6. Future work
Given that this proposed framework is purely theoretical in nature, further research will be required to develop and refine it. Master's students in the Department of Information Systems, Rhodes University, have constructed a working VirtualWindow system. The proposed framework will be used to guide the evaluation of this VirtualWindow system together with other existing virtual communication systems. In particular, a case study has been planned and will be conducted between two remotely located project teams working within the IT systems development field in South Africa.
The intention is to evaluate the custom-built system, described above, to determine to what extent it addresses the five functional needs considered necessary for implementing an effective informal communication system. Based on the outcome of the case study, the VirtualWindow system will be updated to address shortcomings, and the framework will also be reflected on and refined in light of the findings. The ultimate goal of this future work is to introduce and evaluate a highly effective system for facilitating informal communication within virtual teams.
References
Bharadwaj, S. S. and Saxena, K. B. C. (2006) Impacting the Processes of Global Software Teams: A
Communication Technology Perspective. VISION - The Journal of Business Perspective. 10(4): 63-75.
Dourish, P., Adler, A., Bellotti, V. and Henderson, A. (1996) Your Place or Mine? Learning from Long-Term Use of Audio-Video Communication. Computer Supported Cooperative Work. 5(1): 33-62.
Fish, R. S., Kraut, R. E. and Chalfonte B. L. (1990) The VideoWindow System in Informal Communications.
Proceedings of the 1990 ACM conference on computer supported cooperative work.
Geister, S., Konradt, U. and Hertel, G. (2006) Effects of Process Feedback on Motivation, Satisfaction and
Performance in Virtual Teams. Small Group Research. 37(5): 459-489.
Isaacs, E. A., Whittaker, S., Frohlich, D. and O'Conaill, B. (1997) Informal Communication re-examined: New functions for video in supporting opportunistic encounters. In: Finn, K., Sellen, A. and Wilbur, S. (eds). Video-Mediated Communication. Mahwah, New Jersey: Lawrence Erlbaum.
Kirkman, B. L., Rosen, B., Gibson, C. B., Tesluk, P. E. and McPherson, S. O. (2002) Five Challenges to Virtual
Team Success: Lessons from Sabre Inc. The Academy of Management Executive. 16(3): 67-79.
Liu, Y. C. and Burn, J. M. (2007) Improving Value Returns from Virtual Teams. Proceedings of the European
and Mediterranean Conference on Information Systems.
Maruping, L. M. and Agarwal, R. (2004) Managing Team Interpersonal Processes through Technology: A Task Technology Fit Perspective. Journal of Applied Psychology. 89(6): 975-990.
Nardi, B. A. (2005) Beyond Bandwidth: Dimensions of Connection in Interpersonal Communication. Computer
Supported Cooperative Work. 14(2): 91-130.
Pinsonneault, A. and Caya, O. (2005) Virtual Teams: What We Know and What We Don't Know. International
Journal of e-Collaboration. 1(3): 1-16.
Santra, T. and Giri, V. N. (2009) Analyzing Computer-Mediated Communication and Organizational
Effectiveness. The Review of Communication. 9(1): 100-109.
Siebdrat, F., Hoegl, M. and Ernst, H. (2008) The Bright Side of Virtual Collaboration: How Teams Can Profit from
Dispersion. Proceedings of the 2008 Academy of Management Best Paper Awards.
Thomas, D. and Bostrom, R. (2008) Building Trust and Cooperation Through Technology Adaptation in Virtual Teams: Empirical Field Evidence. Information Systems Management. 25(1): 45-56.
University of Chicago (2002) Reciprocity. [online]. Available at:
http://csmt.uchicago.edu/glossary2004/reciprocity.htm. [Accessed 7 May 2010]
Von Bismarck, W.-B., Bungard, W. and Held, M. (1999) Is informal communication needed, wanted and supported? Proceedings of the 8th International Conference on Human Computer Interaction.
Whittaker, S., Frohlich, D. and Daly-Jones, O. (1994) Informal Workplace Communication: What it is like and
how might we support it? Proceedings of the SIGCHI conference on human factors in computing systems:
celebrating interdependence.

IS Consultants and SMEs: A Competence Perspective
Adrian Bradshaw, Paul Cragg and Venkat Pulakanam
College of Business and Economics, Canterbury University, Christchurch,
New Zealand
paul.cragg@canterbury.ac.nz

Abstract: Many small and medium-sized enterprises (SMEs) lack in-house resources, including IS knowledge
and skills. As a result, they often turn to consultants for assistance with IS projects, eg, helping the firm select
and implement a new system. While there is some evidence that this practice increases IS success, there has
been no attempt to discover if SMEs gain anything other than a new system from such projects. Thus this study
aimed to determine what SMEs gain from engaging IS consultants. In particular, the study aimed to identify any
improvements to internal IS competences. In brief, did the consultant engagement help improve IS skills and
abilities within the SME? A multiple case study approach was adopted. Data was collected from SMEs that had
engaged consultants to implement an accounting information system. In addition, to provide a broader
perspective, interviews were conducted with consultants who specialise in assisting SMEs with IS. Resource-
based theory was used as a lens to help analyse the case evidence. Each case was examined to identify
instances where consultants influenced a competence, ie, had an impact on the creation or use of any
competences. The cases provide evidence that SMEs lack many IS abilities. The findings indicate that
consultants compensate for a lack of IS competences rather than help build in-house competences. Consultants
help many SMEs overcome their lack of IS competences. The study adds to our understanding of consultants
acting as intermediaries to assist and advise firms. In this intermediary role as conduits, consultants provide
advice to assist many SMEs by finding appropriate products, implementing the system, integrating software with
existing systems, and training and support. Consultant attributes, eg, technical, soft skills and training skills,
influence project success. The study provides new insights into a particularly significant relationship for SMEs
and thus provides a step towards improving our understanding of IS success in SMEs.

Keywords: IS projects, IS consultants, SMEs, competences
1. Introduction
With relatively low levels of IS expertise, the typical small and medium-sized enterprise (SME) turns to
external experts when faced with a major IS project, for example, when acquiring a new system. Prior
studies have shown that external expertise is a major predictor of IS success for SMEs (Thong, 2001;
Thong et al., 1997; de Guinea et al., 2005; Bruque and Moyano, 2007). However, despite its
importance to IS success, relatively little research has focused on the relationship between SMEs and
IS external experts. This study helps address this gap by examining major IS projects in SMEs to
explore the role of IS experts, how external experts add value, and how external experts influence
project success.

A multiple case study approach was adopted involving both SMEs and IS consultants. The study
focused on IS projects where new accounting information systems (AIS) had been installed, including
a new version of an existing system. The evidence was used to examine the role played by IS
consultants in implementing IS projects for SMEs. SMEs were defined for this study as independent
firms with between 5 and 50 employees. A consultant was considered as any person or organisation
that is certified to install or implement an accounting information system. This included accounting
and consulting firms.
2. Prior literature
There is evidence that SMEs engage consultants for various tasks, including selecting and
implementing packaged software (Howcroft and Light, 2008), software and Web application
development, project management, and benchmarking (Nevo et al., 2007). Furthermore, studies have
identified the importance of external experts like consultants in aiding SMEs with IS projects (Soh,
Yap, & Raman, 1992; Thong, Yap, & Raman, 1994; Thong, Yap, & Raman, 1997; Thong, 2001; de
Guinea et al., 2005).

The broader IS literature indicates the following four reasons why firms in general engage
consultants:
Firms may engage consultants for their knowledge and expertise since firms may not have
sufficient knowledge or expertise in-house (Nevo et al., 2007).
Firms may engage consultants as an alternative to hard-to-find IS staff (Nevo et al., 2007).
Firms engage consultants for knowledge transfer to internal IS staff and to gain technical know-
how (Nevo et al., 2007).
Firms may hire consultants to compensate for a lack of capability. The lack of these capabilities
represents a barrier to technology transfer, especially in smaller and less experienced firms.
Consultants act as intermediaries to assist and advise firms, effectively compensating for a lack of
capability (Bessant and Rush, 1995).
Champion et al. (1990) and Basil et al. (1997) sort the roles generally played by consultants into nine
categories: hands-on expert, modeller, partner, coach, teacher or trainer, technician, counsellor,
facilitator, and reflective observer. Bessant and Rush (1995) highlight four roles of consultants. Firstly,
the traditional role of consulting sees consultants transferring specialised, expert knowledge to clients.
Secondly, consultants engage in the role of experience sharing, either implicitly or explicitly. Thirdly,
consultants act as marriage brokers, where consultants are a single point of contact for the client to
access a wide range of specialist services. The fourth role is a diagnostic role. In this role consultants
help their clients articulate and define their needs.

There is also evidence that consultants play various roles in SMEs. Traditionally in IS research, the
main role of the IS consultant in SMEs has been expressed as a mediator role. Thong (2001) and de
Guinea et al. (2005) note that in small businesses, consultants act as mediators, compensating for the lack of IS skills and expertise. The role of consultants has also been seen as an intermediary one. Consultants
act as bridging intermediaries by disseminating knowledge (Carey, 2008). Consultants also act as
conduits by standing between IS suppliers and SMEs (Howcroft and Light, 2008). In these
intermediary roles, consultants carry out several activities and services. Howcroft and Light (2008)
point out that consultants provide services such as advice to assist with finding appropriate software,
the implementation and customisation of the software, training and support service and the integration
of software with existing systems. In the bridging role consultants carry out activities such as:
transferring specialised knowledge; sharing ideas and experiences; acting as a point of contact for a
wide range of specialised services; and assisting clients to clearly specify their particular needs
(Carey, 2008).

Prior research has used the resource-based view to examine IS in SMEs (Thong, 2003; Caldeira and Ward, 2003; Eikebrokk and Olsen, 2007; Butler and Murphy, 2008; Cragg et al., 2011). Some of this has identified important IS competences, ie, skills and abilities, that are applicable to IS in SMEs. For example, Thong (2001) developed a resource-based model of IS implementation in small firms. It was shown that small firms with successful IS had highly effective external experts, adequate IS investment, high user IS knowledge, high user involvement, and high CEO support. External
expertise was found to be a predominant key factor of IS implementation success in small
businesses. Butler and Murphy (2008) used the resource-based view to understand how small to
medium software enterprises (SMSEs) build and apply business and IS capabilities. Their findings
indicated that managing external relationships was a core business and IS capability, whatever the period of an SMSE's evolution. Caldeira and Ward (2003) show the great importance of IS knowledge, ie, either within the firm or in a closely associated specialist enterprise, like a consultant.
This suggests that for SMEs, consultants play a vital role by supplying IS knowledge.

Scupola (2008) identified three important competences at the managerial level for SMEs:
Vision - understanding how the system could add value to the company and contribute to the
company's business strategy.
Value - finding out what value the system could bring to the company.
Control - ways/initiatives to encourage and enforce assimilation of the system at the individual
level.
Scupola (2008) also suggested three competences that are key at the individual level: technical skills,
interpersonal skills and conceptual skills. Two other frameworks also identify IS competences specific
to SMEs. Eikebrokk and Olsen (2007) provide a total of seven competences associated with
developing e-business in SMEs. Cragg et al.'s (2011) framework identifies six macro-competences,
which encompass a total of twenty-two competences, including the ability to, eg, define IS
requirements, access IS knowledge, manage change, and project management. The content of all
three frameworks is summarised in Table 1.

Table 1: IS competences from the SME literature
Eikebrokk & Olsen (2007) | Scupola (2008) | Cragg et al (2011)
Concept of e-business | Vision | Business and IS strategic thinking
Strategic Planning | Value | Define IS contribution
IT-business process | Control | Define the IS strategy
IT management | Technical skills | Exploitation
Systems and infrastructure | Interpersonal skills | Deliver solutions
Sourcing | Conceptual skills | Supply
Alignment | |
In summary, research on the impact of consultants on SMEs is underdeveloped. Although earlier
studies indicated that external experts have a significant influence on IS success, subsequent studies
provide little understanding of their influence. Prior research provides few insights into how external
experts influence IS projects and whether they influence in-house competences.
3. Research objectives and methods
The study aimed to gain a deeper understanding of the role played by IS consultants and their impact
on SMEs. A two-phase approach was adopted. Phase 1 was exploratory as there was relatively little
prior research examining the interplay between IS consultants and SMEs. The aim for phase 1 was
to:
Determine why SMEs engaged consultants.
Identify what tasks were undertaken by consultants.
The second phase was designed to build on phase 1, with a focus on IS competences. The aim of
phase 2 was to:
Determine if consultants influence IS competences in SMEs.
As phase 1 was exploratory, a multiple case study design was deemed appropriate, based on
Eisenhardt (1989) and Yin (2009). Phase 1 commenced with developing a case study protocol to
serve as an interview guide for data collection. The protocol explored information relating to the
background of the participants, the consultant engagement process, the role that consultants play, the
effectiveness of consultants and the success of the project. The initial protocol was improved through
the use of a pilot case study. The study was conducted in New Zealand. The multiple-case study
design consisted of four SMEs and three consultants. The SMEs involved met the following criteria:
Implemented an AIS within the last 3 years
Had 5 to 50 employees
Used a consultant to assist with the implementation
Tables 2 and 3 provide a summary of the SMEs and consultants involved in the study. Face-to-face interviews were used as the major method of collecting data. For the SMEs, the interviews involved one or more senior managers. All interviews were recorded and transcribed. In addition, brochures and websites were used to gather supporting material. The data were imported into NVivo and coded and analysed using several techniques inherent within the software, including memo writing, annotating, searching, pattern matching and modelling.
Table 2: Summary of the backgrounds of the four SMEs interviewed.
Case | Number of Staff | Sector | AIS | Type of Consultant | Project Outcome | Main Interviewee
SP | 30 | Manufacturing | Accredo | Independent-reseller | Successful | Managing Director
AB | 17 | Services | MYOB | Accountant | Successful | Practice Manager
AR | 16 | Manufacturing | Infusion | Independent-reseller | Not successful | Financial Controller
DL | 11 | Manufacturing | QuickBooks | Accountant | Successful | Owner-Manager
Table 3: Summary of the backgrounds of the three consultants interviewed
Consultant | Number of employees | Type of Consultant | AIS
AI | 5 | Independent-reseller | Accredo
ER | 5 | Independent-reseller | Accredo
AT | 23 | Accountant | MYOB
4. Findings
The cases revealed a range of reasons why the firms engaged consultants. The main reasons were a
lack of IS knowledge, a lack of IS skills, and a lack of accounting knowledge and accounting skill. For
example, here are some paraphrased comments from the interviews with SME managers:
we did not really understand what we wanted
we did not know what we were doing
there was no accounting experience within the firm
we could not implement on our own
insufficient IS expertise
need advice on software solution and assistance to install the software
5. The tasks and duties of consultants
The consultants engaged in a variety of tasks for each project. Table 4 provides a summary of the
major tasks that the consultants performed at each of the four firms. Table 4 also shows that the
consultants were involved at different stages in the overall life cycle of the project, based on the
systems development life cycle (SDLC). The most common tasks carried out by consultants were:
analysing needs and recommending a system, installing software, and configuring software.
Table 4: Tasks carried out by consultants at the SMEs
SMEs | Duties and tasks performed by consultant | SDLC Phase
SP | Analysis & Recommendations | Analysis & Design
SP | Installation of the software | Implementation
SP | Configuration of the software | Implementation
SP | Customisation of the software | Implementation
SP | Training users | Implementation
SP | On-going technical support | Support
AB | Analysis & Recommendations | Analysis & Design
AB | Installation of the software | Implementation
AB | Configuration of the software | Implementation
AB | On-going accounting function | Support
AR | Installation of the software | Implementation
AR | Configuration of the software | Implementation
AR | Customisation of the software | Implementation
DL | Analysis & Recommendations | Implementation
DL | Installation of the software | Implementation
DL | Training users | Implementation
6. IS competences
The interviews provided evidence that SMEs lack IS skills and abilities. Some examples of this are provided in Table 5, based on the competences framework of Cragg et al (2011). The SME managers indicated a preference to focus on their business and leave many IS activities to the experts. The SMEs also lack the ability to diagnose problems: the consultants noted that some SMEs would be aware they have an IS-related problem but would not understand it. SMEs also lack knowledge of accounting solutions, to the extent that they do not know what system would meet their needs. SMEs usually followed the recommendations of consultants on which solution to implement.

Implementing accounting software involves more than just installing the software, as the system must
be aligned with processes within the business. The interviews provided evidence that SMEs were
unable to do this. In some cases the accounting package was modified to fit the business. In other
cases, the new accounting package changed the way the business operated and the consultant
played an important role in assisting the organisation with the changes to their processes. The SMEs
managed their implementation project internally. However, as one consultant pointed out, consultants
occasionally assist SMEs with managing implementation projects. All of the SMEs had limited
knowledge of the infrastructure requirements of their accounting systems. The consultants were the
ones that exhibited these abilities and advised the organisations on infrastructure, particularly the
hardware and networks needed to operate systems effectively.


Table 5: The respective roles played by SMEs and consultants
Macro-competence | Competence | Role of SMEs | Role of Consultants
Define IS Contribution | Business Process Management | Consultants assist SMEs | Assist SMEs through advice
Define IS Contribution | Define IS Requirements | Consultants assist SMEs | Assist SMEs to align IS with business
Define IS Contribution | Accessing IS Knowledge | Most SMEs seek advice from outside sources | Provide information and advice on IS
Define IS Strategy | Technology Infrastructure Requirements | There was existing infrastructure in some cases | Provide information, advice and recommendations on infrastructure
Exploitation | Benefits Management | Consultants assist SMEs | Aid SMEs to assess IS benefits
Exploitation | Managing Change | Consultants assist SMEs | Assist with change management
Exploitation | Project Management | In some cases SMEs manage the projects | Manage the projects in some cases
Deliver Solutions | Implementation and integration | Carried out by the consultants | Carried out by the consultants
Deliver Solutions | Business Continuity & Security | Consultants assist SMEs | Carried out by the consultants
Supply | Manage IS Supplier Relationships | Consultants worked with SMEs to develop relationships | Consultants actively seek to build relationships with SMEs
Supply | Staff development | Staff were not formally trained; they learned on the job | Provide training on IS and other related business areas
7. Phase 2
To examine the competences perspective further, a second round of interviews was conducted with
both SMEs and consultants. These interviews focused on the third research question, ie:
Determine if consultants influence IS competences in SMEs.
The aim of these interviews was to focus on how IS competences may have been influenced by
consultants during the implementation project. The interview questions and the analysis drew on the
framework of IS competences (Cragg et al, 2011). Each SME case was examined to identify
instances where consultants influenced (ie, had an impact on the creation or use of) any of the listed
competences. The cases provided strong evidence that consultants help SMEs overcome the lack of many IS competences. Another conclusion is that consultants influenced all six of the macro
competences of Cragg et al (2011), ie:
Business and IS strategic thinking
Define IS contribution
Define the IS strategy
Exploitation
Deliver solutions
Supply
The Business and IS strategic thinking competence is defined as "an organization's ability to identify and evaluate the need for IS in providing opportunities to develop a better business strategy and to manage the IS activities effectively, including establishing an appropriate IS organization and defining roles, responsibilities and policies" (Cragg et al., 2011, p.356). This relates to knowledge about how accounting software can be of value to a business (Eikebrokk and Olsen, 2007). It relates to the ability to define a business case and establish appropriate criteria for making decisions about IS (Cragg et al, 2011). The interviews revealed that consultants help SMEs understand the value of IS, including the implications of implementing IS, and in some instances help SMEs to establish the business case for the project. This is because some SMEs lack the ability to establish a formal business case for implementing IS. Consultants provide a means for SMEs to identify and evaluate the potential and implications of implementing an accounting package. The consultants share knowledge with SMEs on the potential of implementing particular software. The ability to identify and evaluate the potential and implications of implementing accounting software (Cragg et al., 2011) relates to knowledge of how
accounting software can be of value to the organisation (Eikebrokk and Olsen, 2007). In providing this ability to SMEs, consultants compensate for and overcome the SME's lack of ability. The interviews revealed support for the influence of consultants on SMEs' understanding of the value of implementing AIS. The interviews indicated that consultants enhanced the ability of SMEs to define the potential and implications of implementing AIS.

Define IS Contribution refers to the ability of "translating the business strategy into investments in IS that achieve both performance improvements and meet information needs" (Cragg et al., 2011, p.358). It includes four competences relating to alignment, business process management, defining IS requirements, and accessing IS knowledge. The discussions revealed that consultants assist SMEs in managing business processes related to the implementation of accounting packages. One consultant commented: "I've come across a lot of clients where they have never done the books in-house; the accountant handles everything. The accountant does all their processing; they just provide bank statements or they provide information to the accountant; he does everything. They get to a point of saying, 'Don't we want to do this ourselves?' So therefore, their system, at the moment; they don't have a system, basically." Consultants also influence SMEs' ability to access IS knowledge, primarily when it comes to finding hardware providers.

The Define the IS Strategy macro-competence is the ability "to define the information and application architectures, technology infrastructure and IS resources it needs to enable the resources to be successfully bought and/or implemented" (Cragg et al., 2011, p.358). It addresses three IS abilities: to
define an appropriate software sourcing strategy, appropriate IS acquisition process, and appropriate
technology infrastructure. This relates to two competences from Eikebrokk and Olsen (2007), ie,
sourcing and systems and infrastructure. If SMEs have these abilities, then it is likely that the
implemented IS will be useable and have a positive impact on IS success. Package acquisition is the
common sourcing strategy for software, while the hardware infrastructure competence requires an
understanding of the infrastructure needed to implement the software. It includes computing
hardware, software and network infrastructure needed to implement a working system. The interviews
indicated that SMEs follow the recommendations of external parties like consultants and
accountants. SMEs often lack the ability to evaluate the various software solutions and do not
understand the infrastructure requirements of IS they wish to implement. Consultants play a key role
by assisting SMEs with sourcing and acquiring IS. The consultant also assists clients to evaluate
options and recommends which system is best suited for their needs. They make an assessment of
what the client requires and advise them on the software to use, typically an off-the-shelf system. For
example, if MYOB is not suitable, the consultant will suggest an alternative. In addition to software,
consultants typically either recommend new hardware or the upgrading of the existing infrastructure.
One consultant explained that she outlines what hardware infrastructure is needed to implement a
system. However, the logistics surrounding the hardware and infrastructure are left to SMEs and their
IT suppliers. The consultant liaises with the IT suppliers. Sometimes a consultant may recommend an
SME seek guidance from a hardware specialist, ie, another consultant from their "trusted circle".

The Exploitation competence relates to the organization's ability "to increase the benefits from effective use of information and application investments" (Cragg et al., 2011, p.359). It includes four
competences: benefits management, managing change, project management and inter-organizational
collaboration. The discussions targeted the ability of SMEs to exploit accounting systems. Training was identified as aiding clients to maximise the benefits of the system by learning tricks for using the software. Consultants also assist clients in maximising their system by bringing organisations closer to their accountants, recognising that accountants can improve the business development of an SME. The consultants also recognised that changes must be made to business
processes in order to derive additional benefits. However, such changes cannot be introduced
immediately after implementation, since the clients must be allowed to work in their usual manner and
be comfortable. Only after that was achieved would the consultant suggest changes: "what we tend to do is put the system in, let them get used to doing the things the way they've always done them and then they'll say, we want to bring in job management. We've always partially done it but not quite."

The Deliver Solutions competence concerns the ability to "convert requirements into working IS assets (business solutions) that perform according to specification and can be integrated effectively with other systems and processes" (Cragg et al., 2011, p.359). It includes four competences: applications development, implementation and integration, apply and use technology, and business
continuity and security. The SMEs in this study were not in a position to carry out IS implementation or integration without the assistance of consultants. Instead, consultants undertook such tasks as part of the IS implementation project. They typically installed the software without having to do any major customisation; any customisation would typically involve reports and invoices. The consultants indicated that they encouraged SMEs to adopt regular back-up procedures; unfortunately, this advice was seldom followed. One consultant indicated that she teaches SMEs the process of not only backing up the accounting system, but also other areas like email.

The Supply competence refers to three operational competences that allow the organization "to create and maintain its technology resources and applications through effective management of the IS supply chain and internal and external IS resources" (Cragg et al., 2011, p.360). It includes managing IS suppliers, asset management, and staff development. A theme to emerge from the interviews is that consultants develop on-going relationships with their clients. The consultants contend that they had excellent relationships with most of their clients, with one likening the relationship to a friendship. Another claimed to "keep an eye on things". These on-going relationships allow consultants to assist SMEs in keeping their systems operational. Consultants also teach or explain to clients the importance of maintaining their systems, including keeping software up to date. Consultants also worked to ensure that the technical skills within SMEs were adequate for the needs of the organisation. By so doing, consultants build and enhance the ability of SMEs to utilise IS. After implementation, SMEs learn what requirements are more suited to their type of organisation and operation. They learn how to use IS with, or integrate IS into, their current processes. Over time, SMEs become more capable in areas related to the integration of IS with the organisation's business processes.
8. Discussion and conclusions
The SMEs in this study did not have appropriate IS skills and abilities to select and implement a major
new system. Thus the SMEs engaged consultants due to a lack of IS competences. It seems that the
consultants compensated for a lack of competences within the SMEs, in line with Nevo et al (2007).
The study contributes by identifying numerous competences that were compensated for by
consultants. The study thus adds to our understanding of consultants acting as intermediaries to
assist and advise firms, as argued by Bessant and Rush (1995). In this intermediary role as conduits,
consultants provide advice to assist SMEs with finding appropriate products, implementing the
system, integrating software with existing systems, and training and support (Howcroft and Light,
2008). The reasons for SMEs engaging consultants reflect the resource-poor nature of many SMEs,
as discussed by much prior literature on small firms.

Prior SME-based research has identified numerous IS competences (Scupola, 2008; Eikebrokk and
Olsen, 2007; Cragg et al., 2011). This study contributes to this research by identifying IS
competences that are relevant during the implementation of IS and, importantly, identifies many
competences that are lacking in SMEs. This study provides evidence that consultants affected all six
of the IS macro competences proposed by Cragg et al (2011). Disappointingly, for this small sample
of SMEs, few if any internal competences were improved during the consultant engagement process.
It seems possible that many SMEs are failing to take the opportunity to learn from consultants.
Instead they rely heavily on the consultants, and typically follow their advice. SMEs could recognise
that consultants are a source of IS competences. Some SMEs could aim to learn from consultants to
help build, enhance or improve internal IS competences.

Prior research has identified dangers in SMEs relying heavily on consultants. For example, Howcroft
and Light (2008) expressed the concern that the consultants may gain more than the SME. Thus, if
SMEs decide to rely heavily on consultants, they need to develop their ability to manage this
relationship. Cragg et al (2011) referred to this as the ability to manage IS supplier relationships, ie, to "develop value added relationships between the business and IS suppliers (external and internal), including service level agreements and contract management (performance monitoring, problem resolution and negotiating amendments)" (p.357). It seems likely that some SMEs will continue to rely
heavily on consultants. An implication for SMEs is that they will need to find ways to develop this
competence. Further research could examine how SMEs achieve an ability to manage supplier
relationships, and whether this competence influences IS success in SMEs.

It should be noted that this study was limited to a relatively small number of SMEs seeking a new
accounting package. Thus a different set of SMEs and consultants, and a different application system
could present different results. Also, the list of competences may not be exhaustive, as different
frameworks indicate different competences. This study focused on competences that are applicable to
IS implementation projects.
References
Basil, P., Yen, D., and Tang, H. (1997). Information consulting: developments, trends and suggestions for growth.
International Journal of Information Management, 17(5), 303-323.
Bessant, J., and Rush, H. (1995). Building bridges for innovation: the role of consultants in technology transfer.
Research Policy, 24(1), 97-114.
Bruque, S. and Moyano, J. (2007). Organisational determinants of information technology adoption and
implementation in SMEs: The case of family and cooperative firms. Technovation, 27(5), 241-253.
Butler, T., and Murphy, C. (2008). An exploratory study on IS capabilities and assets in a small-to-medium
software enterprise. Journal of Information Technology, 23(4), 330-344.
Caldeira, M.M. and Ward, J.M. (2003). Using resource-based theory to interpret the successful adoption and use
of information systems and technology in manufacturing small and medium-sized enterprises, European
Journal of Information Systems, 12(2), 127-141.
Carey, J. (2008). Role misconceptions and negotiations in small business owner/web developer relationships.
Journal of Management & Organization, 14(1), 85-99.
Champion, D. P., Kiel, D. H., and McLendon, J. A. (1990). Choosing a consulting role. Training and Development
Journal, 44(2), 66.
Cragg, P., Caldeira, M. and Ward, J. (2011). Organizational Information Systems Competences in Small and
Medium-sized Enterprises. Information & Management, 48(8), 353-363.
de Guinea, A. O., Kelley, H., and Hunter, M. G. (2005). Information Systems Effectiveness in Small Businesses:
Extending a Singaporean Model in Canada. Journal of Global Information Management, 13(3), 55-79.
Eikebrokk, T.R. and Olsen, D.H. (2007) An empirical investigation of competency factors affecting e-business
success in European SMEs. Information & Management, 44(4), 364-383.
Eisenhardt, K. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532-550.
Howcroft, D., and Light, B. (2008). IT consultants, salesmanship and the challenges of packaged software
selection in SMEs. Journal of Enterprise Information Management, 21(6), 597-615.
Nevo, S., Wade, M., and Cook, W. (2007). An examination of the trade-off between internal and external IT
capabilities. Journal of Strategic Information Systems, 16(1), 5-23.
Scupola, A. (2008). Conceptualizing Competences in e-services adoption and assimilation in SMEs. Journal of
Electronic Commerce in Organizations, 6(2), 78-91.
Soh, C. P. P., Yap, C. S., and Raman, K. S. (1992). Impact of consultants on computerization success in small
businesses. Information and Management, 22(5), 309-319.
Thong, J. Y. L. (2001). Resource constraints and information systems implementation in Singaporean small
businesses. Omega, 29(2), 143-156.
Thong, J. Y. L., Yap, C. S., and Raman, K. S. (1994). Engagement of external expertise in information systems
implementation. Journal of Management Information Systems, 11(2), 209-231.
Thong, J. Y. L., Yap, C. S., and Raman, K. S. (1997). Environments for Information Systems Implementation in
Small Businesses. Journal of Organizational Computing and Electronic Commerce, 7(4), 253-278.
Yin, R. (2009). Case study research: design and methods. Sage.


Developing a Framework for Maturing IT Risk Management
Capabilities
Marian Carcary
Innovation Value Institute, National University of Ireland Maynooth, Maynooth,
Co Kildare, Ireland
marian.carcary@nuim.ie

Abstract: Understanding the value derived from IT investments and IT enabled operational improvements is
difficult, and has been a subject of research and debate among ICT practitioners and academics for many years.
This is particularly so because innovative technological developments have supported transformative changes in
organizational operational activities. Research continues to investigate approaches to not only understanding the
value derived by IT but also to optimizing this value. One of the key aspects of optimizing IT-driven value is the
requirement to effectively manage risk. The continual evolution of the IT risk landscape requires effective Risk
Management (RM) practices for all IT risk areas, such as, but not limited to, security, investments, service
contracts, data protection and information privacy. Effectively managing these risk areas poses specific concerns from the perspective of Chief Information Officers (CIOs) and Chief Risk Officers (CROs). Hence, significant consideration should be given to not only the processes involved in assessing, prioritizing, handling and
monitoring these risks but also to ensuring the development of an appropriate risk culture and the establishment
of effective RM governance structures, to support effective RM. This paper examines the maturity
model/framework approach to improving an organizations IT capabilities, with specific reference to effectively
managing IT-related risks, and increasing value derived over time. A new IT Risk Management maturity model is
presented; this framework is part of the IT Capability Maturity Framework (IT CMF) which supports value-driven
IT management practices. It was developed by the Innovation Value Institute at the National University of Ireland
Maynooth, following a design science and open innovation research approach. The IT CMF, consisting of 33
Critical Capabilities, focuses on maturing key activities of the IT organization. The Risk Management Critical
Capability presented in this paper enables organizations to determine their IT RM maturity and identify key
recommendations in specific areas to improve maturity overtime. Thereafter the paper presents an analysis of the
maturity model approach to managing risk, to improving an organizations IT capabilities, and to deriving
enterprise-wide value from more mature IT practices.

Keywords: IT risks, IT risk management, maturity model, IT CMF, critical capability
1. Introduction
Risk is a function of the likelihood of a particular threat source exploiting an organization's
vulnerability, and the impact of the adverse event on the organization (Elky, 2006). However, for many
organizations the various IT risks are often under-assessed (Benaroch et al, 2006; Glass, 2006). As
technology continues to drive industry transformation, traditional business models are gradually being
replaced by technology-enabled models (Ernst and Young, 2011), and while this may support
improved operational efficiency, it also exposes an organization to increased risk likelihood and
impact levels. Today, with the proliferation of mobile computing, social networking, and cloud based
services, organizations face increased risk of data leakage, asset theft and reputational damage. In
fact, IT risks stories are common in the recent literature. Reports of the TK Maxx security breach
resulting in theft of over 45 million customer card numbers (Gaudin, 2007); Estonias denial of service
attacks, affecting government, banking and school websites (Kirk, 2007); and the recent high-profile
wiki-leaks publishing global intelligence files are just a few examples.
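Elky's definition at the start of this section can be made concrete with a simple expected-exposure formula; the multiplicative form below is a common illustration, not a formula prescribed by any of the sources cited here:

$$ E(r) = L(r) \times I(r) $$

where $L(r)$ is the estimated likelihood of the threat source exploiting the vulnerability in a given period and $I(r)$ is the impact of the resulting adverse event. For instance, a breach with an estimated annual likelihood of 0.2 and an impact of EUR 500,000 yields an expected annual exposure of EUR 100,000.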

Therefore, the ability to effectively manage the various IT risks is an important factor in organizations
deriving and optimizing the value associated with their IT investments. Effective practices should
consider all key IT risk areas to enable CIOs and CROs to prioritize their resources in addressing the
most significant risks. This paper presents a new maturity modeling approach to identifying and
developing an organization's risk management capabilities. The maturity model in Information
Systems (IS) research continues to grow in popularity, and while concerns exist regarding the
development process and foundations upon which some models are developed, the RM capability
maturity model presented in this paper was built upon existing theories and methodologies, followed a
rigorous development process based on a design science approach, and was externally validated in a
number of pilot organizations.

The structure of the paper is as follows: Section 2 presents an overview of the IT risk landscape and
existing approaches to RM. Section 3 introduces the concepts of maturity models in IS research and
highlights the concerns that exist regarding the approach. Section 4 provides an overview of a new IT
management maturity model, the IT Capability Maturity Framework (IT CMF), and an outline of how the concerns associated with maturity models were addressed in its development. It further discusses the model's RM critical capability for assessing and improving RM maturity over time. Section 5 concludes the paper with a discussion of the value of the maturity modeling approach for optimizing IT capabilities, and specifically RM capabilities.
2. Managing IT risks
Investing in IT exposes an organization to several risk factors, including, for example, project, organizational and technical risks (see for example Amberg and Okujava, 2005; Brown, 2005). Undoubtedly, one of the biggest concerns from an organization's perspective is security, in terms of protecting the organization's business-critical applications and confidential/sensitive data. The Frost and Sullivan (2011) study, which was conducted for the International Information Systems Security Certification Consortium ((ISC)²), reported that key risks from an organization's security perspective
include application vulnerabilities, mobile devices, viruses and worm attacks, internal employees,
hackers, contractors, cyber terrorism, cloud-based services and organized crime. The study further
reported that the key new and emerging risks facing organizations today include mobile devices and
mobility, cloud computing and social media.

Advancements from PDAs to multi-functional and ubiquitous smartphones and tablets have resulted in a proliferation of mobile devices. However, the ability to access business applications, corporate-sensitive data and confidential personal data anytime, anywhere poses risks regarding data leaks and the loss or theft of mobile devices. For example, smartphones were growing at a rate of 21% in North America, and tablets and e-readers were expected to reach sales of 22 million units in North America by 2016. This concept of the borderless environment poses specific concerns from a data security and control perspective (Frost and Sullivan, 2011; Ernst and Young, 2011).

Cloud computing, regarded as an enabler of scalable, flexible and powerful computing, poses specific
concerns in terms of confidential information exposure to unauthorized sources; loss or leakage of
confidential data; weak systems or application controls; susceptibility to cyber-attacks; disruptions in
the continuous operations of the data centre; and inability to support compliance audits, among others
(Frost and Sullivan, 2011). Similar cloud-based challenges, and a number of additional ones, were highlighted in Ernst and Young's (2011) Global Information Security Survey and include legal compliance and privacy; information security and data integrity; contractual and legal risks; governance and risk management assurance; reliability and continuity of operations; and integration and interoperability. In the Information Systems Audit and Control Association's (ISACA) (2010) survey, 45% of US IT professionals believed that the risks of cloud adoption outweighed any associated benefits; only 10% of those surveyed would consider migrating mission-critical applications to the cloud. However, 61% of Ernst and Young's (2011) respondents were currently using, evaluating or planning adoption of cloud computing-based services.

Further, the growth in the use of social media tools means that social media applications are now being used not just for personal purposes but also for business purposes, such as connecting with customers, tracking customer comments about products and services, and developing brand loyalty. Approximately 15% of the world's population are registered users of popular social and business networking sites. For example, Facebook had 687.1 million users in June 2011, while LinkedIn had 79.2 million unique visitors worldwide in March 2011. IT risks associated with their use for business purposes include exposure to malicious software within social networks; hacked accounts; and exposure of confidential data or sensitive company information (Ernst and Young, 2011).

However, IT risks span a broader spectrum than security, and include a wide range of risks that may affect or result from IT operations: for example, risks associated with compliance with regulatory changes; compliance with ethics policies; IT investments; IT project lifecycles; service continuity threatened by security breaches, system failure or natural disasters; internal process changes impacting product or service quality; supplier contracts; and reputation. Hence, an effective approach to managing these and other IT risks is required to enable organizations to reduce their exposure and the potential impact on the organization's operations, and in essence to protect the organization's assets and mission (Elky, 2006). Some of the various IT risks may be intractable, in that they resist mitigating actions, or may be unforeseen/not apparent at the time of project planning (Taylor, 2006). Hence, a proactive approach is required to identifying and scoring IT risks, including new and emerging risks; to prioritizing identified risks according to determined risk likelihood and impact scores; to identifying and implementing appropriate risk handling strategies; and to monitoring the effectiveness of the implemented risk controls over time.
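To make this identify-score-prioritize cycle tangible, the sketch below maintains a minimal risk register in Python; the risk names, the 1-5 likelihood and impact scales, and the multiplicative score are illustrative assumptions rather than elements of any framework cited in this paper.

```python
# Minimal sketch of a risk register with likelihood x impact scoring.
# Scales and example risks are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple multiplicative priority score, range 1..25.
        return self.likelihood * self.impact

register = [
    Risk("Data leakage via lost mobile device", likelihood=4, impact=4),
    Risk("Cloud provider service disruption", likelihood=2, impact=5),
    Risk("Supplier contract non-compliance", likelihood=3, impact=2),
]

# Rank the register so that CIOs/CROs can address the biggest risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

Re-running the scoring as new and emerging risks are added, and as likelihood and impact estimates change, keeps the prioritization current.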

Management of risk is well discussed in the literature (for example Casey, 2007; Rosenquist, 2007; Westerman and Hunter, 2007), and several IT management frameworks address the issue of RM in varying degrees of depth (for example, CMMI, Management of Risk (MoR), ISO 27001, ISO 27002, IT Risk Framework, and COBIT). Risk management is "the process that allows IT managers to balance the operational and economic costs of protective measures and achieve gains in mission capability by protecting the IT systems and data that support their organizations' missions" (Stoneburner et al, 2002). The ability to understand the IT risks on the horizon, and the likelihood and magnitude of these risks, enables stakeholders to prioritize scarce resources and take steps to protect the IT assets proportionate to their value to the organization. Strategies for managing risk, such as mitigation, transference, acceptance, or avoidance, will depend on the identified risk scores or priority. Monitoring changes in risk scores over time, and the use of methodologies and tools supporting the management of risks (e.g. the National Institute of Standards and Technology methodology, OCTAVE, COBRA, etc.), closes the loop on RM processes, enabling continual monitoring of the effectiveness of RM approaches (Elky, 2006). However, effective RM approaches alone are not sufficient. Management and stakeholder support and buy-in, development and enforcement of policies that deal with new and emerging risks, and development of a risk culture that involves training and communication of RM activities are also essential. Further, IT RM should not exist in a silo; many authors highlight the importance of integrating IT RM approaches into the overall Enterprise Risk Management (ERM) framework, an approach that involves holistically managing the enterprise's entire risk portfolio (Fraser and Simkins, 2010; Kouns and Minoli, 2010). As RM seeks to protect the organization's assets and mission, it needs to be regarded as a management function as opposed to merely a technical activity (Elky, 2006). Integrating IT risk with ERM practices promotes a greater understanding by IT of the business priorities and the protection of critical business services, and enables more effective risk mitigation, avoidance of risk oversights and better return on IT investments (Silicon Republic, 2010).
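A small continuation of the earlier register sketch shows how scored risks might be mapped to the handling strategies named above and re-checked over time to close the loop; the thresholds are invented for illustration and are not drawn from Elky (2006), the IT CMF, or any other cited framework.

```python
# Illustrative mapping from a priority score (1..25) to a handling strategy;
# thresholds are assumptions, not values from any cited framework.
def handling_strategy(score: int, transferable: bool = False) -> str:
    if score >= 20:
        return "avoid"      # redesign the activity so the risk disappears
    if score >= 12:
        # transfer (e.g. insure or outsource) where possible, else mitigate
        return "transfer" if transferable else "mitigate"
    if score >= 6:
        return "mitigate"   # implement controls to cut likelihood or impact
    return "accept"         # document the risk and keep watching it

def monitor(previous_score: int, current_score: int) -> str:
    # Closing the loop: a rising score suggests the controls are losing
    # effectiveness and the risk needs re-prioritization.
    return "escalate" if current_score > previous_score else "controls holding"

print(handling_strategy(16, transferable=True))     # -> transfer
print(monitor(previous_score=16, current_score=9))  # -> controls holding
```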

The following sections of this paper consider the maturity modeling approach to improving risk management capabilities, by enabling organizations to understand their current RM capabilities and identify practices to improve their capability maturity over time. The maturity modeling approach has been well adopted in IS research. Section 3 provides a brief overview of maturity models prior to introducing a new capability maturity framework that addresses organizations' RM capability maturity.
3. Maturity models in IS research
Maturity models are conceptual models that outline "anticipated, typical, logical and desired evolution paths towards maturity" (Becker et al, 2010), where maturity is "a measure to evaluate the capabilities of an organization in regards to a certain discipline" (Rosemann and de Bruin, 2005). Maturity can also be regarded as "an evolutionary progress in the demonstration of a specific ability or in the accomplishment of a target from an initial to a desired or normally occurring end stage" (Mettler, 2009). Maturity models outline the characteristics associated with various levels of maturity, thereby serving as the basis for an organization's capability maturity assessment. In essence, they help organizations to understand their "as is" situation and enable them to transition to the desired "to be" maturity, through deriving and implementing specific practices or improvement roadmaps. These improvement roadmaps support a stepped progression with respect to organizations' capabilities, enabling them to fulfill the characteristics required to meet specific maturity levels.

A recent literature review of maturity models in IS research has highlighted a growing interest in this area (Becker et al, 2010; Mettler, 2009), in order to inform organizational continuous improvement and support either self- or third-party maturity assessments. While the Software Engineering Institute's (SEI) Capability Maturity Model (CMM) for software development and its successor, Capability Maturity Model Integration (CMMI), are most prevalent in studies of maturity (Becker et al, 2010), several new maturity models have been developed in recent years. These focus on improving maturity in, for example, IT/business alignment (Luftman, 2003; Khaiata and Zualkernan, 2009); business process management (Rosemann and de Bruin, 2005); business intelligence (Hewlett Packard, 2007); project management (Crawford, 2006); information lifecycle management (Sun, 2005); digital government (Gottschalk, 2009); inter-organizational systems adoption (Ali et al, 2011); and enterprise resource planning systems use (Holland and Light, 2001).

Despite the growing interest in this area, according to Becker et al (2010) IS research has rarely endeavored to reflect on and develop theoretically sound maturity models, and as such there is a lack of evidence of scientifically rigorous methods in their development processes, with some models based on poor theoretical foundations (Mettler, 2009). Methods such as Design Science (DS) (Hevner et al, 2004) are proposed as a useful means to develop new maturity models in a rigorous manner, using both prior studies and empirical evidence as the basis for a model's content development and stages of maturity. Further, Becker et al (2010) suggest that there is a lack of evidence of validity testing of newly developed models; to ensure their relevance for practitioners, proposed models need to be piloted and applicability checks conducted with practitioners. Closing the gap between current and desired maturity is also problematic, with Mettler (2009) suggesting that many models do not describe how to carry out improvement actions.

In line with the categorization of maturity models adopted by Becker et al (2010) (prescriptive, descriptive, descriptive/prescriptive, descriptive/reflective, and reflective), this paper makes a prescriptive contribution (i.e. a specification of how capability improvements could take place) through the presentation of a new maturity model. The model presented addresses the concerns outlined above by following a rigorous development process based on design science and open innovation principles; by empirically piloting, testing and validating the model; and by developing a series of improvement practices, outcomes and metrics to drive maturity level progression.
4. Presenting a new maturity model - the IT capability maturity framework (IT
CMF)
The IT CMF (Figure 1) is a capability maturity model developed at the Innovation Value Institute (IVI), National University of Ireland Maynooth. It represents a systematic framework to enable CIOs/CEOs to understand and improve their organization's maturity in order to derive business value from IT investments (Curley, 2004; 2007). The framework represents an emerging blueprint of IT capabilities and serves as an assessment tool which enables organizations to understand and improve their IT capability over time across five levels of maturity. At a macro level, the IT CMF consists of four integrated IT management strategies (macro capabilities); these comprise 33 critical capabilities (CCs) which represent key activities of the IT organization.

Figure 1: IT CMF (source: Innovation Value Institute)
Content development for the IT CMF is undertaken by the IVI consortium. The consortium is made up of over 80 industry partners linked to IVI through a common desire to develop and enhance their organizations' understanding of improved business value through IT capability management. The consortium members are invited and encouraged to participate in the research and development activities of the IVI through workgroup contribution. A work group exists for each of the 33 CCs, each of which includes a mix of Subject Matter Experts (SMEs) and Key Opinion Leaders (KOLs), including academic researchers, industry-based practitioners, and consultants. Work group development output evolves through a series of four stages and is reviewed at the end of each stage by a technical committee (TC). As development work progresses through the various stages, more in-depth content is required and the CC material is subject to more rigorous reviews and validation processes.

This content development across the four stages follows the Design Science (DS) research approach.
This approach is increasingly recognised within IS as an important complement to the prevalent
behavioral science. DS is a problem solving approach that involves building and evaluating innovative
artifacts in a rigorous manner to solve complex, real world, relevant problems, make research
contributions that extend the boundaries of what is already known, and communicate the results to
appropriate audiences (Gregor and Jones, 2007; Hevner et al, 2004; March and Smith, 1995; March
and Storey, 2008; Pries-Heje and Baskerville, 2008; Purao, 2002; Venable, 2006). Knowledge and understanding of the problem domain are achieved through artifact construction and evaluation (Hevner et al, 2004). The DS approach adopted in the IT CMF development (Table 1) is closely aligned with the three DS research cycles proposed by Hevner (2007). (For a detailed discussion of its development, see Carcary (2011).)
Table 1: DS cycles of the IT CMF development

DS Relevance Cycle: Relevance of the IT CMF artifact is driven by the problems organizations experience in optimizing how they currently manage and measure the business value of their IT investments. Field testing of the IT CMF in the application environment helps determine if further development work is required to ensure its relevance in addressing the business problem.

DS Rigor Cycle: Development is grounded in existing artifacts, methodologies, foundational theories and expertise, and draws from an extensive base of industry and academic literature and existing IT standards and frameworks. Contributions to the knowledge base include a detailed framework and set of practices that help drive innovation and change in how organizations manage and use their IT investments to optimize business value.

DS Design Cycle: Development focuses on iterative build-and-evaluate activities by the CC workgroup to address the identified problem, while drawing on existing theoretical foundations and methodologies in the knowledge base. The build process is evolved and refined through evaluation feedback, including technical committee stage-gate reviews to identify further development refinements, and through field testing of the artifact within contextually diverse organizations.
4.1 An examination of the risk management critical capability
Located within the IT CMF's "Managing IT like a Business" macro capability, the Risk Management CC focuses on proactively assessing, prioritizing, handling and monitoring risks in order to minimize exposure to, and the potential impact of, IT risk. This CC aims to be holistic in addressing the key categories of IT risk facing organizations, including for example IT security; data protection and information privacy; operations/business continuity and disaster recovery; IT investment; IT programme, project and product life cycles; IT service contracts and suppliers; IT image/brand; IT personnel; and regulatory/legal and ethics policy compliance, as well as emerging risks in these and other categories. The assessment provides key insights into an organization's maturity with respect to three key areas: governance, risk profile design and the actual risk management processes. These three categories comprise nine capability building blocks (CBBs), as outlined in Table 2.
Table 2: Capability building blocks of the RM CC

Governance: Policies for risk management; Integration into IT leadership and governance structures; Management, governance and performance management; Communications and training.

Profile Design: Definition of risk profiles.

Process: Risk assessment; Risk prioritization; Risk handling; Risk monitoring.
The above nine CBBs are the focus areas of an RM assessment, with dedicated maturity questions developed within each of these areas. Examples of RM maturity assessment question topics are outlined in Table 3.

Table 3: Example RM maturity assessment question topics
Key areas of the IT CMF RM Maturity Assessment
Definition and implementation of risk policies;
Establishing risk policies ownership and responsibilities;
Integrating RM into IT leadership and governance structures;
Identifying RM roles and responsibilities;
Identifying levels of senior management support;
Measuring the effectiveness and efficiency of RM activities;
Training stakeholders in RM;
Disseminating RM policies, processes and results;
Determining collaboration levels between risks managers;
Defining risk profiles by their potential impact;
Using risk profiles in risk assessment and mitigation;
Identifying subject matter experts for risk assessments;
Identifying and scoring risks and their impact;
Prioritizing risks and risk handling strategies;
Identifying tools to support risk handling;
Assigning ownership to identified risks;
Defining and implementing appropriate risk controls;
Monitoring and reporting identified risks and the effectiveness of risk controls.
Assessment questions in these and other areas describe maturity level statements that follow the IT CMF's prescribed maturity level logic, across five stages: initial, basic, intermediate, advanced and optimized. Maturity assessment participants are invited to score the organization's maturity across these five levels, as well as to identify the future desired state. Aggregated scores support reporting of the organization's self-assessed current and desired maturity levels; in addition, an IVI assessment, based on both the survey assessment results and in-depth interviews with key RM stakeholders, results in a formal IVI maturity assessment score and the presentation of a set of practices to support the organization in transitioning to higher maturity levels. A detailed set of IVI RM Practices, Outcomes, and Metrics (POMs) at the various maturity stages supports closing the gap between an organization's current and desired maturity states.
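As an illustration of how such survey answers might be rolled up, the sketch below aggregates per-CBB responses on the five-level scale into current and desired maturity and a gap; the data layout and the simple averaging rule are assumptions for the sketch, not the IVI's actual scoring algorithm.

```python
# Sketch: aggregate per-CBB maturity answers (1..5) into current/desired
# levels and the gap an improvement roadmap must close. Data are invented.
from statistics import mean

LEVELS = ["initial", "basic", "intermediate", "advanced", "optimized"]

# Each CBB maps to a list of (current, desired) answers from participants.
answers = {
    "Policies for risk management": [(2, 4), (3, 4)],
    "Risk assessment":              [(2, 3), (2, 4)],
    "Risk monitoring":              [(1, 3), (2, 3)],
}

for cbb, scores in answers.items():
    current = mean(c for c, _ in scores)
    desired = mean(d for _, d in scores)
    print(f"{cbb:30s} current={current:.1f} ({LEVELS[round(current) - 1]}), "
          f"desired={desired:.1f}, gap={desired - current:.1f}")
```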

As such, the RM maturity assessment represents the basis for organizations to understand their key strengths and weaknesses in their ability to mitigate potential IT risks. The output from the IT CMF RM assessment enables an organization to put action plans in place to mature its capability in effectively managing the IT risks on the horizon. In general, transitioning to higher maturity levels requires an organization to align and integrate business objectives with RM practices; define and implement effective processes for risk assessment, prioritization, handling and mitigation for all risk areas, including new and emerging risks, and integrate them into enterprise RM processes; create an effective and integrated risk register; obtain support from senior management; ensure long-term training and retention of skills; and embed RM into IT and business activities. Adopting these and other practices in order to mature RM capabilities and proactively manage risks becomes an important step in deriving business value from IT investments.

The following section provides an analysis of the value of this maturity model and the maturity model
approach in supporting the transition to higher maturity levels and more effective IT capabilities.
5. Discussion and conclusions
Growth in the development and use of maturity models provides strong support for the relevance of the maturity assessment approach in practice. As stated by Mettler (2009), as organizations constantly face pressures to obtain and retain competitive advantage, invent and reinvent new products and services, reduce cost and time to market, and enhance quality at the same time, the need for and the development of new maturity models will certainly not diminish, given that they assist decision makers in balancing these sometimes divergent objectives in a more or less comprehensive manner. Based on the literature, the greatest concern regarding this assessment approach is the process involved in maturity model development: rather than building on a theoretical basis, many models are simply based on practices drawn from organization- or industry-specific projects that demonstrated favourable results; for many models there is a lack of testing in terms of validity, reliability and generalizability; and there is little documentation on how the model was designed and developed (Mettler, 2009).

Based on the above, it can be suggested that, given the relevance of maturity models to organizations in informing and supporting prioritized stepped improvements in capabilities, a maturity model that addresses the concerns in the literature pertaining to theoretical foundations and rigorous development and testing approaches should be a useful contribution. The framework for IT management outlined in this paper, and more specifically for maturing the RM capability, should therefore represent an important contribution from the perspective of organizations seeking to optimize their RM capabilities and the value they derive from IT. Through adopting the maturity modelling approach to RM and improving maturity over time, it is proposed that CEOs and CIOs can improve the organization's ability to manage risks and protect the business from risk impacts; they can reduce the organization's exposure to risks such as IT security, IT sabotage, data protection and information privacy, and IT investment risks; they can increase the likelihood of meeting the scope, cost, time and quality targets of projects by effectively managing the associated IT risks; they can increase the likelihood of compliance with external regulations and ethics policies; and they can increase transparency of how IT risks map/relate to business objectives and decisions. In essence, organizations with a mature RM capability are more effective in proactively managing IT risks, and in reducing the exposure to and the potential impact of IT risks.

As outlined above, the presentation of the IT CMF's risk management critical capability is a prescriptive contribution; further research is needed to investigate the extent to which this maturity model supports capability maturity progression in a real-world setting over time. As such, future research will involve a series of multiple case studies on a longitudinal basis to determine the real-world value of this approach in improving organizations' capabilities in managing existing, and new and emerging, risks.
References
Ali, M., Kurnia, S., Johnston, R. (2011). Understanding the Progressive Nature of Inter-Organizational Systems
(IOS) Adoption. E-Collaboration Technologies and Organizational Performance: Current and Future Trends,
Ed. 1, pp. 124-144.
Amberg, M. and Okujava, S. (2005). State of the art of IT project value analysis. In (Ed. D. Remenyi), Proceedings of the 12th European Conference on Information Technology Evaluation, pp. 21-34. Turku, Finland, 29th-30th September, Academic Conferences, Reading.
Becker, J., Niehaves, B., Poppelbus, J., and Simons, A. (2010). Maturity Models in IS Research. Proceedings of the 18th European Conference on Information Systems, pp. 1-12. Available at: http://web.up.ac.za/ecis/ECIS2010PR/ECIS2010/Content/Papers/0320.pdf
Benaroch, M., Lichtenstein, Y. and Robinson, K. (2006). Real options in Information Technology risk
management: an empirical validation of real-option relationships. MIS Quarterly, 30,(4), 827-864.
Brown, A. (2005). IS evaluation in practice. In (Ed. D. Remenyi), Proceedings of the 12th European Conference on Information Technology Evaluation, pp. 109-118. Turku, Finland, 29th-30th September, Academic Conferences, Reading.
Carcary, M. (2011). Design Science Research: The Case of the IT Capability Maturity Framework (IT CMF). Electronic Journal of Business Research Methods, 9(2), pp. 109-118.
Casey, T. (2007). Threat Agent Library Helps Identify Information Security Risks, Intel White Paper
Curley, M. (2004). Managing information Technology for business value practical strategies for IT and business
managers. Intel Press.
Curley, M. (2007). Introducing an IT Capability Maturity Framework, International Conference on Enterprise
Information Systems.
Crawford, J.K. (2006). The project management maturity model. Information Systems Management, 23,(4), pp.
50-58.
Elky, S. (2006). An Introduction to information systems risk management. SANS Institute. Available at:
http://www.sans.org/reading_room/whitepapers/auditing/introduction-information-system-risk-
management_1204
Ernst and Young, (2011). Global Information Security Survey. Available at:
http://www.ey.com/GL/en/Services/Advisory/2011-Global-Information-Security-Survey---Plugging-the-data-
leaks
Fraser, J. and Simkins, B. (2010). Enterprise Risk Management: Today's Leading Research and Best Practices
for Tomorrow's Executives, Wiley
Frost and Sullivan (2011). The 2011 (ISC)² Global Information Security Workforce Study. Available at: https://www.isc2.org/uploadedFiles/Landing_Pages/NO_form/2011GISWS.pdf
Gaudin, S. (2007). TK Maxx security breach costs soar to 10 times earlier estimate. Information
Week. August 15, 2007.
Glass, R. (2006). Looking into the challenges of complex IT projects. Communications of the ACM, 49,(11), 15-
17.
Gottschalk, P. (2009). Maturity levels for interoperability in digital government. Government Information Quarterly,
26 (1), pp75-81.
Gregor, S. and Jones, D. (2007). The anatomy of a design theory. Journal of the Association of
Information Systems, 8,(5), 312-335.
Hevner, A., March, S. and Park, J. (2004). Design Science in Information Systems research. MIS
Quarterly. 28,(1), 75-105.
Hewlett Packard, (2007). The HP Business intelligence maturity model. Available at:
http://h20195.www2.hp.com/v2/GetDocument.aspx?docname=4AA1-5467ENW&cc=us&lc=en
Holland, C.P. and Light, B. (2001). A stage maturity model for enterprise resource planning systems use.
Database for Advances in Information Systems, 32(2), pp 24-45.
ISACA (2010). ISACA US IT risk/reward barometer survey. Available at: http://www.isaca.org/About-
ISACA/Press-room/News-Releases/2010/Pages/ISACA-US-IT-Risk-Reward-Barometer-Survey.aspx
Khaiata, M. and Zualkernan, I.A. (2009). A simple instrument to measure IT business alignment maturity.
Information Systems Management, 26(2), pp 138-152.
Kirk, J. (2007). Estonia recovers from massive denial-of-service attack. NetworkWorld. May 17, 2007.
Kouns, J. and Minoli, D. (2010). Information Technology Risk Management in Enterprise Environments, Wiley
Luftman, J. (2003). Assessing IT-Business Alignment. Information Systems Management. 20(4) pp 9-15.
March, S. and Smith, G. (1995). Design and natural science research on information technology. Decision Support Systems, 15(4), 251-266.
March, S.T. and Storey, V.C. (2008). Design Science in the Information Systems discipline: an introduction to the
special issue on design science research, MIS Quarterly, 32,(4), 725-730.
Mettler, T. (2009). A design science research perspective on maturity models in Information Systems. St. Gallen: Institute of Information Management, University of St. Gallen.
Pries-Heje, J. and Baskerville, R. (2008). The Design Theory Nexus, MIS Quarterly, 32,(4), 731-755.
Purao, S. (2002). Design Research in the Technology of Information Systems: Truth or Dare. Georgia State
University, Department of CIS Working Paper. Atlanta.
Rosemann, M. and de Bruin, T. (2005). Towards a business process management maturity model. In Proceedings of the European Conference on Information Systems, Regensburg, Germany.
Rosenquist, M. (2007). Measuring the Return on IT Security Investments, Intel White Paper.
Silicon Republic (2010). Closing the gaps between ICT and enterprise risk management. Available at:
http://www.siliconrepublic.com/news/item/16901-closing-the-gaps-between-ic
Stoneburner, G., Goguen, A. and Feringa, A. (2002). Risk Management guide for Information Technology
systems. National Institute of Standards and Technology. US Department of Commerce.
Sun, (2005). Information lifecycle management maturity model. Available at:
http://www.dynasys.com/Downloads/Sun_ILM_Maturity_Model_2005.pdf
Taylor, H. (2006). Critical risks in outsourced IT projects: the intractable and the unforeseen. Communications of
the ACM, 49,(11), 75-79.
Venable, J.R. (2006). The role of theory and theorizing in design science research. In Proceedings of the First International Conference on Design Science Research in Information Systems. 24th-25th February, Claremont, CA.
Westerman, G. and Hunter, R. (2007). IT Risks. Harvard Business School Publishing
IS Evaluation in the Fusion View: An Emergence
Perspective
Sven Carlsson and Olgerta Tona
Informatics, Lund University School of Economics and Management, Lund,
Sweden
Sven.Carlsson@ics.lu.se
Olgerta.Tona@ics.lu.se

Abstract: Different theories have changed the way we address technology. In the debate on the core of Information Systems (IS), El Sawy identified three faces of IS views: connection, immersion, and fusion. El Sawy contends that it may be time for a natural shift of emphasis from the connection view to the immersion view to the fusion view as IT continues to morph and augment its capabilities. In the fusion view, IT and IS are fused within the business environment, such that business and IT and IS are indistinguishable to standard time-space perception and form a unified fabric. There exist different traditional IS evaluation approaches, such as experimental, pragmatic, constructivist, pluralist and realist IS evaluation research. These approaches evaluate IS positioned in either the connection or the immersion view. However, we believe that the fusion view will influence the way IS is evaluated. This paper uses relational emergence theory, based on the philosophy of critical realism, where emergence refers to an entity as a whole whose parts are structured by the relations among each other. Emergent entities possess properties different from those of their individual parts. Considering its similarity to the fusion concept, the emergence concept is used to theorize and operationalize the fusion view, as the latter lacks a theoretical grounding. Based on this, we present and discuss the implications for IS evaluation in terms of how to evaluate a process as well as the output of the process. The discussion on IS evaluation is illustrated through an empirical example drawn from a research study within a police organization. This paper concludes that in the fusion view the evaluation process shall embrace a holistic perspective. The focus of the evaluation process shall be the emergent entity, which consists of IS, users, tasks and processes structured by means of the relationships among each other. The properties exhibited by this emergent entity shall be evaluated.

Keywords: IS evaluation, fusion view, emergence
1. Introduction
The development and expansion of evaluation theory and practice is at the core of several different disciplines. It is important to scrutinize the theories, approaches, and models used in evaluation (research), as well as evaluation research approaches' philosophical underpinnings (Carlsson, 2003).

Recently, different theories have changed the way we address technology. Orlikowski (2010) says that this type of research can be characterized with the label "entanglement in practice". She says that influential entanglement perspectives are Actor Network Theory and the notion of sociomateriality. On
the other hand, El Sawy (2003) presented three different views on IS: connection, immersion, and
fusion. He contends that it may be time for a natural shift of emphasis from the connection view to the
immersion view to the fusion view as IT continues to morph and augment its capabilities. In the
connection view, IT and IS are viewed as separable artefacts and artificial systems that are used by
people as tools. They are separable from work, processes, and people. In the immersion view, IT and
IS are immersed as part of the business environment and cannot be separated from work, processes,
and the systemic properties of intra- and inter-organizational processes and relationships. This view
stresses work context and systemic relationships and mutual interdependencies. In the fusion view, IT
and IS are fused within the business environment, such that business and IT and IS are
indistinguishable to standard time-space perception and form a unified fabric. Hence, IT-enabled work
and processes are treated as one.

Most of the IS evaluation approaches, such as the experimental, pragmatic, constructivist, pluralist and realist ones, are used within the IS connection and immersion views. We believe that the fusion view will influence the way IS is evaluated, and that the already existing approaches have drawbacks if used in the fusion view.

We argue that IS evaluation research based on the principles and philosophy of critical realism will push forward the traditional IS evaluation research approaches. Critical realism resists conflating structure and agency: when "the constituent components [of structure and agency] cannot be examined separately... [i]n the absence of any degree of autonomy it becomes impossible to examine their interplay" (Archer, 1988). Relational emergence theory, based on the philosophy of critical realism (Elder-Vass, 2010), introduces the emergence concept, which is similar to the fusion concept. Both fusion and emergence refer to different parts coming together, interacting and acting as one entity. If the parts are split up, the same entity with the same properties will no longer be obtained.

This paper has two main contributions. The first is theoretical: the fusion view introduced by El Sawy (2003) is discussed and elaborated at a theoretical level. The second contribution is a discussion of the implications for IS evaluation in the fusion view, in terms of how to evaluate a process as well as the output of the process. The discussion is illustrated by means of an empirical example.

The remainder of the paper is organized as follows. The next section presents a brief summary of the main IS evaluation approaches and the views in which they are used. Section 3 discusses the emergence concept and the properties of emergent entities which are adopted in the study. This is followed by an empirical example. Conclusions are presented in the final section.
2. IS evaluation approaches
This section briefly reviews the IS evaluation research approaches based on Carlsson (2003). Table 1 presents a short description of each of the following approaches: the experimental, pragmatic, constructivist, pluralist and realistic evaluation approaches. Further, these approaches are classified according to the IS views (El Sawy, 2003) in which they are most appropriately used.
Table 1: IS evaluation approaches

Experimental (views: Connection). Builds on the logic of experimentation: take two more or less matched groups (situations) and treat one group and not the other. By measuring both groups before and after the treatment of the one, an evaluator can get a clear measure of the impact of the treatment.

Pragmatic (views: Connection). Represents a use-led model of evaluation research, stressing utilization: the basic aim of IS evaluation research is to develop IS initiatives (implementations of IS) which solve problems; problems can be organizational problems like reduced competitiveness or poor customer service.

Constructivist (views: Connection, Immersion). IS initiatives should not be treated as "independent variables", as "things", as "treatments", as "dosages" (Pawson and Tilley, 1997). Instead, all IS initiatives are constituted in complex processes of understanding and interaction, and an IS initiative (IS implementation) will work through "a process of reasoning, change, influence, negotiation, battle of wills, persuasion, choice increase (or decrease), arbitration or some such like" (Pawson and Tilley, 1997).

Pluralist (views: Connection, Immersion). Combines the strengths of the three approaches above: an approach combining the rigor of experimentation with the practice of pragmatism, and with the constructivists' empathy for the voices of the stakeholders.

Realistic (views: Connection, Immersion). Aims to produce ever more detailed answers to the question of why an IS initiative (an IS, types of IS, or an IS implementation) works, for whom, and in what circumstances.
In general, the approaches listed in Table 1 can be used in the first two IS views described by El Sawy (2003): the connection and immersion views. All these approaches consider IS during the evaluation either as a tool, separated from work and processes, in the case of the connection view, or as immersed in work and processes in the case of the immersion view. None of them considers the evaluation of an IS that is fused into the organization, where the IS is treated as one with the tasks, processes and people.

Taking into consideration the movement of IS towards the fusion view, the evaluation methods need to be revised. IS can no longer be separated from the business environment, as it is already fused into it. During the evaluation, IS has to be considered as a whole, together with the users, the tasks and the processes.
3. Emergence in critical realism
Critical realism has become an important perspective in modern philosophy and social science (Archer et al, 1998; Robson, 2002), but critical realism is to a large extent absent from IS research. We argue that IS evaluation research based on the principles and philosophy of critical realism overcomes some of the problems associated with traditional IS evaluation research approaches.

Its manifesto is "to recognize the reality of the natural order and the events and discourses of the social world. It holds that we will only be able to understand - and so change - the social world if we identify the structures at work that generate those events and discourses... These structures are not spontaneously apparent in the observable pattern of events; they can only be identified through the practical and theoretical work of the social sciences" (Bhaskar, 1989).

Elder-Vass (2010) has introduced relational emergence theory based on the philosophy of critical realism. He provides a general ontological framework for discussing social structures and human individuals as entities with emergent properties which determine social events. An entity is a whole which consists of parts structured by means of the relations among each other. Emergent entities possess properties produced by mechanisms which depend on the properties of the individual parts and the way the parts are structured to form the entity (the whole). The properties which derive from the entity are not possessed by its individual parts. The way the parts are related at a certain point in time determines the joint effect they will have. Therefore the relation between the entity and its parts is not one of causation, but of composition (Elder-Vass, 2010).

The importance of the interactions between the parts is expressed by Holland (1998, pp. 121-122) as follows:
"Emergence is above all a product of coupled, context-dependent interactions. Technically these interactions, and the resulting system, are nonlinear: The behavior of the overall system cannot be obtained by summing the behaviors of its constituent parts - the whole is indeed more than the sum of its parts. However, we can reduce the behavior of the whole to the lawful behavior of its parts, if we take the nonlinear interactions into account."
There are some elements which an emergent entity should possess (Elder-Vass, 2010). First of all, the different parts of which an emergent entity consists should be recognized. The relationships between the parts which give rise to this type of entity should be identified. The emergent entity should be explained in terms of morphogenetic and morphostatic causes. Morphogenetic refers to "those processes which tend to elaborate or change a system's given form, structure or state" (Buckley, 1967, pp. 58-59). Morphostasis refers to the causes which maintain an entity, either internal (the causes which maintain the parts in a certain relationship) or external (the causes coming from the environment). The latter Buckley (1967, pp. 58-59) defines as "those processes in complex system-environment exchanges that tend to preserve or maintain a system's given form, organization or state".

Additionally, De Wolf and Holvoet (2005), based on a literature review, have listed different properties possessed by an emergent entity:
Interactive Parts. The interaction between the parts is responsible for the emergent system (Odell, 2002; Heylighen, 2002).
Micro-Macro level effect. The emergent properties that an emergent system shows are the result of the interaction between its components (Holland, 1998).
Novelty. Emergent properties cannot be understood from the properties of the components alone; nonetheless, they can still be studied via the components and their relations in the context of the whole system (Holland, 1998; Elder-Vass, 2010).
Coherence. An emergent property tends to maintain its own identity over time, binding the interacting parts into a whole (Heylighen, 2002).
Dynamical. Emergent properties of a system are related to the time dimension, so they can arise or change over time (Holland, 1998).
Decentralised Control. No single part alone can direct or control the emergent properties of a system (Odell, 2002).
Two-Way Link. The interaction between the parts influences the emergent system, which in turn can influence its individual parts (Odell, 2002).
Flexibility. As no single part is fully responsible for the emergent properties of a system, the substitution or failure of one part will not lead to a total failure of the emergent entity (Odell, 2002).
To summarize: in the last view proposed by El Sawy (2003), the fusion view, IT and IS are fused within the business environment and are indistinguishable. They can no longer be separated from work, processes and users; instead they shall be treated as one. Both IS fusion and the emergent entity refer to different elements merging together, interacting and acting as one entity. If the parts are split up, the same entity with the same properties will no longer be obtained. Using emergence as a conceptual lens, we argue that in the fusion view different parts, such as IS, tasks and users, structured by relations among each other, give rise to an emergent entity exhibiting different emergent properties. In this sense we try to operationalize and theorize fusion further.

Additionally, as already mentioned in the previous section, the IS fusion view calls for IS evaluation approaches to be revised. We argue that IS evaluation research based on emergence theory will move evaluation from the connection and immersion views towards the fusion view. IS can no longer be evaluated separately from its users, processes and tasks. The evaluation process should take a holistic perspective. The parts which constitute the whole, and their relations, shall be recognized in order to understand the generation of the impacts and events. Hence, the impacts of the entity as a whole shall be evaluated, shifting the focus from the IS impact to the impact of the whole of which IS is a part. When evaluating the impacts and benefits of IS, the macro-level impacts should be considered. This means that the emergent properties need to be evaluated instead, and the mechanisms which bring about the emergent properties shall be described. In this way, based on the evaluation results, changes in different parts can be undertaken if necessary to maintain the whole as such and keep IS fused in the organization.
4. Empirical example
In this section we illustrate, via an empirical example, how to begin an IS evaluation based on emergence theory. The empirical example is based on a longitudinal research study we have been performing in a police organization since 2009. For more details about the case, see Carlsson et al (2010). Skåne is the third largest police authority in Sweden and it has approximately 3,240 employees, of whom approximately 2,340 are police officers and 900 civil servants. The BI system, created with the QlikView software, started as a single application based on the system RAR (a system for crime statistics, where all reported crimes are registered). (QlikView is a BI/DSS software company; see: http://www.qlikview.com.) The system was used by crime analysts to forecast when and where crime could occur. The system creates associations in the processed data, which makes it easy to discern relationships within the data. The information can be visualized in diagrams, tables or dashboards. We argue that the police organization's BI system is slowly emerging towards the fusion view. We will evaluate the societal impact of an entity consisting of users, the BI system and the specific task to be solved, with two different examples. The first example is a single-shot analysis for solving a crime. The second example is an on-going analysis of crimes for improving crime prevention.

Case 1: Finding the serial shooter. BI usage in the police organization in Skåne proved successful in the solution of a crime which was scaring the citizens of Malmö, a city situated in the southern part of Sweden. Over recent years, a serial shooter in Malmö had shot many people (emigrants, second-generation emigrants, and refugees) in the streets, at bus stops, and in their cars. Many were seriously wounded and one was killed. For this case, after finding a suspect, the police in Skåne gathered all the reports dating back to 1998 and found that there were about 58 reports connected to him. They used a BI application which can read about 1.5 million reports in less than 10 minutes. According to one of our interviewees, the specific application took about four hours to build "because we knew how to do it". During the analysis, 67 key words were used in the free-text search application. The analysis produced 27,000-32,000 rows [of information] in Excel, with 11-13 words in every row. This information identified the reports that should be read and evaluated. Reading and evaluating had to be done the old way (manually).
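The screening step described above can be pictured in a few lines of Python; the keywords and report texts below are invented for illustration, and the actual application was built in QlikView with 67 key words, not written in Python.

```python
# Sketch of free-text keyword screening over crime reports. Keywords and
# reports are invented; the real application used 67 key words in QlikView.
KEYWORDS = {"shooting", "bus stop", "lone man"}

def flag_reports(reports):
    """Yield (report_id, matched keywords) for reports worth manual review."""
    for report_id, text in reports:
        hits = sorted(kw for kw in KEYWORDS if kw in text.lower())
        if hits:
            yield report_id, hits

reports = [
    ("1998-0173", "Witness saw a lone man near the bus stop before the shooting."),
    ("2004-5521", "Stolen bicycle recovered, no suspect identified."),
]

for report_id, hits in flag_reports(reports):
    # Flagged reports still have to be read and evaluated manually.
    print(report_id, hits)
```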

Case 2: Crime prevention. One example of car theft prevention in Malmö shows how hot spot analyses can be both effective and efficient in terms of crime prevention. According to Weisburd and Telep (2010), crime hot spot strategies for fighting crime have recently been embraced by some police forces. The idea underpinning the crime hot spot strategy is that crime is better prevented by focusing on areas (hot spots), for example specific streets, buildings, blocks, and areas within a community or zone, rather than by focusing on individuals. The process started by using the BI system to point out the areas with the highest number of car thefts. Further analyses, by means of other systems, proceeded to identify the parking lots and streets of those hot zones. Action was taken to allocate the patrol forces to the hot spots. This strategy proved to be effective in reducing crime, not only in the hot spot areas but also in most zones of the city. Thus, the BI system enabled the police to implement the hot spot strategy.
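The first step of that process, pointing out the zones with the most car thefts, amounts to a frequency count and a top-N selection, sketched below with invented data (the police performed this step in their BI system, not in Python):

```python
# Sketch of hot spot identification: count reported thefts per zone and
# return the top-N zones for patrol allocation. Zone data are invented.
from collections import Counter

def hot_spots(incident_zones, n=3):
    """incident_zones: one zone identifier per reported theft."""
    return Counter(incident_zones).most_common(n)

thefts = ["zone-12", "zone-07", "zone-12", "zone-31", "zone-12", "zone-07"]
print(hot_spots(thefts, n=2))  # -> [('zone-12', 3), ('zone-07', 2)]
```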

The successful Malmö case within the police organization indicates the value of BI usage in crime solution, where the case against the shooter will be stronger in court. In this case the interaction between the three parts (the BI system, the users and the task to be solved) led to macro-level effects. Based on the type of organization under study, we refer to these effects as societal benefits. The users constructed the necessary application within BI to read the reports, saving months of work. In this case, time is critical, as the police have to handle a serial shooter and the sooner he is caught, the better. Hence, we deduce that the interaction between the parts resulted in time reduction (enhanced efficiency) and strongly supported the solution of the crime.

In addition to the Malmö case, the same analysis can be applied to the car theft example. As a result of the interaction between the BI system, the users and the task to be solved, societal benefits in terms of crime prevention are produced. The entity of BI, users and task has, in terms of emergence, brought a novel way of dealing with crime prevention (the hot spot approach) which cannot be achieved by these parts operating on their own. Although hot spot analyses have been criticized for moving crime to the next corner, many other studies and experiments have concluded that hot spot analyses are followed by a diffusion of crime prevention benefits (Weisburd and Telep, 2010). This study supports the arguments of Weisburd and Telep (2010) because the benefits of crime prevention were not limited to the hot spots but also extended to the nearby areas. Now the police can identify hot spots and better allocate their resources with the main intention of preventing crime. The interactions between BI and users to solve different tasks have improved efficiency in terms of reducing the number of target spots and allocating resources better, leading to improved crime prevention. Hence, we observe some sort of emergent entity giving rise to macro effects by means of the interacting parts, which cannot produce these results on their own in isolation. Considering this process in terms of morphogenetic causes, the interaction between BI and the users for a specific task, the capability of the BI system to analyse data very fast, and the ability of the employees to interpret the data are the main contributory causes which enable the emergent entity to exhibit macro-level effects which cannot be reached at the micro level.

On the other hand, it is worth noting that in both examples the interaction between BI and users existed only during the first phases of the crime analyses, where all the data needed were collected. Afterwards, the users had to carry out manual work: for instance, in the Malmö case they had to analyse the reports manually, and in the case of car theft they had to use other sources to obtain more details regarding the specific hot streets and blocks. This means that we find emergent properties in the first phases of analytical tasks but not during the other phases, because of a lack of interaction between the users and BI. In this case we cannot identify the (internal) morphostatic causes which keep an emergent system stable. Hence, as the morphostatic causes are missing, we can deduce that no stable emergent entity exists. However, we believe that longer interaction, and extra capabilities of BI to support all of the phases of analytical work, will drive BI towards the total fusion view. This study supports the arguments of El Sawy (2003) that more technological advances in BI will shift it towards the fusion view, and at the same time we will observe more solid emergent structures.

In coherence terms, in the case of the serial shooter some identity properties are shown. The news headline "Swedish Police Arrest Man over Malmö Racist Shootings" (Associated Press in Malmö, 2010) demonstrates how neither the analyst group nor the BI system, whose interaction saved so many months of work, were mentioned; instead the entity was given another name: the Swedish police. The entity is recognized by the organization's name, as the interaction between the BI and the users to solve the crime takes place within the context of the police organization. One question in this case is: will the macro-level effects drive micro-level behaviour, leading to a two-way link? Basically, the success of this interaction between the users and the BI in crime solution and prevention may lead the users to more extensive usage of the system in other cases, where their needs will also drive updating of the BI system or the inclusion of other technological capabilities in it. For instance, during the hot spot analysis, the users realized that the integration of a map in their BI system would further improve hot spot analyses and would, as stated by a respondent, "take our work very, very far". In its current state BI displays the hot crime zones by means of zone numbers, but it is unable to direct the police to the specific streets or buildings referred to as spots. Additionally, we can observe the characteristic of flexibility. If we substitute the users with others, nearly the same effects and benefits will be obtained; and if the BI system is down, some of the work can be done manually, but that would result in a waste of time, and in some cases time can be critical, for instance in crime prevention or solution. For example, in the case of the Malmö serial shooter, if the BI system had gone down, the users could still have managed, but the work would have taken about 9 months, which could even have led to the release of the suspect before the evidence was ready. To summarize, this was an empirical example where the evaluation of societal impact was focused on the emergent properties of the entity of users, BI and tasks. To achieve complete emergence into the fusion view, technological improvements are required in the BI system in order for it to be fully fused with the analysts' tasks during all phases of analysis.
5. Conclusions
The existing IS evaluation approaches fall into the IS connection view or the immersion view. The fusion view calls for the IS evaluation approaches to be revised. This paper uses relational emergence theory, based on critical realism, to theorize and operationalize the fusion view further. Based on the emergence concept, within the IS fusion view an emergent entity arises, of which IS is a constituent part. The paper went on to discuss the implications for the evaluation process when IS is positioned in the fusion view. We suggest that the evaluation process embrace a holistic perspective, in which IS, users, tasks and processes are considered as one entity. The relationships between the parts, and the entity's properties, should be evaluated. Emergence is therefore used as a conceptual lens, and we illustrated the idea by evaluating the organizational and societal impact of the BI system in a police organization from a holistic perspective. The arguments of El Sawy (2003) that the fusion view will be reached with more technological advances are supported by the example. Once more capabilities are implemented in BI, it will be used more extensively and other emergent properties may result from it.
References
Archer, M. (1988) Culture and Agency: The Place of Culture in Social Theory, Cambridge University Press,
Cambridge, UK.
Archer, M., Bhaskar, R., Collier, A., Lawson, T. and Norrie, A. (Eds) (1998) Critical Realism: Essential Readings,
Routledge, London.
Associated Press in Malmo (2010) Swedish police arrest man over Malmö racist shootings, The Guardian,
Available at: <http://www.guardian.co.uk/world/2010/nov/07/malmo-race-shooting-arrest>, Accessed 13
February 2012.
Bhaskar, R. (1989) Reclaiming Reality, Verso, London.
Buckley, W. (1967) Sociology and Modern Systems Theory, Prentice Hall, Englewood Cliffs, NJ.
Carlsson, S.A. (2003) Advancing Information Systems Evaluation (Research): A Critical Realist
Approach, Electronic Journal of Information Systems Evaluation, Vol 6, No. 2, pp. 11-20.
Carlsson, S., Skog, L.-M. and Tona, O. (2010) An IS success evaluation of a DSS in a police organization. In A.
Respício, F. Adam, G. Phillips-Wren, C. Teixeira and J. Telhada (Eds.) Bridging the Socio-Technical Gap in
Decision Support Systems, IOS Press, Amsterdam, pp. 443-454.
De Wolf, T. and Holvoet, T. (2005) Emergence versus self-organisation: Different concepts but promising when
combined. In Brueckner, S., Di Marzo Serugendo, G., Karageorgos, A. and Nagpal, R. (Eds.) Engineering Self-
Organising Systems: Methodologies and Applications, Lecture Notes in Computer Science, Springer Verlag,
Berlin, pp. 1-15.
Elder-Vass D. (2010) The Causal Power of Social Structures: Emergence, Structure and Agency, Cambridge
University Press, New York.
El Sawy, O. A. (2003) The IS Core IX: The 3 Faces of IS Identity: Connection, Immersion, and Fusion,
Communications of the AIS, Vol 12, pp. 588-598.
Heylighen, F. (2002) The science of self-organisation and adaptivity. In: The Encyclopedia of Life Support
Systems, UNESCO Publishing-Eolss Publishers.
Holland J.H. (1998) Emergence: From chaos to order, Oxford: Oxford University Press.
Odell, J. (2002) Agents and complex systems, JOT, Vol 1, pp. 35-45.
Orlikowski, W.J. (2010) The sociomateriality of organisational life: considering technology in management
research, Cambridge Journal of Economics, Vol 34, pp 125-141.
Pawson, R. and N. Tilley (1997) Realistic Evaluation, Sage, London.
Robson, C. (2002) Real World Research, Second edition, Blackwell, Oxford.
Weisburd, D. and Telep, C. (2010) The efficiency of place based policing, Journal of Police Studies Vol 17, pp.
247-262.
Where do Tablets fit in the Organization's Workstation
Inventory?
Mitch Cochran¹ and Paul Witman²
¹City of Monrovia, USA
²California Lutheran University, USA
mcochran@ci.monrovia.ca.us
pwitman@callutheran.edu

Abstract: Tablets have become the new technology of choice for end users, chosen by the end users themselves.
Users are excited about the new technology and have the feeling that tablets can replace other computing devices. Marketing
studies demonstrate the significant user demand for tablets. Users are now questioning organizations' IT
operations as to why tablets can't be used in daily business. It is the job of the IT operation to guide the
incorporation of devices into the organization's computing device inventory. A critical issue is for the organization
to demonstrate which apps are appropriate and which business applications require a traditional platform. This
paper examines the strengths, weaknesses and issues of incorporating tablets into the organization, to help
put structure around the problem for organizations and their IT staffs, and to define a future research agenda.
Tablets share some of the attributes and limitations of both cell phones and workstations. Tablets have
strong capabilities for content retrieval, whereas more powerful laptops are better suited to content creation. Tablets
provide easy access to Internet-based applications. Using the Internet use paradigm, tablets can use touch- or pen-
based input for easy navigation. But developing content is more than easy navigation around websites or using
pre-determined apps. When users face more complex input tasks, such as coding with advanced tools, they
need the support of a laptop or workstation. Some functions simply need additional screen space that tablets don't
offer. Organizations need to understand how the new technology format differs from cell phones and traditional
workstations or laptops so that governance procedures can be modified to incorporate appropriate controls. The
paper explores various user business functions and the device types that best fit them. The paper then discusses
issues regarding hardware differences, security and application development. The organization will need to
maintain security procedures similar to those needed for a laptop on topics such as antivirus, data leakage and
remote access. It will treat the tablet similarly to a cell phone in governance on issues such as employee-
provided technology or remote deletion capabilities. From a governance perspective, the distinctions among
workstations, tablets and phones are becoming less clear. We propose a research agenda to move the study of this topic
forward.

Keywords: governance, tablets, workstations
1. Introduction
Organizations need to be constantly evaluating new technologies against their business and user
needs. Tablets have appeared on the scene as the newest device with the "WOW" factor. Users
have become enamored with the tablet for personal use as an easy-to-use device, and the general public
has asked why it can't be used at work. IT operations groups need to understand
how to make users happy by supporting tablets while also controlling them to ensure that they fit
business goals and governance.

Industry surveys have shown that tablets are quickly making inroads into the technology inventory.
Guy Currier of Baseline Magazine reported that 11% of respondents expected a very strong increase in
tablet investment for 2011, 14% expected a strong increase and 25% expected a moderate increase [10];
in total, 50% of respondents expected an increase in tablet
acquisitions. The expected increase was higher than for other current technologies such as virtualization
(44%), business intelligence (35%) or social communications (35%).

Similarly, the CIO Insight survey [8] shows that 35% of companies have deployed tablets, 31% are
testing tablets, 26% are evaluating or tracking them, and 7% have not expressed an interest. The
survey defined tablets as the Apple iPad, BlackBerry PlayBook or similar devices. The survey also found
that tablets would provide greater productivity or cost savings for 40% of respondents, improve
business agility or versatility for 50%, and open up new market opportunities for
43%.

The same CIO Insight survey shows that 46% of companies have deployed smart phones as mobile
clients. The survey asked how smart phones addressed business goals: 28% of respondents felt
that the smart phone was best for cost-savings or productivity goals, 47% of the respondents felt
that smart phones were best at improving business versatility, and 39% of the respondents felt that
smart phones were best for opening up new markets or opportunities.

The surveys show that there is positive interest, but the industry does not have a clear picture of how
tablets will be used. The Dimensional Research study of May 2011 showed that 72% of tablets
had not been formally deployed [9]. The survey showed that 41% are used by individuals who
purchased them on their own (a phenomenon known popularly as "Bring your own device", or BYOD). The
survey asked the reason for iPad selection and found that 53% responded favorably due to the
availability of productivity tools, while 35% of respondents felt that the "cool" factor was driving the
business demand. The survey also showed that 51% of respondents did not have a clearly
articulated strategy for adopting tablets. A critical piece of any hardware is the software that runs on it.
Many users don't understand what software runs well on a tablet or smart phone as compared to what
runs well on a traditional laptop. The Dimensional study showed that 42% of respondents found
that the need for application development is not understood [9]: the respondents believed that any
application could run on the tablet.
2. Current applications and tasks
Gartner lists the most common tablet business applications or tasks as [12]:
Personal Office Automation (documents and spreadsheets)
Presentation
Note Taking
Task Management
File Management
Dictation
IT Admin Utilities
Forrester found in a tablet usage survey that 75% of owners purchased the device to complement
other devices rather than to replace them [13]. They also found that after purchasing a
tablet, users spent 47% less time on their computers. Dan Blacharski cites a
Lenovo/Qualcomm summary of the major uses in consumer and business environments [14],
listed in Table 1.
Table 1: Consumer and business uses for tablets

  Consumer Use         Business Use
  Gaming         84%   Browsing                  73%
  Browsing       78%   Email                     69%
  Email          74%   Working Remotely          67%
                       Sales Support             46%
                       Customer Representative   45%
3. Device characteristics
Information technology managers need to understand where the product fits into the workplace. The
tool has benefits and limitations, which are listed in Table 2:

Table 2: Tablet and laptop characteristics

                 Tablet                                        Laptop
  Cost           Higher for equal machines                     Lower; includes keyboard and other
                                                               items that are extra on a tablet
  Power          Less compute power                            More processing power, more memory
  Storage        Limited                                       Larger, effectively unlimited via
                                                               USB ports
  Mouse          No track point or trackpad; not as accurate   Internal and external mouse; can be
                                                               very accurate
  Graphics       Limited                                       High resolution
  Keyboard       On screen (takes screen space); external      Included
                 keyboards may be flat
  Media          Some with USB; no CD/DVD                      USB standard; includes CD/DVD
  Multitasking   Not yet                                       Included in operating system
48

Mitch Cochran and Paul Witman
4. Hardware
Tablets are too new in the marketplace to accurately differentiate the various products
based on reliability. From our limited experience, and given the lack of notoriety of any particular device, tablets
seem to have the same basic reliability as traditional laptops. We have found no discussion in the
trade journals comparing reliability. A tablet has limited processing power compared to a laptop.
Over time, processing capacity should increase due to advances in technology, but currently
laptops have faster processors, as well as greater memory and storage. Some tablets don't have
integrated USB or FireWire ports. Users may need USB ports for connectivity to external
keyboards, CD or DVD drives, memory keys and thumb drives. The tablet design philosophy
seems to focus on connectivity to the Cloud. Some manufacturers have allowed keyboards to be
attached using Bluetooth.

Tablets have an integrated touch keyboard, but a couple of issues limit keyboard
performance. The developers have worked hard on making key recognition effective. One
issue is whether the operator gets tactile feedback when the keys are struck. For touch typists, normal
keyboards have raised bumps on the F and J keys so that people can recognize that their fingers are in
the correct position; these bumps are absent from a touch screen. Tablets can have
attached keyboards, but some of these are flat keyboards that don't provide the same touch or tactile
feedback that people expect from a laptop. The touch keyboard also occupies about half of the screen,
leaving the operator less screen to look at. Touch keyboard ergonomics
are also an issue. The tablet may be laid on a flat surface, a position that puts the hands in an
incorrect typing position according to ergonomic standards. The slate may also suffer glare from
reflections off the screen when lying flat, which can make it difficult to type. A
laptop, by contrast, can adjust the viewing angle to personal preference. From a developer's point of view, the
tablet keyboard is limited in the availability of characters: the keyboard on the iPad has the <
and > keys (used in HTML coding) on the third keyboard template. Some users have mentioned that it
is hard to fix mistakes, as the mouse or touch features may make it challenging to get to
an exact point depending on the font size within a document.

Both tablets and laptops can take advantage of data input beyond a mouse or keyboard.
Ant Ozok et al. discuss the advantages of using a digital pen and/or handwriting recognition [7]. They
point out that handwriting recognition accuracy is highly correlated with acceptance and use of the
recognition tools: the more the tool is used, the higher the level of acceptance becomes. The
digital pen allows easy handwritten data entry. Microsoft has provided its Notes software
for Windows-based laptops, which demonstrates the ease of use of a handwriting recognition interface
in the business environment.

Battery life may be a concern to some buyers, as most tablets do not allow the battery to be
changed. An advantage of laptops has been the ability to carry a spare charged battery for longer
trips or time away from power. Some argue that this is not an issue, since they have not had a
problem with long-term battery life. Tablets have the advantage of lower power use due to smaller
display screens and more efficient components such as solid-state disk drives. Tablets have not been
around long enough to determine whether fixed batteries or battery life are an issue.
5. Video and graphics
The tablet has specialized graphics for a relatively small touch screen. Laptops can have larger
screens with higher resolutions, and can also support multiple graphics adaptors, allowing for
more graphics capabilities.

Both laptops and tablets have added cameras as standard devices. The issue for organizations is
how to manage the photos that are taken. Current tablets and laptops provide resolution similar to
stand-alone cameras, but one key shortcoming of tablet cameras is their limited zoom
capability; stand-alone cameras still offer more capability for organizations to document situations.
The issue for organizations remains how to manage the records that are derived from pictures.

The marketplace has not settled on a single standard for video conferencing. Applications like
Skype, FaceTime or WebEx have been video enabled, and the built-in cameras on both tablets and
laptops should be sufficient for video conferencing using the tools provided by the vendors.

49

Mitch Cochran and Paul Witman
6. Connectivity
Tablets can come with WiFi and cellular capabilities; there are no Ethernet connections.
Performance will therefore be limited by the speed and latency of WiFi or 4G technologies. Currently there is
no method to connect a tablet to an Active Directory-based organization network.

The issue for organizations is to develop a connectivity plan that meets their needs. WiFi
presents the problem of rolling out coverage to designated areas, while cellular technology presents the
issue of choosing 3G or 4G vendors based on availability and cost.

Once the tablet is connected to the network, it needs to connect either to an outside email service or to a
gateway to reach internal applications. If the unit is going to be used as a virtual terminal, the gateway
needs to provide that function, such as a Citrix client or a web-based client. Juniper provides access
to its SSL VPN device for file access and browsing, but terminal services are not supported on
Android. The organization will need to pick a virtual client application that works with the tablet's
operating system.
7. Security
From a security perspective, it can be argued that tablets are more secure than laptops: sensitive
data would be accessed at, and left on, the organization's gateway rather than downloaded to or
maintained on the local device.

Too many people believe that Apple-based products don't get viruses. Kevin Haley, Symantec
Director for Security Response, points out that hackers write viruses where they are easy to
implement [3]. Viruses for Apple products are already in the wild, particularly for the Mac. Currently,
applications that can be purchased for Android are available for free on various web servers, and many of
those free applications include malware components. Haley also notes that virus writers are attacking
small and medium-size businesses. In his blog he lists some protective steps [4]; a sketch of how one
of these checks might be automated appears after the list:
Only use app marketplaces hosted by well-known, legitimate vendors for downloading and installing
apps.
If practical, adjust the Android OS application settings to stop the installation of non-market apps.
Review other users' comments on the marketplace to help determine whether an app is safe before
downloading.
During the installation of apps, always check the access permissions being requested;
if they seem excessive for what the application is designed to do, it would be wise
not to install the application.
Utilize a mobile security solution on devices to ensure any downloaded apps are not malicious.
Consider implementing a mobile management solution to ensure all devices that connect to the
network are policy compliant and free of malware.
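To make the permission check concrete, the short sketch below compares an app's requested permissions against what the organization considers reasonable for that category of app. It is a minimal illustration only: the category names and permission strings are hypothetical examples, not any vendor's actual tooling.

```python
# Minimal sketch: flag apps whose requested permissions look excessive.
# The permission names and app profiles are hypothetical, not a real
# MDM product's API.

REASONABLE = {
    "flashlight": {"CAMERA"},                  # LED control often rides on CAMERA
    "note_taker": {"STORAGE"},
    "navigation": {"LOCATION", "INTERNET"},
}

def excessive_permissions(app_category, requested):
    """Return the requested permissions that go beyond what the
    organization considers reasonable for this category of app."""
    allowed = REASONABLE.get(app_category, set())
    return sorted(set(requested) - allowed)

if __name__ == "__main__":
    # A "flashlight" app asking for contacts and SMS access should raise a flag.
    flags = excessive_permissions("flashlight",
                                  ["CAMERA", "READ_CONTACTS", "SEND_SMS"])
    if flags:
        print("Do not install; excessive permissions:", flags)
```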
The industry needs to define how anti-virus and anti-malware software should be implemented; some
vendors do have anti-virus software available for tablets. Vendors such as AirWatch, Good
Technology, MobileIron and Sybase provide applications for mobile device management and for
protecting against data leakage, the accidental or deliberate transfer of corporate
data from the device.

The industry also needs to consider whether the tablet is managed like a phone or like a laptop, as each
format has different security and management capabilities. Cell phones can be managed remotely;
organizations need to be able to remotely turn off or wipe a tablet in the same way they manage a
cell phone.

The organization needs to decide how it will handle users loading personal applications. Most
businesses have policies that prevent the installation of applications not approved for business use. An issue
arises when the device is provided by the user, not the company: does the organization have the
capability to prevent a user from loading software on a device that the user owns? Many
organizations are working on a BYOT (Bring Your Own Technology) policy to address what
applications the user can load on a device that is not provided by the organization. Should the
organization choose to, it should be able to enable a policy that prevents the downloading of
applications. The organization should also state explicitly which apps can and cannot be
purchased and installed. Business data and apps can be separated from personal applications [1];
VMware has announced the Mobile Virtualization Platform for Android devices, which lets users run
native applications within a secured container. The policy should also address issues such as e-
discovery of organization documents and the ability to remotely disable and/or erase the device.

Cameras could also present a security issue. Some organizations, such as the FBI, have procedures
that don't allow cameras in various areas; at one point, visitors were required to surrender any
cell phone that had an internal camera, to be returned at the end of the visit. Tablets could
present a similar security issue for organizations concerned about cameras.
8. Applications
Applications on a tablet are simply loaded and tend to be easier to use than those typically installed on a
laptop, where users need to install, learn and then use software. Laptop software has more capabilities, so
users need to be trained to be aware of and use all of the functionality. The initial app development for
tablets has grown largely out of the consumer arena; the applications are focused on personal
information, and there are very few that focus on the organization or provide for
collaboration.

Many of the applications are simple, specialized web sites that require access to the
Internet. Some applications are simplified versions of laptop software: Photoshop Elements is
available for tablets, but not the full Photoshop application, and Microsoft Office provides a limited-function
version for tablets. The standard office product seems to be Documents to Go, which provides some
limited document capabilities; many devices provide this office application in their standard offering.

One benefit of both smart phones and tablets is providing domain-specific information, similar to what
can be found in a specialized encyclopedia. The City of Monrovia Fire Department in California has
standardized on a number of applications that provide specialized information a Division Chief
can use at an accident scene, such as Los Angeles County fire station information, heli-spot
information, an incident reporting cheat sheet, GPS location, and modern vehicle information to
determine how to safely avoid airbags or batteries when cutting open a vehicle.

Generally, games are not considered standard software for organizations. Tablets are
limited in graphics capabilities, which limits what games can be played. Since training
software has taken many cues from the sophistication of gaming software, training use will likewise be
limited on tablets.

Organizations are now learning the benefits of collaborative software such as SharePoint, but tablets
have not yet been able to take part as full partners in a SharePoint environment.

Instant messaging is also an issue: some instant messaging services are not available across all
platforms. Should the organization use instant messaging, it would need to choose a platform
that works with its workstation inventory.

One of the marketing points against the iPad is Apple's decision not to support Flash-based web sites.
A consumer might view the limitation as a weakness, wanting to be able to see all web sites,
and from the user's perspective it would be nice to have as flexible a device as possible. From an
organization's point of view, however, if Flash is not used on the expected web sites, then the exclusion of Flash-
based web sites is not a limitation.

Some tablets have limitations on where software can be obtained. For general governance, this
aspect is a positive for the organization: the applications are consistent and would generally be safe
to load on a device, since the app store has become a trusted source. From a competition point of
view, however, it limits the availability of applications. The organization needs to control where applications
may be obtained; marketers are suggesting that organizations create their own app stores.

Consider that the manufacturers are in the business of making money by selling hardware; they are not
interested in providing software for all platforms. A quote from Steve Jobs about Apple's philosophy is
telling: [2]
"We thought about whether we should do a music client for Android. We put iTunes on
Windows in order to sell more iPods. But I don't see an advantage of putting our own
music app on Android, except to make Android users happy. And I don't want to make
Android users happy."
Some organizations have emails sent from a cell phone carry a tag line stating "sent from my
BlackBerry"; iPads can have a "sent from my iPad" line. The organization needs to develop a standard
for email formats that is consistent and uniform across the various devices, if it chooses to have the
tag line at all.
9. So which device fits where?
We propose that smart phones, tablets and laptops can all have a place in the organization,
based on the particular needs of each user. Table 3 lists the user functions available on smart
phones, tablets and laptops; a simple sketch of how this mapping might be operationalized in code follows the table.
Table 3: User functions by device

  Function                                                Smart Phone   Tablet   Laptop
  Check e-mail                                                 X           X        X
  Simple applications                                          X           X        X
  View documentation                                                       X        X
  Use as a gateway to access organization applications                     X        X
  Light content creation                                                   X        X
  Business web application clients                                         X        X
  Presentation tool                                                        X        X
  Graphic design                                                                    X
  Software (fat or thick clients)                                                   X
  Heavy content creation                                                            X
  Require CD/DVD                                                                    X
  Require mass storage                                                              X
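As a rough illustration of how the mapping in Table 3 might be operationalized, the sketch below recommends the most portable device that covers a user's required functions. The capability table is a simplified, hypothetical rendering of Table 3, not a complete encoding of it.

```python
# Minimal sketch: pick the most portable device that supports every
# required function, using a simplified version of the Table 3 mapping.

CAPABILITIES = {
    "smart phone": {"email", "simple apps"},
    "tablet":      {"email", "simple apps", "view documents",
                    "light content creation", "presentations"},
    "laptop":      {"email", "simple apps", "view documents",
                    "light content creation", "presentations",
                    "graphic design", "heavy content creation"},
}

DEVICE_ORDER = ["smart phone", "tablet", "laptop"]  # most to least portable

def recommend_device(required_functions):
    """Return the most portable device covering every required function."""
    needed = set(required_functions)
    for device in DEVICE_ORDER:
        if needed <= CAPABILITIES[device]:
            return device
    return None  # no single device covers this profile

if __name__ == "__main__":
    print(recommend_device({"email", "view documents"}))          # tablet
    print(recommend_device({"email", "heavy content creation"}))  # laptop
```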
A smart phone shines for very simple information lookup or entry on a very small screen. Smart
phones have been very successful for years at handling email, and almost all include a
reasonable camera for quick photos. New apps have appeared that are specialized for phones,
such as a citizen complaint application in which the citizen takes a photo of an issue and it is sent
directly to the correct organization. The key limitations are processing power, the small screen and
multitasking.

Tablets can be thought of as large cell phones: they run many of the same apps but provide more
processing power, expanded storage, and a larger screen and keyboard.

Tablets have been shown to have a positive impact on student interaction. Wolf found that using
tablets in the classroom yielded more timely, interesting lectures, which resulted in increased student
performance [5]. Many cities have implemented tablets as the delivery vehicle for council documents;
council members are able to view and mark up PDF-based documents with their notes prior to
council meetings. Building on the findings of Wolf and others [5] [6], and the city's council
application experience, it makes sense that tablets could serve as electronic books or training tools.
10. Future
The tablet should follow the same growth path as computers have: it is safe to assume that
processors will increase in power and that memory and storage capacity will increase.
Sales volume will affect how fast the units evolve. Competition between the
two operating systems should drive the competitors into the game of one-upmanship seen
in other historical developments, such as the browser wars between Netscape and Microsoft.

It is expected that software will start to be available across all platforms. Microsoft has announced
Windows 8, which will include cross-platform capabilities, and has just introduced the Surface as
its entry into the tablet arena, with models aimed at very low-end devices such
as the Kindle and models targeted against laptops. The advantage for the enterprise is that
corporate applications that require a Windows-based thick client can be supported directly instead of
through a web-based front end.

Since many applications are web based, the only changes that may be needed are to screen
designs for smaller or larger screens. This has already been demonstrated: websites can
detect the viewing device and automatically direct the user to the correct version of the site. The vendor
BlueStacks has developed software that allows Android applications to run on Windows.
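A minimal sketch of the kind of device detection described above, using a naive User-Agent check; the marker substrings and redirect targets are illustrative assumptions, not a production-grade detection scheme.

```python
# Minimal sketch: route a request to a device-appropriate site based on
# the User-Agent header. Real sites use far more robust detection.

MOBILE_MARKERS = ("iPhone", "Android", "BlackBerry")
TABLET_MARKERS = ("iPad", "Tablet")

def site_for(user_agent):
    """Return the (hypothetical) site variant for this User-Agent."""
    if any(marker in user_agent for marker in TABLET_MARKERS):
        return "http://tablet.example.com"
    if any(marker in user_agent for marker in MOBILE_MARKERS):
        return "http://m.example.com"
    return "http://www.example.com"

if __name__ == "__main__":
    ua = "Mozilla/5.0 (iPad; CPU OS 5_1 like Mac OS X) AppleWebKit/534.46"
    print(site_for(ua))  # http://tablet.example.com
```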

Voice recognition will also move across the computing platforms. The Apple iPhone 4S set the
standard for customer expectations with its implementation of an intelligent voice application, Siri. It is
safe to assume that voice recognition will spread across all platforms as the technology
matures.
11. Conclusion
Tablets definitely have a place in the organization's computer inventory. The question for
differentiation is to find the sweet spot, or most appropriate role, for this technology. One
ongoing theme is that tablets are more appropriate for content lookup, whereas the laptop is
better positioned for content creation. Over time, tablets will be released with more processing
power and memory; those increased attributes will allow tablet manufacturers to standardize both
voice and handwriting recognition options.

Software and functional developers need the software tools, memory and storage that laptops can
provide. A tablet could of course be used for quick development, but laptops are better suited to
longer-term activities, and developers also need the larger screen size.

From a governance perspective, the organization needs to consider some of the same issues as in
governing cell phones:
Limit the installation of applications by the user
Provide a consistent software image
Have the ability to track the devices
Have the ability to remotely wipe or delete the software from the device
Control which networks the device can attach to
At some point the two device formats will merge into one design, with the only separation being
an attached keyboard. Microsoft has announced its goal of having Windows 8 work
across multiple platforms; the new operating system is planned to provide an app store and
tablet support.
12. Future research directions
This paper introduces the topic and the initial governance issues. The acceptance of tablets in the
enterprise signifies a change in the traditional computing governance model. There are numerous
areas for further research on how tablets are being used, including:
Case studies of organizations integrating personal devices
Surveys of governance models used
Surveys of technologies used to implement governance models
Integration of task-technology fit theories
Re-assessment as device capabilities improve and add-on components such as keyboards
become more ubiquitous
References
Anderson, R., Anderson, R., Linnell, N., Razmov, V., (2006) Supporting structured activities and collaboration
through the use of student devices in college classrooms, Manuscript,
http://www.cs.washington.edu/education/dl/presenter/papers/2006/AALR_2006.pdf,
Ant Ozok, A., Benson, Dana, Chakraborty, Joyram, Norcio, Anthony F., (2011) A comparative Study Between
Tablet and Laptop PCs: User Satisfaction and Preferences, International Journal of Human-Computer
Interaction, 24:3, 329-352. http://dx.doi.org/10.1080/10447310801920524.
Currier, Guy, Emerging Technology Adoption Trends, CIO Insight, September/October 2011, pp 18-23.
Blacharski, D, (2011) Using the tablet as an enterprise business tool ,
http://www.onestopclick.com/blog/index.php/2011/11/using-the-tablet-as-an-enterprise-business-tool/,
November 1.
Currier, G. (2011) The Four Fastest-Growing Technology Areas, Baseline Magazine,
http://betweenthelines.baselinemag.com/content/trends/the_four_fastest-growing_technology_areas.html,
November 4.
Dimensional Research (2011) Enterprise iPad and Tablet Adoption: A Survey,
www.modelmetrics.com/wp-content/.../05/iPadSurvey-May10.pdf, May.
Forrester, Tablet usage among online buyers, http://www.slideshare.net/BizrateInsights/bizrateforrester-study-
tablet-usage-among-online-buyers.
Goodhue, D. (1997) The Model Underlying the Measurement of the Impacts of the IIC on the End-Users,
Journal of the American Society for Information Science, 48(5): 449-453.
Haley, K. (2011) IEEE Security Presentation, Claremont Graduate University, August.
Haley, K. (2011) Symantec Security Response Director, Symantec blog,
www.symantec.com/connect/blogs/mobile-malware-do-your-employees-know-what-look, October.
Horwitt, E. (2011) Mobility, CIO Magazine, October 15, p. 32.
Isaacson, W. (2012) Steve Jobs: A Biography, Simon & Schuster.
Willis, D. (2011) iPad and Beyond: The Media Tablet in Business, Gartner,
http://www.gartner.com/it/content/1586600/1586614/april_13_ipad_and_beyond_dwillis.pdf.
Wolf, T. (2007) Assessing the impact of inking technology in a large digital design course, Proceedings of the
38th SIGCSE Technical Symposium on Computer Science Education, 39(1), 79-83.

Classifying IT Investment Evaluation Methods According to
Functional Criterion
Jacek Cypryjanski
University of Szczecin, Poland
jacek.cypryjanski@wneiz.pl

Abstract: One of the main problems of IT investment evaluation is the selection of adequate methods. The selection
will be easier if we divide the methods into homogeneous groups. The paper presents a proposed classification of
methods based on a functionality criterion. As a starting point, the system-situational approach has been
adopted. Development of this approach resulted in three findings which form the theoretical basis of the proposed
classification: (a) a model of the evaluation system with three phases of the evaluation process and the relationships
between them; (b) a synthetic approach to the problems of evaluation in general and identification of the problem sources
typical of IT investments; (c) a synthetic approach to the requirements which the appraisal (the result of the evaluation)
should fulfil in a decision process. The proposed classification organizes the problem of evaluation method
selection. It allows a series of mistakes to be avoided, such as perceiving methods which play different roles as
alternatives, or underrating methods which could be of great importance in the evaluation of IT investments.

Keywords: IT investment evaluation, evaluation methods classification, specificity of IT investments
1. Introduction
A big part of the IT investment evaluation literature is dedicated to methods. Berghout and
Renkema (2001) list over sixty methods, all claimed to be of help in the evaluation process. There are
proposals of new methods which in certain conditions bring positive results, e.g. Information
Economics (Parker et al., 1988, 1989). There are also examples of how positive results were
achieved using methods adopted from other fields, e.g. financial engineering (Dos Santos 1991,
Kambil et al. 1993, Kumar 1996, Lee and Lee 2011), or even classic methods of investment
evaluation (Farbey et al. 1992, Botchkarev and Andru 2011). These achievements cannot be
overestimated; paradoxically, however, they complicate the search for an evaluation method adequate to
a given case, especially if we emphasise the fact that all these findings are dedicated to practitioners,
who do not have as much time to study the literature as researchers do. It is like pumping new ideas into
an unorganized domain: it heightens the chaos. However, among the new methods there are exceptions
which put things in order. These are, using terms from the taxonomy proposed by Bannister and Remenyi
(2000), meta approaches such as the ones by Farbey et al. (1993, 1999), which play a different role in the
evaluation process: they "attempt to select the optimum set of measures for a context or set of
circumstances". It is worthwhile asking whether other functions of the evaluation process can
be distinguished, and whether other methods can be attributed to them.

The answer seems obvious if we compare, for example, cost-benefit analysis (CBA) and return
on investment (ROI), two methods which are often mentioned in the context of IT investment
evaluation. According to Farbey et al. (1999), CBA "is an approach that attempts to find a money
value for each element contributing to the cost and benefit of a development project. The approach
originated as an attempt to deal with the problem that some elements regarded as benefits or costs
have no obvious market value or price. (...) The resulting cost-benefit values can be projected in the
form of notional cash flows on a year-by-year basis and the projected outcomes for alternative
schemes or designs fed into a decision model based on one of the standard ROI methods". In other
words, CBA and ROI play different roles in the evaluation process (the first helps us deal with
immeasurability, while the second is a method for calculating the efficiency of an investment); they are
complementary to each other, and therefore shouldn't be treated as alternatives.
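To make this division of labour concrete, the sketch below, a minimal illustration with invented figures, takes monetized yearly costs and benefits of the kind CBA produces and feeds the resulting notional cash flows into a standard discounted-cash-flow measure (net present value) as the ROI-style assessment step.

```python
# Minimal sketch with invented figures: CBA supplies monetized costs and
# benefits per year; a standard discounted-cash-flow measure then
# assesses the investment.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: investment outlay; years 1-3: monetized benefits and running costs.
costs    = [100_000, 10_000, 10_000, 10_000]
benefits = [0,       50_000, 60_000, 70_000]
net_flows = [b - c for b, c in zip(benefits, costs)]

print(round(npv(net_flows, rate=0.10)))  # a positive NPV favours the project
```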

The aim of the study presented in this paper is to develop a new classification of methods based on a
functionality criterion. In order to classify methods according to the function they play in an evaluation
process, the system and situational approaches have been adopted. Using the system approach
allows a better understanding of the significance of the environment of the examined subject and the
relationships between them. It also allows the elements of the examined subject to be defined and perceived
both as individual systems and as subsystems of a broader system. It helps to define the relations
between the subsystems: how they affect one another and what the synergy effect of their
interaction is.

The situational approach is, as it were, a supplement to the system approach. It was formulated in the
1970s within management theory. According to Kast and Rosenzweig (1973), the situational
approach to organizations and their management assumes that an organization is a system
consisting of subsystems, separated from its environment by clearly identifiable boundaries.
The situational point of view tries to understand the interrelations between and inside the subsystems, as
well as between organizations and their environments. It tries to define the model of relationships and
configurations of variables. It emphasises the multivariate nature of organization structure and tries to
understand how organizations operate under different conditions and in various circumstances. The situational
approach heads towards the exploration of organizational structures and management techniques tailored to the
specific situation.

The situational approach has been successfully adopted to solve problems of IT investment evaluation
(Farbey et al. 1993, 1999, Peters 1994, 1996). As Serafeimidis (2001) noted: "There is a clear need
for contingent evaluation approaches in order to deal with the range of circumstances encountered.
This implies that IT projects, as well as their contexts, have certain characteristics which influence the
choice of a suitable evaluation method. Similarly, every evaluation methodology or technique has
characteristics which point to the set of circumstances in which it could be applied more successfully."
Development of these approaches resulted in three findings which form the theoretical basis of the
proposed classification:
a model of the evaluation system with three phases of the evaluation process and the relationships
between them;
a synthetic approach to the problems of evaluation in general and identification of the problem sources
typical of IT investments;
a synthetic approach to the requirements which the appraisal (the result of the evaluation) should fulfil in a
decision process.
The next three sections describe the above-mentioned findings. The fifth section presents the
proposed classification and describes its relation to the taxonomy of techniques drawn up by Bannister
and Remenyi.
2. Evaluation as a system
To present evaluation as a system, one should define its goal, the subsystems it consists of, and the
interrelations between them. The system's environment should be defined, along with how it interacts with
the system. It is important to notice that the definition of a system does not follow any strict rules or
principles but should be guided by the stated purpose of the study (Katz and Kahn, 1980, Jokela et al.
2008). Here, evaluation is seen as a system whose goal is to generate an appraisal that meets two
conditions: it unambiguously and objectively reflects the actual state, and it is significant in the decision
process. The evaluation system consists of four subsystems: input, the subsystem performing the
evaluation process, the control subsystem and output (Figure 1).

The subsystem performing the evaluation process, in turn, consists of three subsystems carrying out the
following tasks:
identification of all relevant factors and the relationships between them;
quantification of the factors and relationships in a measurement scale proper for generating the
appraisal;
generation of the appraisal.
In the model shown in Figure 1 these subsystems (evaluation phases, functions of the evaluation process)
are named, respectively, understanding, measurement and assessment, as in the definition by
Remenyi et al. (1997): "evaluation is a series of activities incorporating understanding, measurement,
and assessment. It is either a conscious or tacit process which aims to establish the value of or the
contribution made by a particular situation. It can also relate to the determination of the worth of an
object."
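Read as a pipeline, the three phases can be sketched as composed steps. The sketch below is a minimal structural illustration with invented factor names and values; it is not an executable rendering of the model in Figure 1.

```python
# Minimal sketch of the three evaluation phases as a pipeline:
# understanding -> measurement -> assessment.

def understanding(case):
    """Identify the factors relevant to the evaluation (here simply named)."""
    return ["licence costs", "labour savings", "customer satisfaction"]

def measurement(factors):
    """Quantify each factor on a scale usable by the assessment phase;
    None marks a factor the scale cannot express."""
    money_values = {"licence costs": -80_000, "labour savings": 120_000}
    return {factor: money_values.get(factor) for factor in factors}

def assessment(quantified):
    """Generate the appraisal from the measurable factors."""
    return sum(value for value in quantified.values() if value is not None)

print(assessment(measurement(understanding("CRM project"))))  # 40000
```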

The decision process for which the evaluation is carried out determines the way the assessment
subsystem generates the appraisal. For example, while making an investment decision, one should
evaluate the investment's efficiency in order to know whether the benefits are worth the investment
expenditures. In this case the assessment may consist of calculating the return on investment. In order
to reflect reality objectively, ROI has to include all components of costs and benefits. This means that the
understanding subsystem requires actions allowing the identification of all costs and benefits, while the
measurement subsystem expresses them in monetary terms. If the measurement subsystem cannot
express some benefits (e.g. higher customer satisfaction, more accurate decisions) in money, two
scenarios are possible. The first is that the understanding subsystem will search for measurable variables
to describe the given benefit; in this case the quality of the generated appraisal will depend on how well
the selected variables describe the benefit. In the second scenario, we do not calculate ROI in the
assessment subsystem; instead, total investment costs are calculated while benefits are presented in
descriptive form. Of course this is not a formal efficiency appraisal, but it gives the decision
maker an idea of the investment's efficiency.
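A minimal sketch of the two scenarios just described, under the assumption that each benefit is tagged as monetizable or not: when every benefit has a money value, the assessment subsystem computes ROI; otherwise it falls back to total costs plus benefits in descriptive form. The data structure is an illustrative invention, not part of the model itself.

```python
# Minimal sketch: generate an appraisal from quantified costs and benefits.
# A benefit without a money value forces the descriptive fallback scenario.

def appraise(costs, benefits):
    """costs: list of amounts; benefits: list of (description, amount or None)."""
    total_cost = sum(costs)
    if all(amount is not None for _, amount in benefits):
        total_benefit = sum(amount for _, amount in benefits)
        return {"type": "ROI", "value": (total_benefit - total_cost) / total_cost}
    # Fallback: a formal cost figure plus benefits in descriptive form.
    return {"type": "costs plus description",
            "total_cost": total_cost,
            "benefits": [description for description, _ in benefits]}

print(appraise([100_000], [("labour savings", 150_000)]))
print(appraise([100_000], [("labour savings", 150_000),
                           ("higher customer satisfaction", None)]))
```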

Figure 1: Evaluation as a system
This simple example illustrates two issues. First, it describes the relations between the understanding,
measurement and assessment subsystems. It shows how actions taken and methods used in one
subsystem (phase of the evaluation process) affect the way other subsystems are performed and the
methods used in them. It also clarifies that decisions about the actions carried out in these subsystems are taken as
a compromise between the objective reflection of reality (searching for measurable variables to describe
a given benefit) and usefulness in the decision process (instead of ROI, calculating total investment costs
and presenting benefits in descriptive form). Second, it shows the environmental influence on the
evaluation system. On the one hand, the decision process determines the requirements which an
appraisal should fulfil to be useful. On the other hand, what we evaluate and when we do it cause
problems with obtaining such an appraisal. These problems, specific to individual cases, are named
here the conditions of evaluation. The selection of methods for the evaluation process tasks, as well as of criteria
and measures, should be done through analysis of the requirements and conditions. Hence the control
subsystem presented in the model (Figure 1) is shown as a system carrying out four tasks:
requirements and conditions analysis;
criteria/measurement selection;
defining evaluation activities;
selecting methods for the realization of those activities.
These tasks, analogously to the tasks of the subsystem performing the evaluation process, should be
seen as subsystems which interact with each other. Later in this paper we look closely at conditions
and requirements as important arguments in favour of the proposed classification.
3. Conditions
Problems of evaluation can be assigned to one of three categories:
indeterminacy: problems with identifying all factors relevant to the evaluation and the relationships
between them;
variability: problems of determining how all factors and the relationships between them will change
over time;
immeasurability: problems with quantifying all factors and relationships in a measurement
scale proper for generating the appraisal.
In general, indeterminacy and variability are perceived collectively as uncertainty. But when we
analyze the problems of evaluation in the context of applicable methods, this distinction becomes
important: different methods are used for dealing with indeterminacy (e.g. system analysis) than for
variability (e.g. statistical methods). Indeterminacy, variability and immeasurability may
occur with different intensities, and each time dealing with any of them may be crucial for the
evaluation. In one case the priority will be analyzing cause-and-effect relationships between changes in an
organization's IS and the effectiveness of business processes (indeterminacy). In another it will be
searching for the quantitative variables (ratios) which best reflect different quality categories in a given
situation, e.g. customer satisfaction, employee satisfaction or corporate knowledge (immeasurability).
In yet another it will be the problem of identifying events which may cause effects
and investment expenditure to differ from those expected (variability). The listed problems and their
scale form the specific conditions of a particular evaluation.
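As an illustration of the statistical treatment of variability mentioned above, the sketch below runs a minimal Monte Carlo simulation with invented distributions, showing how effects and expenditures that deviate from expectations translate into a spread of net outcomes. It is an illustrative technique choice, not one prescribed by the paper.

```python
# Minimal Monte Carlo sketch: sample uncertain costs and benefits instead
# of fixing them, then summarize the spread of net outcomes.
import random

random.seed(1)  # reproducible illustration

def simulate_net_outcomes(trials=10_000):
    outcomes = []
    for _ in range(trials):
        cost = random.gauss(100_000, 15_000)     # expenditures vary
        benefit = random.gauss(130_000, 30_000)  # effects vary even more
        outcomes.append(benefit - cost)
    return outcomes

results = simulate_net_outcomes()
mean = sum(results) / len(results)
loss_probability = sum(r < 0 for r in results) / len(results)
print(f"mean net outcome: {mean:,.0f}; probability of loss: {loss_probability:.0%}")
```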

Are these categories of problems specific to IT investment evaluation? Powell (1999) remarks that
many sources (Strassman 1997, Willcocks 1996, Wen and Sylla 1999) suggest that IS/IT investment
is different from other investment decisions, because the costs and benefits are harder to identify and
quantify and the intangible factors are likely to be significant. However, he also notes that "the idea
that the measurement of costs and benefits is problematic has surfaced in fields other than IS/IT". It
would be difficult to accept that indeterminacy, variability and immeasurability are categories of
problems specific to any single type of evaluation: the problems appear in different types of evaluation,
including the evaluation of IT investments. But without a doubt, the higher the intensity of the problems, the more
difficult the investment evaluation. The specificity of IT investment evaluation is that indeterminacy,
variability and immeasurability appear with great force, and their sources are:
the wide range and complexity of IT systems' influence on organizations;
the nature of information;
rapid IT development.
The wide range of influence arises from the fact that information systems realise information
processes, which in turn are an inseparable element of any activity, both in the field of
management and in operations. The complexity of the influence lies in the fact that information systems only enable
the achievement of benefits, which depend on many other factors (Lucas 1993, Markus and Soh 1993,
Mahmood and Mann 2000, Remenyi et al. 2000). As Remenyi et al. (2000) pointed out: "the value of
the IT investment depends entirely upon the way in which it is able to make the organization more
efficient and effective". Information is a unique good, different from material goods. The value of
information generated by an IS may only be examined in an information-user-task frame of reference. The
phenomenon of information asymmetry makes the ex ante estimation of information value impossible
wherever the effect of the activities in which the information is used depends on the information's content.
The value of information is a highly complex issue (Bannister and Remenyi 2000), especially when
considered in conjunction with the wide range of IT systems' influence on organizations. All this is
compounded by the development of IT, the fastest growing general purpose technology in history
(Jovanovic and Rousseau 2005).

Looking at the presented problems from an evaluation system perspective, we notice that they
concern the understanding and measurement subsystems. This has also been confirmed by empirical
studies conducted by Ballantine et al. (1999). Of course, the problems influence the assessment
subsystem as well, but only indirectly, through the relationships between the three subsystems. This
means that the quality of the appraisal is largely dependent on how understanding and measurement are
realized and which methods of identification and quantification have been applied. Therefore, when
developing classifications, methods of identification and quantification should be taken into account
equally with the methods of assessment. They should also be assigned to the individual roles they play in
the process of evaluation.
4. Requirements
The effects of investments, like those of any other activities, may be considered in two main aspects: against
the intended purpose and against the efforts. In the first case effectiveness is analysed, i.e. the degree of
goal achievement, while the aspect of expenditures is omitted; only the intended effects are taken into
consideration. In the second case efficiency is analysed, i.e. the effects of activities in relation
to their expenditures; all obtained effects are taken into consideration, whether they were
intended or not. This means that when evaluating effectiveness and efficiency, different scopes of
effects are taken into account. The evaluation of investments requires examining both aspects.
Focusing only on the first aspect may lead to a situation in which the expenditures exceed the effects;
analysing only the relation between effects and expenditures may lead to a situation in which the goal
is not achieved.
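One simple way to formalize the distinction, offered here as an illustrative reading rather than the paper's own notation, is as a pair of ratio measures:

$$
\text{effectiveness} = \frac{\text{intended effects achieved}}{\text{intended effects}},
\qquad
\text{efficiency} = \frac{\text{all obtained effects, intended or not}}{\text{expenditures}}
$$

Effectiveness thus ignores expenditures and counts only intended effects, while efficiency relates everything that was obtained to what it cost.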

With respect to the time-of-evaluation criterion, we distinguish retrospective evaluation (ex post) and
prospective evaluation (ex ante). In fact, both evaluations are carried out ex post, as it is difficult
to evaluate something that does not yet exist; the difference lies in the subject of evaluation. In
retrospective evaluation we assess activities, while what we call an ex ante evaluation is in fact a
retrospective evaluation of intentions (plans for those activities).

As far as the subject of evaluation is concerned, we distinguish synthetic evaluation, in which all the
activities are evaluated, and fragmentary evaluation, in which only some aspects are evaluated. The
reason for which we evaluate effectiveness and efficiency is another classification criterion; according
to it, we divide evaluation into absolute and relative. Absolute evaluation of
effectiveness tells us whether and to what extent the analysed activities lead to the
intended goal; absolute evaluation of efficiency tells us whether the benefits are worth the
investment expenditures. Relative evaluation, by contrast, is about comparing the effectiveness and
efficiency of different activities.

Different decision situations require different appraisals. First of all, appraisals may differ in
scope: only efficiency is assessed, or only effectiveness, or both. Although
appraisals of efficiency and effectiveness are complementary, there are situations where
only one of them is necessary. In some particular cases, e.g. in relative evaluation where the aim is
dichotomous and the undertaking does not generate any additional effects, it is enough to estimate and
compare only the investment expenditures. In addition, depending on whether the decision is to accept a
project (such as implementation of a CRM system) or to choose one of the possible variants of a
project (e.g. which of the available systems to choose, or in what order to implement IS modules), it
is more or less important that the appraisal is synthetic and that the measure taken as a criterion
expresses the phenomena in money terms. Making a decision in the first case requires an
efficiency assessment that allows the evaluated project to be compared with other activities in
the company, which in practice means the necessity of applying financial measures. In the second case,
more fragmentary assessments and non-financial measures can be
accepted. These differences in the scope and manner of assessment may be considered requirements
on the generated appraisal.
5. Proposed classification of methods
As the system model in Figure 1 shows, we can assign the methods used in the process of evaluation
to particular subsystems. In this way we obtain a classification of methods according to the criterion of
functionality (as each subsystem performs a different function). According to this classification
we distinguish four categories of methods:
methods of identification;
methods of quantification;
methods of assessment;
meta approaches.
In practice, a number of methods are composite approaches, e.g. information economics by Parker,
Benson and Trainor, or Gartner's total cost of ownership. However, creating a separate category for
these methods would destroy the sense of the classification. In the case of composite approaches
which support the realization of more than one phase of the evaluation process, we can assign them to
each of these phases. At the same time we can define how a given method supports the realization of
a specific phase; this is relatively simple, as composite methods are usually in the form of procedures.

The task of meta approaches is to support the subsystems of evaluation process control. They should
therefore be perceived as sets of procedures and analytical frameworks supporting requirements and
conditions analysis, criteria/measurement selection, the definition of evaluation activities and, finally,
the selection of methods for the realization of those activities. A good example of a meta approach is the matching
process developed by Farbey, Land and Targett (1999). It consists of a three-stage procedure and a
series of matrices (analytical frameworks) which enable requirements and conditions analysis as well
as method selection. In the matching process, requirements are analyzed based on six criteria (matrix
dimensions). The same is true of the conditions analysis; however, we can assume that three
of the matrix dimensions relate to the problem of indeterminacy (nature of system: specific or
infrastructure; directness of impact: direct or indirect; leadership role: follower or leader), two to the
problem of variability (certainty of impact: certain or uncertain; industry situation: stable or turbulent) and
one to the problem of quantification (type of benefits: quantifiable or qualitative). The selection of
methods comes down to pointing out one of four categories of methods; this division does not
take the functionality criterion into consideration and is therefore quite general.
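A minimal sketch of how such a matrix-based matching step might look in code, assuming a drastically simplified profile of just two dimensions and an invented mapping to method categories; Farbey, Land and Targett's actual matrices are considerably richer.

```python
# Minimal sketch: map a simplified project profile to an evaluation method
# category, in the spirit of a matching matrix. The mapping is invented
# for illustration, not taken from Farbey, Land and Targett.

MATCHING_MATRIX = {
    ("quantifiable", "certain"):   "standard financial assessment (e.g. ROI, NPV)",
    ("quantifiable", "uncertain"): "statistical or simulation-based assessment",
    ("qualitative",  "certain"):   "quantification methods (proxy measures) first",
    ("qualitative",  "uncertain"): "exploratory identification methods first",
}

def match_methods(benefit_type, certainty_of_impact):
    """Return the suggested method category for the given profile."""
    return MATCHING_MATRIX[(benefit_type, certainty_of_impact)]

print(match_methods("qualitative", "uncertain"))
```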

The proposed classification was created as complementary to the taxonomy of techniques drawn up by
Bannister and Remenyi (2000). This concept is illustrated in Figure 2.

Figure 2: Categories of methods and approaches to investment evaluation
The taxonomy of Bannister and Remenyi classifies approaches to evaluating IT investments into three
basic groups of techniques (fundamental measures, composite approaches and meta approaches),
which can be used in two different ways (positivist/reductionist and hermeneutic). According to
Bannister and Remenyi (2000), fundamental measures "are metrics which attempt to parameterize
some characteristic or closely related set of characteristics of the investment down to a single
measure", composite techniques "combine several fundamental measures to obtain a balanced
overall picture of value/investment return", while meta approaches "attempt to select the optimum set
of measures for a context or set of circumstances". The descriptions of the particular categories show
that the authors focus their taxonomy on measures of evaluation (which, according to the functionality
criterion, are defined here as methods of assessment), omitting identification and quantification
methods (except insofar as they are part of composite methods, as in the case of information
economics). It also shows that the authors define meta approaches narrowly, as methods supporting the
selection of measures, while here meta approaches support the selection of methods for the whole
evaluation process. The proposed integration of the two classifications is probably against the
taxonomy authors' intention; however, it seems justified if evaluation is perceived from the system
perspective.
6. Summary and conclusion
The evaluation process has its specificity (common and differentiating features of individual cases)
determined by what, why and when is being evaluated. The paper shows that:
The specificity can be expressed as requirements which an evaluation has to meet to be
significant in the process of investment decision making and conditions in which the evaluation is
made.
The conditions concern the intensity of the three main problems of investment evaluation:
indeterminacy, variability and immeasurability. The higher the intensity of these problems, the more
difficult the evaluation.
The specificity of IT investments evaluation is that indeterminacy, variability and immeasurability
appear in great force; their sources are the nature of information, rapid IT development, and the wide
range and complexity of IT systems' influence on organizations.
The requirements and the conditions of evaluation determine the way in which the evaluation
process should be realized: evaluation criteria and measures, factors essential for evaluation,
activities necessary for estimating the factors, and methods for realizing these activities.
Perceiving evaluation from the system perspective:
shows that the selection of identification and quantification methods is as essential as the selection
of criteria and measures of assessment, as the evaluation problems directly affect these two phases
of the evaluation process;
clarifies that decisions about actions carried out in the evaluation process are taken as a
compromise between an objective reflection of reality and usefulness in the decision process;
shows that meta approaches, to be useful in practice, should take the functionality criterion into
account.
The classification of methods according to functionality into methods of identification, quantification
and assessment makes the selection process simpler. It helps avoid mistakes such as treating
methods with different roles as alternatives, or underrating methods of identification and
quantification, which can be of great importance in the evaluation of IT investments.
References
Ballantine, J. A., Galliers, R. B. and Stray, S. J. (1999) Information Systems/Technology Evaluation Practices:
Evidence from UK Organizations, in: L. Willcocks, S. Lester (eds.) Beyond the IT Productivity Paradox:
Assessment Issues, John Wiley & Sons, Chichester.
Bannister, F. and Remenyi, D. (2000) Acts of Faith: Instinct, Value and IT Investment Decisions, Journal of
Information Technology, Vol. 15, pp. 231-241.
Berghout, E. and Renkema, T.-J. (2001) Methodologies for IT Investment Evaluation: A Review and Assessment,
in: W. Van Grembergen (ed.) Information Technology Evaluation Methods and Management, Idea Group
Publishing, Hershey.
Botchkarev, A. and Andru, P. (2011) A Return on Investment as a Metric for Evaluating Information Systems:
Taxonomy and Application, Interdisciplinary Journal of Information, Knowledge, and Management, Vol. 6,
pp. 245-269.
Dos Santos, B. L. (1991) Justifying Investments in New Information Technologies, Journal of Management
Information Systems, Vol. 7, No 4, pp. 71-90.
Farbey, B., Land, F. and Targett D. (1992) Evaluating Investments in IT, Journal of Information Technology, Vol.
7 No. 2, pp. 109-122.
Farbey, B., Land, F. and Targett, D. (1993) How to Assess Your IT Investment: a Study of Methods and Practice,
Butterworth Heinemann, Oxford.
Farbey, B., Land, F. and Targett, D. (1999) Evaluating Investments in IT: Finding a Framework, in: L. Willcocks,
S. Lester (eds.) Beyond the IT Productivity Paradox: Assessment Issues, John Wiley & Sons, Chichester.
Jokela, P., Karlsudd, P. and Östlund, M. (2008) Theory, Method and Tools for Evaluation Using a Systems-based
Approach, The Electronic Journal Information Systems Evaluation, Vol. 11, No 3, pp. 197-212, available
online at www.ejise.com.
Jovanovic, B. and Rousseau, P. L. (2005) General Purpose Technologies, NBER Working Paper No. 11093,
Cambridge.
Kambil, A., Henderson, J., Mohsenzadeh, H. (1993) Strategic Management of Information Technology
Investment: An Options Perspective, in: R. D. Banker, R. J. Kauffman, M. A. Mahmood (eds.) Strategic
Information Technology Management: Perspectives on Organizational Growth and Competitive Advantage,
Idea Group Publishing, Harrisburg.
Kast, F. E. and Rosenzweig J. (1973) Contingency Views of Organization and Management, Science Research
Associates Inc. Publ., Palo Alto.
Katz, D. and Kahn, R. L. (1980) The Definition and Identification of Organisations, in J. A. Litterer (ed.)
Organizations, Structure and Behavior, 3rd Edit., John Wiley & Sons, New York.
Kumar R. L. (1996) A Note on Project Risk and Option Values of Investments in Information Technologies,
Journal of Management Information Systems, Vol. 13, No 1, pp. 187-193.
Lee, Y.-Ch. and Lee, S.-S. (2011) The valuation of RFID investment using fuzzy real option, Expert Systems with
Applications, Vol. 38, No 10, pp. 12195-12201.
Lucas, Jr., H. C. (1993) The Business Value of Information Technology: A Historical Perspective and Thoughts
for Future Research, in: R. D. Banker, R. J. Kauffman, M. A. Mahmood (eds.) Strategic Information
Technology Management: Perspectives on Organizational Growth and Competitive Advantage, Idea Group
Publishing, Harrisburg.
Mahmood, M. A. and Mann G. J. (2000) Special Issue: Impacts of Information Technology Investment on
Organizational Performance, Journal of Management Information Systems, Vol. 17 No. 1, pp. 3-10.
Markus, M. L. and Soh C. (1993) Banking on the Information Technology: Converting IT Spending into Firm
Performance, in: R. D. Banker, R. J. Kauffman, M. A. Mahmood (eds.) Strategic Information Technology
Management: Perspectives on Organizational Growth and Competitive Advantage, Idea Group Publishing,
Harrisburg.
Parker, M. M., Benson, R. J. and Trainor, H. E. (1988) Information Economics: Linking Business Performance to
Information Technology, Prentice Hall, Englewood Cliffs.
Parker, M. M., Trainor, H. E., Benson, R. J. (1989) Information Strategy and Economics: Linking Information
Systems Strategy to Business Performance, Prentice Hall, Englewood Cliffs.
Peters, G. (1994) Evaluating your computer investment strategy, in: L. Willcocks (ed.) Information Management.
The Evaluation of Information Systems Investment, Chapman & Hall, London.
Peters, G. (1996) From Strategy to Implementation: Identifying and Managing Benefits of IT investments, in: L.
Willcocks (ed.) Information Management. The Evaluation of Information Systems Investment, Chapman &
Hall, London.
Powell, P. L. (1999) Evaluation of Information Technology Investments: Business as Usual?, in: L. Willcocks, S.
Lester (eds.) Beyond the IT Productivity Paradox: Assessment Issues, John Wiley & Sons, Chichester.
Remenyi, D., Money, A., Sherwood-Smith, M. and Irani, Z. (2000) Effective Measurement and Management of It
Costs and Benefits, 2nd Edit., Butterworth-Heinemann, Woburn.
Remenyi, D., Sherwood-Smith, M. and White, T. (1997) Achieving Maximum Value from Information Systems: A
Process Approach, John Wiley & Sons, Chichester.
Serafeimidis, V. (2001) A Review of Research Issues in Evaluation of Information Systems, in: W. Van
Grembergen (ed.) Information Technology Evaluation Methods and Management, Idea Group Publishing,
Hershey.
Strassmann, P. A. (1997) The Squandered Computer: Evaluating the Business Alignment of Information
Technologies, The Information Economics Press, New Canaan.
Wen, H. J. and Sylla Ch. (1999) A Road Map for the Evaluation of Information Technology Investment, in: M. A.
Mahmood, E. J. Szewczak (eds.) Measuring Information Technology Investment Payoff: Contemporary
Approaches, Idea Group Publishing, Hershey.
Willcocks, L. (ed.) (1996) Investing in Information Systems: Evaluation and Measurement, Chapman and Hall,
London.
Academic Group and Forum on Facebook: Social, Serious
Studies or Synergy?
Ruth de Villiers and Marco Cobus Pretorius
School of Computing, University of South Africa, Pretoria, South Africa
Dvillmr@unisa.ac.za
Marco.pretorius@gmail.com

Abstract: An academic group and discussion forum were established on Facebook for a cohort of postgraduate
students studying the concepts and principles of eLearning. The Forum had a constructivist, student-centric
ethos, in which students initiated topics for discussion, while the course leader and administrator facilitated.
Previous research has been conducted, involving content analysis of the topics and academic discourse, but the
present study focuses on social aspects, investigating social- and study-related pursuits and determining whether
synergy can exist between them. A literature review shows how social networking by students, initially social,
began to overlap with academia, leading to the use of groups for academic purposes and forums for subject-
related discussions. In the present study, data was triangulated and two methods of data analysis were used.
Qualitative analysis was done on free-text data from students' reflective essays to extract socially-related themes.
Heuristic evaluation was conducted by expert evaluators, who investigated forum discourse in line with
contemporary learning theory and who considered the social culture of participation. Findings of the qualitative
analysis of students' perceptions and results of the heuristic evaluation of forum participation confirmed each
other, indicating a warm social climate and a conducive, well-facilitated environment that supported individual
participation styles. It fostered inter-personal relationships between distance learners, as well as study-related
relationships due to peer teaching and insights acquired from social negotiation. The environment supported
student initiative, but was moderated by facilitators. The mixed-methods research approach of evaluating
students' essays and conducting expert analysis of forum discussions showed the advent of a virtual community
with a synergy between social aspects and academia. Most participants experienced a sound balance of social-
and study-related benefits, but with a stronger focus on academic matters.

Keywords: eLearning, evaluation, Facebook group, online discussion forums, qualitative analysis
1. Introduction
Social networking sites (SNSs) are increasingly used in academia. This paper discusses the social
climate of an academic group and online discussion forum (ODF) established on the SNS, Facebook,
to enhance learning for postgraduate distance-learners studying Concepts and Principles of
eLearning at the University of South Africa (UNISA). Most of the students were professionals.

ODFs are a common feature in web-based groups and eLearning environments. UNISA, a distance-
education institution, provides ODFs on its official site, but we offered an alternative supplementary
group and discussion forum on Facebook for a postgraduate cohort. It had a constructivist, student-
centric nature, in which students personally initiated the discussion topics. The course leader and an
administrator facilitated as 'guides on the side', rather than as 'sages on the stage'. The aim was to
encourage interaction that provided subject-related information and academic discourse. Early
research about the Group (de Villiers, 2010) involved content analysis of the topics and discussions,
using quantitative frequency counts of interaction types, and qualitative discourse analysis to
investigate the academic content. The study showed that active participation in the Forum supported
learning and enhanced performance. Secondary benefits also occurred, including the emergence of
peer-to-peer relationships. The present study therefore focuses on social aspects of the Group.
2. Literature review
Various studies have addressed students' use of SNSs and determined that interactions were
primarily social (Madge, Meek, Wellens and Hooley, 2009; Selwyn, 2009). In an online survey of 600
student users, Mazman and Usluel (2010) found they visited Facebook for approximately 30 minutes
daily, mainly for socializing. A meta-analysis of 36 studies on students' and teachers' use of
Facebook indicates little educational use (Hew, 2011). According to Lampe, Ellison and Steinfield
(2008), Facebook is ubiquitous on US campuses, with the typical user visiting for 80 minutes daily.
However, Lampe et al. found that academic matters such as lectures, reading materials, deliverables,
and instructors were mentioned, and about 15% of students used Facebook to contact lecturers.
Selwyn (2009) studied Facebook Walls of UK undergraduates, visiting over 600 sites with public
viewing profiles. Four percent of the exchanges related to academic schedules, venues, lectures and
deliverables, while another theme was criticism of keen students, seminars and lecturers.
Nevertheless, educational use is on the increase and explicit academic use of SNSs is reported. Four
case studies on social networking by students (Jones, Blackey, Fitzgibbon and Chew, 2010) show a
divide between students' learning space and personal space, yet acknowledge that educators should
leverage SNSs and create environments for independent learning, reflection, and communities of
inquiry. Mazman and Usluel (2010) define educational use of Facebook as involving communication
(discussions and information); collaboration in groups; and resource sharing via videos and links.

There is increasing academic use of Facebook in South Africa, the home base of the authors. Bosch
(2009:147) did a virtual ethnographic study of Facebook profiles of 200 students, supplemented by
interviews with students and staff who communicated on Facebook, and found that the experience
'undid traditional power hierarchies'. Students were more engaged on Facebook than on the official
course management site. Many belonged to groups for societies and academic programmes, where
they shared resources and logistical information and checked class-related material. Visagie and de
Villiers (2010) (not the present author) surveyed 32 academics and established that 56% of them
would consider using Facebook as an academic tool. As academic use of Facebook increases,
research is being conducted on subject-related discourse in groups and communities. The primary
author of this paper did detailed analysis of the academic content of the discourse on a postgraduate
discussion forum (de Villiers, 2010). First-year students participated in peer-initiated topic-based
conversations in a systematic and well-articulated way (Rambe and Ngambi, 2011). Informal learning
occurred in a social-constructivist community where students and instructors conversed and shared
knowledge to help each other understand the subject matter better (Ractham and Firpo, 2011).
3. Research design and methods
The research question addressed in this study is:

Did the venture serve both social- and study-related pursuits in a synergistic manner?

To evaluate whether academic forums on Facebook can have a synergistic value, we focused on the
social culture and interaction patterns described by students in reflective essays and identified by
heuristic evaluators studying the discussions. We investigated whether the Forum supported personal
participation styles and valuable interaction. The tone, nature, impact and facilitation of the
discussions were considered, as well as the ethos of the community. This study evaluates the
integration of social aspects and serious studies, by using a mixed-methods research approach
(Creswell, 2009), which was triangulated by two evaluation methods and two different datasets:

Study 1: Qualitative analysis of free-text data from students' reflective essays. These perceptions
were qualitatively analysed using a form of grounded theory.

Study 2: Heuristic evaluation (HE) by four expert evaluators. These experts investigated forum
contributions to determine the social climate and to establish whether the ethos of the discussions
conformed to contemporary eLearning theories. These evaluations were mainly quantitative.

The research incorporated data triangulation as both essays and forum discourse were analysed, and
methodological triangulation via the combination of qualitative analysis and heuristic evaluation. For
ethical reasons participants were informed that research was being conducted, and they signed
informed consent forms.
4. Study 1: Qualitative analysis of free-text data from students' reflective essays
Membership of the Group was encouraged, but not compulsory. Thirty of the 40 students in the cohort
joined. Twenty-seven completed the course, including 21 Group members, twelve of whom were very
active on the Forum. Textual data from the students' reflective essays was analysed by grounded
theory and categorised under themes and sub-themes that emerged. The findings are presented and
supported by groups of quotations from students' essays, in the students' own words. The students
whose reflections are quoted are cited, e.g. P1 represents Participant 1 and NP1 is Non-Participant 1.
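To make the coding step concrete, the following is a minimal hypothetical sketch (in Python, which the paper itself does not use) of how coded essay fragments might be tallied per emergent theme; the fragments and labels are invented for illustration and merely echo the section headings that follow.

    # Hypothetical sketch of the theme-coding step: essay fragments are
    # tagged with emergent themes/sub-themes, then tallied. Fragments and
    # labels are invented examples, not taken from the study's data.
    from collections import Counter

    coded_fragments = [
        ("P1", "social vibes", "removal of isolation"),
        ("P4", "social vibes", "removal of isolation"),
        ("P13", "participation styles", "non-intimidating"),
        ("P21", "participation styles", "daunting"),
    ]

    theme_counts = Counter(theme for _, theme, _ in coded_fragments)
    subtheme_counts = Counter((t, s) for _, t, s in coded_fragments)

    print(theme_counts)     # fragments per theme
    print(subtheme_counts)  # fragments per (theme, sub-theme)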
4.1 Social vibes and ethos of the virtual community
In off-forum essays, students gave varying perspectives on the ethos and impact of the discourse.

Virtuality became Reality

Distance dissipated, as participants got to know each other and conversed in the presence of peers:
Since it is often outside the classroom that students get to real knowledge sharing, social
networks can play a major role in informal social learning, giving access to each other's
implicit knowledge.
The group are friendly, enthusiastic and passionate about the subject... the interaction is
excellent.
The group became a community and had a sense of real-world talking and listening:
If I share my thoughts, I like to know who is listening. It is gratifying to interact with
people with a common goal.
What you teach fellow students is embedded in your mind longer... because you say it to
people
(P3, P5, P12, P14, P17).
Culture and tone of interactions

The environment was warm and conducive to discussions. Some students built relationships and
conversed off-forum. Although real-world academia can be self-focused and competitive, the Group
culture was not geared to personal achievement:
The best is that users freely share their sources of information, how they interpret
concepts, and their personal experiences.
Wisdom of crowds... the whole is greater than the parts.
Make or break depends on support of peers... those with experience and intent to help
can mentor and guide novices in social networking.
I enjoyed interacting with fellow students on a social level, although I did not benefit
academically.
Interaction was enjoyable and fulfilling.
I have (previously) used forums to pose questions and seek answers, but I disliked the
dull, standardized and uncreative way they were presented.
The tone was informal yet cordial: Nobody addressed the lecturer as Madame, yet on the other
hand, there was no use of shortcuts like B4, 2b or LOL (P4, P6, P7, P13, P16, P27).

Removal of isolation
Standard, boring distance learning was enriched; the sense of isolation dissipated.
To a long-term distance-learner, it was a thrilling experience. The first exotic e-fun occurred when
fellow-students introduced themselves as if sitting alongside me, but writing from Australia, Japan,
Namibia, Pretoria (P1, P4, P12).

Challenge, yet affirmation

Written words do not vaporise like spoken words and this calls for careful reflection before posting:
... a new-found sense of pressure to understand what I was reading.
Someone else was going to be reading it, and giving their opinion (P13, P21).
However, contributing brought affirmation: The brief experience when I shared my views was a
turning point...My confidence peaked....
It was heartening to see that a number of fellow-students agreed... .
Being introvert, I only made one comment, but it is a start.
Participating with the professor and fellow students, I felt honoured to be part of the
exercise and especially getting accolades for my contributions (P2, P25, P27).
4.2 Support for individual styles of participation
Most students found the Group and Forum supportive. Several exercised pro-active leadership and
initiated topics, while others saw it as a place to participate in discussions without the exposure of
contact-learning. Yet others chose not to contribute, but observed and listened. They benefitted,
although some fellow-learners did not appreciate having observers. While some students found the
atmosphere of the Forum to be non-intimidating, others tended to be daunted:

Better than a real classroom

Some participants were more comfortable conversing on Facebook than in a conventional classroom:
Sharing is less rigid than when responding to questions in a class.
Learners are not frowned on when they express themselves in whatever way they feel
comfortable.
People are less afraid, and speak without fear of being mocked (P7, P17).
It eliminated possible first-line prejudices that might have occurred in a contact situation.
One considered the content and not the person (P21).
Got more feedback than in class situations, where a few students may dominate (P4).
Non-intimidating a place to take initiative

The ethos encouraged some members to be forthcoming:
Should I take the initiative?...it was clear this was a place to take charge... Grasping the
new-found freedom, I decided to start....
Some are outspoken and involved in everything, with quick responses, but others keep
to themselves.
We could ...have a brainstorm session (P7, P13, P17).
Daunting

Some felt intimidated and inadequate. They feared negative responses or no response:

Exposure to ...some refined and polished contributions led to feelings of academic inferiority.
People can be afraid to express views, because they are unsure of relevance and
accuracy.
You would like to contribute or ask questions, but wonder if you will look stupid (P5,
P10, P21).
Responses to postings

Contributors anticipated responses to their postings and were disappointed if this did not happen fast:
You are demoralised if no feedback is forthcoming.
The time-independent nature of the interactions meant that discussions were sometimes
drawn out, preventing immediate feedback....
You (have to) wait for the response when somebody is online (P8, P16, P21).
Observers

Some perceived the Group as a safe space for learning without contributing. They chose to learn by
watching and listening, yet without the negative connotations of lurking. Certain participants were
disturbed by these onlookers:
I experienced frustration when just a few participants contributed, though I realise that
some preferred to read what others wrote rather than contributing.
Some joined the group, but did not make any postings.
Some students joined but kept silent... just watching, a bit creepy! (P1, P17, P27).
P25, who was an observer, explained, I go on forum to see if someone asks what I want to know. It
helps me know if I am going in the correct direction. I log in daily and am disappointed when there are
no new contributions.
4.3 Academia on Web 2.0 and Facebook
On Web 1.0 and via eLearning 1.0, learners access existing educational Web content. In contrast,
Web 2.0 and eLearning 2.0 (Ebner, 2007) empower learners to personally contribute content.

Web 2.0 and a paradigm shift

In the context of education Web 2.0 means a learner-centric approach (P16).

More a social revolution/ social phenomenon than a technological issue.
...a paradigm shift... we need to relinquish tried and tested ways, which takes time and
not everyone joined the revolution (P7, P8, P16).
Some could be even more sociable than before but others are just not sociable (P12).
Several cited Ebner (2007): "Technical issues will be solved quickly, but to change the
thinking about learning and teaching is hard and long."
We can't expect everyone to feel comfortable with social tools, but change is a
constant... (P3, P11).
Academia on Facebook

Some were convinced that this was the way forward:
For someone like me, who already uses Facebook and enjoys working smartly, Fb
provides a single point of entry from which I interact socially, stay up to date, and
participate in communities... I am comfortable using it as a learning tool.
This type of discussion forum works with what is already available.
We are the mobile-interconnected-global-village generation with Web 2.0 Fever (P7,
P13).

Furthermore, Facebook is ideal for forums. It is well-structured with predefined areas for discussions,
the Wall for banter, pictures and videos, membership lists, ways to handle permission and access;
...has global interconnection and You can reach members personally by accessing
profile pages. ...a co-operative environment that fosters trust among learners and
instructor, allowing students to learn from one another (P7, P11, P13, P18).
P11 made a strong statement: Educational institutions should use Facebook for learning and make
links from their institutional websites.

Shy users and silent users

Facebook breaks barriers for those who are shy or who feel vulnerable. Despite being a public space,
it provides concealment that shields members when they pass opinions:

Those who struggle to socialise or have difficulty with social skills found it easier to communicate on
Facebook than face-to-face. Collaborative online learning brings major changes, so that learners with
low self-esteem can communicate and comment without physical interaction (P4, P8, P10).

Then there were the silent observers (see Section 4.2). Some were insecure and chose not to
communicate, but essays indicated that others had indeed joined, but were unable to access the
Forum. At least two of them wrote on the Wall, but did not find out how to take part in discussions until
it was too late (P4, P5). The usability of Facebook and access to Groups has since been improved.

Asynchronicity and synchronicity

Asynchronous interaction via social networking offers Ebner's (2007) 'Triple A' factor: anytime,
anywhere, anyone, regardless of location and time (P3). Opinions varied on asynchronicity:

Some appreciated that questions and answers could be carefully thought out before posting,
whereas others felt that it detracted from spontaneity... debate is interrupted by time lapses or
conversely that it moved fast, I struggled to contribute. Furthermore, asynchronicity results in
different threads and ... at times, it was difficult to follow them all (P1, P17, P21).

There was little use of synchronous Chats, although some learners held small-group conversations in
real time.

Non-compulsory membership

Several participants would have liked membership to be mandatory, but the course leader took a
considered decision not to enforce it. A high achiever who chose not to join explained why she
appreciated the flexibility: I have a solitary, intrapersonal and introspective learning style. I ponder
and evaluate, and write down thoughts... I tackle problems and solutions alone (NP1).
4.4 Control and management by the facilitators
Management of the Forum was challenging. Since the explicit ethos was student initiation of
discussions, we positioned ourselves as facilitators between the extremes of strong control and
hands-off. We served as 'guides on the side', not as 'sages on the stage'. Management involved
carefully monitoring the accuracy of the content, as well as security.

Security

Some students felt threatened by security breaches: It is difficult for me to use Facebook socially, let
alone as a learning tool. My reservations are due to lack of security... (P11).

We erroneously admitted an intruder, believing he was a student whose registration was not finalised.
He participated, then posted advertisements for motivational courses and financial products! A
disconcerted student unveiled him when they communicated off-forum and she challenged him with
an academic question he could not answer: It is exciting to say I have encountered an e-stalker! Yet I
must question how he managed to infiltrate our group (P1). As facilitators, we immediately removed
him. The shrewd P1 picked up another anomaly: A profile image introduced a beautiful young lady
and we chatted away on academic matters. Her achievements amazed me. After a few weeks she
admitted to being 'he', an older student who had borrowed his daughter's Facebook membership
(with Prof's permission) due to logistical difficulties.

Control, please!

Some students wanted tighter management:
Such platforms need proper control and facilitation; All content should be verified.
People should not be allowed to say just anything there was irrelevant content on the
Wall (the intruder).
Without verification or personal discernment, learners could be misled by inaccurate
statements.
A weekly question from the facilitators might have encouraged more interaction.
Another queried whether a discussion forum could be effective without central guidance from a
lecturer or teacher (P3, P4, P12, P17).

Constructivism

Others appreciated the constructivist-style freedom and low-level control:

The Forum was an implementation of the current focus on cognitivism and constructivism.
Well moderated; well managed.
A new paradigm of teacher-learner interaction. The course leader merely facilitates and
guides.
It was not dictated by the teacher who is adapting to new ways of teaching and guiding.

It could have been managed by fixed principles, but that would curtail the conversation which was
not the idea behind this free, natural learning interaction.
The input snippets received from the leader and administrator are gold nuggets.
(P1, P2, P7, P8, P16, P18).
Reliability and validity

Members and a non-member expressed concern about how to distinguish between fact and the
opinions of peers: There was potential both to confuse and illuminate, confusing when its a collection
of I think... without proper backing. However, when the posts are well thought through and backed
with credible references, the potential for real learning is high (P21).
Teachers should set standards and test contributions before they are posted on the site
(P3).
Our response to this is, first, that pre-approval is infeasible in forums and, second, as facilitators, we
were loath to destroy spontaneity. Monitoring must be done after postings, and be handled with
discerning public comments and private communication with offenders. If content was merely weak,
we did not react, but on one occasion when discourse veered off-track, the course leader responded
by pointing to theory. This concern may have been a reason why some non-participants did not join
the Group: What proves that the points shared by a student are true and valid? (NP2).
4.5 Balancing academic and social interactions
To users accustomed to using SNSs for entertainment, the playing fields now offered study facilities!

Successful integration of social and serious
The Wall and 'Introduce Yourself' provided informality. They offset the distance and set a
friendly context for the study-related pursuits. Most members felt that social networking
and serious studies could be effectively combined. Several mentioned the incorporation
of fun, entertainment, informality, interactivity into learning (P4, P7, P8, P14, P16,
P17, P21).
I definitely recommend eLearning via Facebook.
I learned to melt into social networking scenes, let the resistance go, flow with the wave,
yet keep wearing the academic hat.
Push and pull factors: friends pull; academia pushed us to view Facebook as a serious
tool.
Social and educational tasks are executed simultaneously. I peep at the study group site
each time I log on (P1, P7, P10).
Given their ubiquity, it would be short-sighted to ignore Web 2.0 applications for educational
purposes (P21, citing Ebner, 2007).

Even a non-participant commented, Facebook has caused addiction... a study group there could be a
good way to study (NP6).

Distractions

Some struggled with distractions:
Other Facebook interactions and the whole Internet could easily pull one away....
...numerous inviting sites could attract learners to something totally different.
Family and friends found me and nagged to be my friend; ...friends determined to
poke me.
It calls for a change in mindset among those who see it as a fun tool and miss its
essence in learning (P1, P16, P17).
Potential distraction was a reason why one non-participant did not join. Literature indicates that SNS
tools and systems incorporate high interactivity to keep users interested. This could distract from
learning (NP2).
4.6 Nature of discourse and debate
Simulated face-to-face discussions were enriching for distance learners:

Interactive communication between peers
The ability to interact with people of similar interests from anywhere in the world, was a
definite advantage. One could tap into the collective consciousness of a diverse group of
people.
We are exposed to having views challenged and can engage in discussions of the
subject matter.
Opinions differ over same material, but without challenging others disrespectfully.
Different perspectives on the same topic... (P2, P7, P10, P21).
New insights

Students learned from their peers and it is significant that the more active participants all performed
well in the examination. Matters emerged that learners had not identified independently:
Collectively the learners are exposed to an abundance of information... collaboratively
they digest content and information within a short time.
...useful perspectives, beyond what one would obtain by merely reading the articles.
The whole community benefits from one anothers insights.
Current information and state-of-the art development make a significant contribution to
learning.
A perceptive point was made by different students:
By posting ideas, we solidify our thoughts. By reading others' responses, our ideas are
refined.
I gained insight through reading posts of others, and the process of thinking through my
responses helped clarify issues. When reading fellow students' input..., my own
interpretation changed
(P2, P10, P16, P21, P25).
Generational differences

Perceptions and approaches differed. Some older students joined Facebook as novices and became
avid contributors. Three participants mentioned their need to print the discourse, while some students
from the Net-generation preferred the e-word to the printed or spoken word:
The ability to recall and regain online discussions is vastly superior to non-eLearning
scenarios of searching through paper-based materials or trying to recall verbal
conversations (P21).
I view Facebook as a purely social tool for the younger generation and unsuitable for
academic purposes. It was a novel approach, but should have just been an experiment...
(P27).
5. Study 2: Heuristic evaluation by expert evaluators
Four expert evaluators, who are profiled in Table 1, conducted a heuristic evaluation (HE) to
investigate the social climate of the Forum and to assess contributions against contemporary
eLearning theories that are based on human-centred values. In order to do the evaluation, they
considered sets of criteria (also termed heuristics) to establish whether the discussions conformed to
the pedagogies associated with constructivism, customization and creativity, as well as judging the
social aspects of the experience.
Table 1: Profiles of the expert evaluators

Evaluator | Occupation | Expertise | Involvement in Group
A | Researcher | Evaluation; eLearning; heuristic evaluation (HE) | No involvement
B | Lecturer and postgraduate student | eLearning environments; HE | Member and active contributor
C | Usability practitioner | Usability evaluation; eLearning websites; HE | Administrator
D | IT professor | eLearning; HE; human-computer interaction | No involvement
Each expert evaluator performed his/her evaluation independently. All four were double experts,
namely experts both in eLearning and in heuristic evaluation. One was a student who had been a
member of the Group and another was the Group administrator. The evaluators did not see the off-
forum reflective essays (Study 1 data), but considered and evaluated the discourse on the Forum.

The evaluation template comprised four sets of criteria, twelve criteria in total, phrased as evaluation
statements and rated on a 5-to-1 Likert scale, where 5 was 'Strongly agree' and 1 was 'Strongly
disagree'. There were also spaces for evaluators to provide open-ended comments.

Table 2 tabulates the criteria against the quantitative results and is followed by a discussion on the
four factors evaluated.
Table 2: Results of the heuristic evaluation
(Each criterion was rated on the scale: Strongly agree (5), Agree (4), Neutral (3), Disagree (2), Strongly disagree (1).)

Category and criterion | Average rating
1. Constructivism
- The activities undertaken in the Group are highly constructivist. | 3.5
- Participants in discussions think independently and make personal interpretations. | 4.125
- Discussions moved beyond the curriculum and applied concepts in the real world. | 4.75
Cross-criterion average | 4.125
2. Customisation
- Participants can customise the time and place of their interactive learning. | 5.0
- The discussion forum is learner-centric in that participants could select and initiate their own topics for discussion and could contribute personal content. | 5.0
Cross-criterion average | 5.0
3. Creativity
- Academic discussions in the Forum represent an innovative way of using Facebook for learning purposes. | 5.0
- Participants responded to the Group environment in creative ways. | 4.25
Cross-criterion average | 4.625
4. Social climate of the Group
- Interaction on the Forum took place in a friendly and conducive environment. | 4.5
- The distance learners who joined the Group got to know each other. | 4.5
- The ethos of the Forum supported individual styles of participation. | 4.0
Cross-criterion average | 4.33
The way the Forum was managed resulted in a space that was: rigid/strictly controlled (1); firmly controlled (2); balanced and well moderated (3); led by students, with leaders on the side (4); led by students, with leaders hands-off (5). | 3.5
Rate the activities and discussions on a spectrum from Solely Social (1) to Serious Studies (5). | 3.5
Constructivism involves personal goals, knowledge construction and interpretation, and multiple
perspectives on issues. Constructivist learning is characterised by active learning, independent
research, collaboration, application to authentic tasks, and real-world situated learning. Customisation
entails learner-centricity and adaptability, allowing learners to take initiative regarding (some of) the
content, foci and circumstances of learning. Creativity is characterized by innovation within
functionality and by engagement and motivation of learners.

There was close consensus between the four evaluators' ratings. Differences between ratings
assigned to particular criteria never exceeded 1. Table 2 shows the average rating assigned to each
criterion, as well as the cross-criterion average for each factor. In investigating the implementation of
constructivism, evaluators acknowledged the social-constructivist nature of interactions on the Forum.
The nature of the Forum provided scope for participants' personal insights and independent
interpretations, and encouraged the application of theoretical concepts to real-world phenomena
beyond the curriculum. The cross-criterion average rating for constructivism was 4.125. Customisation
of learning was unanimously rated at 5.0, since participants could choose the time and place of their
activities, while learner-centricity allowed them to initiate topics and match their needs by contributing
(or not) in their preferred style. With regard to creativity, the expert evaluators' cross-criterion average
was 4.625. They felt that Facebook provided a novel and engaging environment for learning in a
social context. It was supportive in that the learning occurred in an environment that was attractive,
friendly and familiar to most of the Group. The atmosphere fostered innovative strategies, such as
posting links to academic articles and communicating one-on-one off-Forum.
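As a worked illustration of the aggregation behind Table 2, the following minimal Python sketch computes per-criterion and cross-criterion averages from four evaluators' ratings, plus the consensus check that no per-criterion spread exceeded 1 point. The individual scores are hypothetical, since the paper reports only the averages.

    # Hypothetical ratings: criterion -> one 1-5 score per evaluator.
    # The individual scores are invented; only the averaging procedure
    # mirrors the way Table 2 was compiled.
    ratings = {
        "highly constructivist": [3, 4, 3, 4],
        "independent interpretation": [4, 4, 4, 5],
        "beyond the curriculum": [5, 5, 4, 5],
    }

    def average(xs):
        return sum(xs) / len(xs)

    criterion_averages = {c: average(r) for c, r in ratings.items()}
    cross_criterion = average(list(criterion_averages.values()))

    # Consensus check reported in the text: the per-criterion spread
    # between evaluators never exceeded 1 point.
    max_spread = max(max(r) - min(r) for r in ratings.values())

    print(criterion_averages)            # per-criterion averages
    print(round(cross_criterion, 3))     # cross-criterion average
    print(max_spread <= 1)               # True if consensus holds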

Ratings on the Forum's social climate averaged 4.33. Evaluators regretted that participation was not
higher, but in their open-ended responses they summarized this academic venture on Facebook as a
very positive experience and a novel way of using social media, where students got to know each
other academically. The expert evaluators recognised the community as a platform of trust with
positive, energetic vibes, and enough moderation to ensure correct feedback without dampening the
student voice. In evaluating management and facilitation, two evaluators selected 3 and two
assigned 4, thus averaging 3.5 and indicating a well-moderated, yet primarily student-led Forum.
Similarly, on the spectrum between 'solely social' and 'serious studies', two chose 3 and two chose
4, again averaging 3.5: right of centre, indicating a sound balance but stronger on the academic
aspects.
6. Conclusions
This section summarizes the findings by concisely re-visiting the research question and by
highlighting findings that contribute to new knowledge about ODFs on social networks.

Data collection and analysis involved data triangulation and methodological triangulation. The findings
of two studies, namely, qualitative analysis of free-text data from students' reflective essays and
heuristic evaluation by experts of contributions to the discussion forum, confirmed each other, and
thus provided a positive answer to the research question:

Did the venture serve both social- and study-related pursuits in a synergistic manner?

Synergy results when the combination of factors produces a joint impact greater than the sum of their
separate effects. This Facebook venture was indeed synergistic, as students benefitted both socially
and academically in social-constructivist interaction. The social setting strengthened the academic
interactions while, conversely, academic discourse in the eLearning domain provided a bond that
related them socially as peers with similar interests.

Free-text essays articulating the students' own perceptions were analysed qualitatively, and forum
interactions were evaluated heuristically according to contemporary learning paradigms with human
values. Both sets of results indicated a harmonious social climate that fostered meaningful academic
discussions, as participants posted, responded, received feedback and gained insights that enhanced
their studies. Conversely, the study-related pursuits of research, interpretation, and discussions on
theoretical concepts, led to social negotiation and interpersonal connections. The supportive ethos
encouraged most members to be forthcoming, while others, feeling inadequate, experienced the
Group as a safe human environment for learning without contributing.

Moreover, the findings provide new information regarding the climate and culture that can be obtained
in an ODF on Facebook. Some important points are summarized:
The nature of discourse in the supportive Facebook environment emulated face-to-face
postgraduate contact, providing a perception that the distance-learners actually knew each other.
Participants' real-world personalities became evident as they exercised their individual
communication and learning styles within the virtual community.
There was an ethos of voluntary sharing, rather than a culture of academic competitiveness.
The environment was facilitated in a way that encouraged student-centricity, yet the forum was
effectively moderated when necessary.
This study showed a synergistic balance of social- and study-related aspects, conducive to studies
and to social engagement, but with a stronger focus on academia. Although not all the students in the
cohort joined or contributed actively, formerly isolated distance-learners in the Group became a
community of practice in the domain of eLearning. These findings should encourage academics to
establish groups and discussion forums on social networks.
References
Bosch, T.E. (2009). Using online social networking for teaching and learning: Facebook use at the University of
Cape Town. Communicatio 35(2), pp 185-200.
Creswell J.W. (2009). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. SAGE
Publications Inc.
de Villiers, M.R. (2010). Academic use of a Group on Facebook: Initial findings and perceptions. Proceedings of
the 2010 Informing Science and IT Education (InSITE) Conference. Cassino, Italy, June 2010.
Ebner, M. (2007). ELearning 2.0 = eLearning 1.0 + Web 2.0. Proceedings of the Second International
Conference on Availability, Reliability and Security (ARES'07), pp 1235-1239.
Hew, K.F. (2011). Students' and teachers' use of Facebook. Computers in Human Behavior 27, pp 662-676.
Jones, N., Blackey, H., Fitzgibbon, K. and Chew, E. (2010). Get out of MySpace! Computers & Education 54, pp
776-782.
Lampe, C., Ellison, B.N. and Steinfield, C. (2008). Changes in use and perception of Facebook. Proceedings of
CSCW'08, pp 721-730, San Diego, California, USA.
Madge, C., Meek, J., Wellens, J. and Hooley, T. (2009). Facebook, social integration and informal learning at
university: 'It is more for socializing and talking to friends about work than for actually doing work'. Learning,
Media and Technology 34(2), pp 141-155.
Mazman, S.G. and Usluel, Y.K. (2010). Modeling educational usage of Facebook. Computers & Education 55: pp
444-453.
Ractham, P. and Firpo, D. (2011). Using social networking technology to enhance learning in higher education: A
case study using Facebook. Proceedings of 44th Hawaii International Conference on System Sciences
2011.
Rambe, P. and Ngambi, D. (2011). Towards an Information Sharing Pedagogy: A Case of Using Facebook in a
Large First-Year Class. Informing Science: The International Journal of an Emerging Transdiscipline, 14, pp
61-89.
Selwyn, N. (2009). Faceworking: exploring students' education-related use of Facebook. Learning, Media and
Technology 34(2), pp 157-174.
Visagie, S. and de Villiers, C. (2010). The consideration of Facebook as an academic tool by ICT lecturers across
five countries. Proceedings of SACLA 2010, Conference of the South African Computer Lecturers' Association,
South Africa.
Evaluating the Process of Delivering Compelling Value
Propositions: The Case of Mobile Payments
Denis Dennehy, Frederic Adam and Fergal Carton
Business Information Systems Department, University College Cork, Ireland
d.j.dennehy@umail.ucc.ie
fadam@afis.ucc.ie
f.carton@ucc.ie

Abstract: The provision of mobile phone-based payments (m-payments) services to the general public requires
the cooperation of a number of specific stakeholders, each contributing part of the overall solution, but also each
with their different motives, resources and capabilities to deliver compelling value propositions to consumers. The
need to combine these multiple perspectives makes exploring the requirements for the design and
implementation of m-payment initiatives a complex activity. As a result, sustainable economic business models
have yet to emerge for m-payment scenarios. Over the last 15 years, the business model concept has been
amply demonstrated to be a very useful tool for understanding how to design commercially viable offerings. The purpose
of this research is to theorise the linkages between the numerous elements of an m-payment business model and
to evaluate the process whereby stakeholders in an m-payments ecosystem create, exchange and deliver an m-
payment solution. By leveraging the business model concept and the process modelling techniques associated
with it, the research represents the various stakeholders, in an economic value context, by identifying the role
played by each stakeholder and the share of the profit which they expect in return for their contribution. The study
is unique in that it tracks the activities and decisions of stakeholders involved in a real-world m-payment initiative
from concept stage through to launch stage. As a result, the study provides new insights into the complex and
sensitive issues that need to be considered by practitioners, while also providing researchers with a balanced
and holistic perspective to a complex phenomenon. It also leverages the business model concept to present
detailed models of the m-payment solution implemented in the case study. The preliminary data which we
abstracted from the case has validated the relevance of the research questions and will be a valuable
contribution to the future creation of an m-payment consortium.

Keywords: mobile payments, M-payments, business model, business process, value networks, value webs
1. Introduction
Due to the fundamental attributes of mobile business (i.e. anytime, anywhere, on any device),
organisations are increasingly leveraging such enabling technologies to create value, to support
mobile users (i.e. employees, consumers) or mobile activities (i.e. tracking materials or products), to
improve their operations and subsequently to increase their competitive advantage and financial profit
(Coursaris et al., 2006). For example, financial institutions are investing in new payment systems in
order to reduce their operating costs, generate new revenue via new fees, develop their customer
relationships and gain early-mover advantage (Dahlberg et al., 2008; FINsights, 2008).

However, if organisations intend to exploit the attributes unique to mobile business, they will need to
do so "within the context of effective business models that clearly articulate a compelling value
proposition for their employees and customers while addressing their various concerns" (Coursaris et
al., 2006, p.7). To create value, organisations will need to learn from past business failures and
develop sustainable business models using conventional key performance indicators (KPIs) and not
rely on advertising and branding alone (Cassidy, 2003).

It is unclear why mobile services have not lived up to the hype or the expectations promised by their
proponents (Damsgaard and Hedman, 2009); there are suggestions that the main issues are poor
revenue sharing amongst stakeholders (Ballon and Van Bossuyt, 2006) or static business models
being used in the complex environment which is required to deliver mobile services (Coursaris et al.,
2006). The uncertainty around establishing a sustainable economic business model that can be
agreed by the multiple stakeholders has been identified as a contributing factor for the delayed launch
of m-payment initiatives (Chaix and Torre, 2010). What has been established and is generally
accepted by researchers and practitioners is that because the context of every m-payment service will
be different, then every m-payment solution needs to be specifically customised to satisfy key actors
in the m-payment ecosystem (Ondrus and Lyytinen, 2011; FINsights, 2008). These actors include:
consumers, merchants, mobile network operators, mobile device manufacturers, financial institutions,
software and technology providers, and governments (Dahlberg et al., 2007).

Since no single actor can deliver an end-to-end m-payment service, the success of m-commerce
relies on partnerships, thus making partnership management a core competence that will enable
stakeholders in the ecosystem to form viable alliances and actor networks (Camponovo and Pigneur,
2002; Pigneur, 2002). Strong partnerships and alliances not only lead to high levels of trust and
cooperation but they also enable organisations to achieve market leadership which in turn increases
their market reach to co-opt consumers or suppliers within their value-network (Lewis et al., 2003;
Currie and Parikh, 2006). However, partnerships and alliances are just one strategic approach to
enhancing market leadership; too many partners without strategic market positioning could result in
weak or unrealistic partnerships, leading to business failure (Currie and Parikh, 2006).

While building alliances is one of the most important strategic approaches to creating value in e-
business, it is not the only approach for content and network providers to enter the market or increase
their competitive advantage (Camponovo and Pigneur, 2003). In order to increase their competitive
advantage, organisations will exploit their market position, negotiating power and access to critical
resources (Bouwman and Ham, 2003). In addition, access to key functions of billing and information
sharing is emerging as a critical success factor in the competition and development of sustainable
business models (ibid). For example, in the context of payment services, financial institutions have a
long tradition of cooperating with merchants whereas cooperating with telecoms and technology
vendors is a new experience for them (Dahlberg et al., 2008). Therefore, examining the actors' roles
is not sufficient; the relationships and interactions between the actors will also need to be assessed
(Camponovo and Pigneur, 2002; Pigneur, 2002).

Even though considerable research has already been conducted to better understand different
aspects of the m-payment phenomenon, undertaking this study answers the call for research that will
lead to an integrated view on m-payment business models (Pousttchi et al., 2009), as the success or
failure of previous m-payment initiatives was based on issues arising out of multiple perspectives
rather than a single perspective (Ondrus et al., 2005).
2. Value-webs
The value chain framework was initially intended to analyse traditional manufacturing industries only,
but in recent times it has been used to analyse the mobile industry. However, the framework needs to
consider other value configurations (i.e. value net) which better represent the mobile industry. Rather
than only focusing on infrastructure and activities, a more comprehensive analysis on elements such
as customer relationship, value proposition and partnerships can be achieved by applying the
business model concept (Camponovo and Pigneur, 2003).

In a complex value network or value-web (i.e. an m-payment ecosystem), where the organisations are
engaged in inter-organisational investments, they are connected through intended relationships and
interdependencies which involve considerable risks, problem solving and access to complementary
knowledge (Dahlberg et al., 2008; Bouwman and Ham, 2003). This complexity inevitably requires
such organisations to undertake a collective decision process (Bouwman and Ham, 2003). Unlike the
traditional static and linear value-chain, value-webs are flexible, and each stakeholder possesses
different capabilities and resources which, when combined, lead to innovative solutions (Moschella,
2003; Faber et al., 2004). As organisations shift from single-firm revenue generation to multi-firm
control and revenue sharing, not only are control and value issues of most relevance to business
modelling, but two key questions also emerge: "Who controls the value network and the system
design?" and "Is substantial value being produced by this model or not?" (Ballon, 2007, p. 2).

Nevertheless, these collective decision processes have a number of implications when compared to
internal processes: since no single partner has formal authority over another, they require prolonged
decision-making processes, as adjustments need to be discussed and jointly agreed; they demand
several rounds of negotiations; there are high costs involved; and there is the possibility of disputes
due to conflicting interests, which do not always result in a win-win outcome for all stakeholders
(Demkes, 1999; Klein-Woolthuis et al., 2005; Faber et al., 2004). There are three types of participants
in any new value network (see Table 1): at the core of the network are the structural partners, while
the contributing and support partners are loosely linked to the network (Bouwman and Ham, 2003).

Table 1: Partnership tiers (adapted from Bouwman and Ham, 2003)

Tier 1 (Structural): Partners provide essential and non-substitutable tangible and/or intangible assets
to the enterprise on an equity or non-equity basis. They play a direct and core role in making the
customer value assumption and in creating the business model.
Tier 2 (Contributing): Partners provide goods and/or services to meet requirements that are specific
to the enterprise, but otherwise they play no direct role in making the customer value assumption and
in creating the business model. If the assets they provide are substituted, the value assumption and
the business model could still stand.
Tier 3 (Support): Partners provide generic goods and services to the enterprise, without which the
enterprise would not be viable, but which otherwise could be used in connection with a wide variety
of value assumptions and business models.
Table 1 is significant as it indicates that all partners have a role in a value-creating network, whether playing an important role and influencing the shape of the network or playing a minor role and being shaped by it; such a network inevitably requires partnership management (Galbreath, 2002; Kothandaraman and Wilson, 2001). Adopting the partnership tier may be beneficial in order to overcome the symptoms of negative dynamics between some actors which has resulted in "misjudged resource strengths, complementary assets, and market size evaluations" in previous m-payment initiatives (Dahlberg et al., 2008, p. 9).

Cooperation in a value-web is challenging. There is evidence that organisations experience significant difficulties in attaining mutual benefits from co-operation: each partner may be pursuing strategic goals that differ from those of the co-operation, which may lead to hiding the truth or acquiring sensitive information from partners; because the partners come from different industries (e.g. retailers, network providers), their diversity could disrupt the ecosystem; and revenue sharing raises further issues (Faber et al., 2004; Ballon, 2007). These challenges partly explain why the success of m-payment platforms remains hampered by recurring and fundamental social, institutional and business challenges that require a multi-level and multi-perspective holistic approach, as such an approach provides a richer picture of the phenomenon (Gao and Damsgaard, 2007; Ondrus and Lyytinen, 2011; Currie and Parikh, 2006; Dahlberg et al., 2008). This means that researching m-payment adoption issues without assessing the institutional and business context will not provide sufficient explanations of a complex, networked technology such as m-payments (Zmijewska and Lawrence, 2005; Ondrus and Lyytinen, 2011).
3. Leveraging the business model concept
Academics have increasingly given attention to developing the business model concept by defining business models, examining their components, classifying them into categories and, more recently, focusing on representations or developing descriptive models. Yet there is a paucity of research that theorises the linkages between the variables of the numerous business model domains (e.g. service, finance, organisation, technology) and, more specifically, the business models used by networked organisations (Faber et al., 2004; Bouwman and Ham, 2003).

When assessing the role of different stakeholders in the m-commerce ecosystem, it is suggested to briefly and clearly describe their business models (Camponovo and Pigneur, 2002). Business models also offer a high level of abstraction, which is the correct starting point when creating or redefining business processes, rather than analysing the business processes themselves (Weigand et al., 2006). Business models and process models provide different support for the decisions and requirements of different stakeholders. By commencing a project with a business model design, it states "what is offered by whom to whom", rather than "how" these value-creating activities are selected, negotiated, contracted and fulfilled operationally, as is explained by a process model (Gordijn et al., 2000, p. 1).

A business model represents the interplay between multiple industries (Chesbrough and Appleyard, 2007) and is beneficial in determining the underlying logic that explains how an organisation creates and delivers value to its customers while also capturing returns from that value (Magretta, 2002; Shafer et al., 2005). A business model can be an influential tool for analysing, implementing and communicating strategic choices (Magretta, 2002; Shafer et al., 2005). As the functions of a business model are to articulate the value proposition, identify a market segment, define the structure of the value chain and estimate the cost structure and profit potential (Chesbrough and Rosenbloom, 2002), business models are an important locus of innovation and a critical source of value creation and
competitive advantage for an organisation, its suppliers, partners, and customers (Amit and Zott,
2001; Mitchell and Coles, 2003).

Although the business model concept has been criticised for its "murky" definitions and loose conception of how a company does business and generates revenue (Porter, 2001, p. 13), the concept can be strengthened by aligning it with established theories (e.g. innovation management, strategic management, resource-based theory) that also deal with control and value creation issues in a network (Ballon, 2007). Nevertheless, business models have an important role in business practice (Bodker et al., 2009): a good business model is essential for any organisation as it answers fundamental questions such as "Who is the customer?" and "What does the customer value?" (Magretta, 2002), and it is a vital source of value creation for an organisation's suppliers, partners and customers, as well as for innovation (Amit and Zott, 2001).

In the context of explaining mobile business, there is extensive knowledge available on descriptive business models and value systems, but there is a lack of models (i.e. causal models) which explain the viability and feasibility of business models, as well as a lack of case-related analyses and cross-sectional data (Bouwman and Ham, 2003). Aligning the business model with the market environment as well as the industry environment is crucial (Ondrus et al., 2009). Yet a majority of managers find the business model concept difficult, as they either do not understand their current model well enough to know when it needs changing, or do not know how to make that change (Chesbrough and Schwartz, 2007; Johnson et al., 2008). Business models in a multi-sided network are about getting stakeholders on board, balancing their respective interests and locking consumers into single or multiple platforms (i.e. multi-homing) (Ballon et al., 2008). When managers consciously operate from a model of how the entire business system will work, every decision, initiative, and measurement provides valuable feedback (Magretta, 2002).

Business model innovation is the discovery of a fundamentally different business model in an existing business; business model innovators redefine an existing product or service and how it is delivered to the customer (Markides, 2006). Although the ultimate aim of designing a business model is to create sufficient economic and customer value, the challenge is that this requires connecting and balancing design choices in different business model domains while taking into account technical, market and legal developments (Faber et al., 2004). In many instances, the customer value envisaged in the initial design of a business model has little to do with the value as perceived by the customer (e.g. the end-user), as the latter depends on the customer's personal and consumption context (Chen and Dubinsky, 2003; Wieringa and Gordijn, 2005). By understanding the critical design issues in business models and their interdependencies, rather than identifying relatively easy prescriptions, new insights can be brought into the design of balanced business models, as knowledge of how to effectively balance requirements and strategic interests is limited in the business model literature (Faber et al., 2004).

By adopting the Osterwalder et al. (2005) business model concept and aligning the research questions to each pillar (see Table 2), the authors believe that a multi-level and multi-perspective understanding of the design and delivery of an m-payment initiative can be achieved. As such, the business model concept can be used as a vehicle for innovation and also as a subject of innovation (Zott et al., 2011). From a practical perspective, the Osterwalder et al. (2005) business model provides very strong support to this research project as it proposes a complete set of elements, relationships and vocabulary to describe and analyse a business model (Pousttchi et al., 2009).
Table 2: Research questions within the business model concept

Research Question | Pillar | Building Block of Business Model
RQ 1: How does the adoption of an NFC system change the value propositions offered by service providers? | Product | Value Proposition
RQ 2: How does the adoption of an NFC system change the customer interface? | Customer Interface | Target Customer; Distribution Channel; Relationship
RQ 3: How does the adoption of NFC systems change the infrastructure management of the m-payments network? | Infrastructure Management | Value Configuration; Core Competency; Partner Network
RQ 4: How can m-payment service providers agree to share the costs and revenues associated with the delivery of an m-payment? | Financial Aspects | Cost Structure; Revenue Model
The research questions are designed to enable the researchers to examine the effect of NFC-enabled payment systems on business model development. We are applying these questions to a case study of the development and execution of an m-payment trial on a university campus. The method employed by the researchers to investigate the research questions is presented in the next section.
4. Case study
The data required to achieve the research objective and answer the associated research questions will be acquired from a real-world m-payment project taking place on the campus of University College Cork (UCC) with 250 students as participants. The pilot m-payment project is being facilitated by the Financial Services Innovation Centre (FSIC) in UCC, in collaboration with a leading mobile phone network provider and other stakeholders (i.e. integration partners) from the retail and payments industries. The integration partners in the project include the MNO, the handset and operating system manufacturer, the SIM card manufacturer and SIM card integration team, the mobile wallet application developers, the funding account and card issuer, the payment transaction processor, the NFC terminal provider, and IT technicians from the host university.

The data will be generated in the form of focus group interviews with the participants and face-to-face interviews with the retailers and the other stakeholders. The project will go live in mid-2012 and interviews will be carried out at the pre-launch, mid-launch, and late-launch phases of the project. To date, preliminary data has been gathered and the findings have been useful in guiding the overall direction of the pilot project and in validating the relevance of the research objective and research questions.
5. Research method
Adopting an exploratory research approach is appropriate for this study as it is particularly useful in highly innovative industries, as well as for developing a better understanding of the business problem (i.e. how to deliver compelling value propositions to consumers) by discovering new relationships or patterns (Hair et al., 2007). Further, the use of case studies permits the researcher to examine the phenomenon in its natural setting by employing multiple methods of data collection to gather information from the different stakeholders, with the goal of acquiring a rich set of data (Benbasat et al., 1987; Denzin and Lincoln, 2000).

Case studies are also suitable for researching an area in which there is a paucity of research and for finding answers to "how" and "why" questions (Franz and Robey, 1984; Benbasat et al., 1987). The benefit of employing a variety of data collection techniques is that collectively they offset the limitations specific to each individual technique while also addressing possible anomalies (Gallivan, 1997; McGrath, 1984) and providing the opportunity to triangulate findings, thus reinforcing the conclusions of the study (Kelliher, 2005; Benbasat et al., 1987).

The study will use focus groups and face-to-face interviews to provide a rich set of data, as well as to capture the contextual complexity of m-payments (Yin, 1984; Benbasat et al., 1987; Remenyi and Williams, 1995). Focus group interviews are suitable when the goal of qualitative research is to generate theoretical ideas and hypotheses to be verified with future quantitative research (Calder, 1977), and they have been employed in previous mobile service studies (Jarvenpaa and Lang, 2005; Garfield, 2005). Participants will be selected from naturally formed groups (i.e. a college class), as such groups tend to be more relaxed and at ease in conversation (Bryman, 2001). Focus group interviews will also enable the researcher to gather large amounts of data quickly and provide multiple perspectives (Wilkinson, 2004). Focus groups are more naturalistic than face-to-face interviews, as they generally include a variety of communicative processes such as storytelling, disagreement, humour and cajoling amongst the participants (ibid). Due to the sensitive commercial aspects of this pilot project, face-to-face interviews will also be employed in order to reveal issues specific to the various commercial stakeholders (i.e. retailers, mobile network operator (MNO), and financial acquirer) involved in the project.
6. Preliminary findings
The university campus would appear to be an ideal environment in which to launch an m-payment service due to the "village effect": technology-savvy consumers (i.e. BIS students and staff), merchants in the retail and restaurant business with over thirty points-of-sale (POS), a wireless campus environment, and a host actor (FSIC), all within close proximity to each other.
Nevertheless, a number of unanticipated challenges emerged which delayed the launch date for the project twice. While technological issues were identified as the fundamental reason for the delay, other issues that threatened the roll-out of the NFC system included managing the partners in the value network, while the issue of project costs and transaction costs remained the elephant in the room. For example, interviews with the merchants revealed that they anticipate the potential of m-payments to reduce the time and cost required to manage physical cash, as well as to achieve faster throughput at peak service times (e.g. 12pm - 2pm). However, merchants expressed a high level of commitment to the project only on condition that they did not incur the cost of installing the NFC-enabled terminals at the thirty-three points-of-sale located on campus. Yet, even at the pre-launch phase, cost issues remained unelaborated. These issues are specific to research questions three and four respectively: "How does the adoption of NFC systems change the infrastructure management of the m-payments network?" and "How can m-payment service providers agree to share the costs and revenues associated with the delivery of an m-payment?" Specific to research question three, the researchers have identified that infrastructure management is a central issue that requires diplomacy, coordination and a shared terminology amongst the integration partners.

Key to the adoption by merchants of this new payment channel are the associated interchange costs. Negotiations between the payment processor and the merchants have been characterised by an extremely stilted discussion, which highlights the lack of experience in micro-payments among the acquirers of the transactions. Average values per transaction at a campus POS are between €2 and €3. Certain known-value confectionery items are sold at minimal cost compared to high street prices, with margins of 5-8%. When the payment processors consider applying standard acquisition fees to this level of purchase (for example, 14 cents per transaction), it can be seen that the merchant margin is wiped out. Furthermore, the merchant must open a merchant ID account with the acquirer for a one-off fee of €250, and additionally pay €7 per terminal per year for support.
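
To make the arithmetic concrete, the sketch below (a minimal illustration in Python, using the figures quoted above; the 6% margin is an assumed mid-point of the reported 5-8% range) shows how a flat acquisition fee consumes almost the entire gross margin on a typical campus transaction:

    # Illustrative margin calculation based on the figures reported above.
    avg_transaction = 2.50   # EUR; mid-point of the 2-3 EUR campus POS average
    margin_rate = 0.06       # assumed mid-point of the reported 5-8% margin
    acquisition_fee = 0.14   # EUR per transaction; example fee from the text

    gross_margin = avg_transaction * margin_rate   # 0.15 EUR
    net_margin = gross_margin - acquisition_fee    # 0.01 EUR

    print("Gross margin per sale: %.2f EUR" % gross_margin)
    print("Net margin after fee:  %.2f EUR" % net_margin)

On these assumptions, the €0.14 fee absorbs over 90% of the €0.15 gross margin, before the one-off merchant ID fee and the annual terminal support charge are even considered.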

From the acquirers' point of view, merchant support and transaction fees are based on the traditional debit and credit card interchange rates available from the card networks. From the merchant viewpoint, however, the motivation to promote a new payment channel will not come from eating into the tight margins on sub-€15 items. On the contrary, both acquirers and card networks will need to reconsider the value proposition to merchants (and consumers) if critical mass is to be reached in m-payment adoption. These findings present an early indication that the cultural barriers to new business model development are significant, with players inevitably burdened by their inherited perceptions of customer value propositions.

Such issues demonstrate the need to identify how the stakeholders perceive themselves and the other stakeholders within the value network, in the context of a partnership tier framework. The absence of such a framework may partly explain why there was no explicit lead actor in the project, or why the stakeholders, who are established players in their own industries, engaged in a project without having a clear business model from the outset to ensure a win-win outcome for all stakeholders.

From the consumers' perspective, the authors have gathered additional data by carrying out a number of focus group interviews at the pre-launch stage, where participants (i.e. consumers) were invited to discuss the value propositions being offered by the use of the NFC-enabled phone and the service providers (i.e. research question 1). Key themes that were discussed included: their understanding of the m-payment concept, the value propositions that would entice them to a) migrate to and b) remain with a new MNO, and the barriers that would prohibit them from participating in the trial. Participants strongly favoured customer loyalty schemes that offered a range of options, instant or short-term rewards, and the ability to use the rewards with other branded goods and services. Key barriers to participating in the project included: top-up charges, the use of a low-end smartphone due to its limited functionality, and the fear of technical errors at the POS.

Unfortunately, the project was officially terminated by the MNO in May this year, as the testing phase highlighted a number of customer experience issues that could not be addressed in time for a third launch attempt. Nevertheless, the MNO honoured its commitment to provide the registered participants with a smartphone, as agreed from the outset. The data gathered at the pre-launch phase suggests that there is a need for the creation of a common language (i.e. an ontology) and the development of a visualisation tool in order to support multiple stakeholders in addressing key
stakeholder issues at the early stage of forming a value network, in conjunction with the envisaged business model. This new insight has prompted the authors to refine their face-to-face interview questions with the stakeholders in order to acquire a retrospective and multi-perspective understanding of the complex m-payments phenomenon.
References
Amit, R. and Zott C. (2001). Value creation in e-business. Strategic Management Journal 22: 493-520.
Ballon, P. (2007). Business modelling revisited: the configuration of control and value. info 9: 6-19.
Ballon, P. and Van Bossuyt M. (2006). Comparing business models for multimedia content distribution platforms.
IEEE, 15-15.
Ballon, P., Walravens N., Spedalieri A. and Venezia C. (2008). The reconfiguration of mobile service provision:
towards platform business models.
Benbasat, I., Goldstein D.K. and Mead M. (1987). The Case Research Strategy in Studies of Information Systems. MIS Quarterly 11: 369-386.
Bodker, M., Gimpel G. and Hedman J. (2009). Smart Phones and Their Substitutes: Task-Medium Fit and
Business Models. Mobile Business, 2009. ICMB 2009. Eighth International Conference on. 24-29.
Bouwman, H. and Ham E. (2003). Designing metrics for business models describing Mobile services delivered by
networked organisations. Citeseer.
Bryman, A. (2001). Social science research methods. Oxford: Oxford University Press.
Calder, B.J. (1977). Focus groups and the nature of qualitative marketing research. Journal of Marketing
Research: 353-364.
Camponovo, G. and Pigneur Y. (2002). Analyzing the actor game in m-business.
Camponovo, G. and Pigneur Y. (2003). Analyzing the m-business landscape. Annals of telecommunications 58:
59-77.
Cassidy, J. (2003). Dot.con: how America lost its mind and money in the Internet era: Harper Perennial.
Chaix, L. and Torre D. (2010). Different models for mobile payments.
Chen, Z. and Dubinsky A.J. (2003). A conceptual model of perceived customer value in e-commerce: A preliminary investigation. Psychology and Marketing 20: 323-347.
Chesbrough, H. and Rosenbloom R.S. (2002). The role of the business model in capturing value from innovation:
evidence from Xerox Corporation's technology spin-off companies. Industrial and corporate change 11: 529.
Chesbrough, H. and Schwartz K. (2007). Innovating business models with co-development partnerships.
Research-Technology Management 50: 55-59.
Chesbrough, H.W. and Appleyard M.M. (2007). Open innovation and strategy. California Management Review
50: 57.
Coursaris, C., Hassanein K. and Head M. (2006). Mobile Technologies and the Value Chain: Participants,
Activities and Value Creation. Mobile Business, 2006. ICMB '06. International Conference on. 8-8.
Currie, W.L. and Parikh M.A. (2006). Value creation in web services: an integrative model. The Journal of
Strategic Information Systems 15: 153-174.
Dahlberg, T., Huurros M. and Ainamo A. (2008). Lost Opportunity – Why Has Dominant Design Failed to Emerge for the Mobile Payment Services Market in Finland? IEEE, 83-83.
Dahlberg, T., Mallat N., Ondrus J. and Zmijewska A. (2007). Past, present and future of mobile payments
research: A literature review. Electronic Commerce Research and Applications 7: 165-181.
Damsgaard, J. and Hedman J. (2009). Mobile Services Revisited.
Demkes, R. (1999). Comet. A Comprehensive Methodology for Supporting Telematics Investment Decisions.
Denzin, N.K. and Lincoln Y.S. (2000). The discipline and practice of qualitative research. Handbook of qualitative
research 2: 1-28.
Faber, E., Haaker T. and Bouwman H. (2004). Balancing requirements for customer value of mobile services. 21-
23.
FINsights. (2008). Enterprise Payments. Technology Insights for the Financial Services Industry.
Franz, C.R. and Robey D. (1984). An investigation of user-led system design: rational and political perspectives.
Communications of the ACM 27: 1202-1209.
Gallivan, M. (1997). Value in triangulation: a comparison of two approaches for combining qualitative and
quantitative methods. Springer, 417.
Gao, P. and Damsgaard J. (2007). A framework for understanding mobile telecommunications market innovation:
A case of China. Journal of Electronic Commerce Research 8: 184-195.
Garfield, M.J. (2005). Acceptance of ubiquitous computing. Information Systems Management 22: 24-31.
Gordijn, J., Akkermans H. and Van Vliet H. (2000). Business Modelling is not Process Modelling. In Conceptual
Modeling for E-Business and the Web, ECOMO 2000. Springer.
Hair, J.F., Money A.H., Samouel P. and Page M. (2007). Research methods for business: John Wiley & Sons.
Hedman, J. and Kalling T. (2003). The business model concept: theoretical underpinnings and empirical
illustrations. European Journal of Information Systems 12: 49-59.
Jarvenpaa, S.L. and Lang K.R. (2005). Managing the paradoxes of mobile technology. Information Systems
Management 22: 7-23.
Johnson, M.W., Christensen C.M. and Kagermann H. (2008). Reinventing Your Business Model.
Kelliher, F. (2005). Interpretivism and the pursuit of research legitimisation: an integrated approach to single case
design. Electronic Journal of Business Research Methods 3: 123-132.
80

Denis Dennehy, Frederic Adam and Fergal Carton
Klein-Woolthuis, R., Hillebrand B. and Nooteboom B. (2005). Trust, contract and relationship development.
Organization Studies 26: 813-840.
Lewis, W., Agarwal R. and Sambamurthy V. (2003). Sources of Influence on Beliefs about Information Technology Use: An Empirical Study of Knowledge Workers. MIS Quarterly 27: 657-678.
Magretta, J. (2002). Why business models matter. Harvard Business Review 80: 86-93.
Markides, C. (2006). Disruptive Innovation: In Need of Better Theory*. Journal of Product Innovation
Management 23: 19-25.
McGrath, J.E. (1984). Groups: Interaction and performance: Prentice-Hall Englewood Cliffs, NJ.
Mitchell, D. and Coles C. (2003). The ultimate competitive advantage of continuing business model innovation.
Journal of Business Strategy 24: 15-21.
Moschella, D. (2003). Customer-driven IT. Harvard Business School Press, Boston.
Ondrus, J. and Lyytinen K. (2011). Mobile Payments Market: Towards Another Clash of the Titans? Mobile
Business (ICMB), 2011 Tenth International Conference on. 166-172.
Ondrus, J., Lyytinen K. and Pigneur Y. (2009). Why mobile payments fail? Towards a dynamic and multi-
perspective explanation. IEEE, 1-10.
Pigneur, Y. (2002). An Ontology for m-business models. Conceptual Modeling – ER 2002: 3-6.
Porter, M.E. (2001). Strategy and the Internet. Harvard Business Review 79: 62-79.
Pousttchi, K., Schiessler M. and Wiedemann D.G. (2009). Proposing a comprehensive framework for analysis
and engineering of mobile payment business models. Information Systems and E-Business Management 7:
363-393.
Remenyi, D. and Williams B. (1995). Some aspects of methodology for research in information systems. Journal
of Information Technology 10: 191-201.
Seddon, P.B. and Lewis G.P. (2003). Strategy and business models: What's the difference. Citeseer.
Shafer, S.M., Smith H.J. and Linder J.C. (2005). The power of business models. Business horizons 48: 199-207.
Weigand, H., Johannesson P., Andersson B., Bergholtz M., Edirisuriya A. and Ilayperuma T. (2006). On the
notion of value object. Springer, 321-335.
Wieringa, R.J. and Gordijn J. (2005). Value-oriented design of service coordination processes: correctness and
trust. ACM, 1320-1327.
Wilkinson, S. (2004). Focus Group Research. Qualitative research: Theory, method and practice: 177.
Yin, R.K. (1984). Case study research: design and methods. 2003. Applied Social Research Methods Series 5.
Zmijewska, A. and Lawrence E. (2005). Reshaping the framework for analysing success of mobile payment
solutions.
Zott, C., Amit R. and Massa L. (2011). The Business Model: Recent Developments and Future Research. Journal
of Management 37: 1019.


Using Bricolage to Facilitate Emergent Collectives in SMEs
Jan Devos¹, Hendrik Van Landeghem² and Dirk Deschoolmeester³
¹Ghent University, Faculty of Engineering and Architecture, Campus Kortrijk, Belgium
²Ghent University, Faculty of Engineering and Architecture, Department of Industrial Management, Gent, Belgium
³Ghent University, Faculty of Economy and Business Administration, Gent, Belgium
jan.devos@howest.be
hendrik.vanlandeghem@ugent.be
dirk.deschoolmeester@ugent.be

Abstract: Starting a new business is often done in a realm of improvisation when resources are scarce and the business horizon is far from clear. Strategic improvisation occurs when the design and execution of novel activities unite. We conducted an investigation of so-called emergent collectives in the context of a small and medium-sized enterprise (SME). Emergent collectives are networks of information nodes with minimal central control, largely governed by a protocol specification, where people can add nodes to the network and have a social incentive to do so. We considered here the emergent collectives around an enterprise resource planning (ERP) package and a customer relationship management (CRM) package in two open source software (OSS) communities. We investigated how the use of bricolage in the context of a start-up microenterprise can facilitate the adoption of an information system (IS) based on emergent collectives. Bricolage is an improvisational approach that allows learning from concrete experience. In our case study we followed the inception of a new business initiative up to the implementation of an IS, over a period of two years. The case study covers the usefulness of bricolage both for strategic improvisation and for entrepreneurial activity in a knowledge-intensive new business. We adopted an interpretative research strategy and used participatory action research to conduct our inquiry. Our findings lead to the suggestion that emergent collectives can be moulded into a usable set of IS resources applicable in a microenterprise. However, success depends heavily on the ICT managerial and technological capabilities of the CEO and his individual commitment to the process of bricolage. Our findings also show that open ERP and CRM software are not a passing fad. These emergent collectives will not take over from proprietary ERP and CRM software all of a sudden, but clearly the rules of the game are slowly changing due to the introduction of new business models. The study contributes to the research on OSS as emergent collectives, bricolage and IS adoption in SMEs.

Keywords: SMEs, bricolage, emergent collectives, open software, ERP, CRM, IS adoption
1. Introduction
Starting a new business is often done in a realm of improvisation when resources are scarce and the business horizon is far from clear. Start-ups and small and medium-sized enterprises (SMEs) often adopt information technology (IT) and information systems (IS) in order to facilitate the start-up. However, adopting IT/IS into an embryonic organizational structure that lacks rigid business processes is a complex and risky task. Many investments in IT/IS, such as Enterprise Resource Planning (ERP) or Customer Relationship Management (CRM), outsourced as well as in-sourced, never fully reach the intended objectives and are therefore considered unsuccessful. Despite our knowledge of IT/IS implementation, a lot of IT projects still fail (Avison et al. 2006, Devos et al. 2008, Bharadwaj et al. 2009, Group 2004). Past and recent research has also revealed that SMEs tend to lean strongly on external expertise for IT adoption (Thong et al. 1996, Dibbern and Heinzl 2009). IT outsourcing greatly increases the complexity of governing these endeavours and brings in new risks and burdens for IS success (Aubert et al. 2005). Although SMEs have specific characteristics like organizational flexibility, limited span of control and fast decision-making, the development of internal resources and capabilities for IS adoption remains a critical problem because SMEs are resource constrained (Raymond 1985).

In this work we intend to build on and move beyond existing work to provide conceptual underpinnings for the study of bricolage applied to adopting IT in a start-up enterprise. Bricolage is an improvisational approach that allows learning from concrete experience. We highlight the tension between the dominant view of classical governance models rooted in control theory and the alternative approach of bricolage. We discuss both the concept of bricolage and how a bricolage-based arrangement might be used in the organizational context of an SME. We adopted an interpretative research strategy and used participatory action research (PAR) to carry out our inquiry.
We conducted an investigation of emergent collectives in the context of a start-up. Emergent collectives are networks of information nodes with minimal central control, largely governed by a protocol specification, where people can add nodes to the network. Petrie (2011) likens an emergent collective to an ant colony, in which behaviour and intelligence result from the rather mindless interactions of individual ants following simple protocols of interaction, producing qualitatively different global behaviour (Petrie 2011). The motivation for acting collectively lies in the capacity of the networks to scale and to increase value for the user. We considered here the emergent collectives around an ERP package and a CRM package in two open source software (OSS) communities. We formulated our research question as: how can the use of bricolage facilitate the adoption of emergent collectives in an entrepreneurial setting? In a real-life case we followed the inception of a new business initiative up to the implementation of an IS over a period of two years. The case study covers the usefulness of bricolage both for strategic improvisation and for entrepreneurial activity in a knowledge-intensive new business.

This paper is structured in five main sections, starting with this introduction. In the following section we review the recent literature on bricolage and IT. In the third section we elaborate on our research methodology, based on action research, and introduce the case study. In the fourth section we present the findings of our inquiry. Section five discusses the conclusions and implications of our work for academics and practitioners.
2. Bricolage
The concept of bricolage was introduced by the French anthropologist Lévi-Strauss in his book La pensée sauvage, published in 1962 and translated into English as The Savage Mind (Lévi-Strauss 1968). Bricolage is Lévi-Strauss's term for the mythical thinking of primitive people, who used a fixed set of ideas that they combined and recombined in different ways (Pohn 2003-2007). The word bricolage is French and has no precise equivalent in English. It can be translated as tinkering or playing/messing around. Lévi-Strauss uses bricolage as an analogy to spell out the processes underlying mythical thought (Duymedjian and Rüling 2010). The bricoleur is the handyman, tinkerer or do-it-yourselfer. It can be noted that the words bricolage and bricoleur apply to playing and refer to devious actions. Lévi-Strauss (1968) characterizes bricolage as the science of the concrete, as opposed to logical thinking grounded in (positivistic) science, and describes bricolage as a particular way of acting: doing things with whatever is at hand. The science of the concrete is characterized by a concern for exhaustive observation and the systematic inventorying of all elements, and relies on a highly developed mode of understanding based on intimacy with the concrete (Duymedjian and Rüling 2010). The bricoleur is not a craftsman, and bricolage does not proceed in a straightforward, linear and rational way. Instead, bricolage wanders from one thing to another and has a fragmented nature reflecting its affinity with play (Pohn 2003-2007). Being a bricoleur also means being a thinker-tinkerer, with a focus on the objects and materials instantly at hand to approach solutions for the problems faced (Coleman 2006). Bricolage is not very well articulated as a theory. Lévi-Strauss describes the process of bricolage through the role description of the bricoleur. In a dichotomous category the bricoleur is the opposed ideal-type of the engineer. From the seminal work of Lévi-Strauss, three constructs can be inferred to characterize bricolage: 1) repertoire, the material and immaterial resources that are collected independently of any particular project or utilization, 2) dialogue, the activity of assembling objects, and 3) outcome, which refers both to the process and its results (Duymedjian and Rüling 2010). Bricolage is related to improvisation, sensemaking, entrepreneurship and the work of technical systems (Duymedjian and Rüling 2010).

Bricolage was introduced in anthropology and found its way into cognitive sciences, Information Technology (Ferneley and Bell 2006, Johri 2011, DesAutels 2011, Ciborra 2002), Entrepreneurship (Phillips and Tracey 2007, Baker et al. 2003), Innovation Research (Fuglsang and Sorensen 2011, Banerjee and Campbell 2009), Information Sciences (Coleman 2006) and Organization Theory (Duymedjian and Rüling 2010, Weick 1998). In this work we elaborate on bricolage in IT. The pioneer of research on bricolage and IT is Claudio Ciborra (Ciborra 2002). Ciborra (2002) criticized the way strategic thinking about IT in organizations is often presented as a linear, top-down, rational and cognitive process. When put into use by practitioners, strategic planning becomes disassociated from its theoretical foundations. The trajectory from IT strategy formulation down to implementation is not an intentional process of design, but a chain of evolutionary processes that involve serendipity and muddling through elements of surprise. The analysis of Ciborra (2002) is compliant with the phenomenon of emergent collectives (Petrie 2011). His example of the early launch of the Internet is most compelling: "[...] ARPANET did not take off as expected and it was far from being an undisputed
success. What helped to transform a research network into the full-blown Internet was a myriad of hacks, surprises, and improvisations, mostly stemming from the users' environment, and the benevolent and tolerant ARPA project management practices" (Ciborra 2002). Ciborra (2002) introduces the concept of bricolage as an alternative to the systematic and procedural way of organizing and executing work. Bricolage, as opposed to the pre-planned way of operating, can be highly effective since it can fit the contingencies of the moment. Ciborra (2002) posits that information systems have a high degree of flexibility in their use, making them ideal for bricolage.

The resources at hand for IT bricolage are hardware and software artefacts. The IT bricoleur interacts with existing software by redesigning it, modifying it and adding new functionality, and in doing so new ways of using the software are explored. Although bricolage is commonly executed at an operational level, it is also experienced in strategic action. The IT bricolage approach is very similar to the activities that can be observed in the emergent collectives of open source software (OSS) communities (Ferneley and Bell 2006). OSS users as well as developers work intimately together on requirements and try them out by tinkering with the code, and in so doing useful software can emerge. Examples of such OSS are the communities of OpenERP (www.openerp.com) and Magento (www.magento.org), which were used in the case study.
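
To illustrate the kind of tinkering such communities invite, the sketch below (a minimal, hypothetical example in Python; the host, database name and credentials are placeholders) queries an OpenERP 6.x instance over its standard XML-RPC interface, the same interface through which a bricoleur can bolt new functionality onto the system:

    import xmlrpc.client  # xmlrpclib in the Python 2 environments of the OpenERP 6 era

    URL, DB, USER, PWD = "http://localhost:8069", "endoxa", "admin", "admin"  # placeholders

    # Authenticate against the common service to obtain a user id.
    common = xmlrpc.client.ServerProxy(URL + "/xmlrpc/common")
    uid = common.login(DB, USER, PWD)

    # Use the object service to search and read business records, e.g. customers.
    models = xmlrpc.client.ServerProxy(URL + "/xmlrpc/object")
    ids = models.execute(DB, uid, PWD, "res.partner", "search", [("customer", "=", True)])
    for partner in models.execute(DB, uid, PWD, "res.partner", "read", ids, ["name"]):
        print(partner["name"])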

The instantiation of the concept of bricolage is suggested by seven oxymorons (Ciborra 2002). These oxymorons represent a systematic approach for establishing a new organizational setting in which new systems can be adopted. The paradoxical reflections can provoke new ways of thinking and consideration. Each of the oxymorons constitutes a thinking frame that excludes forms of established organizational routines and existing control systems. We develop the oxymorons here as propositions of the theory of bricolage.

The first oxymoron is value bricolage strategically (VBS). The status of bricolage in an organization can balance between highly competent behaviour and incompetence. The bricoleur operates in a fuzzy work zone that offers liberty and allows experimenting with the choice of which resources at hand will be used. The solutions that come out of the process of bricolage need to be embedded in everyday experience and local knowledge as well as having a strategic impact. The second oxymoron is design tinkering (DT). Prototyping and experimentation must be facilitated through the arrangement of activities, settings and systems. Knowledge is generated through design and by creating actions, and actions are evaluated to build knowledge. The third oxymoron is establish systematic serendipity (ESS). A climate for unexpected solutions must be provided through the concurrency of conception, implementation and execution, which intermingle constantly. The fourth oxymoron is thrive on gradual breakthroughs (TGB). The emerging ideas and solutions must lead to managerial routines that help to bring the new venture to the level of a simple organisational structure (Mintzberg 1993). The fifth oxymoron is practise unskilled learning (PUL): unlearning the old ways of thinking and challenging incremental learning, while accepting the risk of behaving incompetently. The sixth oxymoron is strive for failure (SFF). Formative evaluation of failures can generate new ideas and designs; striving for excellence is the summative evaluation of successes and does not lead to innovation or change. Finally, the seventh oxymoron is achieve collaborative inimitability (ACI). The activities of bricolage are highly idiosyncratic, often latent, and not easy to imitate. For SMEs that want to remain agile and responsive to the business environment, this inimitability can be a vital source of competitive advantage; at the same time, collaboration, even with competitors, in developing strategic applications should not be avoided.
3. Research methodology
This research project aims at two goals: first, to answer a research question, and second, to fulfil a business need. Although the second goal is strictly not necessary to acquire scientific knowledge, it is part of our specific setup and research method. We therefore adopted PAR for our investigations, since we were dealing with a complex social system that cannot be reduced for meaningful study (Baskerville 1999). Action research aims to solve current practical problems while expanding scientific knowledge (Baskerville and Myers 2004). We have worked with practitioners in a well-chosen case study to solve an important practical problem: the adoption of an information system based on emergent collectives in an entrepreneurial start-up.

Since PAR was chosen as our research method, we take an interpretive stance in the research enquiry and do not aim at a broad generalization of the results. According to Baskerville (1999), action research implies the adoption of an idiographic viewpoint. Also,
the interpretative perspective of the research process aims at making sense of the phenomena under investigation. In the quest for an answer to our research question we brought the theory of bricolage to a deeper stage of development and understanding. The use of theory in our research is threefold: 1) as a guide to design, 2) as part of an iterative process of data collection and analysis, and 3) as a final product (Walsham 2006). The setting up and carrying out of fieldwork is the fundamental basis of any interpretative study (Walsham 2006, Klein and Myers 1999). All actions of the researchers and the CEO were documented in logbooks. The findings of our actions were coded from our descriptions in the logbooks. We used axial coding to relate the concepts in the descriptions to the theoretical propositions of the bricolage theory (Corbin and Strauss 2008).

For our research plan we drew on the action research process proposed by Baskerville (1999) and on the PAR used in the work of Street and Meister (2004). However, we also differentiated our research plan slightly according to the specific situation we dealt with. Action research consists of a cycle of five phases. Figure 1 illustrates the action research cycle, which can be performed as many times as needed to achieve a solution to the problem. We discussed within the research team the number of cycles and decided to conduct only two: a baseline analysis and an implementation cycle. The client-system infrastructure constitutes the agreement of our research environment. The structural action research cycle starts with the diagnosing phase, which identifies the primary problems and leads to the theoretical assumptions about the new organization. In this phase the researchers interviewed the CEO during several sessions to understand the past, present, and future use of IT and how the CEO thinks IT could be beneficial for the organization.

Figure 1: The action research cycle
The action planning phase contains the organizational actions that deal with the problems defined in the previous phase. The next phase, action taking, implements the planned actions. The intervention of the researchers is non-directive; change is sought indirectly in a process of cut-and-try. The test of the theoretical assumptions was done in the evaluating phase. The last phase is the specifying learning phase, in which lessons learned are derived.

Case study: Endoxa

Endoxa is a Greek word used by Aristotle to denote the tested beliefs of a community. The CEO of Endoxa discovered a business opportunity similar to its existing operations. It is obvious that new e-business initiatives, but also existing ones, suffer largely from a shortage of logistic capabilities. In this new venture, Endoxa is aiming to become more than a drop-shipping agent: rather, a network orchestrator of the complete cross-chain supply of delivering products to customers. To build the necessary supporting and enabling business processes for this new venture, Endoxa adopts the vision of emergent collectives of OSS and will combine existing technologies into a new strategic information system. Although Endoxa is a very small entrepreneurial enterprise, it is compliant with the five criteria defined by Mintzberg to constitute a minimal organization (Mintzberg 1983): there is direct supervision, little formalized behaviour, an organic structure, strategic planning, and the CEO formulates plans intuitively as an extension of his own personality.

Before the research project took off, the CEO of Endoxa was already involved in another research project on the nature of emergent collectives. As a serial entrepreneur, the CEO was in search of assistance to see how OSS could be of use in starting a new company. As researchers we had the chance to observe the take-off of a new enterprise, and this offered an excellent opportunity to enlarge our understanding of how IT/IS can be of critical importance in organisations. An agreement for a research partnership was formalized that stipulated the rights and obligations of the researchers, the CEO and his collaborators. The actions of the researchers were performed in an open way and aimed at a beneficial impact on the organization. All actions were done in close harmony with the CEO. The CEO provided the necessary knowledge to the researchers to be harvested in an academic inquiry. However, both parties had their own objectives. It was clear from the beginning that the CEO was aiming at a fruitful start-up for his enterprise, whereas the objectives of the researchers were spelled out in the research question; the design of the artefact was, for the researchers, only a means to an end. The theory or concept of bricolage was explained to the CEO; the concept of emergent collectives, however, was already very well known to him.
4. Findings
First, it was noticed that the phases of the action cycles do not always proceed in a linear and straightforward way. The organisational actions appear concurrently and are not always synchronised. It was the task of the researchers to shed light on the different actions and make an appropriate analysis of the findings. We summarize our findings in Table 1 for the baseline analysis cycle and in Table 2 for the implementation cycle.

During the first team meetings with the CEO, ten actions were identified, starting from an overall company mission that was spelled out in a proposal (BRIDEE) and submitted to a business school. The overall company mission was detailed in a competitor analysis, a market research plan, a business model and a financial model. Two specific actions, e-AirwayBill and polling quotes Fedex, were defined as nice-to-haves, but it was in no way certain that their implementation was feasible within Endoxa. The choice of OpenERP and Magento was fixed, and the organisational modelling was done with these two products as mission-critical systems. OpenERP was suggested as the back-office solution and Magento as the front-office solution. Finally, a project was set up for the acquisition of government subsidies for a new start-up initiative.

For the action planning, an agenda was set up to work on each of the actions during two days a week. The researchers worked alongside the CEO and the collaborators of Endoxa, and for each action a planning was made. The action planning and action taking are also shown in Table 1. The theory building is mainly inferred from the evaluating and learning phases. Overall, we found five actions that were classified as Value Bricolage Strategically (VBS). These five actions, the Mission Statement, the Competitor Analysis, the Market Research Plan, the Business Model and the Financial Model, can be seen as transforming stakeholders' needs into an actionable enterprise strategy. However, a top-down translation into specific goals is not feasible in every area of the enterprise. The actions were considered strategic but still subject to modifications and adaptations.

Two specific actions, e-AirwayBill and polling quotes Fedex, were classified as Design Tinkering (DT) and Practise Unskilled Learning (PUL). They involved designing something that was already in use. The lesson learned was that it is better to use what is already built and proven than to make something new. This is actually compliant with the resource constraints in SMEs.

The actions OpenERP and Magento were of major importance for Endoxa, and there was strong pressure from the CEO to strive for a breakthrough. This illustrates that an ERP system is mission-critical in SMEs and perceived as such by the CEO. The basic business processes like invoicing, general ledger, accounts receivable and payable, as well as the more strategic processes like tendering, sales, order entry and bidding, need to come together in one integrated system. Proprietary ERP software was not an option for Endoxa because of the costs. The actions were considered a match with the oxymoron Thrive on Gradual Breakthroughs (TGB). The striving for subsidies was already considered by the CEO as not feasible due to a shortage of manpower and administrative agility. Still, the action was kept open in the hope that a file could be submitted for a positive evaluation. This action was classified as Strive for Failure (SFF). The end of the first cycle and the start of the second cycle did not follow a linear trajectory: during the baseline cycle, actions were already defined for the implementation cycle. The implementation cycle was characterised by many more diagnosed actions, as can be seen in Table 2. The actions OpenERP and Magento were considered the most important actions of the cycle and were matched with five oxymorons of bricolage: VBS, DT, ACI, SFF and PUL.
5. Conclusions
In this paper we present the findings from participatory action research describing how bricolage can facilitate the adoption of emergent collectives, in the form of OSS, in a microenterprise. The use of PAR helped us to make our research more relevant to practice. We argue that our work differs from that of consultants: our theoretical perspective of bricolage was made clear in advance, before any action was taken in the organization. We mapped the practical actions onto the propositions of the theory of bricolage, operationalized by the oxymorons. The relationships between the elements of the created artefact were made more visible than previously during the actions, and our understanding of the constructs of bricolage has increased.

Our research has revealed the pivotal roles of the CEO in how IT/IS is adopted and implemented. A positive attitude of the CEO towards IT/IS was noticeable during the entire investigation period. This is compliant with previous research on the role of the CEO in the adoption of IT/IS (Cragg and King 1993, Thong et al. 1996). However, different roles of the CEO could be observed: first, the role of the owner-manager, who always kept a sharp eye on the profitability of the endeavour and the strategic focus. Second, the role of an individual high-end user who was intimately involved in the daily use of the software in all the implemented business processes. Third, the role of CIO and IT manager who steered the project of bricolage and utilized the mechanisms of IT project management, such as organizing steering committee meetings and communication sessions for the users, and documenting the actions and realizations of the project members.

From our findings it could be noticed that the process of bricolage sometimes got in the way of the daily business operations. Since Endoxa is a start-up this was not so important, but it indicates that the process of bricolage needs to settle at a moderate intensity to reduce the organisational turbulence and to refocus on organisational efficiency. It was already noticed by Ferneley et al. (2006) that IS bricolage is not without its dangers: the entropy of the IS can increase as changes are made, rendering the IS architecture unmanageable and inefficient. Bricoleurs, and certainly entrepreneurial bricoleurs, have to keep in mind that the process of bricolage should take place within the boundaries of a minimal organisational structure (Weick 1993). Also, at a certain point after the change process is established, a phase of entropy reduction is needed to allow the new systems to take off and the organisational turbulence to fade away.

It has been shown that OSS in ERP and CRM type application domains, where conventional wisdom says it is impossible to design from an open software perspective, holds a valuable promise. Many software project leaders would not dare to choose OSS in an entrepreneurial setting and would prefer proprietary software, stating that the quality of the latter is far superior to open source. Although we did not investigate that statement, in our empirical findings we found evidence that the development of an information system with OSS is certainly not a straightforward process, nor is the development process free from errors and flaws; but this is no different from proprietary software. By choosing OSS, the SME has avoided the vendor lock-in that all too often comes with the adoption of proprietary software, and has reduced the total cost of ownership of the information system. A rough estimate has revealed that the cost of implementing OpenERP is about the same as that of a feasibility study for a mainstream proprietary ERP vendor.
Table 1: The baseline analysis cycle

# | Diagnosing | Action Planning | Action Taking | Evaluating | Specifying Learning
1 | BRIDEE | Spelling out the mission statement | Submitting for a business school competition | VBS | Mission statement needs refinement but is not mandatory for a bottom-up bricolage approach.
2 | Competitors Analysis | List of three direct competitors was edited (big 3): Shipwire, Shipworks and Easyshipping | Investigation of the support of the web shop platforms of each of the big 3 | VBS | The obtained information was used as a benchmark for the own realizations.
3 | Market Research Plan | What are the questions that web shops have concerning their logistic processes | Offering a platform for the support of the logistic processes of web shops | VBS | Too fuzzy to be of real value.
4 | Business Model | A sound business model | Refining and adapting the business model to current insights and developments | VBS | Too fuzzy to be of real value.
5 | Financial Model | Calculating financial flows, cash flows, OPEX and CAPEX | Comparing the figures with partners and competitors | VBS | Too fuzzy to be of real value.
6 | e-AirwayBill | Visualize the XML structure of the transport documents | Testing | DT, PUL | Never build what is already built by others.
7 | Polling quotes Fedex | How to use Web Services in logistic processes | Test account with Fedex | DT, PUL | Never build what is already built by others.
8 | OpenERP | All business processes should be implemented in OpenERP | Adopting the full set of functions of OpenERP | TGB | The CEO had a strong belief in open software products and the use of OpenERP was mandatory.
9 | Magento | All logistics processes of the web shops should be implemented in Magento | Adopting the full set of functions of Magento | TGB | The CEO had a strong belief in open software products and the use of Magento was mandatory.
10 | Strive for Subsidies | Research on three levels: regional, national and European | Try to work through the rigorous government procedures for subsidies | SFF | Subsidy programs are not easily accessible for SMEs; the bureaucratic burden is too heavy.
Table 2: The implementation cycle

# | Diagnosing | Action Planning | Action Taking | Evaluating | Specifying Learning
1 | OpenERP | Scheduling of mailers | Feasible solution found in OpenERP | DT | OpenERP offers a solution
1 | OpenERP | Version upgrading | Upgrade v.6.0.3 to v.6.1 was successfully implemented | ACI | The upgrade to the latest version is a pioneering activity and is not yet followed by most competitors
1 | OpenERP | Geotags | Module is available, however not stable | SFF | Not all fancy tools are needed and useful
1 | OpenERP | eBay module, Inventory Management, Warehousing | Module is available, deployment is put on hold | PUL | Resources are constrained
1 | OpenERP | Extract Transfer Load; user and access rights | Implement | DT, VBS |
1 | OpenERP | The company OpenERP | Follow-up of the company OpenERP: company visits; design up to a workable system | | The SME organisation is still dependent on the evolution of the OpenERP OSS
2 | Magento | MagentoERPConnect (connection with OpenERP); dropshipping scenario | Installing, configuring and testing | DT | Assurance that Magento is of use
3 | Polling quotes FedEx | Feasibility study | Obtain shipping quotes from FedEx | DT, VBS | Connection to FedEx is of strategic importance
4 | Operations | Daily routines and procedures for backup and recovery | Deploy | DT | Operations can be implemented in an OSS environment
5 | IceCAT | Feasibility study | Installing and testing | SFF | A lot of offerings in OSS are of no use
6 | Bista Solutions | Alternative for the drop shipping module in OpenERP | Investigate the feasibility | DT, VBS | Multiple sourcing for the acquisition of IT
7 | Wiki | Documentation tool for the tools in the repertoire, not for the business processes (documentation for the business processes should be in OpenERP) | Structuring is needed to create real value | PUL | Documentation is a real problem for IS implementation projects
8 | Competition | ShipWire, ShipEasy, ShipWorks | Constant focus on their activities | VBS | Keep up with the pace of the competitors
References
Aubert, B. A., Patry, M. and Rivard, S. (2005) 'A Framework for Information Technology Outsourcing Risk
Management', The DATA BASE for Advances in Information Systems, 36(4), 9-28.
Avison, D., Gregor, S. and Wilson, D. (2006) 'Managerial IT unconsciousness', Communications of the ACM, 49(7), 89-93.
Baker, T., Miner, A. S. and Eesley, D. T. (2003) 'Improvising firms: bricolage, account giving and improvisational
competencies in the founding process', Research Policy, 32(2), 255-276.
Banerjee, P. M. and Campbell, B. A. (2009) 'Inventor bricolage and firm technology research and development',
R & D Management, 39(5), 473-487.
Baskerville, R. (1999) 'Investigating Information Systems with Action Research', Communications of the
Association for Information Systems, 2(1), 32.
Baskerville, R. and Myers, M. D. (2004) 'Special Issue on Action Research in Information Systems: Making IS Research Relevant to Practice - Foreword', MIS Quarterly, 28(3), 329-335.
Bharadwaj, A., Keil, M. and Mahring, M. (2009) 'Effects of information technology failures on the market value of
firms', Journal of Strategic Information Systems, 18(2), 66-79.
Ciborra, C. (2002) The Labyrinths of Information: Challenging the Wisdom of Systems, Oxford University Press,
USA.
Coleman, A. S. (2006) 'William Stetson Merrill and bricolage for information studies', Journal of Documentation,
62(4), 462-481.
Corbin, J. and Strauss, A. (2008) Basics of Qualitative Research 3e, Thousand Oaks, California: Sage
Publications.
Cragg, P. B. and King, M. (1993) 'Small-Firm Computing: Motivators and Inhibitors', MIS Quarterly, 17(1), 47-60.
DesAutels, P. (2011) 'UGIS: Understanding the nature of user-generated information systems', Business
Horizons, 54(3), 185-192.
Devos, J., Van Landeghem, H. and Deschoolmeester, D. (2008) 'Outsourced Information Systems Failures in
SMEs: a Multiple Case Study', Electronic Journal of Information Systems Evaluation, 11(2), 73-84.
Dibbern, J. and Heinzl, A. (2009) 'Outsourcing of Information Systems Functions in Small and Medium Sized
Enterprises: A Test of a Multi-Theoretical Model', Business & Information Systems Engineering, 1(1), 101-
110.
Duymedjian, R. and Ruling, C. C. (2010) 'Towards a Foundation of Bricolage in Organization and Management
Theory', Organization Studies, 31(2), 133-151.
Ferneley, E. and Bell, F. (2006) 'Using bricolage to integrate business and information technology innovation in
SMEs', Technovation, 26(2), 232-241.
Fuglsang, L. and Sorensen, F. (2011) 'The balance between bricolage and innovation: management dilemmas in
sustainable public innovation', Service Industries Journal, 31(4), 581-595.
Standish Group (2004) Third Quarter Research Report, The Standish Group International.
Johri, A. (2011) 'Sociomaterial bricolage: The creation of location-spanning work practices by global software
developers', Information and Software Technology, 53(9), 955-968.
Klein, H. K. and Myers, M. D. (1999) 'A set of principles for conducting and evaluating interpretive field studies in information systems', MIS Quarterly, 23(1), 67-93.
Lévi-Strauss, C. (1968) The Savage Mind, University of Chicago Press.
Mintzberg, H. (1983) Structure in Fives: Designing Effective Organizations, Englewood Cliffs, NJ: Prentice Hall.
Mintzberg, H. (1993) Structure in Fives: Designing Effective Organizations, Prentice Hall.
Petrie, C. (2011) 'Emergent Collectives', IEEE Internet Computing, 15(5), 99-102.
Phillips, N. and Tracey, P. (2007) 'Opportunity recognition, entrepreneurial capabilities and bricolage: connecting
institutional theory and entrepreneurship in strategic organization', Strategic Organization, 5(3), 313-320.
Pohn, K. (2003-2007) 'Cosmicplay.net', [online], available: http://www.cosmicplay.net.
Raymond, L. (1985) 'Organizational Characteristics and MIS Success in the Context of Small Business', MIS Quarterly, 9(1), 37-52.
Thong, J. Y. L., Yap, C. S. and Raman, K. S. (1996) 'Top management support, external expertise and
information systems implementation in small businesses', Information Systems Research, 7(2), 248-267.
Walsham, G. (2006) 'Doing interpretive research', European Journal of Information Systems, 15(3), 320-330.
Weick, K. E. (1993) 'The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster', Administrative Science Quarterly, 38(4), 628-652.
Weick, K. E. (1998) 'Improvisation as a mindset for organizational analysis', Organization Science, 9(5), 543-555.

Determining the Maturity Level of eCommerce in South
African SMEs
David Freeme and Portia Gumede
Rhodes University, Grahamstown, South Africa
d.freeme@ru.ac.za
G08G4290@campus.ru.ac.za

Abstract: According to the United Nations (2002: 14) report on eCommerce adoption and diffusion in South Africa, South Africa is one of 55 countries at stage 3 of McKay, Prananto and Marshall's (2000: 3) model, namely the interactive stage. This means that it hosts a more sophisticated level of formal interactions between users and service providers, via e-mail and posted comments (United Nations, 2002: 15). Molla and Licker (2004: 91, 92) found that 83% of the SMEs surveyed owned websites. According to the Global Diffusion of the Internet (GDI) criteria, these figures suggest that SA is at a medium stage of eCommerce maturity, neither immature nor fully mature (Molla and Licker, 2004: 91, 92). In an attempt to measure the maturity levels of South African SMEs, a checklist based on stages of development models was developed from six relevant frameworks/models/classifications, using a quantitative research methodology and a positivist approach. The overall finding of the research was that South African SMEs are at stage 2 maturity of the McKay et al (2000: 3) model, namely an experimental online presence stage.

Keywords: SMEs, eCommerce, stages of development, web site functionality, maturity, checklist
1. Introduction
The pervasive nature of the internet has changed the way in which countries conduct business
worldwide. Since its emergence in the 1990s, eCommerce has quickly become the way of conducting
business on a global scale (Cloete, 2002: 2). Of particular interest in the field of eCommerce is its
proliferation in small businesses. In South Africa there are between 1.6 and 3 million SMEs, and they contribute 30% to the country's GDP (Motjolopane and Warden, 2007: 3; Berry et al 2002: 13). There are as many definitions for an SME as there are views on their characteristics (Gamage, 2003; Gilmore, Gallagher and Henry, 2007; Cloete, 2002b). For the purpose of this paper, an SME is one that complies with the requirements of the South African National Small Business Amendment Act, No 26 of 2003. The Act lists a number of requirements that need to be met in order to be classified as an SME. For the retail and wholesale sector these are: number of employees: 50; total annual turnover (sector dependent): between R6m and R13m; and total gross asset value (fixed property excluded): R3m.

The statistical accuracy of any SME study, including the actual number of SMEs in South Africa, is low, as a large number of SMEs, at least three quarters, are in the informal sector of business and so are essentially legally unrecognized (Berry et al, 2002: 12). Accuracy aside, it is not disputed that SMEs play a critical role in any economy, and their ability to conduct eCommerce is of prime importance to ensure their active inclusion in the new economy. The level of progression of eCommerce use in SMEs has typically been studied and its adoption encouraged. SMEs generally face unique and challenging barriers that have inevitably affected the level of adoption or assimilation of eCommerce into daily operations. This laggard approach to eCommerce adoption is characteristic of developing countries. In South Africa, it has been suggested that the available technologies are not adopted to the extent necessary for survival in the current business environment (Cloete, 2002: 3; Kruger, 2007: 4).

Research in eCommerce adoption usually concentrates on factors that affect adoption, barriers and
stage models (Mohamad and Ismail, 2009: 3, 4). Many maturity models have been proposed for
eCommerce diffusion, with varying approaches and focus points. Mahdi and Steinmueller (2002: 2)
highlighted two classifications of approaches to determining eCommerce diffusion; the application
approach and the organizational approach. The application approach delves into the tools of
eCommerce, such as websites and the adoption of information and communication technologies
(ICT). The organisational approach, in contrast, focuses on the softer and subjective matters
surrounding the adoption of eCommerce, such as the direct relationship between the owner's attitude
towards ICT and the level of adoption and diffusion of eCommerce. The former approach is adopted
for this paper and recognises that as eCommerce activities continue to accelerate in the economic
environment, there is an increasingly important need for scholarly identification and analysis of the
input factors in the design of a website, to enhance the quality of that website, which is believed to
positively affect eCommerce diffusion (Chang, Kirk and Litecky, 2001: 125). A quantitative research
methodology was used in this study and a positivist approach employed.

Companies, particularly SMEs, can enter the eCommerce arena at different levels of sophistication.
Sophistication refers to the information, processes, structures and skills adopted by a company for
facilitating transactions online. Some SMEs enter with 'brochure-ware' sites as the first step in creating a web presence. Others use the internet as a means of conducting business, taking sales orders
online and processing payments offline. Yet others engage in relatively more complex operations,
such as offering online catalogues, receiving online orders and handling online payments. Recently
websites have begun to employ newer technologies and features, such as blogs, RSS and alternate
payment processes to enhance the shopping experiences of their customers (Ally, Cater-Steel and
Toleman, 2007: 1009).

The research findings will hopefully propel further research into eCommerce methods, enabling policy makers to recommend solutions and initiatives to improve the current eCommerce maturity rating of South African SMEs.
2. Research premise
The overall research question was: "What is the eCommerce maturity level of South African SMEs, based on their websites' information content, functionality and sophistication?"

Typically, the initial step is the development of static websites for which no prior programming knowledge is required. The next stage of development provides functionality that helps customers' decision-making (such as order catalogues). As eCommerce experience increases, databases are introduced and the website becomes dynamic and increasingly interactive. This is followed by personalisation, customisation, search functions, etc., indicating a higher level of sophistication of a website (Fisher et al, 2007: 255). Beck, Wigand and Konig (2005: 38) state that websites can be categorised according to technical measures of what is included in them and how they function, the argument being that more sophisticated websites will include applications such as email, online transaction facilities, and customer service or support.

Ally et al (2007: 1010) noted that several stages of development models, such as the E-Commerce Maturity Model (KPMG, 1997), the Commitment-Implementation Matrix Model (Stroud, 1998), the eCommerce Levels (O'Connor and Gavin, 1998), the Business Lifecycle Model (Berryman, 1999), the Intranet Maturity Model (Damsgaard and Scheepers, 1999), the eCommerce Adoption Model (Daniel et al, 2002) and the Stages of Growth for e-Business Model (Prananto et al, 2002), classify a web site by comparing its functionality to an eCommerce capability and activity list.

Six relevant frameworks/models/classifications were identified to determine the most prevalent
functionalities, features and content used to evaluate websites of any industry type. These were:
The Centre for Electronic Commerce (CEC) website evaluation framework.
Model of Internet Commerce Adoption (MICA) (Walcott, 2007),
The extended Model of Internet Commerce Adoption (eMICA) (Doolin et al, 2002),
Ally et al's (2007) five-stage model,
Doolin et al's (2002) 14 levels of functionality,
Garcia-Borbolla et al's (2005) three web presence classifications.
Each framework/model/checklist was analyzed according to the level of sophistication, functionality, interactivity and complexity of implementation of each content element or feature, as shown in Table 1. The premise is that as websites build on either complexity or sophistication, the features and functional components of the site increase (Burgess et al, 2009: 522), indicating the maturity level of eCommerce in SMEs. Ally et al (2007: 1011) proposed that an evaluation and assessment of an organisation's website against a framework will determine the level at which the organisation currently stands; it indicates the organisation's maturity at a particular point in time.


Table 1: Comparison of functionality and features in the identified website checklists (CEC, MICA, eMICA, Ally et al, Doolin et al, Garcia-Borbolla et al; each X marks inclusion in one of the frameworks)

Advertising: X X X
Order form: X X
Online payment: X X X X
Offline payment: X
Email: X X X X X X
Shopping cart: X
Minimum security: X X X X
Customer registration and login: X
Order tracking: X X X X
Advanced security: X
Customizability: X X X
Links: X X
Enquiry: X X X X
Technical information: X
Promotion: X X X
Catalogue: X X X
Contact information: X X
Customer support (FAQs, sitemaps): X X X X
Chat room, discussion groups, blog: X X X
Multimedia: X X X X
Feedback, Polls: X X
Database search: X X X
QuickLinks: X X
Pricing information: X X
RSS feed: X X
Company information: X X X
Customer policies: X
Help function: X
Links to distributors: X X
Graphics/Images: X X X
Cookies: X
Instant Messaging: X
History: X
3. Website evaluation checklist
A checklist was constructed by comparing and contrasting the differences in the identified quantitative website frameworks/models/checklists shown in Table 1. The checklist was classified into 3 content areas, based on our interpretation of how each checklist item relates to the research of Ho (1997) and Burgess and Cooper (2000):
Static content, which is synonymous with a simple, formal or ornamental presence on the web. Static content is informational in nature and relatively low in sophistication (largely imitative and inspired by the novelty of innovation) in terms of the programming and technical expertise required.
Transactional content, which groups the features and functions that support transactions between the buyer and the SME, facilitated online.
Interactive content, which groups the features and functions that facilitate communication, be it internally (within the SME's boundaries) or externally.
3.1 Static content
Company Information: This broad term refers to information such as the company's mission statement, financial information, and history. It is usually found in the section called 'about us' or on the home tab/page of a typical website. Garcia-Borbolla et al (2005: 175) state that this type of information does not have a specific target audience and so serves a promotional purpose.

Contact Information: This is considered basic information pertaining to the physical address, email contact and other contact details, such as telephone and fax numbers (Doolin et al, 2002). The aim of this section is to show the communicative technologies that the SME makes use of.

Pricing Information: The prices of the products and/or services provided must be present on the website. This is representative of the SME's willingness to attract sales from its audience and thus marks the first step towards online transactional activities (Garcia-Borbolla et al, 2005: 181; Gwetu, 2009: 63).

Product Catalogue: This may be viewed as a virtual version of a leaflet/brochure. Promotional
activities, such as advertising and promotions (for specials etc.), are encompassed in this feature
(Garcia-Borbolla et al, 2005: 176).

Graphics/Images: This refers to the simple two-dimensional graphics and images that support the aesthetic or design appeal of the website (Fisher et al, 2007: 256).
3.2 Transactional content
Order Form: facilitates online ordering where the user enters all the information needed to
successfully complete an order.

Online payment option: a site with online purchasing capacity may allow payment offline or online. The latter is more complex and sophisticated to implement and monitor, but is now more prevalent among organisations, especially since the introduction of stringent security measures.

Offline payment option: This applies when the banking details for payment are supplied on the site alongside contact information to confirm payment. The promotional activities all take place online, but the final payment step is done offline.

Shopping cart: This is used to keep the history of the user's saved purchases and facilitates what is termed '2-click' purchasing (Elliot et al, 2000: 14).

Security: A policy document or a lock on transaction data, not only on credit cards, is evaluated (Elliot et al, 2000: 14). No distinction is made between advanced and basic security settings in this study. It is suggested that encryption and privacy seals be used to assure the security and privacy of online shopping.

Order tracking: This is an after-sale procedure and ensures that the user has order confirmation, delivery time and trust assurance (Elliot et al, 2000: 14).
3.3 Interactive content
Email facilities: the provision of email facilities as part of the website (for real-time contact with the business) is more sophisticated than a static email address mentioned on the 'contact us' page.

Customer registration and login: this is a feature found in most sites and it is the threshold for exploration of many of the other functional components (personalization and access) mentioned in this proposed framework (Green and Pearson, 2010: 185).

Customisability and personalization: This is an enhanced customer service that facilitates
personalization of the layout, design, history, features, applications etc. (Elliot et al, 2000: 14). The
ability to provide a personalized, customized interaction for the user allows for website design that
differentiates product and service offerings (Green and Pearson, 2010: 186).
Links to social media sites: This is a new feature introduced in this study. This is in recognition of the
movement towards online communities and collaborative online tools facilitated by Web 2.0. The
inclusion of links to social media demonstrates the recognition of the marketing opportunities afforded
by social networking. An example would be a link to social networking sites Facebook and LinkedIn
(Gilmore, 2011: 1).

User groups: Chat rooms, blogs, and discussion groups are included in websites in a bid to create online communities. These can also be used as a form of feedback for the hosts of the websites (Elliot et al, 2000: 14).

Multimedia: This may be novel or expert use of multimedia items such as audio, video, 3D graphics
and animations.

Polls: Polls allow users to vote on issues and influence the organization's decision-making on issues such as policies and service delivery. They are a form of feedback and demonstrate an SME's willingness to partner with users (Gwetu, 2009: 64).

Search Engine: This enables the user to perform a search for words on all site pages, while a database search enables queries of information held in the database; an example would be a search for a specific staff member (Gwetu, 2009: 63).

QuickLinks: These are menu-based links that take the user to other parts of the site. This is a convenient navigational feature of the website (Gwetu, 2009: 63).

RSS Feeds: This can be in the form of news headlines or blog entries. It is a highly interactive feature
(Walcott, 2007: 266).

Help function: Help serves as a convenience and ease-of-use tool that improves not only the learnability of the website but also the user experience (Elliot et al, 2000: 4).

Newsletters: the provision of electronic newsletters.

Links to Distributors: This is considered a broader approach to customer services as the idea of
integration is introduced. The website provides links to upstream or downstream partner websites
(Elliot et al, 2000: 14).

Customer support: Burgess and Cooper (2000) consider customer support mechanisms, used to facilitate an improved customer experience, to be of medium interactivity in terms of maturity levels. FAQs and sitemaps are examples (Elliot et al, 2000: 14).
4. Sample selection
As this research forms part of an IS Honours dissertation, a sample of 300 randomly selected SMEs was identified from two online SME directories: the Metropolitan SME repository and the Small Business Directory. These two directories are diverse in size, BEE status and economic sector. The website http://www.SMEportal.com/allcompanies contains each SME's product and company profile and supplies the website link. The second source, the Small Business Directory (http://www.smallbusinessdirectory.co.za/), serves the same purpose but differs in categorising the SMEs by business sector instead of by name. Of the 300 identified SMEs only 30 had working websites. Websites with disabled links were not considered.
5. Weighting
To eliminate undue emphasis on certain features, the checklist features and functionalities were not weighted. Elliot et al (2000: 8) suggest that discussions about differing levels of importance for categories tend to distract attention from the main issue, this being the sophistication of websites, used here to ascertain the intensity of eCommerce use. If an element in the checklist was present at an acceptable level, a single point was awarded. If the element was not present, no point was awarded. A more elaborate assessment could have been developed, such as the use of a five-point Likert scale for each element. However, there are inconsistencies inherent in that approach, and the variation in the evaluation framework reduces the reliability of the instrument (Elliot et al, 2000: 8).
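
A minimal sketch of this unweighted binary scoring, assuming an illustrative checklist dictionary grouped by the three content areas (the feature names below are examples, not the study's exact instrument):

```python
# Minimal sketch of the unweighted binary scoring described above.
# Checklist groups and feature names are illustrative, not the exact instrument.
CHECKLIST = {
    "static": ["company information", "contact information",
               "pricing information", "product catalogue", "graphics/images"],
    "transactional": ["order form", "online payment", "offline payment",
                      "shopping cart", "security", "order tracking"],
    "interactive": ["email facilities", "customer registration and login",
                    "links to social media", "search engine", "RSS feed"],
}

def score_site(observed):
    """Award one point per checklist element present at an acceptable level."""
    scores = {area: sum(1 for f in feats if f in observed)
              for area, feats in CHECKLIST.items()}
    scores["total"] = sum(scores.values())
    return scores

# Example: a 'brochure-ware' site with static content plus an email link.
site = {"company information", "contact information",
        "graphics/images", "email facilities"}
print(score_site(site))
# {'static': 3, 'transactional': 0, 'interactive': 1, 'total': 4}
```

Per-site scores of this kind can then be compared across the sample to position each SME on the maturity staging.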
6. Findings and analysis
6.1 Static content
The frequency of static content present in the surveyed SME websites is shown in Figure 1. The results show that all the sampled websites have product advertising. This supports the premise that SME websites are essentially digital brochures or catalogues. 97% of the sampled websites had graphics or images. Only 47% of the websites evaluated had pricing information for their products and/or services; more than half of the sites advertised without pricing information. These sites predominantly had enquiry or 'request a quote' email facilities for obtaining pricing information. Unexpected findings were a low 33% frequency for company history and a high 81% frequency for other contact information.

Figure 1: Frequency of static content in SME websites
6.2 Transactional content
20 out of the 30 sampled websites had no transactional content. The most commonly used feature in the websites with transactional content was an order form, used to make either online or offline orders. This feature is relatively easy to implement (Gwetu, 2009: 65), as opposed to online shopping features. This is particularly disconcerting, as the majority of websites evaluated were for businesses that sold tangible products that could be traded online. However, Mahdi and Steinmueller (2002: 11-12) state that the implementation of transactions is a function that some SMEs choose to avoid regardless of their level of experience.

The low frequencies for transactional content (even 0% for the emerging mobile payment method, the electronic wallet) suggest that SMEs avoid features that are difficult to implement and monitor, possibly due to the security and backend obligations involved (Walcott, 2007: 265). SMEs, even though they are perceived as largely entrepreneurial in a business context, portray a somewhat risk-averse attitude towards the implementation of complex features on their websites (Gwetu, 2009: 65). In addition, from the simplistic definition of eCommerce, which states that it is the purchasing and selling of products online, the lack of transactional content in SME websites in essence means they do not facilitate eCommerce (Yeung and Lu, 2002: 487). Figure 2 (below) illustrates the findings pertaining to the transactional content of the sampled websites.

Figure 2: Frequency of transactional content in SME websites
6.3 Interactive content
Figure 3 depicts the frequency of the interactive features and functionalities used on the websites. The results show that 87% of the websites have email facilities or links for enquiries. The remaining 13% either have no email contact or have one in a 'contact us' section. Of concern is that only 17% of the websites have links to social sites, even though this feature is easy to implement on a website (Gilmore, 2011: 1). Given the universal appeal of social sites and their extended use as marketing and feedback tools in the corporate environment, they are an essential and inexpensive tool for the growth of SMEs (DiMicco et al, 2008: 712; Gilmore, 2011: 1).

Figure 3: Frequency of interactive content in SME websites
Aside from navigational features, such as links to distributors (57%) and QuickLinks (77%), all the websites evaluated scored below 30% on interactive features. Most of the websites only had email facilities and QuickLinks as interactive features. The use of audio and video features was extremely low on the surveyed websites. Search engines, be it word or site search, scored 30%. This low percentage could indicate that the websites consist of only a few pages of content, reducing the justification for a search engine. These results reveal that SMEs currently put more emphasis on informational content as opposed to interactive features (Gwetu, 2009: 65).
7. Maturity level of eCommerce based on results of the survey
Currently, by most developing countries' standards, South Africa has an advanced ICT infrastructure. This, however, even though it may have translated into an increase in the adoption of IT by businesses, does not translate into a high maturity level for eCommerce (Molla et al, 2006: 6; Atrostic, 1999: 96). The above survey results indicate that most of the websites host static content, implying that South African SMEs are at stage 2 maturity based on the stage models of McKay et al (2000: 3), Ho (1997) and Burgess and Cooper (2000). These findings suggest that these SMEs are still experimenting with an online presence, and this is reflected in the low maturity rating. Stockdale and Standing (2006: 386) term this phase 'paddling', where SMEs are likely to have email and an internet connection but are hesitant to exploit these technologies to their full potential.

Additional findings are that the sampled SME websites were predominantly composed of static content. Two thirds of the sampled websites had no transactional content and little interactive content. The sophistication and quality of the websites was low. The percentage of SMEs possessing a website was low when considering the total number of website links and SME searches (over 300 searches) performed in the process of applying the checklist. 70% of the surveyed SMEs had neither transactional nor interactive features. The fact that the websites hosted by the surveyed SMEs had low sophistication and hardly any quality content reinforced the finding of a low eCommerce maturity level. The maturity level of eCommerce implemented by SMEs in the South African retail and wholesale sector should be addressed, as only a small percentage of SMEs have more than a simplistic web presence. In a study conducted in the year 2000, only 1% of the sampled SMEs had achieved integrated eCommerce status; this figure was predicted to increase to 36% by the end of 2004 (Molla and Heeks, 2007: 94). The findings of this research do not support that projection. The findings support the premise that eCommerce is not an agenda item for most SA SMEs with websites.
8. Conclusion
This paper, and the research in general, has attempted to find a method of measuring the maturity levels of South African SMEs by using a checklist based on the stages of growth model. Initial indications are that this method of research could be quite valuable, as it overcomes several of the shortcomings of current honours degree academic research, such as the short time frame for honours degree research, the non-return of questionnaires, and differing interpretations of survey questions. Further research on, and validation of, the derived checklist is currently being undertaken.
References
Ally, M., Cater-Steel, A. and Toleman, M. 2007. A web site sophistication model based on value-added
technology solutions and services. ACIS 2007 Proceedings, December 5-7, Toowoomba, 99(1), 1008-1016.
Atrostic, B. 1999. Defining and measuring e-Commerce: a status report. The Brookings Institution, 4(99), 1-19.
Bauer, C. and Scharl, A. 2000. Quantitive evaluation of web site content and structure. Internet Research:
Electronic Networking Applications and Policy. 10 (1): 31-43.
Beck, R., Wigand, R. and Konig, W. 2005. The diffusion and efficient use of electronic commerce among small
and medium sized enterprises: an international three industry survey. Electronic Markets, 15 (1), 38-52.
Berry, A., von Blottnitz, M., Cassim, R., Kesper, A., Rajaratnam, B. and van Seventer, D. 2002. The economics of
SMEs in South Africa. Trade and Industrial Policy Strategies, December (2002), 1-116.
Burgess, L. and Cooper, J. 2000. Extending the Viability of MICA (Model of Internet Commerce Adoption) as a Metric for Explaining the Process of Business Adoption of Internet Commerce. International Conference on Telecommunications and Electronic Commerce, Dallas (November).
Burgess, L., Cooper, J., Gibbons-Parrish, B. and Alcock, C. 2009. A longitudinal study of the web by regional
tourism organisations (RTOs) in Australia. 22nd Bled eConference Proceedings, June 15-22, Slovenia, 519-
531.
Chang, L., Kirk, P. and Litecky, C. 2001. Design quality of websites for electronic commerce: Fortune 1000
webmasters evaluations. Electronic Markets, 10(2), 120-129.
Cloete, E. 2002. SMEs in South Africa: Acceptance and Adoption of e-Commerce. Department of Information Systems, University of Cape Town, 1-12.
Daniel, E., Wilson, H. and Myers, A. 2002. Adoption of ecommerce by SMEs in the UK: towards a stage model.
International Small Business Journal, 20(3), 253-270.
DiMicco, J., Millen, D., Geyer, W., Dugan, C., Brownholtz, B. and Muller, M. 2008. Motivations for social
networking at work. ACM, 8(2008), 711-720.
Doolin, B., Burgess, L. and Cooper, J. 2002. Evaluating the use of the web for tourism marketing: a case study
from New Zealand. Tourism Management, 23 (2002), 557-561.
Elliot, R., Morup-Petersen, A. and Bjorn-Andersen, N. 2000. Towards a framework for evaluation of commercial web sites. Electronic Commerce: The End of the Beginning, Proceedings of the 13th International Electronic Commerce Conference, June 19-21, Slovenia, 1-15.
Fisher, J., Craig, A. and Bentley, J. 2007. Moving from a web presence to e-Commerce: The importance of a
business-web strategy for small-business owners. Electronic Markets, 17(4): 253 -262.
Garcia-Borbolla, A., Larran, M. and Lopez, R. 2005. Empirical evidence concerning SMMEs' corporate websites: explaining factors, strategies and reporting. The International Journal of Digital Accounting Research, 5(10), 171-202.
Gilmore, J. 2011. How to Put a Facebook Link on My Website. [Online]. Available: http://www.ehow.com/how_5832801_put-facebook-website.html#ixzz1ZnyFyRAU [Accessed: 01 October 2011].
Green, D. and Pearson, M. 2010. Integrating website usability with the electronic commerce acceptance model. Behaviour & Information Technology, 30(2), 181-199.
Gwetu, M.V. 2009. Web application by South African health institutions. University of Venda, 60-61.
Ho, J. 1997. Evaluating the World Wide Web: A Global Study of Commercial Sites. Journal of Computer-Mediated Communication, 3(1).
Lightner, N.J. 2004. Evaluating e-commerce functionality with a focus on customer service. Communications of the ACM, 47(10), 88-92.
Mahdi, S. and Steinmueller, E. 2002. E-COMMERCE INDICATORS for the WWW: methodological approaches
for assessing sectorial e-Commerce maturity. New Indicators for the Knowledge-Based Economy, July, 1-
69.
McKay, J., Prananto, A. and P. Marshall. 2000. E-Business Maturity: The SOG-e Model. In Proceedings of the
11th Australasian Conference on Information Systems (ACIS). Queensland University of Technology,
Brisbane, Australia.
Mohamad, R. and Ismail, N.A. 2009. Electronic Commerce Adoption in SME: The trend of Prior studies. Journal of Internet Banking and Commerce, 14(2), 1-16.
Molla, A. and Heeks, R. 2007. Exploring E-commerce benefits for businesses in a developing country. The
Information Society, 23 (2): 95-108.
Motjolopane, I.M. and Warden, S.C. 2007. Electronic commerce adoption by SMEs Western Cape, South Africa.
Information Resource Management Association Conference, Vancouver: 1-15.
Rao, S.S., Metts, G. and Monge, C.A. 2003. Electronic commerce development in small and medium sized enterprises: a stage model and its implications. Business Process Management Journal, 9(1), 11-32.
Stockdale, R. and Standing, C. 2006. A Classification Model to Support SME E-Commerce Adoption Initiatives. Journal of Small Business and Enterprise Development, 13(3), 381-394.
United Nations. 2002. Benchmarking E-government: A Global Perspective - Assessing the UN Member States. United Nations Division for Public Economics and Public Administration, 1-74.
Walcott, P.A. 2007. Evaluating the readiness of e-commerce websites. International Journal of Computers, 1(4),
263-268.
Wu, M., Zhang, L., Xing, Q., Dai, L. and Du, H. 2007. E-commerce adoption in China's service SMEs: a study from web usability perspective. Journal of Business Systems, Governance and Ethics, 2(4), 1-13.
Yeung, W. and Lu, M. 2002. Functional characteristics of commercial web sites: a longitudinal study in Hong Kong. Information and Management, 41(2004), 483-495.
Advancing GeoMarketing Analyses with Improved Spatio-
temporal Distribution of Population at High Resolution
Sérgio Freire and Teresa Santos
e-GEO Research Centre for Geography and Regional Planning, FCSH,
Universidade Nova de Lisboa, Lisboa, Portugal
sfreire@fcsh.unl.pt
teresasantos@fcsh.unl.pt

Abstract: Knowing the spatiotemporal distribution of population at the local level is fundamental for many
applications, including risk management, health and environmental studies, territorial planning and management,
and GeoMarketing. Census figures register where people reside and usually sleep, and are frequently the only
data source available for such analyses. Currently, the analysis of service areas and population served is mostly
made considering only census data as source of population distribution, while some businesses clearly serve
mostly a daytime population. However, population density is not constant within census enumeration areas. Also,
due to human activities, population counts and their distribution vary widely from nighttime to daytime, especially
in metropolitan areas, and this variation is not captured by census data. Raster dasymetric mapping within
geographic modeling allows transforming raw population counts to population density limited to specific areas
where the variable is present, in more detailed temporal periods, by using ancillary data sets and zonal
interpolation. In GeoMarketing, this information is especially useful for retail sales, banking, insurance, lodging,
real estate, and franchising. These refined distributions can be used to improve such analyses as site selection,
service area and population served, assessment of potential markets, routing activities, location-allocation, and
gravity models. This study uses such a dasymetric mapping approach for detailed modeling and mapping of the
spatiotemporal distribution of population in the daily cycle. These data sets are used to assess the location and
the varying population contained in the service areas of existing and prospective commercial facilities in the daily
cycle, for different types of businesses. Applications in GeoMarketing using spatial analysis are illustrated for
three different scenarios involving private sector services where maximizing coverage of target population is
paramount for success. The case studies show that when the spatiotemporal distribution of population is
considered, the obtained set of solutions differs from the one produced by using census-based data alone. The
results demonstrate that enhancing population distribution data through geographical modeling can greatly
benefit spatial analysis in GeoMarketing, resulting in the production of better information that ultimately allows
improved decision-making.

Keywords: GeoMarketing, population distribution, dasymetric mapping, service area, maximum coverage,
Oeiras
1. Introduction
1.1 GeoMarketing as a structured process
GeoMarketing is a fairly recent discipline that combines the power of geographic visualization and
analysis with Marketing techniques and insight, aiming at more efficiently attaining the ultimate goal of
the latter: to sell products, services, or ideas. Although the concept of the marketing mix has long included 'Place' among the four Ps (the others being Product, Price, and Promotion) (McCarthy 1960),
its importance has been underestimated in formal technical analyses that support decision-making.
Driving the need for GeoMarketing is the basic premise that markets vary from place to place (and
with time) and that business strategies should take this fact into account. For many businesses, the
decision of where to locate their commercial outlets will be the most important determinant of their
success.

The emergence of GeoMarketing was facilitated by advancements in the spatial analysis and
visualization capabilities of Geographic Information Systems (GIS), but their usefulness for conducting
studies in business and economics remains to be fully explored (Cheng et al. 2007; Mishra 2009). As
a structured process supporting decision-making, GeoMarketing analyses are more than a simple
task, instead involving the following sequential steps: (1) formulating the problem, (2) obtaining and
processing the required data, (3) conducting the analysis, and (4) producing conclusions and
recommendations. Spatial and non-spatial data sets are required to characterize both the supply
(facilities, service, product, competitors) and the demand (population, existing and potential
customers) in a given market. However, as with every information system, the quality of the results is
never higher than the accuracy of the input data used.

Concerning their geographical scale or scope, analyses can be conducted at four levels: (i)
continental, (ii) national (among countries), (iii) regional (comparing regions or cities), or (iv) local
(within a city or settlement). Regarding the required socioeconomic data (including demographic) to
match these levels of analyses, (i) and (ii) require country-level totals, (iii) demands data by
municipalities or communes, while (iv) requires disaggregated data that represent intra-urban
variations.
1.2 The relevance of population distribution for GeoMarketing
Knowing the spatiotemporal distribution of population at the local level is fundamental for many
applications, including risk management, health and environmental studies, territorial planning and
management, and also GeoMarketing (Freire 2010). The capability of obtaining accurate simple
population totals within the service area is often a basic indicator of potential demand for a service
and corresponding financial success, especially for those which have a local demand and serve the
immediate neighborhood.

A typical problem for GeoMarketing has been how to best locate a commercial facility in order to
maximize the potential customer coverage and therefore profit (Jeong et al. 2008). Conducting such
an analysis requires as a minimum data on population/potential customer distribution and information
concerning the business. However, there is a surprising lack of academic research studying the
importance of demographic data for such analyses and testing more advanced data sources beyond
those solely based on the census.

Census figures register where people reside and usually sleep, and are frequently the only data
source available for such analyses. Currently, the analysis of service areas and population served in
Portugal and many countries is made considering only residence-based (nighttime) census data as
source of population distribution, while some businesses clearly serve mostly a daytime population.
However, population density is not constant within census enumeration areas, although it is
commonly represented as such. Also, due to human activities, population counts and their distribution
vary widely from nighttime to daytime, especially in metropolitan areas, and may be misrepresented
by census data.

Geographic modeling and dasymetric mapping allow re-distributing population to specific areas where it is present, in more detailed temporal periods, by using ancillary data and zonal interpolation
(Eicher and Brewer 2001). In GeoMarketing, this information is especially useful for retail sales,
banking, insurance, lodging, real estate, and franchising. These refined distributions can be used to
improve such analyses as site selection, service area and population served, assessment of potential
markets, routing activities, location-allocation, and gravity models.

The present work aims at presenting the development of a dasymetric mapping approach for detailed
modeling and mapping of the spatiotemporal distribution of population in the daily cycle, and
demonstrating its value for improving geographical analyses in GeoMarketing. Applications using
spatial analysis are illustrated for three different scenarios involving private sector services where
maximizing population served (i.e. potential demand) is paramount for success.
2. Study area and data
2.1 Study area
Detailed spatiotemporal population modeling was performed for Oeiras and Cascais, two of the
eighteen municipalities that comprise the Lisbon Metropolitan Area (LMA), the main metropolitan area
in Portugal. Demonstrations of applications of these data sets in Geomarketing analyses are
implemented in the municipality of Oeiras (Figure 1). This municipality occupies 46 km² and has a resident population of 162,128. Density of resident population varies significantly throughout the study area, from high density in multi-story apartments to low density in rural areas. Even at the census
area, from high density in multi-story apartments to low-density in rural areas. Even at the census
block group level, some polygons are enormous and do not reflect their uneven population density.
Despite the gravitational pull of the adjacent city of Lisbon (the national capital), Oeiras recently
created several technological and office parks and has acquired an intensive tertiary activity.
Therefore daytime population displays distinct spatial distribution and densities from the census,
totaling 148,937 people.
This area has ideal characteristics for this study, namely with regard to urban and suburban character
and presence of strong economic activity.

Figure 1: Location of the study area in Portugal and in the Lisbon Metropolitan Area
2.2 Data
Input variables used for modeling population distribution include both physiographic and statistical
data. In the first group are census tracts, street centerlines and land use and land cover (LULC), while
the second includes census counts (INE 2001), data on workforce by workplaces, and commuting
statistics (INE 2003) for the study area. These data were obtained from various sources and in
different formats which are listed in Table 1. The target year for modeling population density is 2001.
Table 1: Main input datasets used for modeling nighttime and daytime population

Data set | Source | Date | Data type
Street centerlines | Private | 2004 | Vector polyline
LULC (COS90; CLC2000) | Public | 1990; 2000 | Vector polygon
Census block groups | Public | 2001 | Vector polygon
Census statistics | Public | 2001 | Database (MS Access)
Workplaces and employment | Public | 2001 | Table
Commuting statistics | Public | 2001 | Table (O/D matrix)
COS90 is a digital LULC map at the scale 1:25,000 covering almost the entire country; however, it dates from 1990. Therefore, to ensure temporal consistency among the input data sets, it was decided to update it to some extent using the more recent CORINE Land Cover database for the year 2000.
3. Geographic modeling and analysis
The methodology was implemented in a Geographic Information System (GIS) and includes two main
stages: a) modeling spatiotemporal population distribution and b) building and analyzing sample
Geomarketing applications.
3.1 Modeling spatiotemporal population distributions
The modeling of population distribution is based on dasymetric mapping using street centerlines as
spatial reference unit to allocate population counts. The most recent statistical and census data
(2001) provide the population counts for each daily period, while physiographic data sets define the
spatial units (i.e., grid cells) used to disaggregate those counts. This general approach was proposed
by the Los Alamos National Laboratory (New Mexico, USA) to map daytime and nighttime population
distributions in the US at 250-m resolution (McPherson and Brown 2003), and is adapted and applied
to Portugal.
The map of nighttime population distribution was obtained by using a grid binary dasymetric mapping method to disaggregate residential population from census zones to residential streets. First, the available digital LULC maps were improved, and relevant classes were selected and combined in order to identify residential land use. Street centerlines were also modified in order to better represent the road network existing in 2001. Then, freeways are removed from consideration and the resulting eligible streets are combined with residential land use classes from the LULC data to obtain residential streets. These are subsequently rasterized at 25 m, and the population from census block groups (source zones) is interpolated to the respective residential street cells (target zones) using areal weighting.
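
A minimal numpy sketch of this binary dasymetric step, using toy arrays as stand-ins for the rasterized 25 m layers (the real grids and counts come from the census and LULC data described above):

```python
import numpy as np

# Binary dasymetric mapping with areal weighting: each block group's census
# count is spread evenly over its residential-street cells.
block_id = np.array([[1, 1, 2],
                     [1, 2, 2],
                     [1, 2, 2]])           # source zone (block group) id per cell
residential = np.array([[1, 0, 0],
                        [1, 0, 1],
                        [0, 0, 1]], bool)  # rasterized residential street cells
census = {1: 300, 2: 100}                  # population count per block group

population = np.zeros(block_id.shape)
for bid, count in census.items():
    cells = (block_id == bid) & residential
    if cells.any():                        # equal share per eligible cell
        population[cells] = count / cells.sum()

print(population.sum())  # 400.0 -- block group totals are preserved
```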

The modeling of daytime population distribution considers mobility statistics. It results from the combination of two components: a) the daytime population in their places of work or study (the workforce population surface), and b) the population that remains home during the day (the daytime residential population grid). The latter is obtained by multiplying the nighttime distribution by the percentage of resident population who, according to official statistics (INE 2003), do not commute to work or school. In the absence of other information, it is assumed that non-commuters remain in their residences in the daytime period. The workforce population surface was created by georeferencing 2167 workplaces and schools, and the respective workforce and students, in the study area. 562 of these facilities were georeferenced manually using ancillary data and field work. The remaining workplaces were geocoded to the street centerlines using their addresses.
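
A minimal sketch of this combination, assuming for illustration a single uniform non-commuting rate (in the study the rates come from the INE commuting statistics, and all grids share the same 25 m cell geometry):

```python
import numpy as np

# Daytime population = residents who stay home + workers/students at
# their daytime locations. All values are illustrative.
nighttime = np.array([[150., 0., 50.],
                      [150., 0., 50.]])    # residential (nighttime) grid
workforce = np.array([[0., 400., 0.],
                      [0., 300., 0.]])     # georeferenced workers and students
noncommute_rate = 0.20                     # assumed share of residents not commuting

daytime_residential = nighttime * noncommute_rate
daytime_total = daytime_residential + workforce
print(daytime_total)
```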

Using this methodology, four raster population distribution surfaces were produced, at 25 m
resolution: (i) nighttime (residential) population (Figure 2), (ii) daytime residential population, (iii)
daytime worker and student population, and (iv) total daytime population (Figure 3). Additionally, an
ambient population surface is produced by computing a weighted average of nighttime and daytime
distributions, considering the proportion of nighttime and daytime periods occurring in a typical 7-day
weekly cycle.
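
Written out, with w denoting the fraction of a typical week counted as daytime (the value below is purely illustrative, assuming 12 daytime hours on each of the 7 days; the paper does not restate the exact split):

\[
P_{\text{ambient}} = w \, P_{\text{daytime}} + (1 - w) \, P_{\text{nighttime}},
\qquad \text{e.g. } w = \frac{7 \times 12}{168} = 0.5
\]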

Figure 2: Grid of nighttime population distribution in Oeiras in 3D
The resulting 25-m population grids were subsequently aggregated to 50 m cells for analysis and visualization purposes, thus representing densities per 2,500 m² (0.25 ha). The nighttime distribution was validated using the higher-resolution census block units as reference (i.e. ground truth) in a correlation analysis. A correlation coefficient (Pearson's r) of 0.79 was obtained, showing a good performance of the model. Additional details on the population distribution modeling are provided in Freire (2010).
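
A minimal numpy sketch of this aggregation, summing non-overlapping 2x2 blocks of 25 m cells into 50 m cells (array dimensions assumed even; values illustrative):

```python
import numpy as np

# Aggregate a 25 m population grid to 50 m by summing 2x2 blocks of cells.
grid25 = np.arange(16, dtype=float).reshape(4, 4)
h, w = grid25.shape
grid50 = grid25.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
print(grid50)  # each 50 m cell holds the population of four 25 m cells
```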

This method also efficiently accommodates people who work at home, by not assuming that the entire active population leaves their residences during the workday. The main value of these results includes the increased spatial resolution of the nighttime distribution (higher than census data), the fact that the nighttime and daytime distributions share the same spatial reference basis (and therefore support cell-by-cell comparison), and the fact that the previously unavailable daytime distribution is represented.

Figure 3: Grid of daytime population distribution in Oeiras in 3D
3.2 Sample applications to GeoMarketing
For demonstrating the usefulness of detailed spatiotemporal population in GeoMarketing, three typical
sample applications using demographic analysis are illustrated in the municipality of Oeiras. One
application concerns quantifying the actual coverage for an existing network of facilities, whereas the
other two applications concern expansion and location planning, aiming at selecting the optimum
location (in practice the best) for new facilities, considering both their target demographics and hours
of operation.

The scenarios involve types of business services with local influence and markets; therefore the service areas are based on accessibility over the street network, computed using either metric distance or time. Although the business data are fictitious, the remaining data and analyses are real and accurate.
3.2.1 Case study A: Assessing population served by existing facilities network
A bank wants to quantify the current population coverage for its existing network of four branches in
the municipality of Oeiras (Figure 4). Accessibility is measured in metric distance and a 1-km service
area is defined for each facility, using the road network.

Figure 5 illustrates assessment of served population using the nighttime distribution (A) and the
daytime distribution (B).

Results of the assessment of the population served within the service areas (Table 2) show quite different outcomes for the two surfaces, in terms of both raw figures and the ranking of facilities. While, based on the census-like nighttime distribution, facility #3 serves the most people (4372) and facility #4 the least, using the population surface that matches banks' daytime operating hours it is facility #1 that serves the most people (8618) and facility #3 the least. Overall, 19173 people are served by the four facilities considering their daytime distribution, compared to 13073 when the assessment uses the residential distribution alone.
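
The tallies above amount to a zonal sum of the population raster over each service area. A minimal numpy sketch, assuming a pre-computed raster in which every cell carries the id of the facility whose network service area covers it (0 = outside any service area):

```python
import numpy as np

# Zonal sum: population served per facility service area. Toy values.
population = np.array([[80., 10., 0.],
                       [60., 20., 5.],
                       [0., 40., 30.]])
service_area = np.array([[1, 1, 0],
                         [1, 2, 2],
                         [0, 2, 2]])       # facility id covering each cell

for facility in (1, 2):
    served = population[service_area == facility].sum()
    print(f"facility #{facility}: {served:.0f} people served")
```

Running the same loop against the nighttime and daytime surfaces yields the two rankings compared in Table 2.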
3.2.2 Case study B: Site selection for daytime service
A restaurant chain operates two profitable facilities and wishes to open a third facility in the vicinity. Since this business targets mostly a local market, due to its quick service and affordable meals, it makes sense to measure accessibility as time and to define its core service area as being within five minutes' walking distance (Figure 6). Of the two locations available for expansion (#1 and #2), which one has the most population and potential within its service area?

Figure 4: Case study A: locations of existing banking facilities

Figure 5: Potential customers assessed using the nighttime distribution (A) and the daytime worker
and student population (B)
Table 2: Population served and ranking of banking facilities using the nighttime and daytime population

Facility # | Nighttime population | Rank | Daytime population | Rank
1 | 4250 | 2 | 8618 | 1
2 | 3714 | 3 | 3220 | 3
3 | 4372 | 1 | 3202 | 4
4 | 737 | 4 | 4133 | 2
Total | 13073 | -- | 19173 | --

Figure 6: Case study B: existing (#3, #4) and prospective locations (#1, #2) and their service areas
Figure 7 shows a comparison of potential customers for the prospective locations assessed using the
nighttime distribution (A) and the daytime worker and student population (B).

Figure 7: Potential customers assessed using the nighttime distribution (A) and the daytime worker
and student population (B) grids
Whereas site #1 serves the most people at nighttime (1748), these are local residents who are less likely to be regular customers. Using the daytime worker and student population for the analysis (i.e. the target demographics) shows that site #2 instead serves the most potential customers (1900 vs. 273).
3.2.3 Case study C: Site selection for daytime and nighttime business
Among three potential locations, a movie rental chain wants to select one site for expansion. Since this is a proximity-based business, accessibility can be measured as time and the service area defined by a five-minute walking distance (Figure 8). Because it extends its operating hours late into the night, this business targets both displaced (worker and student) and residential customers, in the daytime and nighttime periods.

Figure 8: Case study C: prospective locations and respective service areas
Figure 9 shows a comparison of potential customers within their service areas using the nighttime
distribution (A) and the ambient population (B) surfaces.

Figure 9: Potential customers assessed using the nighttime (A) and the ambient population (B) grids
The analysis shows that location #3 would serve the most people at night (1170). However, when the daytime population is also considered (via the ambient population grid), in order to account for both the daytime and nighttime hours of operation, location #1 emerges as having the greatest potential.
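
The ambient population surface used in this comparison can be thought of as a time-weighted blend of the daytime and nighttime grids. A minimal numpy sketch follows, assuming (purely for illustration) that the store is open for 12 daytime hours and 6 nighttime hours; the arrays and weights are illustrative, not the paper's exact formulation:

import numpy as np

# Illustrative 2x2 population grids (people per cell)
pop_day = np.array([[120.0, 40.0], [300.0, 10.0]])
pop_night = np.array([[30.0, 90.0], [20.0, 200.0]])

# Weight each surface by the share of opening hours it represents,
# e.g. open 12 daytime hours and 6 nighttime hours
w_day, w_night = 12 / 18, 6 / 18
ambient = w_day * pop_day + w_night * pop_night
print(ambient)   # expected customer base per cell over the opening hours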

These case studies demonstrate that GeoMarketing analyses relying solely on census-based data
(nighttime) for characterization of population served and business potential could misestimate the
prospective customer base contained in the service areas and indicate a solution with less potential
for success.
4. Conclusions
In metropolitan areas, the population distribution and densities are not static, varying significantly in
the daily cycle. Knowing and using the spatiotemporal distribution of population at the local level can
greatly increase the quality of basic spatial analyses in GeoMarketing. An approach was presented that combines existing geographic information with official statistics to model and map nighttime and daytime population distributions at high spatial resolution, capable of supporting local-level analysis. The model integrates the locations of workplaces and schools with daily commuting statistics. The segmentation of the daytime distribution into residential and worker-and-student grids further benefits GeoMarketing analyses. Three sample applications of these population data sets were presented, considering the nature of each business activity, its target demographics and its hours of operation.
Results show that using population distribution data disaggregated in space and time can significantly
increase the detail and accuracy of spatial analysis, having the potential to greatly improve studies in
GeoMarketing.

Future developments should focus on: better modeling of distributed activities (e.g., cleaning, security) and accounting for people present in transportation networks or engaged in leisure and shopping activities; finer temporal segmentation of the population distribution, so as to represent differences on a weekly basis (workdays vs. weekends) or a seasonal basis (winter vs. summer); and the use of statistical sources beyond census demographics to account for tourism influx in areas and periods where that activity is important.
Acknowledgements
The authors thank GeoPoint Lda. (www.geopoint.pt) whose kind support motivated the initial
development of this work.
References
Cheng, E.W.L., Li, H. and Yu, L. (2007) A GIS Approach to Shopping Mall Location Selection. Building and Environment, Vol. 42, No. 2, pp 884-892.
Eicher, C.L. and Brewer, C.A. (2001) Dasymetric Mapping and Areal Interpolation: Implementation and Evaluation. Cartography and Geographic Information Science, Vol. 28, pp 125-138.
Freire, S. (2010) Modeling of Spatiotemporal Distribution of Urban Population at High Resolution - Value for Risk Assessment and Emergency Management. In Konecny, M., Zlatanova, S. and Bandrova, T.L. (eds.), Geographic Information and Cartography for Risk and Crisis Management, Lecture Notes in Geoinformation and Cartography, Springer, Berlin Heidelberg, pp 53-67.
INE (Instituto Nacional de Estatística), (2001) Recenseamento Geral da População e da Habitação. INE, Lisboa.
INE (Instituto Nacional de Estatística), (2003) Movimentos Pendulares e Organização do Território Metropolitano: Área Metropolitana de Lisboa e Área Metropolitana do Porto 1991-2001. INE, Lisboa.
Jeong, H., Um, J., Son, S.-W., Lee, S.-I., Kim, B.-J. (2008) Finding Optimal Position of Facilities based on
Population Density, Proceedings of International Workshop and Conference on Network Science, Norwich,
UK, June 23-27.
McCarthy, E.J. (1960) Basic Marketing: A Managerial Approach, Richard D. Irwin, Inc., Homewood, Illinois.
McPherson, T.N. and Brown, M.J. (2003) Estimating Daytime and Nighttime Population Distributions in U.S. Cities for Emergency Response Activities, Preprints: 84th AMS Annual Meeting, AMS, Seattle, WA, 10 pp.
Mishra, S. (2009) GIS in Indian Retail Industry - A Strategic Tool, International Journal of Marketing Studies, Vol. 1, No. 1, pp 50-57.

Activity Theory: A Useful Evaluation Methodology for the
Role of Information Systems in Collaborative Activity
Audrey Grace
Business Information Systems, University College Cork, Ireland
a.grace@ucc.ie

Abstract: The way in which information systems are used in organisations has evolved over time. While they
were initially used primarily for information processing and for supporting company centric efficiencies, they are
now extensively used to share information and to support collaboration both internally within an organization and
with external customers, suppliers and partners. While much IS research heretofore has concentrated on how
information systems facilitate information processing and the decision making of individuals in an organisation,
there is a growing need within organisations to analyse and understand how information systems facilitate both
internal and external information sharing and collaboration. This paper provides an overview of activity theory and
argues that this theory provides a holistic and insightful evaluation methodology which will allow researchers to
investigate how collaboration is achieved through all elements of IS (people, process, technology). The key
characteristics of activity theory that underpin its suitability for researching collaboration through IS are outlined.
Finally, a specific example of future research using this theory is described.

Keywords: activity theory, evaluation methodology, information systems, collaboration
1. Introduction
Historically, information systems were utilised within an organisation's boundaries to minimise operating costs and to improve the efficiency of internal business processes (Alter 1992; Mukherji 2002). The conceptualisation of the extended enterprise and the boundary-less organisation encouraged managers to broaden their search for efficiencies and to discover new ways of creating value from their supplier network and beyond (Elliott 2001; Prahalad and Ramaswamy 2002).

Indeed, the commercial diffusion of the Internet in the 1990s completely overhauled inter-
organisational integration as companies began to integrate Internet technologies with their existing
information systems by connecting a web front-end to their internal applications (Ash and Burn 2003;
Legner 2008). Externally focused information systems known as Inter-organisational Information
Systems (IOS) emerged that transcended organisational boundaries (Hong 2002; Shin 2006) and
enabled information sharing between organisations over the Internet (Bakos 1991; Majchrzak et al.
2000). The increasing role of web-based technologies to support all aspects of a companys business
operations (i.e. electronic business or e-business) has been widely acknowledged both by the
research community (Cagliano et al. 2005; cf. Evans and Wurster 1999) and by practitioners for
example: e-business reports have been published by many of the large consulting firms including
Forrester Group (cf. Johnson 2003) and Morgan Stanley (cf. Witter 1999).

It is common to think of e-business as digitally enabled information sharing to support co-ordination and collaboration in three main areas: (i) within a business, focusing on supporting corporate activities and the integration of departmental activities; (ii) between a business and its channel partners (e.g. suppliers, distributors, or retailers); and (iii) between a business and consumers, which includes electronic shopping, product marketing, information retrieval, entertainment and client service activities (Shaw et al. 1997; Wu et al. 2003).

In relation to the co-ordination within a business, organisational effectiveness comes from information
sharing to leverage intellect and knowledge in order to achieve corporate objectives rather than
focusing on economies of scale in operations or physical sources of advantages (Ash and Burn 2003;
Venkatraman and Henderson 1998). The ability of organisations to make processes effective is
increasingly supported by intranets that facilitate team-level co-ordination (to achieve team objectives)
and exchange of information and knowledge (Thomas and Bostrom 2010; Venkatraman and
Henderson 1998). With respect to co-ordination and information sharing between a business and its
channel partners, business-to-business e-commerce refers to transactions and information sharing to
facilitate collaborative processes between organisations within supply chains (Cullen and Webster
2007; Mahadevan 2003). Regarding the interface between a business and consumers, because the
web enables organisations to support a high level of client interaction (Straub and Watson 2001), it
offers a unique opportunity to collaborate with individual clients in order to customise products and
services for them (Straub and Watson 2001; Venkatraman and Henderson 1998). This ability offers
many benefits to an organization, including increased client satisfaction and client loyalty (Piller and
Muller 2004; Wind and Rangaswamy 2001) as well as an opportunity to protect against
commoditisation through differentiation (Piller and Muller 2004; cf. Wind and Rangaswamy 2001).

Bearing in mind this growing orientation towards the use of information systems to enable
collaborative activity (both internally and externally), this paper argues that Activity Theory provides a
very useful conceptual framework for evaluating how effective information systems are within this
context. Section 2 provides an overview of Activity Theory. Section 3 discusses the growing use of
Activity Theory within a number of fields, including the IS field. Section 4 discusses the views of
Activity Theory on the use of technological and non-technological tools in collaborative activities.
Finally, the appropriateness of activity theory for evaluating the use of information systems in
collaborative activities, both within and outside the enterprise, is discussed (Section 5).
2. Overview of activity theory
The theory of activity (Engeström 1987; cf. Leont'ev 1978; Vygotsky 1978) is rooted in cultural-historical psychology and may be defined as "a philosophical and cross-disciplinary framework for studying different forms of human practices as development processes, with both individual and social levels interlinked at the same time" (Kuutti 1996, p. 25). Activity theory has its origins in the Vygotskyian (1978) concept of artefact-mediated and object-oriented action, whereby human beings' interactions with their environment are not direct; instead, the interaction between a human individual and the objects of the environment is mediated by cultural tools (see Figure 1).


Figure 1: Mediation model (Vygotsky 1978)
Leont'ev (1978) further developed Vygotsky's ideas of social and cultural mediation by developing a hierarchical model of human activity. He argued that: (i) a minimal meaningful context for individual actions must be included in the basic unit of analysis (i.e. an activity); and (ii) because the context is included in the unit of analysis, the object of research is always essentially collective, even if the main research interest is in individual actions (Kuutti 1996).

Inspired by this, Engeström (1987) introduced an expanded version of the mediation model to reflect the collective and collaborative nature of human activity. Engeström's representation of an activity system (see Figure 2) depicts the relationship between three key elements of an activity: (i) the subject of the activity; (ii) the other actors involved in the activity (i.e. the community); and (iii) the shared object of the activity in which they are jointly engaged. These elements are represented by the inner triangle drawn with broken lines in this figure.

Figure 2 also illustrates three contextual factors that mediate these relationships, represented by the extremities of the outer triangle in the figure. They are: (i) roles/responsibilities (i.e. the division of labour between the subject and all other actors involved in the activity); (ii) rules/norms (i.e. explicit governing regulations and implicit social/cultural norms); and (iii) tools (i.e. concepts, instruments, language, signs, technologies).

These elements of the model mediate the relationship between a subject, the other actors involved in the activity (i.e. their work community) and their shared object. Furthermore, the mediating role/rule/tool artefacts may be created or transformed during the development of the activity and carry with them a particular cultural and historical residue of that development (Engeström and Miettinen 1999a).
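
For readers who prefer a concrete rendering, the elements of the activity system can be encoded as a simple data structure. The following Python sketch is purely illustrative; the class and field names are descriptive labels chosen here, not a formal schema from activity theory:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ActivitySystem:
    """Illustrative encoding of Engeström's (1987) activity system.

    Field names mirror the elements in Figure 2; they are descriptive
    labels only, not a formal schema from the theory.
    """
    subject: str                                          # the acting individual or group
    shared_object: str                                    # the shared object of the activity
    community: List[str] = field(default_factory=list)    # other actors involved
    tools: List[str] = field(default_factory=list)        # mediating artefacts (signs, technologies)
    rules: List[str] = field(default_factory=list)        # explicit regulations, implicit norms
    division_of_labour: List[str] = field(default_factory=list)  # roles/responsibilities

# Example: the doctor/patient activity system discussed later in Section 5
consultation = ActivitySystem(
    subject="General Practitioner",
    shared_object="Collaborative healthcare decision",
    community=["Patient"],
    tools=["Clinical guidelines", "Decision support system"],
    rules=["Clinical governance", "Patient consent"],
    division_of_labour=["GP advises", "Patient expresses preferences"],
)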


Figure 2: The activity system, adapted from Engeström (1987)
Activity theory holds that the human mind emerges, exists and can only be understood within the context of human interaction with the world; and that this interaction (i.e. the activity) is socially and culturally determined (Engeström 1999b). Activities are considered inherently dynamic because outcomes are characterised in terms of their dual individual and social existence in the consciousness of the performing subject.

Even though some individuals may be more powerful in the collective activity, no one individual can
completely impose his or her view on other persons taking part in the activity. Because of this, it is
useful to view the perspectives of different subjects within the activity, when analysing an activity
(Boer et al. 2002; Virkkunen and Kuutti 2000).

An activity does not exist in a vacuum. Instead, it exists as a node in a multi-dimensional network of activity systems (Engeström 1992). In the network around a central activity, there are typically such activities as: (i) the object activity/activities (e.g. an object to be further transformed in the value chain); (ii) supervisory activities (e.g. where rules or the division of labour for an activity are defined or rearranged); and (iii) support activities (Engeström 1992; Kuutti and Molin-Juustila 1998), for example where tools and processes for an activity are developed or refined. The activity theory approach emphasises the incoherencies, inconsistencies and tensions that exist within an activity system itself and between an activity system and a neighbouring activity in its network (cf. Engeström 1987; Wiredu and Sorensen 2006). As a result, an activity system is constantly developing and working through tensions within and between its components, and is also evolving collectively with other activity systems in the network.
3. Growing popularity of activity theory
Activity theory, which was first introduced to the IS community by Bodker (1989; 1991), has the potential to inform a much broader perspective on real-life uses of technology than the traditional cognitive approach (cf. Kaptelinin and Nardi 2006; Korpela et al. 2004; Kuutti 1991). Activity theory provides a very useful analytical framework for understanding and analysing the mediation of human activities by both technological and non-technological artefacts (Bardram 1998a).

In recent years, activity theory has been applied extensively to research in the field of information systems (cf. Anthony 2012; Hasan and Gould 2001; Huang 2011). Activity theory is also becoming increasingly popular in a number of other fields. For example, it has been used extensively in the fields of human-computer interaction (cf. Bannon and Bodker 1991; Mwanza 2002) and computer-supported cooperative work (cf. Collins et al. 2002; Korpela and Soriyan 1998).

Researchers have also widely drawn upon this theory in studies on organisational learning (cf.
Ahonen and Virkkunen 2003; Engestrm 2004) and electronic learning (cf. Kaptelinin and Cole 2002).
Several international journals have published special issues devoted to studies based on activity
theory, including the Scandinavian Journal of Information Systems in 2000, Computer Supported
Cooperative Work in 1999 and 2002 and Interacting with Computers in 2003.
4. Activity theory on the use of tools
According to activity theory, a tool or artefact provides a means or instrument for carrying out an activity (Mwanza 2002; Wertsch 1997). Tools embody cultural knowledge (Kuutti 1996) as well as the formalisation of work practices (Christiansen 1996). They are collectively generated and maintained (Kaptelinin and Nardi 2006). Furthermore, the context of an activity and the individual habits of the subject may encourage the use of one tool over another (Wertsch 1997).

Tools can expand our possibilities to manipulate and transform different objects but, on the other hand, the object may be perceived and manipulated within the limitations set by the tool (Bannon 1997). Thus, a particular tool may be both enabling and limiting, in that it provides a set of options drawn from established patterns of achieving the object of the activity, yet it restricts the interaction to the perspective of that particular tool (Kuutti 1996; Wertsch 1997). Put simply, a tool works well in our activity if it allows us to focus our attention on the real object, and badly if it doesn't (Bannon and Bodker 1991).

Furthermore, Bannon and Bodker emphasise that tools should only be considered from the perspective of their context (i.e. their actual use). Referring to the notebooks of Samuel Butler (1835-1902), they cite his example of a complex machine which, if intended for use by children, ceases to be a tool and becomes a toy. They add that "it is seriousness of aim and recognition of suitability for the achievement of that aim and not anything in the tool itself that makes the tool" (Bannon and Bodker 1991, p. 227).

Following the work of Vygotsky (1978), Engeström (1987) distinguishes between technical tools, which are directed toward the control of processes of nature, and psychological tools/signs, which are directed toward the mastery or control of behavioural processes. Examples of psychological tools and their complex systems include language, various systems for counting, schemes and conventional signs (Vygotsky 1978). Technical tools include physical instruments or artefacts and may be either technological or non-technological.

Following the work of Latour (1993), Kaptelinin (1996b) emphasises that the role of technical tools is
not limited to transmission of operational aspects of human interaction with the world. He argues that
implicit goals built into technical tools can also shape the goals of the people who use these tools.
Technical tools serve the double purpose of both doing something for you and reminding you of
something you can do (Christiansen 1996).

Activity theory holds that tacit knowledge is gradually formalised into culturally created technological artefacts and that these artefacts mediate subsequent iterations of the activity (Kaptelinin et al. 1999). Engeström (1999b) also puts forward the notion that technical tools or mediating artefacts are integral and inseparable components of human functioning. The idea is that humans can control their own behaviour not from the inside, on the basis of biological urges, but from the outside, by using and creating artefacts (Engeström 1999b). He asserts that this perspective presents an opportunity for "the serious study of artefacts as integral and inseparable components of human functioning" (Engeström 1999b, p. 29).
5. Discussion and conclusions
Activity theory provides an insightful evaluation methodology that will help to improve our scholarly
understanding of how information systems enable information sharing and collaboration both within
the enterprise and across enterprise boundaries for a number of reasons (see Table 1).

First, the people/process/technology nature of Information Systems (cf. Duff and Assad 1980; Keen 1993) is captured in activity theory (see Section 2). In addition, activity theory incorporates a pragmatic focus on how the objective of the collaborative activity is achieved through these three elements of IS. Activity theory thus provides an excellent conceptual vehicle for evaluating how technological information systems enable the objectives of collaborative activities to be reached, while also remaining cognisant of the collaborative roles that people play in the activity and the rules/norms that guide these individuals as they interact to achieve particular goals.
Table 1: Usefulness of activity theory for analysing how IS supports collaborative activity

Characteristic of Activity Theory (AT): AT provides a conceptual framework that allows researchers to analyse how three mediating factors (rules, roles and tools) mediate the relationship between a subject, other actors involved in a collaborative activity and their shared objective (Engeström 1987)
Relevance to how IS facilitates collaborative activity: Incorporates the three key elements of IS (i.e. people, process and technology) with a strong emphasis on how the objective of the collaborative activity is achieved
Potential future research: Use the collaborative activity as the basic unit of analysis in order to analyse how the objective of a collaborative activity is enabled through people, process and technology

Characteristic of Activity Theory (AT): AT focuses on information sharing in a system rather than individual information processing (Korpela et al. 2004)
Relevance to how IS facilitates collaborative activity: Collaborative activities, by their nature, are underpinned by information sharing (Jagdev and Thoben 2001; Sarker et al. 2000)
Potential future research: Use AT to specifically investigate how information systems facilitate information sharing between people involved in a particular collaborative activity

Characteristic of Activity Theory (AT): AT incorporates both technological and non-technological tools (Kuutti 1991)
Relevance to how IS facilitates collaborative activity: There is clear evidence that information systems are not being used to their full potential in collaborative activity (Mawer et al. 2010; Muñoz-Erickson et al. 2010)
Potential future research: Use AT to identify areas where information systems could play a greater role in facilitating collaboration

Characteristic of Activity Theory (AT): AT emphasises the importance of studying the context within which information systems are used, including the social context (Crawford and Hasan 2006; Kaptelinin and Nardi 2006)
Relevance to how IS facilitates collaborative activity: Collaborative activities are inherently socio-technical systems (de Moor and Weigand 2007; Ritter et al. 2007)
Potential future research: Use AT to study how information systems complement/conflict with the roles that people play or the rules that guide these people in the collaborative activity
Second, at the heart of activity theory is an emphasis on information sharing between people in a system, rather than on individual information processing, as is the case in many IS studies (cf. Korpela et al. 2004; Kuutti 1991). The mediated approach adopted by activity theory is, therefore, useful for analysing how people share information with each other as they collaborate. This is particularly pertinent as the employment of information systems shifts from information processing to information sharing and collaboration. Third, there is clear evidence that information systems are not being used to their full potential in collaborative activity (Mawer et al. 2010; Muñoz-Erickson et al. 2010). Because activity theory incorporates the study of both technological and non-technological tools in enabling collaborative activities (Kuutti 1991), this theory could be used by researchers to identify the limitations of information systems in supporting collaborative activities, or to uncover areas where information systems could augment or replace non-technological tools in such circumstances.

Fourth, activity theory emphasises the importance of studying the context within which technology is used, including the social context (Crawford and Hasan 2006; Kaptelinin and Nardi 2006). This makes activity theory particularly useful for analysing how information systems are used to share information between parties who are attempting to collaborate in a social setting. Crawford and Hasan (2006, p. 54) argue that "complex phenomena associated with socio-technical systems...are prime targets for research using activity theory" because activity theory provides a framework for emerging patterns of human activity in terms of changing purposes, awareness, focus of attention and tools. In conclusion, this paper has underlined the need for an improved scholarly understanding of how information systems support information sharing and collaboration within and across organisational boundaries. It has illustrated the suitability of activity theory as an evaluation methodology for further research in this area. Based on a number of key characteristics of activity theory, the paper highlights a number of areas which would benefit from the application of this theory. Activity theory provides a holistic and insightful conceptual vehicle that will allow researchers to investigate and evaluate how collaboration is achieved through all elements of IS (people, process, technology). It can also be employed specifically to examine how information systems support information sharing between people who are collaborating. Used as an evaluation methodology, activity theory can help researchers to unearth the limitations of information systems in supporting collaborative activities and to identify areas where information systems could play an increased role in such contexts.

For example, the researcher is about to undertake a qualitative research project to study how
collaborative decisions are made between General Practitioners (GPs) and patients with multiple
chronic diseases on appropriate healthcare for the patient (see Figure 3). Because these decisions
may involve trade-offs between guidelines/treatments for various diseases, patient preferences are
relevant and decisions are thus collaboratively made between GPs and patients. The aim of the
project is to use activity theory to identify, operationalise and evaluate how information systems could
play a greater role in facilitating collaborative decisions in this specific context.


Figure 3: The doctor/patient activity system, adapted from Engeström (1987)
It is planned to first review the current activity system whereby GPs and patients collaborate to decide on appropriate medication, elective procedures, necessary lifestyle adjustments, etc. It is then proposed to design and develop a new pilot decision support system to aid information sharing and
collaboration between the GP and the patient. The design of this system will be informed by reviewing
how the collaborative decisions are currently mediated by tools (both technological and non-
technological), the roles that GPs and patients play in making the collaborative decisions, as well as
the rules that guide such collaborative decisions. Existing decision aids and paper-based
rules/guidelines for individual chronic diseases will be coded into the new system. Any trade-
offs/conflicts that exist between guidelines for different conditions will be highlighted by the new
system.
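
To make the intended conflict-highlighting behaviour concrete, the sketch below shows one way coded guidelines might be cross-checked for trade-offs. The rule representation, the condition/treatment entries and the function names are hypothetical illustrations, not the design of the planned system:

from itertools import combinations

# Hypothetical guideline rules: each maps a chronic condition to
# recommended and contraindicated treatments (illustrative data only).
GUIDELINES = {
    "type 2 diabetes": {"recommend": {"metformin", "exercise"}, "avoid": set()},
    "chronic kidney disease": {"recommend": {"ace inhibitor"}, "avoid": {"metformin"}},
    "osteoarthritis": {"recommend": {"nsaid", "exercise"}, "avoid": set()},
    "hypertension": {"recommend": {"ace inhibitor"}, "avoid": {"nsaid"}},
}

def find_conflicts(conditions):
    """Return treatments recommended for one condition but contraindicated for another."""
    conflicts = []
    for a, b in combinations(conditions, 2):
        for first, second in ((a, b), (b, a)):
            clash = GUIDELINES[first]["recommend"] & GUIDELINES[second]["avoid"]
            for treatment in sorted(clash):
                conflicts.append((treatment, first, second))
    return conflicts

# A patient with multimorbidity: each tuple reads
# (treatment, condition recommending it, condition contraindicating it)
for t, rec, avoid in find_conflicts(["type 2 diabetes", "chronic kidney disease",
                                     "osteoarthritis", "hypertension"]):
    print(f"Trade-off: {t} recommended for {rec} but contraindicated in {avoid}")

In an activity-theoretic reading, such highlighted trade-offs are points where the mediating tool surfaces tensions between rules, prompting the GP/patient community to negotiate the shared object.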

A number of GPs will then be asked to use the pilot system, and the transformed activity system (i.e. incorporating the new decision support system) will be analysed. Activity theory will be used to evaluate how the objective of the activity (the collaborative decision between the GP and the patient) is currently achieved through people, process and technology, versus how it is achieved with the introduction of the new decision support system. Drawing on activity theory, the researcher will investigate whether the actors involved in the activity (the GP and the patient) acquire an improved understanding of the relevant clinical rules/guidelines that should guide collaborative decisions in the case of multimorbidity. The researcher will also analyse whether the introduction of the new system alters the roles that either party plays or alters the speed/nature of the collaborative decision that is reached.
References
Ahonen, H., and Virkkunen, J. "Shared Challenge for Learning: Dialogue between Management and Front-line
Workers in Knowledge Management," Information Technology and Management (2:1/2) 2003, pp 59-84.
Alter, S. Information Systems: A Management Perspective Addison-Wesley, Menlo Park, California, 1992.
Anthony, A. "Activity Theory as a Framework for Investigating District-Classroom System Interactions and Their
Influences on Technology Integration," Journal of Research on Technology in Education (44:4),
Summer2012 2012, pp 335-356.
Ash, C., and Burn, J. "A Strategic Framework for the Management of ERP enabled E-business Change,"
European Journal of Operations Research (146:2) 2003, pp 374-387.
Bakos, J. "A Strategic Analysis of Electronic Marketplaces," MIS Quarterly (15:3) 1991, pp 295-310.
Bannon, L. "Activity Theory," 1997.
Bannon, L., and Bodker, S. "Beyond the Interface: Encountering Artifacts in Use," in: Designing Interaction:
Psychology at the Human-Computer Interface, J. Carroll (ed.), Cambridge University Press, New York,
1991, pp. 227-253.
Bardram, J. "Designing for the Dynamics of Co-operative Work Activities," Proceedings of the 1998 ACM
Conference on Computer-Supported Cooperative Work, ACM Press, Seattle, 1998a, pp. 89-98.
Bodker, S. "A Human Activity Approach to User Interfaces," Human Computer Interaction (4) 1989, pp 171-195.
Bodker, S. Through the Interface: A Human Activity Approach to User Interface Design Lawrence Erlbaum,
Hillsdale, NJ., 1991.
Boer, N., Baalen, P., and Kuman, K. "An Activity Theory Approach for Studying the Situatedness of Knowledge
Sharing," Proceedings of the 35th International Conference on System Sciences, Hawaii, 2002.
Butler, S. "The Note-Books of Samuel Butler," in: EBook 6173, D. Price (ed.), Project Gutenberg, 1835-1902.
Cagliano, R., Caniato, F., and Spina, G. "E-business Strategy: How Companies are shaping their Supply Chain
through the Internet," International Journal of Operations and Production Management (25:12) 2005, pp
1309-1327.
Christiansen, E. "Tamed By a Rose: Computers as Tools in Human Activity," in: Context and Consciousness:
Activity Theory and Human-Computer Interaction, B. Nardi (ed.), The MIT Press, Cambridge,
Massachusetts, 1996, pp. 175-198.
Collins, P., Shulka, S., and Redmiles, D. "Activity Theory and System Design: A View from the Trenches,"
Computer Supported Cooperative Work (11:1/2) 2002, pp 55-80.
Crawford, K., and Hasan, H. "Demonstrations of the Activity Theory Framework for Research in Information
Systems," Australasian Journal of Information Systems (13:2), May 2006, pp 49-68.
Cullen, A., and Webster, M. "A model of B2B e-commerce based on connectivity and purpose," International
Journal of Operations and Production Management (27:2) 2007, pp 205-225.
de Moor, A., and Weigand, H. "Formalizing the evolution of virtual communities," Information Systems (32:2)
2007, pp 223-247.
Duff, W., and Assad, M. Information Management: An Executive Approach Oxford University Press, London,
1980.
Elliott, S. "Collaborative Advantage: Winning Through Extended Enterprise Supplier Networks (Book)," Journal of
Product Innovation Management (18:5) 2001, pp 352-352.
Engeström, Y. Learning by Expanding: An Activity-Theoretical Approach to Developmental Research, Orienta-Konsultit, Helsinki, 1987.
Engeström, Y. "Interactive Expertise: Studies in Distributed Working Intelligence. Research Bulletin 83," Department of Education, University of Helsinki, 1992, pp. 3-83.
Engeström, Y. "Activity and Individual and Social Transformation," in: Perspectives on Activity Theory, Y. Engeström, R. Miettinen and R. Punamaki (eds.), Cambridge University Press, Cambridge, 1999b, pp. 19-38.
Engeström, Y. "New Forms of Learning in Co-Configuration Work," Journal of Workplace Learning (16:1/2) 2004, pp 11-21.
Engeström, Y., and Miettinen, R. "Introduction," in: Perspectives on Activity Theory, Y. Engeström, R. Miettinen and R. Punamaki (eds.), Cambridge University Press, Cambridge, 1999a, pp. 1-16.
Evans, P., and Wurster, T. "Getting real about Virtual Commerce," Harvard Business Review (77:6) 1999, pp 84-
94.
Hasan, H., and Gould, E. "Support for the sense-making activity of managers," Decision Support Systems (31)
2001, pp 71-86.
Hong, I. "A new Framework for Interorganisational systems based on the linkage of participants' roles,"
Information & Management (4:2) 2002, pp 261-270.
Huang, K.-H. "Learning in Authentic Contexts: Projects Integrating Spatial Technologies and Fieldwork," Journal
of Geography in Higher Education (35:4) 2011, pp 565-578.
Jagdev, H. S., and Thoben, K. D. "Anatomy of enterprise collaborations," Production Planning & Control (12:5)
2001, pp 437-451.
Johnson, C. "Highlight: US e-Commerce to Hit Nearly $230 Billion in 2008," Forrester Research, 2003, http://www.forrester.com/rb/Research/highlight_us_ecommerce_hits_$230_billion_in_2008/q/id/17217/t/17212.
Kaptelinin, V. "Computer-Mediated Activity: Functional Organs in Social and Developmental Contexts," in:
Context and Consciousness: Activity Theory and Human Computer Interaction, B. Nardi (ed.), The MIT
Press, Cambridge, Massachusetts, 1996b, pp. 45-68.
Kaptelinin, V., and Cole, M. "Individual and Collective Activities in Educational Computer Game Playing," in:
CSCL 2: Carrying Forward the Conversation, T. Koschmann, R. Hall and N. Miyake (eds.), Lawrence
Erlbaum, Mahway, NJ, 2002.
Kaptelinin, V., and Nardi, B. Acting With Technology: Activity Theory and Interaction Design The MIT Press,
Cambridge, Massachusetts, 2006.
Kaptelinin, V., Nardi, B., and Macaulay, C. "Methods and Tools: The Activity Checklist: A Tool for Representing the 'Space' of Context," Interactions, July-August 1999, pp 27-39.
Keen, P. G. "Information Technology and the Management Difference: A Fusion Map," IBM Systems Journal
(32:1) 1993, pp 17-39.
Korpela, M., Mursu, A., Soriyan, H., Eerola, A., Hakkinen, H., and Toivanen, M. "Information Systems Research
and Development By Activity Analysis and Development: Dead Horse or the Next Wave?," in: Information
Systems Research: Relevant Theory and Informed Practice: IFIP TC8 WG8.2, B. Kaplan (ed.), Kluwer,
London 2004, pp. 453-471.
Korpela, M., and Soriyan, H. "Community Participation in Health Informatics in Africa: An Experiment in Tripartite Partnership in Ile-Ife, Nigeria," Computer Supported Cooperative Work (7:3-4) 1998, pp 341-361.
Kuutti, K. "Activity Theory and its Applications to Information Systems Research and Development," in:
Information Systems Research Arena of the 90's, H. Nissen, H. Klein and R. Hirschheim (eds.), North
Holland, Amsterdam, 1991, pp. 525-549.
Kuutti, K. "Activity Theory as a potential framework for human-computer interaction research," in: Context and
Consciousness: Activity Theory and Human Computer Interaction, B. Nardi (ed.), The MIT Press,
Cambridge, Massachusetts, 1996, pp. 17-44.
Kuutti, K., and Molin-Juustila, T. "Information System Support for Loose Co-ordination in a Network Organization:
An Activity Theory Perspective," in: Information Systems and Activity Theory: Tools in Context, H. Hasan, E.
Gould and P. Hyland (eds.), University of Wollongong Press, Wollongong, Australia, 1998, pp. 73-92.
Latour, B. "On Technical Mediation: The Messenger Lectures on the Evolution of Civilization, Cornell University, April 1993," Lund University, Lund, Sweden.
Legner, C. "The Evolution of B2B E-Services from First Generation E-Commerce Solutions to Multichannel
Architectures," Journal of Electronic Commerce in Organizations (6:2) 2008, pp 58-76.
Leont'ev, A. Activity, Consciousness and Personality, Prentice-Hall, Englewood Cliffs, NJ, 1978.
Mahadevan, B. "Making Sense of Emerging Market Structures in B2B e-commerce," California Management
Review (46:1) 2003, pp 86-100.
Majchrzak, A., Rice, R. E., Malhotra, A., King, N., and Ba, S. "Technology Adaptation: The Case of a Computer-
Supported Inter-Organizational Virtual Team," MIS Quarterly (24:4) 2000, pp 569-600.
Mawer, A. J., Ng, W., and Jackson, M. D. "Collaboration and Information Sharing Results in Improved Failure
Analysis Tools, Techniques, and Outcomes," Electronic Device Failure Analysis (12:4) 2010, pp 44-46.
Mukherji, A. "The Evolution of Information Systems: Their impact on Organizations and Structures," Management
Decision (40:5) 2002, pp 497-506.
Muñoz-Erickson, T. A., Cutts, B. B., Larson, E. K., Darby, K. J., Neff, M., Bolin, B., and Wutich, A. "Spanning Boundaries in an Arizona Watershed Partnership: Information Networks as Tools for Entrenchment or Ties for Collaboration?," Ecology & Society (15:3) 2010, pp 1-22.
Mwanza, D. "Towards an Activity-Oriented Design Method for HCI Research and Practice," Open University, Milton Keynes, 2002.
Piller, F., and Muller, M. "A New Marketing Approach to Mass Customisation," International Journal of Computer
Integrated Manufacturing (17:7) 2004, pp 583-593.
Prahalad, C., and Ramaswamy, V. "The Co-Creation Connection," Strategy and Business (27) 2002, pp 1-12.
Ritter, J., Lyons, J. B., and Swindler, S. D. "Large-scale coordination: developing a framework to evaluate socio-
technical and collaborative issues," Cognition, Technology & Work (9:1) 2007, pp 33-38.
Sarker, S., Lau, F., and Sahay, S. "Building an inductive theory of collaboration in virtual teams: An adapted
grounded theory approach," in: Proceedings of the 33rd Hawaii International Conference on System
Sciences, 2000.
Shaw, M., Gardner, D., and Thomas, H. "Research Opportunities in electronic commerce," Decision Support
Systems (21) 1997, pp 149-156.
Shin, D. "Distributed Inter-Organizational Systems and Innovation Processes," Internet Research (16:5) 2006, pp
553-572.
Straub, D., and Watson, H. J. "Research Commentary: Transformational Issues in Researching IS and Net-
Enabled Organizations," Information Systems Research (12:4) 2001, pp 337-345.
Thomas, D. M., and Bostrom, R. P. "Vital Signs for Virtual Teams: An Empirically Developed Trigger Model for
Technology Adaption Interventions," MIS Quarterly (34:1) 2010, pp 115-142.
Venkatraman, N., and Henderson, J. "Real Strategies for Virtual Organising," Sloan Management Review (40:1),
Fall 1998, pp 33-48.
Virkkunen, J., and Kuutti, K. "Understanding Organizational Learning by Focusing on Activity Systems,"
Accounting, Management and Information Technologies (10) 2000, pp 291-319.
Vygotsky, L. Mind in Society: The Development of Higher Psychological Processes Harvard University Press,
Cambridge, 1978.
Wertsch, J. "Collective Memory: Issues from a Sociohistorical Perspective," in: Mind, Culture, and Activity: Seminal Papers from the Laboratory of Comparative Human Cognition, M. Cole, R. Engeström and O. Vasquez (eds.), Cambridge University Press, Cambridge, UK, 1997, pp. 226-232.
Wind, J., and Rangaswamy, A. "Customerization: The Next Revolution in Mass Customization," Journal of
Interactive Marketing (15:1) 2001, pp 13-32.
Wiredu, G., and Sorensen, C. "The Dynamics of Control and Mobile Computing in Distributed Activities,"
European Journal of Information Systems (15:3) 2006, pp 307-319.
Witter, D. "The European Internet Report, Industry Report by Morgan Stanley," 1999.
Wu, F., Mahajan, V., and Balasubramanian, S. "An Analysis of E-Business Adoption and its Impact on Business
Performance," Academy of Marketing Science Journal (31:4) 2003, pp 425-447.


Dealing With Uncertainty Through KM: Cases in Four
Software SMEs
Ciara Heavin and Frederic Adam
Business Information Systems, University College Cork, Ireland
c.heavin@ucc.ie
fadam@afis.ucc.ie

Abstract: In the current climate, preparing for change is an issue for companies large and small. For Small to Medium Sized Enterprises (SMEs), where resources are significantly limited, it is imperative that efficient practices are in place to leverage the wealth of knowledge available both inside and outside the firm. It is vital that these organisations are swift and flexible enough to survive in this dynamic environment; this includes developing the ability to take stock of the sources and types of knowledge that are valuable to them and understanding how this knowledge is accessed and integrated into the firm's body of knowledge. Considering the current economic turbulence, never has it been more important to focus on the knowledge capabilities of software SMEs, as it is on the back of these types of small high-tech organisations that innovation, growth and potential recovery will be achieved. Using a qualitative analysis approach in four Irish software SMEs, this study identifies sources of knowledge and occurrences of knowledge activities (KAs) as a means of understanding each firm's approach to knowledge management (KM) and how it may be leveraged, thereby providing these firms with the flexibility to deal with environmental uncertainty.

Keywords: knowledge, knowledge management (KM), small and medium sized enterprises (SMEs), knowledge
activity (KA) and software
1. Introduction
Defining data, information and knowledge as distinct and independent phenomena is a demanding endeavour. In particular, it is noted that many authors use the terms information and knowledge interchangeably; those (e.g. Dennis, Earl, El Sawy, Huber) who considered organisational information processing in the 1970s, 1980s and early 1990s now focus their attention on KM as an organisational strategy. Figure 1 represents data, information and knowledge as a continuum.

Figure 1: Knowledge continuum (after Davenport and Prusak, 1998; Wurman, 2001)
In Figure 1, it is evident that the extremes of each phenomenon are distinct; however, there is significant overlap between data/information and information/knowledge. According to Davenport and Prusak (1998, p147), the distinction between knowledge and information is seen as "more of a continuum than a sharp dichotomy. Most projects that focus on internal knowledge [repository] deal with the middle of the continuum - information that represents knowledge to certain users". Alavi and Leidner (2001, p109) posit that information is converted to knowledge once it is processed in the minds of individuals, while knowledge becomes information once it is articulated and presented in the form of text, graphics, words or other symbolic forms. The point where information becomes knowledge, and vice versa, is difficult to pinpoint with complete accuracy; however, there is no doubt that these phenomena are closely linked. In order to adequately observe and measure knowledge in an organisation, it is essential that an operational definition is established. Supporting the view of Davenport and Prusak (1998) and the point indicated by the arrow in Figure 1, for this study knowledge occurs when
Information represents valuable knowledge to a group focused on achieving a particular task

This definition is used to identify instances or occurrences of individual knowledge types. From a practical perspective, it is essential that an enterprise knows the types of knowledge that it needs to focus on (Zhao et al., 2012). Using the definition presented here, the aim of this study is to understand how software SMEs utilise their knowledge capabilities to achieve their organisational goals. It is important to state from the outset that factors such as leadership, culture, people, organisational structure, technology and business processes are fundamental to a successful KM approach (Hasanali, 2002; McDermott and O'Dell, 2001; Storey and Barnett, 2000; Sunassee and Sewry, 2002); however, they were considered as part of a larger study and are not the core focus of this paper. This paper is structured as follows: the subsequent section briefly outlines the benefits of pursuing KM; next, a classification of knowledge activity is defined, the importance of leveraging KM to deal with environmental uncertainty is discussed and the research methodology is outlined; in addition, the background to each case is presented and the findings are considered; finally, the authors reflect on the research findings and present the conclusions.
2. Harvesting the benefits of KM
Knowledge has become both a source of competitive advantage and a source of organisational empowerment. Nonaka (1994) maintains that organisations must realise the importance of knowledge in order to survive in a highly competitive market place. He postulates that "in an economy where the only certainty is uncertainty, the one sure source of lasting competitive advantage is knowledge" (Nonaka, 1994, p14). Stewart (1997) further argues that knowledge has become the most important factor in economic life. He acknowledges that knowledge is the chief ingredient in what organisations buy and sell, and the raw material with which organisations work. Intellectual capital, not natural resources, machinery or even financial capital, has become the one indispensable asset of corporations (Stewart, 1997). With due consideration, the capacity to incorporate and apply the specialised knowledge of organisational members is fundamental to a firm's ability to create and sustain competitive advantage (Drucker, 1993). This focus has forced organisations to re-think the way they manage their business, since the emphasis is no longer on tangible assets but on people's abilities and experiences (Sunassee and Sewry, 2002). As a result, organisations are identifying strategies and technologies to manage this knowledge, with the objective of gaining maximum benefit from an organisation's knowledge pool (O'Dell and Grayson, 1998; Sunassee and Sewry, 2002).
However, organisational knowledge is of limited organisational value if the knowledge is not shared
and managed (Alavi and Leidner, 1999). Alavi and Leidner (1999) maintain that KM has emerged as a
new philosophy to control and support the flow of knowledge in an organisation. In addition, Bansler
and Havn (2002) purport that KM contributes to improved organisational productivity, flexibility and
innovation capabilities by enabling employees to share, integrate and reuse knowledge more
effectively. Sunassee and Sewry (2002) further suggest that companies which have implemented KM
solutions are better equipped to deal with business situations, as these companies have access to
previous know-how. As a first step, the nature of organisational knowledge activities (KAs) is considered as a key component of KM.
3. Classification of knowledge activities
For the purpose of this research, a definition of KA proposed by Kraaijenbrink et al. (2006, p23) is adopted: "transactions or manipulations of knowledge where the knowledge is the object not the result". It is evident that multiple researchers use different terms for the same or similar activities. Many of these definitions share common verbs, such as storing, creating and applying knowledge in an organisational context. This research takes a balanced view of KAs, discounting the activities
proposed by Leonard-Barton (1995), as they have a sole technical focus. This research summarises
the terms widely used to describe KAs including knowledge acquisition (Alavi and Leidner, 2001;
Huber, 1990; Holsapple and Joshi, 2004; Kraaijenbrink et al., 2006), codification (Davenport and
Prusak, 1998; Faran et al., 2006; Kraaijenbrink et al., 2006; Nevo et al., 2007), storage (Alavi and
Leidner, 2001; Huber, 1991), maintenance (Conway and Sligar, 2002; Holsapple and Singh, 2004;
Holsapple and Whinston, 1996), transfer (Alavi and Leidner, 2001; Huber, 1990; Nonaka and
Takeuchi, 1995) and creation (Davenport and Prusak, 1998; Kayworth and Leidner, 2004; Nonaka
and Takeuchi, 1995; Pentland, 1995).
4. Developing knowledge capabilities to deal with uncertainty
Organisations are cognitive in nature; as a result, they learn and develop knowledge (Argyris and Schön, 1978). Hedberg (1981) defines organisational learning as a two-pronged process: the first prong is where organisations adjust themselves to deal with reality, and the second is where they effectively leverage knowledge to improve their fit with the external environment. In order to achieve this, the organisation must have mechanisms "to learn about and interpret external events" (Daft and Lengel, 1986, p566). In order to maintain and develop organisational memory, it is essential that an organisation learn from both its internal context and its external environment (Bennet and Bennet, 2004).

"Organisations have no other brains and senses than those of their members" (Hedberg, 1981, p6). Considering this perspective, an organisation as an entity is completely reliant on the quality and expertise of the sum of its employees. However, Argyris and Schön (1978) suggest that organisations often know less than the sum of their members. This may be due to communication issues, e.g. information filtering, distortion and channel overload (Argyris and Schön, 1978). The lack of a formal learning/knowledge repository can contribute to this. Huber (1989) points out that if knowledge is not formally stored, it may be lost on three counts: firstly, through staff turnover; secondly, through an organisation not knowing what to store based on future needs; and finally, through an inability to share knowledge. One example of the benefits that may be derived from maintaining a knowledge repository is the Chrysler automobile company, which used Engineering Books of Knowledge to store an electronic memory of engineers' past experiences (Davenport and Prusak, 1998). This repository was actively leveraged to inform engineers' decision making in subsequent development projects.

In terms of problem solving, organisations build an advantage in boom times; however, slack reduction acts as an environmental indicator of crisis, which can activate problem-solving mechanisms, and this in turn can lead to organisational learning (Hedberg, 1981). In addition, Pounds (1969) considers management learning through problem finding. Where Hedberg (1981) presents slack as a trigger, Pounds (1969) suggests that problems can be triggered through discrepancies in historical models. These models act as an archive of past experience used to estimate the short-term future, though Pounds (1969) admits that in some cases these models were carried in the heads of management, supported by routine reports. However, opportunity triggers are less evident, as problem triggers are more common (Hedberg, 1981). Yet organisations may identify new opportunities in the market which, in turn, facilitate learning. Together, KM, organisational learning and memory influence how organisations deal with knowledge and its impact on organisational effectiveness (Jennex and Olfman, 2002).
5. Research approach
This study pursued a qualitative analytical approach (Ågerfalk and Fitzgerald, 2008) using multiple case studies; each case was selected using purposeful sampling (Patton, 1990). The cases were selected based on their size and industry sector. The software industry is a knowledge industry: "Its major product is knowledge itself and its major output is research which translates into new products and services" (Bernroider, 2002, p562). Software development may be characterised as knowledge work (Schönström and Carlsson, 2003). As the objective of this study was to explore the knowledge approach leveraged by small software development firms, the focus of the study was on the two core business processes of sales and software development. Based on a selection strategy, positional methods were used to identify sales and technical managers, while other respondents were selected based on reputation (Knoke, 1994). Twenty-two individuals were interviewed; each interview was approximately one hour in duration. Interviews were taped and transcribed. The exploratory nature of the study, coupled with the "thick transcripts" (Miles and Huberman, 1994, p56), meant that qualitative analysis could be conducted through the use of coding techniques (Miles and Huberman, 1994). The classification of KAs was used for the purpose of data analysis in this study. Each KA was assigned a code, and this code was utilised to classify the nature of KAs; these categories were then assigned chunks of data derived from the interview transcripts. Each transcript was analysed using the KA codes and a memo was generated at the level of the interview.
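
To illustrate this style of analysis, the sketch below tallies coded transcript segments by KA category, mirroring the intensity figures reported in the findings. The category labels follow the classification in Section 3, while the data structures, example chunks and function names are hypothetical, not the instruments actually used in the study:

from collections import Counter

# The six KA categories summarised in Section 3
KA_CODES = {"acquisition", "codification", "storage",
            "maintenance", "transfer", "creation"}

def tally_kas(coded_segments):
    """Count occurrences of each KA code across coded transcript chunks.

    coded_segments: list of (ka_code, text_chunk) pairs produced
    during qualitative coding of an interview transcript.
    """
    counts = Counter(code for code, _ in coded_segments if code in KA_CODES)
    total = sum(counts.values())
    if not total:
        return {}
    # Express each KA's intensity as a share of all KAs, as in Tables 1 and 2
    return {code: (n, round(100 * n / total)) for code, n in counts.items()}

# Hypothetical coded chunks from one transcript
segments = [
    ("storage", "project documents are saved to the Intranet"),
    ("transfer", "the Technical Director briefs the team weekly"),
    ("storage", "customer details are logged in the sales system"),
    ("acquisition", "we read travel industry journals"),
]
print(tally_kas(segments))   # e.g. {'storage': (2, 50), 'transfer': (1, 25), ...}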
6. Findings
6.1 Background to cases

6.2 KM approach at HelpRead Ltd
HelpRead was focused on building a collective organisational memory that facilitates continued growth through the introduction of new hires and new products. This is particularly important to the company in terms of acquiring external knowledge to inform new product development. Table 1 identifies 82 instances of KA at HelpRead; the majority of activities presented themselves through knowledge acquisition, storage and transfer.
Table 1: Distribution of KAs at HelpRead Ltd

The study identified 82 KAs, while 113 instances of knowledge types were uncovered. The difference between these figures can be explained by single KAs leveraging multiple knowledge types in some instances, thus increasing the number of knowledge-type instances identified in the analytic memos. It is also important to note that, at the time, HelpRead Ltd. was not in a new product development phase. In Table 1, the difference in intensity between these types of activities is indicative of HelpRead's position as a growing organisation. Knowledge acquisition intensity stood at 21 percent (n=82), and fifty-three percent of all knowledge acquisition activity was focused on gathering product knowledge, supporting Groen's (2006, p124) view that high-technology SMEs require knowledge from external sources to support new product development.

At 13 percent (n=82), codification activity was of relatively low intensity; this was reflective of the uncertainty around what the company needs to know in the future. This is predominantly evident with the Technical FAQ, which lacked buy-in from the entire development team. The Development Manager admitted that, as a team, they "didn't know what they should know". Most codification activity was directly related to refining the discussions at group meetings into documents which are made available over the Intranet. Over 90 percent (n=11) of all codification activity identified in Table 1 was related to product development knowledge. Codification was largely not a sales-related activity. The well-defined scope of the Goldmine™ sales system meant that no KA was required to support the refinement and distillation of sales-related knowledge. In addition, the experience of the sales team meant they knew what important customer and sales-related knowledge should be stored for future use.

The high occurrence of storage activities, at 29 percent, was indicative of the importance placed on storing knowledge in the new Intranet-based quality system: approximately 74 percent (17 of n=24 storage activities) of storage activity involved the Intranet. These activities primarily included storing software project documents and employee skills documents, in line with the compliance requirements outlined by ISO 9001:2000. The storage intensity also included the level of customer information captured and stored by the sales team. This 29 percent reflected the move to store the knowledge gathered from acquisition, codification and transfer activities. Maintenance activities, at 10 percent, highlighted the company's focus on maintaining software and product development knowledge. Surprisingly, transfer activity was of high intensity at 19 percent; on closer inspection, the role of the Technical Director was integral to this. At 6 percent, knowledge creation was very low. While Table 1 shows that 80 percent of knowledge creation activity was focused on product knowledge, in line with company strategy, the lack of other types of knowledge creation may be explained by the pressures associated with the recent rapid growth in employee headcount and the increased product portfolio.
6.3 KM approach at TravelSoft Ltd
At the time of interview, a new Application Solutions Manager had been in place at TravelSoft for approximately eight months. Coming from a telecoms background, he implemented a number of organisational strategies to develop embedded processes and, most importantly, to bring a new product to the travel software marketplace. It is primarily these management initiatives that contributed to the high number of KAs (147 instances) presented in Table 2.
Table 2: KAs at TravelSoft Ltd

Some of the KAs identified used more than one type of knowledge during a single KA, which explains the 211 instances of knowledge types identified for TravelSoft. The knowledge focus at TravelSoft was quite consistent and reflected the company's strategic objectives. The emphasis on software development, project, process and product knowledge was marked. Knowledge of the travel industry made up a quarter of the knowledge acquisition activity.

While 82 instances of KA were observed at HelpRead Ltd., KA at TravelSoft was considerably higher at 147 instances. This intensity may be explained by a number of factors. Knowledge acquisition activity, at 11 percent (16 of n=147), was due to the acquisition of consultant knowledge on new product development, employee training, relevant books, journals and travel conferences. In terms of codification, at 20 percent of activity, project-related knowledge was refined and stored. At 21 percent, storage activity was almost in line with the volume of codification activity, showing that TravelSoft were good at following through on this type of activity; for example, the steering committee refined and stored the new Adept framework templates in the relevant artefacts. R&D acquisition, codification, storage and maintenance of knowledge contributed to the dense volumes of KA. Activities such as Internet research in the travel area added to the level of knowledge acquisition activity, while refining and storing this knowledge contributed to the volume of codification and storage activity. At 14 percent, maintenance activity was lower than knowledge codification and storage activity. This could be because some of the stored knowledge did not require updating; for example, conference and journal papers on the travel industry will not be changed, although new papers may be added over time, resulting in increased storage activity.

Knowledge transfer, at 28 percent (41 where n=147), represented the highest volume of KA, leveraging a variety of routine and non-routine modes (outlined in the next section). This organisation encouraged knowledge transfer at all levels. Knowledge creation was much lower, at 9 instances (6 percent where n=147). These activities were all generated around new product and process development, placing these initiatives at the core of all KAs at TravelSoft at that time. Table 2 shows that 66 percent of KA at TravelSoft was spread across knowledge acquisition, codification, storage and maintenance activity, while transfer and creation activity accounted for 34 percent of all KA. By comparison, the distribution at HelpRead for the same activities was 73 percent and 27 percent respectively. This suggests that, through their change process, TravelSoft were good at leveraging the more valuable types of KA.
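The split reported above is straightforward arithmetic over the published activity intensities. The following minimal sketch (in Python; purely illustrative, using the percentages as stated in the text) makes the grouping of the "earlier" activities versus transfer/creation explicit:

```python
# Illustrative only: recomputing the early/late KA split at TravelSoft
# from the activity intensities reported in the text (percentages of n=147).
ka_intensity = {
    "acquisition": 11, "codification": 20, "storage": 21,
    "maintenance": 14, "transfer": 28, "creation": 6,
}
earlier = ("acquisition", "codification", "storage", "maintenance")
later = ("transfer", "creation")

earlier_share = sum(ka_intensity[k] for k in earlier)  # 66 percent
later_share = sum(ka_intensity[k] for k in later)      # 34 percent
print(f"earlier activities: {earlier_share}%, transfer/creation: {later_share}%")
```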
6.4 KM approach at Systems Solutions Ltd
KA at Systems Solutions was mostly characterised by its informal nature. The Managing Director admitted that, when he was involved with requirements analysis for the business intelligence division, the knowledge was documented and stored in an A4 pad. One Project Manager from the Application Division admitted that it was not uncommon to calculate a project price on the back of a piece of paper in the car park before attending a meeting with a prospective customer. Table 3 illustrates a total of 105 KAs identified.
Table 3: KAs at Systems Solutions

One hundred and thirty-one instances of knowledge type were identified across the KAs, indicating that some KAs leveraged multiple knowledge types. Knowledge acquisition and maintenance were exceptions in terms of their knowledge focus. Knowledge acquisition was focused on product and customer knowledge; these knowledge types were largely relevant to the Business Service Management and SAP Solutions divisions, which focused on software resale. Knowledge maintenance activity was focused on sales knowledge, at 38 percent. This emphasis on sales primarily reflects the knowledge requirements of these two divisions. From Table 3, the other KAs were focused on software development and project knowledge, serving the knowledge needs of the Business Intelligence (data warehousing) and Application Development divisions.

Project-related knowledge was codified, stored and maintained in order to meet the requirements of pharmaceutical customers, who must abide by Food and Drug Administration (FDA) regulations. From Table 3, it is evident that, at 26 percent, storage activity was higher than codification and maintenance activity combined, which together stood at 21 percent. This may mean that Systems Solutions store large volumes of documentation without refining and formatting it and, in the longer term, without updating it. As a result, it seemed that they held these large volumes for the sole purpose of protecting themselves from external threats such as possible audits. At 15 percent, knowledge acquisition appeared important; however, more than half of this activity was attributed to sales and customer interaction.

At 5 percent, knowledge creation activity was very low. The Managing Director was the main source of knowledge creation activity at Systems Solutions. It seems that the time pressures associated with meeting project deadlines left little time for knowledge creation amongst the divisions. In the case of Systems Solutions, knowledge creation was not the responsibility of those at an operational level.

Knowledge acquisition, codification, storage and maintenance accounted for 66 percent of all KAs, while knowledge transfer and creation amounted to 34 percent. This was consistent with TravelSoft, though it differed in the case of HelpRead, whose focus on knowledge storage activity through the new company Intranet tipped the balance of KA distribution towards the earlier activities.
6.5 KM approach at DocMan (Ireland) Ltd
At DocMan (Ireland) the total volume of KA was low in comparison to the other cases considered. This may be explained by the well-defined nature of the software development component work at the DocMan site in Ireland. The operations at the Irish site form part of a larger document management software product, and the output from DocMan (Ireland) was integrated by the software integrator at the Swiss headquarters. As a result of this task specificity, there was a set of core KAs from which there was minimal diversification at the Irish site. The breakdown of KAs for DocMan (Ireland) Ltd. is presented in Table 4.
Table 4: KAs at DocMan (Ireland) Ltd

Table 4 illustrates a significant level of knowledge consistency across all of the KAs. Software development and project knowledge represented at least 57 percent of the knowledge focus for all six KAs. This uniformity across activities also reflects the task specialisation at the DocMan (Ireland) site.

DocMan (Ireland) leveraged some external knowledge resources, with knowledge acquisition at 12 percent (7 where n=60); however, the main source of knowledge was the headquarters in Switzerland, reached through knowledge transfer activity, which was very high at 40 percent (24 where n=60) of total activity. It was from here that the majority of customer requirements were received, as well as any new product knowledge.

Knowledge creation activity was very low, at 3 percent (2 where n=60). This may be a result of the location of this development team, with most new ideas being generated at a higher level at company headquarters.

Although the total volume of KA was low, the split between the acquire, codify, store and maintain KAs, at 57 percent, and the knowledge transfer and creation activity, at 43 percent, was more evenly balanced than that observed at HelpRead Ltd. (73 percent to 27 percent respectively). The geographic location, the task specialisation and the maturity of the parent organisation may explain the knowledge transfer capabilities identified at DocMan (Ireland).
7. Discussion
The previous section explores the distribution of KAs across four software SMEs. As part of this consideration, it is imperative to take a closer look at the firms' motivations for pursuing different KAs in terms of their wider organisational objectives. This is even more crucial in the case of an SME: because SMEs are more susceptible to external forces, their ability to leverage the resources available to them, and thereby remain flexible enough to pursue alternative organisational goals, is essential. Table 5 provides a sample of the organisational goals pursued by the SMEs. Each goal is characterised by the knowledge types and KAs used to achieve the objectives of the firm at a particular time.
Table 5: Achieving organisational goals through knowledge activity

Table 5 highlights that KM approaches differ across organisations; this is typically due to differences in culture, organisational structure and organisational objectives, so organisations need to identify a unique strategy that suits their needs. Hansen et al. (1999, p109) state that "a company's knowledge management strategy should reflect its competitive strategy", and this is widely reinforced in the extant literature (Davenport and Prusak, 1998; Hasanali, 2002; Sunassee and Sewry, 2002). Aligning KM strategy to the business strategy seeks to clarify what the company must know in order to realise what the company can do. It is well supported that organisations which did not develop a separate KM strategy ended up with their KM initiative losing focus, priority and impact (Chourides et al., 2003). Considering the lens (KAs) used in this study to characterise each firm's KM approach, Figure 2 below provides a holistic view of the key components examined: knowledge type, KAs, the underlying organisational strategy and the benefits derived from the KM approach. Figure 2 illustrates a diagnostic instrument which could provide SMEs with the capability of tangibly measuring their current KM approach.

Ideally, in times of uncertainty, the firm should be flexible enough to leverage knowledge capabilities in order to pursue the goals of the organisation at that time. This formalised, systematic approach may result in the establishment of knowledge rules which can be followed, enabling the firm to develop embedded KAs. By doing this, the organisation can learn from past experience to inform future development.

Figure 2: An overview of the KM approach
8. Conclusion
KM may be achieved if it is closely aligned with the strategic needs of the organization. This approach seeks to identify the organization's requirements and evaluate a knowledge strategy based on the business's strategic vision. In a software development company, one organisational objective may be to improve the efficiency of the software developers as a means of increasing profits on individual projects; the corresponding knowledge strategy may be to avoid "reinventing the wheel" by leveraging existing programming code in new projects. Ideally, the appropriate knowledge capabilities should be in place to respond to the changing objectives of the firm, or even to support multiple goals, e.g. new product development activity alongside a focused sales strategy to improve customer relationship management. SMEs need to give formal consideration to their KM approach in order to manipulate knowledge in a way that serves their specific decision-making needs at a particular time.
References
Ågerfalk, P. J. and Fitzgerald, B. (2008) "Outsourcing to an Unknown Workforce: Exploring Opensourcing as a Global Sourcing Strategy", MIS Quarterly, Vol. 32, No. 2, pp. 385-409.
Alavi, M. & Leidner, D.E. (2001) Knowledge Management and Knowledge Management Systems: Conceptual
Foundations and Research Issues, MIS Quarterly, 25(1), pp. 107-136.
Argyris, C. and Schön, D.A. (1978) Organizational Learning: A Theory of Action Perspective, Addison-Wesley, Reading, MA.
Bansler, J.P. and Havn, E. (2002) "Exploring the Role of Network Effects in IT Implementation: The Case of Knowledge Management Systems", 10th European Conference on Information Systems, June 6-8, pp. 817-829.
Bennet, A. and Bennet, D. (2004) "The Rise of the Knowledge Organization", The Handbook on Knowledge Management Volume 1, Ed. C.W. Holsapple, Springer, Germany, pp. 1-20.
Bernroider, E. (2002) Factors in SWOT Analysis Applied to Micro, Small-to-Medium, and Large Software
Enterprises: An Austrian Study, European Management Journal, Vol. 20, No. 5, pp. 562-573.
Chourides, P., LongBottom, D. and Murphy, W. (2003) Excellence in knowledge management: an empirical
study to identify critical factors and performance measures, Measuring Business Excellence, Vol. 7, No.2,
pp. 29-45.
Conway, S. and Sligar, C. (2002) Unlocking Knowledge Assets, Microsoft Press, Redmond, WA.
Daft, R. L. and Lengel, R.H. (1986) Organizational information requirements, media richness and structural
design, Management Science, Vol. 32, No. 5, pp. 554-571.
Davenport, T.H. and Prusak, L. (1998) Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston, MA.
Drucker, P. (1993) The Post-Capitalist Society, Butterworth-Heinemann, Oxford.
Faran, D. (2006) "Assessment: Making Sense of it All", Knowledge Integration: The Practice of Knowledge Management in Small and Medium Sized Enterprises, Jetter, A., Kraaijenbrink, J. and Wijnhoven, F. (eds), pp. 101-114.
Hansen, M., Nohria, N. and Tierney, T. (1999) "What's Your Strategy for Managing Knowledge?", Harvard Business Review, March-April, pp. 106-116.
Hasanali, F. (2002) Critical Success Factors of Knowledge Management, Available at: http://www.providersedge.com/docs/km_articles/Critical_Success_Factors_of_KM.pdf (Last accessed June 14th 2009).
Hedberg, B. (1981) How organisations learn and unlearn, Nystrom and Starbuck (Eds) Handbook of
Organisational Design, Vol. 2, Oxford University Press, England, pp. 3-27.
Holsapple, C.W. and Whinston, T. (1987) "Knowledge-based organizations", The Information Society, Vol. 2, pp. 77-90.
Holsapple, C. and Singh, M. (2004) "The Knowledge Chain Model: Activities for Competitiveness", Handbook on Knowledge Management, Ed. Holsapple, C.W., Springer-Verlag, Berlin.
Holsapple, C. and Joshi, K. (2004) "A Knowledge Management Ontology", Handbook on Knowledge Management, Ed. Holsapple, C.W., Springer-Verlag, Berlin.
Huber, G.P. (1984) The Nature and Design of Post Industrial Organisations, Management Science, Vol. 30, No.
8, pp. 928-951.
Huber, G.P. (1989) A Theory of the Effects of Advanced Information Technologies on Organizational Design,
Intelligence, and Decision Making, Organizations and Communication Technology, pp. 237-274.
Huber, G.P. (1990). A theory of the effects of advanced information technologies on organizational design,
intelligence, and decision making, Academic Management Review, 15(1), pp47-71.
Huber, G.P. (1991), Organisational Learning: The contributing Processes and the Literatures, Organisation
Science, Vol. 2, No.1, pp88-115.
Jennex, M. E. and Olfman, L. (2002) Organizational Memory/Knowledge Effects on Productivity, A Longitudinal
Study, HICSS35, IEEE Computer Society.
Kayworth, T. and Leidner, D. (2004) "Organizational Culture as a Knowledge Resource", Handbook on Knowledge Management 1, Ed. Holsapple, C.W., Springer-Verlag, Berlin.
Knoke, D. (1994) Networks of elite structure and decision making, in Advances in Network Analysis: Research in
the Social and Behavioural Sciences, eds. S. Wasserman and J. Galaskiewicz, Sage, Thousand Oaks, CA.
Kraaijenbrink, J., Faran, D. and Hauptman, A. (2006) "Knowledge Integration by SMEs: Framework", Knowledge Integration: The Practice of Knowledge Management in Small to Medium Sized Enterprises, Eds Jetter, A., Kraaijenbrink, J., Schroder, H. and Wijnhoven, F., Springer.
Leonard-Barton, D. (1995) Wellsprings of knowledge: building and sustaining the sources of innovation, Boston ,
MA: Harvard Business School Press.
McDermott, R. and O'Dell, C. (2001) "Overcoming Cultural Barriers to Sharing Knowledge", Journal of Knowledge Management, Vol. 5, No. 1, pp. 76-85.
Miles, M.B and Huberman, A.M. (1994) Qualitative Data Analysis, Sage Publications.
Nevo, S., Wade, M.R., Cook, W.D. (2007). An examination of the trade off between internal and external IT
capabilities, Journal of Strategic Information Systems, 16, pp5-23.
Nonaka, I. (1994) A Dynamic Theory of Organizational Knowledge Creation, Organization Science, Vol. 5, pp.
14-37.
Nonaka, I. and Takeuchi, H. (1995) The Knowledge-Creating Company. Oxford University Press, New York, NY.
O'Dell, C., Grayson, C.J.J. (1998) If Only We Knew What We Know: The Transfer of Internal Knowledge and
Best Practice, The Free Press, New York, NY.
Patton, M. Q. (1990) Qualitative Evaluation and Research Methods, Sage Publications, Thousand Oaks,
California.
Pentland, B. (1995). Information systems and organizational learning: the social epistemology of organizational
knowledge systems. Accounting, Management and Information Technologies, Vol. 5 No. 1, pp1-21.
Pounds, W. F. (1969) The Process of Problem Finding, IMR, Fall, pp. 1-19
Steward, T.A. (1997) Intellectual Capital: The New Wealth of Organisations, Nicholas Brealy, London.
Schönström, M. and Carlsson, S.A. (2003) "Methods as Knowledge Enablers in Software Development Organizations", In Proceedings of the Eleventh European Conference on Information Systems, pp. 1707-1718.
Storey, J. and Barnett, E. (2001) Knowledge Management initiatives: learning from failure, Journal of
Knowledge Management, Vol. 4, No. 2, pp.145-146.
Sunassee, N. N. and Sewry, D. A. (2002) "A Theoretical Framework for Knowledge Management Implementation", Proceedings of SAICSIT 2002, pp. 235-245.
Szulanski, G. (1994) Intra-Firm Transfer of Best Practices Project. American Productivity and Quality Center.
Houston, Texas.
Wurman, R. S. (2001) Information Anxiety 2. Que, Indiana, USA.
Zhao, J., Ordóñez de Pablos, P. and Qi, Z. (2012) "Enterprise knowledge management model based on China's practice and case study", Computers in Human Behavior, Vol. 28, pp. 324-330.
Analyzing Lessons Learned to Identify Potential Risks in
new Product Development Projects
Vered Holzmann
Faculty of Management of Technology, Holon Institute of Technology (H.I.T),
Holon, Israel
veredhz@hit.ac.il

Abstract: The paper presents a methodological implementation of the synergetic relation between past and
future by utilizing analysis of past occurrences to mitigate future possible risks. When an organization undertakes
the development of a new product, it should prepare a risk management plan that can outline the guidelines to
mitigate possible uncertainties with a negative impact on the development project or the expected product. This
process can be performed by analyzing the information accumulated during previous projects executed by the
organization during the processes of developing other deliverables. The method of transforming organizational
information into applicable risk management guidelines is based on two research techniques: content analysis
and cluster analysis. The content analysis approach requires a deep understanding of the lessons learned
dataset collected from past projects while documenting undesired events or failures. This process integrates
qualitative and quantitative procedures including a review of each record, a classification of relevant hazardous
factors, and a computation of recurring factors in completed projects and processes. The cluster analysis
approach uses the risk dataset to create a risk tree that represents relative weights for each risk factor, while
considering the reoccurrence of similar events in similar circumstances. Implementation of the methodology to identify potential risks, in particular by analyzing lessons learned in technological organizations, yielded an interesting organizational risk tree that shows a substantial weight accumulation in the areas of miscommunication and misunderstanding of stakeholder responsibilities. The results emphasized the following
susceptible items: responsibility definition, delivery method, communication and information needs and
responsibilities, and change management. The findings exposed the 'soft skills' of project managers and project
teams, rather than technical issues or engineering problems, as being the vulnerable areas that should be
managed carefully in order to finish the project successfully. The study offers a generic validated methodology for
risk identification based on analysis of lessons learned, supported by results of an implementation of the
methodology in a high-tech company. The method of analysis can be applied by managers of new product
development projects to identify risk issues, classify them into groups, and construct a risk tree that represents
the project risk areas and their relative weights.

Keywords: risk management; lessons learned; project management; clustering; content analysis
1. Introduction
Every organization aims to gain a competitive advantage over the other organizations playing in the same arena. Especially today, when the business battlefield is usually global, very dynamic and subject to countless changes, any opportunity to be better prepared for the future creates an advantage. Therefore, a more accurate and relevant risk management plan will give any organization this required advantage over its competitors. The current research was targeted to provide a better and innovative method to set up risk management plans based on a systematic methodology constructed of an array of consecutive sequential steps. The development of the methodology is based on two assumptions: (1) Modern organizations manage a portfolio of projects. Many organizations nowadays operate through a continuous implementation of projects. Although every project is a different endeavor, organizations are constantly moving towards achieving their vision by managing a portfolio of projects. Since all, or at least most, of the organization's past projects were performed within a similar environment, and usually managed and at least partially carried out by the same teams, it is worthwhile to embrace a broad approach that takes into account not only the individual project but rather the organizational project portfolio (Kerzner, 2005). (2) Every organization's archive contains a huge volume of valuable historical information. Almost every modern organization possesses an archive that contains a huge amount of information that can be transposed into valuable management knowledge. Modern communication technologies enable people and organizations to create, distribute, receive and store large quantities of data. There are a number of tools and techniques that an organization might use in order to transform its data into information, and further into knowledge. Since the organization already has this information, no additional effort is required for its production (Lipshitz, Popper and Friedman, 2008; Hill, 2008).

Based on these two concepts, the research was targeted to develop a structured methodology for project-oriented organizations to design the guidelines for a risk management plan based on an analysis of documents that already exist in the organization. Therefore, the research objectives were defined as follows:
To develop a generic methodology that can identify future organizational risks based on documented organizational history. The methodology focuses on representing the transposition of existing information, available in the form of documents or electronic records, either structured or unstructured, into valuable managerial knowledge, for the purpose of identifying prospective events, actions or occurrences.
To demonstrate the generic methodology by formulating a Risk Breakdown Structure (RBS) from lessons learned documents. The study was targeted to generate a specific risk management tool, an RBS, which is an organizational risk hierarchy, from a specific type of document, i.e., lessons learned documents.
The paper starts with a review of risk management tools and techniques, especially those based on
documentation analysis methods, followed by a brief description of the developed methodology to
transform lessons learned into outputs of a risk management plan. Then, a case study in an Israeli
high-tech company is presented. The detailed analysis is presented with a list of identified risks,
categorized by subjects and interrelations. Final comments summarize the results and the contribution
of the current research to the academic and professional communities.
2. Risk management tools and techniques
Cohen and Palmer (2004) define risk as "the potential for complications and problems with respect to the completion of a task and the achievement of a goal" (p. IN11). Other definitions emphasize the perception of risks as uncertainties, and thus suggest a broader interpretation of the term risk as management of opportunities, i.e., positive occurrences, rather than only management of potential negative risks (e.g., Ward and Chapman, 2003; Garrett, 2005; Olsson, 2007).
2.1 Risk management methodology
The Guide to Project Management Body Of Knowledge (PMBOK Guide), published by the Project
Management Institute (PMI, 2008), describes the knowledge area of project risk management as the
processes that address risk management planning, identification, analysis, response, and monitoring
and control. Project risk management processes are targeted to increase the probability and impact of
events that are expected to positively affect the project as well as to decrease the probability and
impact of events that are expected to negatively affect the project or the achievement of its objectives.
Effective implementation of a risk management methodology and practice positively affects the
success rate of any project or process. The relationship between risk management and project
success has been described by various researchers investigating different industries (e.g., Raz,
Shenhar and Dvir, 2002; Cook, 2005; Nalewaik, 2005; Marxt and Link, 2002). The risk management
methodology presented by the PMBOK Guide (PMI, 2008) starts with risk identification, proceeds
through risk qualitative and quantitative assessment, followed by planning risk response, and ends
with a continuous process of risk monitoring and control. Similar methodology is presented by the SEI
(Software Engineering Institute) (Higuera and Haimes, 1996) and by the ERM (Enterprise Risk
Management) framework, developed by the Committee of Sponsoring Organizations of the Treadway
Commission (COSO, 2004). The PMBOK Guide methodology can be described by the following
chart.

Figure 1: Risk management methodology according to the PMBOK Guide
The risk identification process detects prospective events which might affect the project and
documents their characteristics. Risk assessment deals with evaluating two fundamental parameters
with regard to each identified risk event: the probability of occurrence and the impact of the identified
risk events. The risk evaluation might be either qualitative or quantitative, and it is used to grade each
of the identified risk events in order to prioritize them. Risk response planning involves the
development of optional actions targeted to increase opportunities and to reduce threats to project
activities. There are various strategies that can be implemented, including avoid, transfer, reduce, or
accept threat risks and exploit, share, enhance, or accept opportunity risks. The risk control process
tracks identified risks, monitors residual risks, identifies new risks and executes risk response plans.
Risk management outputs should be updated and communicated to relevant stakeholders throughout
the project lifecycle.
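To make the assessment step concrete, the sketch below (in Python; the risk events and scores are hypothetical, not drawn from any study data) grades each identified risk by probability times impact and ranks the register so that response planning can address the highest-scoring items first:

```python
# Hypothetical risk register: (risk event, probability 0-1, impact 1-5).
risks = [
    ("key supplier delays delivery", 0.4, 4),
    ("requirements change late in the project", 0.6, 3),
    ("test environment unavailable", 0.2, 2),
]

# Grade each risk event by probability x impact, then rank so the
# highest-scoring risks are prioritized in risk response planning.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for event, probability, impact in ranked:
    print(f"{event}: score {probability * impact:.1f}")
```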

The initiating and probably the most important phase of risk management is risk identification. This phase involves the detection and classification of all known, and as far as possible also unknown, risks, thus producing the foundation on which the whole risk management process can be established (Chapman, 2001). Therefore, the current study focuses on tools to enhance risk identification.
2.2 History-based methods for risk identification
Organizational information and knowledge, as a product of historical occurrences, provides a wide basis for the analysis of possible future events. Historical information analysis techniques draw either on the investigation of the experiences of employees and managers, or on the investigation of documents produced continuously by various stakeholders recording different activities. History-based methods include checklists, interviews, and documentation reviews.

A checklist is an easy-to-use and effective aid which represents a list of possible risks, based on historical information and accumulated organizational knowledge. A generic checklist may be generated from published research such as Capers Jones's software risks, Rex Black's quality risks, or Barry Boehm's top-ten risk list (Ravindranath, 2007). Alternatively, the checklist can be provided by an established professional association such as the Software Engineering Institute (SEI, 2005), the National Institute of Standards and Technology (NIST, 2002), or other professional institutes in the industry. A specific checklist might be produced by an organization, thus capturing particular risks that previous projects within the organization have been exposed to (Hillson, 2004).

Interviews are conducted in order to capture existing knowledge by talking to people. Project team members and managers, experienced project participants, project stakeholders and professional experts are interviewed to produce a comprehensive list of risks (Zsidisin, Panelli and Upton, 2000).

Documentation Reviews include a set of techniques by which organizational and project
documentation related to various types of activities is reviewed to analyze historical events for better
future management. Typical documents for review are lessons learned files or debriefing registers,
which record past experiences and can be used to enrich organizational knowledge (Kerzner, 1999;
Williams, 2003).

The current study is based on the concept that there is a synergetic linkage between the past and the
future. In order to be better prepared for future experiences we need to understand past occurrences.
The research methodology relies upon a history-based documentation, which is analyzed and
investigated in an ordered way to infer knowledgeable potential risks.
3. The research methodology for analyzing lessons learned to identify
potential risks
The research methodology is based on two research techniques: content analysis and cluster analysis.

The first two steps comprise qualitative and quantitative content analysis (Krippendorff, 2004; Neuendorf, 2002). Qualitative content analysis is a process of understanding, interpreting, analyzing and coding documents. In the current study, this process was performed using the Atlas.ti software package with a specially developed codebook constructed of items from the project management body of knowledge (PMI, 2008) plus additional items related to business process flow (Jeston and Nelis, 2006). The qualitative analysis requires reading each document, understanding it in the context in which it was written, and categorizing it by subjects or concepts. In the current study, the text units for analysis were the collected lessons learned documents, and the codebook contained 200 codes derived from the current body of knowledge in project management. Quantitative content analysis is based on the same text units, but requires statistical analysis. It comprises word and phrase counting, and descriptive frequency analyses designed to investigate the occurrences of codes, groups of codes, and words. The quantitative analysis results present numerical examinations of the interpreted text units and the related categorized codes, such as word counts, code frequency analyses, and frequency analyses of other categories. The output of these two analyses is a structured dataset of all the risks that were identified in previous lessons learned, as gathered from an array of historical projects performed and managed by the organization.
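As an illustration of the quantitative step, the following minimal sketch (in Python; the coded documents are hypothetical, since the study itself used Atlas.ti and a 200-item codebook) computes a code frequency analysis over coded lessons learned documents:

```python
from collections import Counter

# Hypothetical coded documents: each lessons learned document is represented
# by the list of codebook codes assigned to it during qualitative analysis.
coded_docs = [
    ["1240 define delivery method", "2330 identify risk factors"],
    ["1240 define delivery method", "2170 plan tests"],
    ["2330 identify risk factors", "1240 define delivery method"],
]

# Code frequency analysis: how often each code occurs across all documents,
# expressed as a share of total code assignments.
code_counts = Counter(code for doc in coded_docs for code in doc)
total = sum(code_counts.values())
for code, n in code_counts.most_common():
    print(f"{code}: {n} ({100 * n / total:.0f}% of assignments)")
```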

The following phase is clustering. Cluster analysis is performed in this research by a supervised classification of patterns derived from the qualitative and quantitative content analyses (Jain, Murty and Flynn, 1999). Several clustering techniques were investigated, and Ward's method, using the Dice distance similarity measure, was found to be the most appropriate for this purpose. Clustering is an advanced statistical method targeted to represent data based on measures of proximity between elements, expressed as maximum distances between groups and minimal distances within each group. Joining, or tree clustering, algorithms, which were used in this study, are designed to join objects into sequentially larger groups based on some measure of similarity or distance. The distances represent a set of rules that function as criteria for grouping or separating objects, and they can be composed of one or more sets of rules or conditions. The output of this analysis was a hierarchical RBS (Risk Breakdown Structure) that represents the project and organizational risks, arranged in hierarchical order where every set of risk events is grouped under a common risk factor.
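A minimal sketch of this clustering step is shown below (in Python with SciPy; the document-by-code matrix is hypothetical). It mirrors the stated choices of the Dice distance measure and Ward's method, and the resulting tree is what the dendrogram/RBS visualisations later in the paper depict:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical binary matrix: rows are risk-related codes, columns are
# lessons learned documents; True means the code was assigned to the document.
codes = ["define wp responsibilities", "define delivery method",
         "identify risk factors", "plan tests"]
occurrence = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 1, 0, 1, 0],
], dtype=bool)

# Dice distances between code occurrence profiles: codes that co-occur in
# similar documents end up close together. Ward's method then joins them
# into sequentially larger groups, yielding the hierarchical risk tree.
distances = pdist(occurrence, metric="dice")
tree = linkage(distances, method="ward")

# The dendrogram of this tree is the RBS-style visualisation.
dendrogram(tree, labels=codes, no_plot=True)  # set no_plot=False to draw it
```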

An RBS (Risk Breakdown Structure) is defined by David Hillson (2002) as "a source-oriented grouping of risks that organizes and defines the total risk exposure of the project or business. Each descending level represents an increasingly detailed definition of sources of risk." The RBS is a hierarchical structure of risk factors and events.

The following chart represents the research methodology steps.

Figure 2: The research methodology
4. Case study: An IT company
The research methodology was implemented in a high-tech company in Israel. The company is an IT service provider specializing in end-to-end solutions across all market sectors, and is a top Israeli IT provider. It is a public company trading on the Tel Aviv Stock Exchange and employs over 2,500 experts. Its services include outsourcing, software development, ERP system applications, payroll and human resource solutions, technical support and helpdesk, etc. The company provided access to the lessons learned documents it had accumulated over 12 months.
4.1 Qualitative content analysis
In order to demonstrate the qualitative content analysis process, a representative lessons learned document, one of the 40 lessons learned documents of the IT company analyzed in the research, is presented.

Introduction: The project was designed to analyze and implement the central system software upgrading procedure of a very busy mainframe that supplied online services to a large array of users. The project was postponed several times due to customer reluctance to permit service technicians to perform any alteration on the brittle state of the original settings of its system software. The project was very sensitive since no updates to the original installation had ever been performed. Thus, implementation required an extremely well designed plan, exceptionally skilled professionals and a very thorough testing program. In addition, the entire project sat on the critical path, so there was a need for a stringent risk mitigation plan based on a wide identification of risk events followed by a progressive occurrence forecast.

Story: A qualified team of technicians simulated, as closely as possible, the actual resident environment at the customer's labs. A complete array of tests was performed, and a detailed log of every occurrence was maintained. Problems of any nature were discussed first within the local division and further clarified with the vendor's global technical department. Finally, the personnel matrix of responsibility was assigned and tested. The preparations for the event were perfect, and then everything that could go wrong actually did go wrong. The execution was postponed several times, only hours before the due date, after everything was already in place. The various personnel shifts this necessitated demoralized the team and seriously shook the planned subordination structure. At last the upgrading started. Soon uncharted bugs appeared and hampered system performance. After consultation, the team manager decided they could fix the modules, and proceeded with the upgrade. However, immediately after passing the point of no return, namely the point where the old system could no longer be reinstated, it became evident that the new upgraded one did not work properly. The customer was informed, and then everybody involved became panic-stricken. The vendor contacted all its global users and asked them to participate in an international temporary consortium to solve the problem. At the eleventh hour of the project's maximum expected duration, following intense global communications, a software engineer located in India came up with the answers and the upgrade went through.

Coding: The following codes were assigned.
Table 1: A sample of a code assignment to a lessons learned document
Code Name
1090 define process procedures
1110 define stakeholders involvement
1120 define stakeholders expectations
1150 define critical success factors
1170 involve customer in process method definition
1180 involve customer in deliverables definition
1190 involve customer in success criteria definition
1200 involve customer in process limitations definition
1230 define product attributes
1240 define delivery method
1290 define work constraints
1330 define required documentation
1360 define wp (work package) prerequisites
1430 define wp constraints
2070 define quality requirements
2110 define QA procedures
2170 plan tests
2330 identify risk factors
2340 identify risk events
2350 manage risk assessment
2360 assess risk probability
2370 assess risk impact
2850 attention to constraint indicators by HR
2860 attention to guidance tools by HR
4.2 Quantitative content analysis
The basic quantitative analysis is a code frequency analysis. For the IT service provider, this analysis was based on the assignment of 117 research codes (out of the 200 that appear in the codebook) to the 40 lessons learned documents generated during 2007. The frequency analysis of the codes assigned to the lessons learned of the IT company reveals that the most frequent items, which constitute about 19% of the total code assignments, are: define work package (wp) responsibilities, define information transfer responsibilities, define work instructions and constraints, define information transfer requirements, distribute information to team members, and control IT performance and optimization. This list of items can be summarized into three topics: responsibilities; communication; and resource planning and controlling.

Analysis of the codes assigned to the lessons learned documents, grouped by process group, yields the following results.

Figure 3: Process group frequency chart
An analysis of the distribution of the codes related to planning identifies the codes "define wp responsibilities" and "define information transfer responsibilities" as the most frequent ones. The next most frequent codes are: "define work instructions", "define work constraints", and "define information transfer requirements". Hence, the planning process, especially as related to the topics of work planning and communications planning, is evidently problematic in the IT organization. Within the monitoring and controlling process group, the most frequent code was "control of IT performance and optimization". Within the executing process group, the most frequent code was "distribute information to team members". The most frequent codes within the initiating process group were related to customer involvement and deliverable definition.

Figure 4: Dendrogram representation of hierarchical risk clustering
The visualized representation of the cluster analysis results is displayed in this dendrogram chart. It depicts the RBS (Risk Breakdown Structure) of the IT organization as derived from the content analysis of its lessons learned documents. The RBS hierarchical representation suggests an interesting classification of diversified risk items. The first level differentiates between issues of customer involvement and communications on the one hand, and the product description and the work required to produce it on the other. Each of these two major branches is further divided. The first branch refers to issues of infrastructure and management support; the second branch focuses on quality procedures and control analysis processes; the next branch refers to the identification of constraints; the following branch involves change management and risk management; the fifth branch concentrates on specific resource planning items; the next relates to accurate definitions of the required work during the planning phase; and the following branch is directed towards a definition of the required resources. The two final branches concern issues related to human resource management and issues of information exchange.
5. Conclusions
The current study presents a methodology that integrates qualitative and quantitative methods to enable the identification of risk factors in high-tech companies. The methodology is based on the analysis of existing organizational information, transforming it into valuable managerial knowledge. The concept that associates risk management with knowledge management was articulated by Neef (2005), who argued that risk management is knowledge management. This research combines risk management and knowledge management, a pairing that is also supported by other studies (e.g., Kerzner, 1999; Williams, 2003; Perrott, 2007), into one homogeneous method that starts with existing information and ends with knowledge that emerges as a risk management plan.

The integration of qualitative and quantitative methods into one methodology yielded an in-depth understanding of the project and organizational risk factors and risk events that an organization might encounter. This understanding is of great importance especially in new product development projects (Grubisic, Gidel and Ogliari, 2011), due to the high level of uncertainty involved in this type of project. It is interesting to find that, even in high-technology projects, the major influence of the identified risks was nevertheless related to the planning phase of communications, which manifested itself in a later phase as problems with human resource performance.
References
Chapman, C. B. (2001) "The Controlling Influences on Effective Risk Identification and Assessment for Construction Design Management", International Journal of Project Management, Vol. 19, No. 3, pp. 147-160.
Cohen, M.W. and Palmer, G.R. (2004) "Project Risk Identification and Management", AACE International
Transactions, IN11-IN15.
Cook, P. (2005) "Formalized Risk Management: Vital Tool for Project and Business Success", Cost Engineering,
Vol. 47 No. 8, pp. 12-13.
COSO (Committee of Sponsoring Organizations of the Treadway Commission) (2004) Enterprise Risk
Management Integrated Framework: Executive Summary, AICPA, New York, NY.
Garrett, G. A. (2005) Managing Opportunity & Risk in a Complex Project Environment, Contract Management,
Vol.54, No.4, pp.8-20.
Grubisic, V.V.F., Gidel, T. and Ogliari, A. (2011) "Recommendations for risk identification method selection according to product design and project management maturity, product innovation degree and project team", ICED 11 - 18th International Conference on Engineering Design - Impacting Society Through Engineering Design, Vol. 3, pp. 187-198.
Higuera, R.P. and Haimes, Y.Y. (1996) Software Risk Management, Technical Report, Software Engineering
Institute, Carnegie Mellon University, Pittsburgh, PA.
Hill, G. M. (2008) The Complete Project Management Office Handbook, Taylor & Francis Group, Boca Raton, FL.
Hillson, D. (2004) Effective Opportunity Management for Projects: Exploiting Positive Risk, CRC Press, Taylor &
Francis Group, Boca Raton, FL.
Hillson, D. (2002) Using the Risk Breakdown Structure (RBS) to Understand Risks, Proceedings of the 33rd
Annual Project Management Institute Seminars & Symposium (PMI 2002), San Antonio, USA, 7th-8th
October, Philadelphia, PMI.
Jain, A.K., Murty, M.N. and Flynn, P.J. (1999) Data Clustering: A Review, ACM Computing Surveys, Vol.31,
No.3, pp.264-323.
Jeston, J. and Nelis, J. (2006) Business Process Management: Practical Guidelines to Successful Implementations, Butterworth-Heinemann, Jordan Hill, Oxford.
Kerzner, H. (2005) Using the Project Management Maturity Model: Strategic Planning for Project Management
(2nd Ed.) John Wiley & Sons, Hoboken, NJ.
Kerzner, H. (1999) Applied Project Management: Best Practices on Implementation, John Wiley & Sons, Inc., New York, NY.
Krippendorff, K. (2004) Content Analysis: An Introduction to Its Methodology, 2nd ed., Sage Publications,
Newbury Park, CA.
Lipshitz, R., Popper, M., and Friedman, V.J. (2008) "A Multifacet Model of Organizational Learning" Journal of
Applied Behavioral Science, Vol. 38, No.1, pp. 78-98.
Marxt, C. and Link, P. (2002) "Success factors for Cooperative ventures in Innovation Production Systems",
International Journal of Production Economics, Vol. 77, Iss. 3, pp. 219-229.
Nalewaik, A. (2005) "Risk Management for Pharmaceutical Project Schedules", AACE International Transactions,
Risk07, pp. 71-75.
Neef, D. (2005) Managing Corporate Risk through Better Knowledge Management, The Learning Organization,
Vol.12, No.2, pp.112-124.
NIST (National Institute of Standards and Technology) (2002) Risk Management Guide for Information and
Technology Systems, [SP 800-30] Computer Security Division, IT Laboratory, Gaithersburg, MD.
Neuendorf, K. A. (2002) The Content Analysis Guidebook, Sage Publications, Thousand Oaks, CA.
Olsson, R. (2007) "In Search of Opportunity Management: Is the Risk Management Process Enough?", International Journal of Project Management, Vol. 25, No. 8, pp. 745-752.
Perrott, B.E. (2007) A Strategic Risk Approach to Knowledge Management, Business Horizon, Vol.50, pp.523-
533.
PMI (Project Management Institute) Standards Committee (2008) A Guide to the Project Management Body of
Knowledge (PMBOK Guide) 4th ed., Project Management Institute, Newtown Square, PA.
Ravindranath, C. P. (2007) Applied Software Risk Management, Taylor & Francis Group, Boca Raton, FL.
Raz, T., Shenhar, A.J. and Dvir, D. (2002) "Risk Management, Project Success, and Technological Uncertainty",
R&D Management, Vol. 32, No. 2, pp. 101-110.
SEI (Software Engineering Institute) (2005) A Taxonomy of Operational Risks, [CMU/SEI-2005-TN-036], Carnegie Mellon University, Pittsburgh, PA.
Ward, S. C. and Chapman, C. B. (2003) "Transforming Project Risk Management into Project Uncertainty
Management", International Journal of Project Management, Vol.21, No.2, pp.97-105.
Williams, T. (2003) Learning from Projects, Journal of the Operational Research Society, Vol.54, No.5, pp.443-
451.
Zsidisin, G.A., Panelli, A. and Upton, R. (2000) Purchasing Organization Involvement in Risk Assessment,
Contingency Plans, and Risk Management: An Exploratory Study, Supply Chain Management, Vol.5, No.4,
pp.187-197.
Evaluating Determinants for ERP use and Value in Scandinavia: Exploring Differences Between Danish and Swedish SMEs
Björn Johansson¹, Pedro Ruivo², Tiago Oliveira² and Miguel Neto²

¹Department of Informatics, Lund University, Sweden
²ISEGI, Universidade Nova de Lisboa, Portugal
bjorn.johansson@ics.lu.se
pruivo@isegi.unl.pt
toliveira@isegi.unl.pt
mneto@isegi.unl.pt

Abstract: In the paper we present a research model for evaluating determinants of ERP value in small and medium-sized enterprises (SMEs). The model is grounded in the diffusion of innovation (DOI) model and the resource-based view of the firm (RBV) theory. The research model links six DOI determinants to explain ERP use with two additional determinants to explain ERP value, on which nine hypotheses are postulated. The hypotheses were tested through structural equation modelling on a dataset from a web survey of 325 SMEs in Denmark (107) and Sweden (218). Through this empirical work we validate the theoretical arguments and provide insight into how SMEs use and value ERP, and especially into how perceived ERP use and perceived ERP value in Scandinavian SMEs can be explained. To our knowledge this is the first empirical research study on Scandinavian SMEs, thus adding a cross-country dimension to the innovation diffusion literature. Unlike the typical focus on ERP adoption in large firms found in the literature, this study focuses on post-adoption of ERP in SMEs. The main finding is that Danish and Swedish SMEs show different results despite the fact that they seem to be so similar. Our study reveals that while transactional efficiency, best-practices, and competitive pressure are important determinants of ERP use in both Swedish and Danish SMEs, complexity is significant only among Danish firms. Compatibility has contrary effects, i.e., it is an inhibitor for Danish SMEs and a facilitator for Swedish SMEs in explaining ERP use. Furthermore, while for Danish SMEs ERP value is explained mainly by collaboration, for Swedish SMEs ERP value is explained mainly by analytics. The fact that the research presents results focusing on SMEs makes it especially valuable, since this is an under-researched area, and the fact that it has 325 respondents makes it important in exploring the differences and similarities between countries, adding an international dimension.

Keywords: ERP, SMEs, diffusion of innovation, resource-based view, use, value, post-adoption
1. Introduction
Some enterprise systems (ES), such as enterprise resource planning systems (ERPs), are more or less a de facto standard amongst large organizations. Lately, small and medium-sized enterprises (SMEs) have also shown great interest in adopting ERPs. However, in the context of SMEs an interesting question is how decision-makers perceive the value of the adopted system. The question addressed in this paper is: how can perceived ERP use and perceived ERP value in SMEs be explained? From the results we then explore whether there are differences between Danish and Swedish SMEs. Through this comparison we seek to increase the knowledge on determinants of the value gained from a decision to adopt ERPs in SMEs. This is of interest to stakeholders in the ERP value chain. The vendors and providers of ERPs will gain a better understanding of what user organizations experience in terms of value gained in the post-adoption phase of an ERP. The user organizations, and also organizations that have not yet decided about adoption of ERP, will gain knowledge on what they can expect as a result of adopting an ERP system.

We present a research model for evaluating determinants of ERP value in SMEs. The model is grounded in the diffusion of innovation (DOI) model and the resource-based view of the firm (RBV) theory. The research model links six DOI determinants to explain ERP use with two additional determinants to explain ERP value, on which nine hypotheses are postulated. Testing of the hypotheses was conducted through structural equation modelling on a dataset from a web survey of 325 SMEs in Denmark and Sweden.
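The structural part of such a model can be estimated with standard SEM tooling. The sketch below is a minimal illustration in Python using the semopy package; the choice of package, the column names, and the data file are all assumptions, since the paper does not state which software or variable labels were used. It encodes the nine hypothesized paths: six DOI determinants explaining Use (H1-H6), and Use, Collaboration and Analytics explaining Value (H7-H9).

```python
import pandas as pd
import semopy  # assumed tooling; the paper does not name its SEM software

# Hypothetical survey dataset: one row per SME, one column per construct score.
data = pd.read_csv("erp_survey.csv")  # hypothetical file with the columns below

# Structural model mirroring the nine hypotheses.
model_desc = """
Use ~ Compatibility + Complexity + Efficiency + BestPractices + Training + CompetitivePressure
Value ~ Use + Collaboration + Analytics
"""

model = semopy.Model(model_desc)
model.fit(data)          # maximum-likelihood estimation by default
print(model.inspect())   # path estimates and p-values, one row per hypothesis
```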

The remainder of the paper is structured as follows. The next section presents a literature review giving the theoretical perspectives, as well as the research design and hypotheses. The characteristics of the survey data and the results are presented in Section 3. Section 4 then analyses the hypotheses. The final section presents conclusions and some ideas for future research.
2. Theoretical perspectives and hypotheses
According to Nah, Tan and Teh (2004), ERP use means ERP utilization, which refers to the experience of managing the operation of the system software throughout the system's post-implementation stages. In line with the literature, we consider ERP to be a type of innovation that is implanted in a firm's core business processes in order to leverage performance (Rajagopal, 2002; Zhu and Kraemer, 2005). Not only does it extend basic business and streamline integration with suppliers and customers, it also directs system usage to the firm's performance. Rogers' (1995) diffusion of innovation (DOI) model aims to explain and predict if and how an innovation is used within a social system, with regard to performance at the firm level. Research conducted by Bradford and Florin (2003) verifies DOI determinants regarding successful ERP usage. Considering their findings, we believe that DOI has the potential to provide a favourable framework for explaining ERP use.

As information technology (IT) value relies on how firms strategically exploit it, firm performance in a competitive environment is a subject that draws much attention and many attempts to build explanatory theories. One of the most recognized is the resource-based view (RBV) theory, which states that firm-specific resources determine the firm's performance. It is linked to the competitive advantage approach to strategic management and can explain sustained advantages (Hedman and Kalling, 2003). In the IS literature, the RBV has been used to analyse IT capabilities as a resource and to explain IT business value. That is, IT business value depends on the extent to which IT is used in the key activities of the firm: the greater the use, the more likely the firm is to develop unique capabilities from its IT business applications (Antero and Riis, 2011; Bharadwaj, 2000; Zhu and Kraemer, 2005). Hedman and Kalling (2003) and Fosser et al. (2008) used RBV to extend Mata et al.'s (1995) framework for organizational and business resources and concluded that ERP systems are IT resources that can lead to sustained competitive advantages. With this in mind, we believe that RBV has the potential to provide a favourable framework for explaining ERP value.

For these reasons, we next postulate six hypotheses to explain ERP use (Hypothesis 1 - Hypothesis 6) based on the DOI literature, and three hypotheses based on RBV theory to explain ERP value (Hypothesis 7 - Hypothesis 9).

Hypothesis 1: ERP systems with high compatibility positively influence ERP use.

Compatibility is measured by the degree to which the ERP system matches IT features, such as compatibility with hardware and other software. Bradford and Florin (2003) and Elbertsen et al. (2006) concluded that the degree of compatibility of ERP will have a positive relationship with system adoption and use.

Hypothesis 2: ERP systems with high complexity negatively influence ERP use.

Cooper and Zmud's (1990) research indicates that familiarity with system usage enhances job performance. Studies conducted by Kositanurit et al. (2006) and Chang et al. (2011) conclude that ERP complexity is a major factor affecting user performance.

Hypothesis 3: ERP systems with high transactional efficiency positively influence ERP use.

Bendoly and Kaefer (2004) assessed transactional efficiency and found that efficient communication over the ERP improves the firm's overall performance. Rajagopal (2002) found that transactional efficiency has a direct influence on ERP use. Business process benefits of ERP investment include transactional efficiency, where the reliability and effectiveness of the application improves user confidence. Along the same lines, Gattiker and Goodhue (2005) found that efficiency greatly benefits ERP use.

Hypothesis 4: ERP systems implemented with best-practices positively influence ERP use.

From the perspective of business process reengineering, there are two main options in implementing ERP systems: modify/customize the system to suit the firm's requirements, or implement the system with minimum deviation from the standard settings, adopting best practice (Davenport, 1998). According to Chou and Chang (2008) and Maguire et al. (2010), the reason for adopting best-practices is the belief that ERP design does things in the right way. In line with Wenrich and Ahmad (2009), firms that implement industry best-practices dramatically reduce risk and time-consuming project tasks such as configuration, documentation, testing, and training. Thus, we postulate that best-practices will positively influence ERP use.

Hypothesis 5: User training of ERP systems will positively influence ERP use.

Several researchers, including O'Leary (2000), Bradford and Florin (2003), and Maguire et al. (2010), state that one of the main determinants of successful ERP use is the training of users. They state that the preparedness of users to carry out a planned sequence of actions without upstream errors has a positive impact on business. Providing employees with knowledge and skills on how to use the system improves familiarity and boosts its usage.

Hypothesis 6: Competitive pressure positively influences ERP use.

Competitive pressure has long been recognized in the innovation diffusion literature as an important
driver of technology diffusion (Bradford and Florin, 2003; Oliveira and Martins, 2010b; Zhu and
Kraemer, 2005). These studies have shown that innovation diffusion is accelerated by the competitive
pressure in the environment. Thus, we postulate that competitive pressure plays an important role in
pushing firms toward using ERP systems.

Hypothesis 7: ERP use positively influences ERP value.

Shahin and Ainin (2011) found that user fit is important for explaining ERP usage, and that successful adaptation to the firm's processes and data flow from other systems makes ERP worthwhile. With ERP systems firms can form a specific resource that guides both internal and external collaboration and provides the repository to perform business analyses. As a result, it is only when firms actually use ERP systems to conduct business that ERP can have an impact on firm performance (Devaraj and Kohli, 2003; Zhu and Kraemer, 2005).

Hypothesis 8: Collaboration by ERP systems positively influences ERP value.

Calisir and Calisir (2004), Gattiker and Goodhue (2005), and Ruivo et al. (2012) support the conclusion that ERP systems help users to collaborate up, down, and across their department, company, and industry ecosystem, increasing their productivity and the health of their firms and business partners. ERP allows both humans and applications to collaborate, from meeting service levels to supporting firm performance. ERP provides users with a structured communication channel carrying the right information at the right time, resulting in increased efficiency and effectiveness. We believe that partnering with the system and cross-group collaboration amplifies ERP value.

Hypothesis 9: Analytics exploited from ERP systems positively influence ERP value.

Davenport and Harris (2007) stated that firms generally use business analytics to gain competitiveness. The common data model and visibility across functional departments allow firms' metrics/KPIs to be unified and consistent. Although ERP systems are essentially transaction-focused on internal data, firms that exploit ERP analytics capabilities can easily and quickly use data for managerial decision making and realize an advantage in their pursuit of sustainable performance (Chiang, 2009; Ruivo and Neto, 2010). We therefore suggest that analytics provides users with unique business insight, and thereby positively influences ERP value.
3. Research methodology and data analysis
A web survey was used for data collection, and each item-question was reviewed for content validity by ERP experts. The initial questionnaires were pilot tested on 10 firms, and some items were revised for clarity with assistance from the International Data Corporation. To ensure the generalizability of the survey results, the sampling was stratified by country (Denmark and Sweden), by firm size (fewer than 250 employees), and by industry (finance, distribution, manufacturing, and professional services). Questionnaires were translated into the two languages and sent in September and October 2011. In total, 600 (200 Danish and 400 Swedish) firms received the email survey, and 325 (107 Danish and 218 Swedish) valid responses were returned. Table 1 shows characteristics of the sample regarding number of years using ERP, industry type, and position of respondent. In general it can be stated that
there are more similarities than differences between Denmark and Sweden in the sample. The biggest difference is the number of years using ERP: there is a higher frequency of SMEs that have used ERP for more than ten years in Sweden.
Table 1: Characteristics of the samples

                          Sweden (N=218)   Denmark (N=107)
# years using ERP
  <2                      12.4%            14.9%
  2-5                     21.6%            22.5%
  5-10                    23.8%            35.5%
  >10                     42.2%            27.1%
  Total                   100%             100%
Industry type
  Distribution            27.5%            27.1%
  Finance                 18.3%            21.5%
  Manufacturing           33.1%            23.4%
  Services                21.1%            28.0%
  Total                   100%             100%
Respondent type
  CEO, owner              30.2%            22.4%
  Finance manager         18.8%            24.3%
  IT/IS manager           10.2%            14.9%
  Manufacturing manager   12.4%            8.5%
  Sales manager           28.4%            29.9%
  Total                   100%             100%
The survey instrument was based on well-established scales adapted to the context of ERP (see Appendix A). We performed the Kolmogorov-Smirnov test and confirmed that none of the items measured is normally distributed (p<0.001). For this reason we used partial least squares (PLS), since this statistical technique does not require normal distribution. We used the SmartPLS software to estimate the model. All items in Appendix A are validated, since all have loadings above 0.7 and are significant (p<0.001), in accordance with Chin (1998). Furthermore, the composite reliability (CR) and average variance extracted (AVE) for each construct are above the cut-offs of 0.7 and 0.5, respectively (Hair, Anderson, Tatham and Black, 1998). The table with item loadings and CR and AVE values is available from the authors on request. In short, our measurement model satisfies reliability and validity criteria. We tested the conceptual model using the sample split between Sweden and Denmark. The control variables used were size and industry type. Figure 1 shows the path coefficients and t-statistics (in parentheses), as well as the R² values for dependent constructs.
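To make these reliability checks concrete, the following minimal sketch (ours, not the authors') shows how the Kolmogorov-Smirnov normality test and the standard formulas for CR and AVE can be computed in Python with numpy and scipy; the item scores and loading values are invented for illustration, since the raw survey data are not published.

```python
import numpy as np
from scipy import stats

# Kolmogorov-Smirnov test against the normal distribution for each survey item
# (item scores are invented here; the paper's raw data are not published).
rng = np.random.default_rng(0)
items = {"CB1": rng.integers(1, 6, 325), "CB2": rng.integers(1, 6, 325)}
for name, scores in items.items():
    z = (scores - scores.mean()) / scores.std(ddof=1)   # standardize before testing
    stat, p = stats.kstest(z, "norm")
    print(f"{name}: KS={stat:.3f}, p={p:.4f}")          # p < 0.001 -> not normal -> PLS

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# A construct with two indicators loading at 0.85 and 0.80 (illustrative values)
print(composite_reliability([0.85, 0.80]))        # 0.81 -> above the 0.7 cut-off
print(average_variance_extracted([0.85, 0.80]))   # 0.68 -> above the 0.5 cut-off
```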

The analysis of hypotheses on the Swedish and Danish samples was based on the examination of the standardized paths shown in Figures 1(a) and (b), respectively. Regarding the Swedish sample, four out of six DOI determinants of ERP use are statistically significant; complexity and training are not significant. Thus, H1, H3, H4, and H6 regarding ERP use are supported, while H2 and H5 are not supported. In addition, the model indicates significant links from collaboration (0.343) and analytics (0.361) to ERP value, meaning that both H8 and H9 are supported, while H7 (ERP use to ERP value) is not statistically significant. R² examination shows that, for the Swedish sample, this model explains 42.4% of the variability of 'ERP use' and 43.6% of 'ERP value'. In the Danish sample, five out of six DOI determinants of ERP use are found to be significant, while training is insignificant. Compatibility was expected to be positive, but it is negative; complexity was expected to be negative, and so it is. Therefore, H2, H3, H4, and H6 for ERP use are supported, while H1 and H5 are not supported. The Danish sample also shows a non-significant link between ERP use and ERP value, hence H7 is not confirmed. As in the Swedish sample, the Danish sample shows a significantly positive association of collaboration and analytics with ERP value; hence, H8 and H9 are supported. R² examination shows that, for the Danish sample, this model explains 52.0% of the variability of 'ERP use' and 45.9% of 'ERP value'.

In a deeper analysis, we tested the differences between the path coefficients across the Swedish and Danish samples. Table 2 shows that, regarding ERP use, neither training nor efficiency shows a statistically significant difference (p>0.10) between countries, both being equally important for Swedish and Danish SMEs. Whereas best-practices is a more important factor for Danish SMEs, competitive pressure is more important for Swedish SMEs. Moreover, compatibility and complexity
are found to be important inhibitors for Danish SMEs, whereas compatibility is a facilitator for Swedish SMEs. Complexity is also an inhibitor for Swedish SMEs, even if not as strong an inhibitor as for Danish SMEs. Regarding ERP value, the path from ERP use does not show a statistically significant difference (p>0.10) between countries, which means that ERP use is understood as equally important for SMEs in both countries. Collaboration is seen as more important for ERP value among Danish SMEs, while analytics is a more important factor for Swedish SMEs when it comes to perceived ERP value.

Figure 1: Research model and path models for Swedish and Danish SMEs
Table 2: Results of pooled error term t-tests

                          Sweden                  Denmark                 Comparison (Sweden-Denmark)
                          Path coeff.  SE (boot)  Path coeff.  SE (boot)  t-Stat.   p (2-tailed)
Paths to ERP Use
  Compatibility            0.152       0.035      -0.374       0.025      12.102    0.000
  Complexity              -0.010       0.026      -0.109       0.034       2.350    0.019
  Efficiency               0.162       0.031       0.182       0.020      -0.559    0.576
  Best-Practices           0.173       0.032       0.370       0.033      -4.262    0.000
  Training                 0.032       0.040       0.000       0.025       0.686    0.493
  Competitive pressure     0.370       0.023       0.152       0.025       6.459    0.000
Paths to ERP Value
  ERP Use                  0.024       0.023      -0.014       0.036       0.902    0.367
  Collaboration            0.343       0.023       0.505       0.028      -4.509    0.000
  Analytics                0.361       0.032       0.215       0.031       3.275    0.001
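The paper does not spell out the exact pooled error term formula; as a hedged illustration, the sketch below uses a common approach of comparing two PLS path coefficients via their bootstrap standard errors, t = (P1 - P2) / sqrt(SE1² + SE2²), which reproduces the Table 2 values to within rounding. The function name is ours, and this is a sketch of one plausible variant, not necessarily the authors' exact computation.

```python
import math

def path_difference_t(p1, se1, p2, se2):
    """t-statistic for the difference between two PLS path coefficients
    estimated on independent samples, using bootstrap standard errors."""
    return (p1 - p2) / math.sqrt(se1 ** 2 + se2 ** 2)

# (Sweden path, Sweden SE, Denmark path, Denmark SE) from Table 2
paths = {
    "Compatibility -> ERP Use": (0.152, 0.035, -0.374, 0.025),
    "Complexity -> ERP Use":    (-0.010, 0.026, -0.109, 0.034),
    "Analytics -> ERP Value":   (0.361, 0.032, 0.215, 0.031),
}
for name, (p1, se1, p2, se2) in paths.items():
    print(f"{name}: t = {path_difference_t(p1, se1, p2, se2):.3f}")
# Compatibility: t = 12.229  (Table 2 reports 12.102)
# Complexity:    t = 2.313   (Table 2 reports 2.350)
# Analytics:     t = 3.277   (Table 2 reports 3.275)
```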
4. Discussion
The purpose of this research was to identify determinants that explain ERP use and ERP value, and how their magnitude varies across Swedish and Danish SMEs. Empirical results support our theoretical
model, and all hypotheses have been tested. Academic and managerial implications are discussed below. Although the paths associated with both collaboration and analytics are significantly positive, collaboration is much stronger in the Danish sample, whereas analytics is much stronger among Swedish SMEs, as shown in Table 2. This difference might be explained by the fact that Swedish SMEs have been using ERP for a longer time than Danish SMEs (Table 1): Danish SMEs may thus not yet be exploiting data to its full analytical potential and, in line with Buonanno et al. (2005), ERP starters confer more value on collaboration because it is often connected to organizational enhancements. However, the difference between the samples regarding number of years is not great enough to explain the difference on its own. If we also take into consideration that the Danish sample has a higher share of service organizations, and stipulate that service organizations demand more collaboration, both characteristics together could explain the difference. Since the link between ERP use and ERP value is not significant in either sample, it can be stated that there is no difference between Swedish and Danish SMEs in this respect.

Furthermore, although competitive pressure, best-practices, efficiency, and compatibility are significant, their importance differs between Swedish and Danish SMEs, as shown in Table 2. The underlying rationale would be that the number of years using the system influences ERP use. This conclusion might be explained through cross-country analysis. First, although compatibility and efficiency have significant paths for both Swedish and Danish SMEs, compatibility shows a negative result for Danish SMEs. This may also be explained by the weaker relationship Danish SMEs show between analytics and ERP value: if analytics is not seen as that important, the linkage between different data sources (compatibility) may also not be seen as an important determinant of ERP use. Hence, technological characteristics such as compatibility with other hardware and software, and transactional efficiency, depend on system stability, which requires use over time. As a result, ERPs with less customization (using standard protocols and best practices) are better suited to face compatibility issues (Buonanno et al., 2005; Nicolaou and Bhattacharya, 2006). Second, following our predictions, and in line with the conclusions of Bradford and Florin (2003), Kositanurit et al. (2006), and Chang et al. (2011), our findings reveal a negative effect of system complexity on ERP use. However, it is not statistically significant for Swedish SMEs. It has been widely believed that the complexity of business applications is an inhibitor to use, but our results could only provide evidence for this in the Danish sample. Competitive pressure is statistically significant for both Swedish and Danish SMEs, but is stronger for Swedish SMEs. This suggests that under competitive pressure, analytics plays a critical role in gaining business advantages.

The results have several important implications for management. In the first place, as ERP diffuses through greater use and becomes a necessity, competitive pressure infuses IT-enhanced capabilities such as collaboration and business analytics, which are major sources of ERP value. Our analysis shows that the relationship between training and ERP use is not statistically significant, which is a surprise. However, this could probably be explained by the fact that both Swedish and Danish SMEs have used ERP for many years. Best practices shows a stronger linkage among Danish SMEs than among Swedish SMEs, as does collaboration. This provides indications for managers on what the ERP is used for and could therefore serve as a decision point when choosing which specific ERP solution to implement. Lastly, our study also offers implications for software makers: ERP complexity, business analytics, and business collaboration functionalities have emerged as important factors for ERP use and value. This paper has some limitations that may form the starting point for further research. First, although our study shows evidence that the importance of use and value varies across countries in association with the number of years using ERP, we cannot speak empirically about whether maturity stages play a role, because this would require an adoption process life-cycle study. An interesting different direction could be to study the maturity stages of ERP. Second, although the data cover several industry types, some biases may have been introduced; different industries may have different operating characteristics and environments, and the factors related to ERP use and value may differ accordingly (Oliveira and Martins, 2010a). Consequently, we encourage further studies comparing industries.
5. Conclusion
Via an empirical study among Scandinavian SMEs we evaluated a research model for assessing ERP use and ERP value at the firm level, based on diffusion of innovation and resource-based view theory. While use and value are usually studied separately, our study proposes that use and value are closely associated in the post-adoption stages of ERP. Besides being the first such model applied to
Scandinavian SMEs, our study contributes to the literature by adding an international dimension and by moving beyond dichotomous adoption versus non-adoption, linking ERP value creation to collaboration and analytics. For ERP use, our study has examined six DOI determinants; whereas competitive pressure, best-practices and transactional efficiency are important to both Swedish and Danish SMEs, cross-country analysis also shows complexity to be an important inhibitor of ERP use among Danish SMEs, though not significant among Swedish SMEs. For ERP value, our study demonstrates that collaboration and analytics contribute to value creation from ERP. Moreover, our study reveals that while collaboration is more important for Danish SMEs, analytics is more important for Swedish SMEs. The main conclusion is that Danish and Swedish SMEs present significantly different results despite the fact that in general they seem so similar.
Appendix A: Measures
Respondents were asked to rate their perception of:

Construct / Items                                                   Scale
Compatibility (Bradford and Florin, 2003):
  CB1 compatibility with other software.                            1~5
  CB2 compatibility with other hardware.                            1~5
Complexity (reverse coded) (Kositanurit et al., 2006):
  CX1 intuitiveness of the system.                                  1~5
  CX2 how comfortable users feel using it.                          1~5
Efficiency (Rajagopal, 2002):
  EF1 effectiveness in executing repetitive tasks.                  1~5
  EF2 effectiveness of the user interface.                          1~5
  EF3 speed and reliability of the system.                          1~5
Best-Practices (Wenrich and Ahmad, 2009):
  BP1 set-up of the application.                                    1~5
  BP2 mapping workflows based on local requirements.                1~5
  BP3 system adaptability to business needs.                        1~5
Training (Bradford and Florin, 2003):
  TN1 understanding of the training material content.               1~5
  TN2 how training is applied to daily tasks.                       1~5
Competitive Pressure (Oliveira and Martins, 2010b):
  CP1 experienced competitive pressure to use ERP.                  1~5
  CP2 how far competitors affect the firm's market landscape.       1~5
ERP Use (Zhu and Kraemer, 2005):
  ERPU1 how much time per day employees work with the system.       %
  ERPU2 how many reports are generated per day.                     %
Collaboration (Gattiker and Goodhue, 2005):
  CO1 collaborating with colleagues.                                1~5
  CO2 collaborating with the system.                                1~5
  CO3 communicating with suppliers, partners, and customers.        1~5
Analytics (Ruivo and Neto, 2010):
  AN1 comprehensive reporting.                                      1~5
  AN2 real-time access to information.                              1~5
  AN3 data visibility across departments.                           1~5
ERP Value (Zhu and Kraemer, 2005):
  ERPV1 user satisfaction.                                          1~5
  ERPV2 individual productivity.                                    1~5
  ERPV3 customer satisfaction.                                      1~5
  ERPV4 management control.                                         1~5
References
Antero, M. and Riis, P. H. (2011) Strategic Management of Network Resources: A Case Study of an ERP Ecosystem, International Journal of Enterprise Information Systems, Vol 7 No. 2, pp. 18-33.
Bendoly, E. and Kaefer, F. (2004) Business technology complementarities: impacts of the presence and strategic timing of ERP on B2B e-commerce technology efficiencies, Omega, Vol 32 No. 5, pp. 395-405.
Bharadwaj, A. S. (2000) A resource-based perspective on information technology capability and firm performance: An empirical investigation, MIS Quarterly, Vol 24 No. 1, pp. 169-196.
Bradford, M. and Florin, J. (2003) Examining the role of innovation diffusion factors on the implementation success of enterprise resource planning systems, International Journal of Accounting Information Systems, Vol 4 No. 3, pp. 205-225.
Buonanno, G., Faverio, P., Pigni, F., Ravarini, A., Sciuto, D. and Tagliavini, M. (2005) Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies, Journal of Enterprise Information Management, Vol 18 No. 4, pp. 384-426.
Calisir, F. and Calisir, F. (2004) The Relation of Interface Usability Characteristics, Perceived Usefulness, and Perceived Ease of Use to End-User Satisfaction with Enterprise Resource Planning (ERP) Systems, Computers in Human Behavior, Vol 20 No. 4, pp. 505-515.
Chang, H.-H., Chou, H.-W., Lin, C.-P. Y. and Cecilia, I. (2011) ERP Post-Implementation Learning, ERP Usage and Individual Performance Impact, 15th Pacific Asia Conference on Information Systems, Brisbane, Australia.
Chiang, A. (2009) Creating Dashboards: The Players and Collaboration You Need for a Successful Project, Business Intelligence Journal, Vol 14 No. 1, pp. 59-63.
Chin, W. W. (1998) Issues and Opinion on Structural Equation Modeling, MIS Quarterly, Vol 22 No. 1, pp. 7-16.
Chou, S. W. and Chang, Y. C. (2008) The implementation factors that influence the ERP (enterprise resource planning) benefits, Decision Support Systems, Vol 46 No. 1, pp. 149-157.
Cooper, R. and Zmud, R. (1990) Information Technology Implementation Research: A Technological Diffusion Approach, Management Science, Vol 36 No. 2, pp. 123-139.
Davenport, T. H. (1998) Putting the enterprise into the enterprise system, Harvard Business Review, Vol 76 No. 4, pp. 121-131.
Davenport, T. H. and Harris, J. G. (2007) Competing on Analytics: The New Science of Winning, Harvard Business School Press.
Devaraj, S. and Kohli, R. (2003) Performance impacts of information technology: Is actual usage the missing link?, Management Science, Vol 49 No. 3, pp. 273-289.
Elbertsen, L., Benders, J. and Nijssen, E. (2006) ERP use: exclusive or complemented?, Industrial Management & Data Systems, Vol 106 No. 6, pp. 811-824.
Fosser, E., Leister, O. H., Moe, C. E. and Newman, M. (2008) Organizations and vanilla software: What do we know about ERP systems and competitive advantage?, Proceedings of the 16th European Conference on Information Systems (ECIS), Galway, Ireland, pp. 2460-2471.
Gattiker, T. F. and Goodhue, D. L. (2005) What happens after ERP implementation: understanding the impact of interdependence and differentiation on plant-level outcomes, MIS Quarterly, Vol 29 No. 3, pp. 559-585.
Hair, J., Anderson, R., Tatham, R. and Black, W. (1998) Multivariate Data Analysis, Prentice Hall, New Jersey.
Hedman, J. and Kalling, T. (2003) The business model concept: theoretical underpinnings and empirical illustrations, European Journal of Information Systems, Vol 12 No. 1, pp. 49-59.
Kositanurit, B., Ngwenyama, O. and Osei-Bryson, K. (2006) An exploration of factors that impact individual performance in an ERP environment: An analysis using multiple analytical techniques, European Journal of Information Systems, Vol 15 No. 6, pp. 556-568.
Maguire, S., Ojiako, U. and Said, A. (2010) ERP implementation in Omantel: a case study, Industrial Management & Data Systems, Vol 110 No. 1, pp. 78-92.
Mata, F. J., Fuerst, W. L. and Barney, J. B. (1995) Information technology and sustained competitive advantage: A resource-based analysis, MIS Quarterly, Vol 19 No. 4, pp. 487-505.
Nah, F., Tan, X. and Teh, S. H. (2004) An Investigation on End-Users' Acceptance of Enterprise Systems, Information Resources Management Journal, Vol 17 No. 3, pp. 32-53.
Nicolaou, A. and Bhattacharya, S. (2006) Organizational performance effects of ERP systems usage: the impact of post-implementation changes, International Journal of Accounting Information Systems, Vol 7 No. 1, pp. 18-35.
O'Leary, D. (2000) Enterprise Resource Planning Systems: Systems, Life Cycle, Electronic Commerce, and Risk, Cambridge University Press, New York.
Oliveira, T. and Martins, M. F. (2010a) Firms Patterns of E-business Adoption: Evidence for the European Union-27, The Electronic Journal Information Systems Evaluation, Vol 13 No. 1, pp. 47-56.
Oliveira, T. and Martins, M. F. (2010b) Understanding e-business adoption across industries in European countries, Industrial Management & Data Systems, Vol 110 No. 9, pp. 1337-1354.
Rajagopal, P. (2002) An innovation-diffusion view of implementation of enterprise resource planning (ERP) systems and development of a research model, Information & Management, Vol 40 No. 2, pp. 87-114.
Rogers, E. M. (1995) Diffusion of Innovations, The Free Press, New York.
Ruivo, P. and Neto, M. (2010) ERP software for SMEs in Portugal: Exploratory study of new KPIs, 4th European Conference on Information Management and Evaluation, Lisbon, Portugal, pp. 421-428.
Ruivo, P., Oliveira, T. and Neto, M. (2012) ERP use and value: Portuguese and Spanish SMEs, Industrial Management & Data Systems, Vol 112 No. 7, in press.
Shahin, D. and Ainin, S. (2011) The influence of organizational factors on successful ERP implementation, Management Decision, Vol 49 No. 6, pp. 911-926.
Wenrich, K. and Ahmad, N. (2009) Lessons learned during a decade of ERP experience: A case study, International Journal of Enterprise Information Systems, Vol 5 No. 1, pp. 55-73.
Zhu, K. and Kraemer, K. L. (2005) Post-adoption variations in usage and value of e-business by organizations: Cross-country evidence from the retail industry, Information Systems Research, Vol 16 No. 1, pp. 61-84.

User Experience in Mobile Phones by Using Semantic
Differential Methodology
Kalimullah Khan
School of Computing, Blekinge Institute of Technology, Sweden
Kakh08@bth.student.se

Abstract: Measuring overall UX is a challenging process because of its versatile nature. Studies show that hundreds of thousands of products are returned each year not because of their functional behaviour but because of their bad user experience. Researchers and practitioners use different techniques and methods to capture the customer's psychological and behavioural attitudes towards a product and to feed them into design, so that the future product form is in accordance with customer expectations. In this paper, research is carried out to evaluate user experience evaluation methodologies and to identify a method which can be used efficiently to measure the overall user experience of product use, using the mobile phone as a case. Overall user experience constitutes both the experiential and the non-experiential aspects of a product. Semantic differential methodology is therefore identified as the best-suited method among current user experience evaluation methodologies and is then used to measure preference from the overall user experience, which can be used to improve product form and ensure customer loyalty.

Keywords: semantic differential (SD), user experience (UX), overall user experience, user preferences, UX
metrics
1. Introduction
In daily life a person is encountered with a wide range of products and share different experiences. As
human experience is complex and evolve all the time therefore very difficult to measure. Both
experiential and non-experiential experience make it complex due to which it becomes dynamic,
complex and subjective; therefore the success of technology adheres to experiential as well as the
non experiential aspects.

The authors used semantics covering broadly the overall user experience of technology use such as
social, emotional, aesthetic, fun, joy, cool and mood etc to measure overall user experience. The
authors also claimed that SDM can be used to measure overall user experience in order to identify
user preferences in short time and with less resource consumptions. The results obtained through this
methodology resulting in positive experience indicate satisfaction while preferences resulted in
negative experience indicating improvements needed in product form. Preference indicates that
product form can be improved accordingly to achieve satisfaction and loyalty.
2. User experience theory
User experience is ubiquitous. The user churns the experience the moment he starts using the product. UX is a resultant of behavioural, temporal and psychological aspects; it is therefore very crucial to understand UX evaluation and its results for product development. Before starting to evaluate the non-experiential aspects of products, one needs to understand the actual meaning of user experience evaluation. In order to do so, five steps are considered in the paragraphs below to develop a systematic way to evaluate user experience.

User Experience

To gain a common understanding of UX and make science out of it, we should be able to formulate a definition that everyone can agree on. There are a number of UX definitions out there; below are a few of them.

"All the aspects of how people use an interactive product: the way it feels in their hands, how well they understand how it works, how they feel about it while they're using it, how well it serves their purposes, and how well it fits into the entire context in which they are using it" (Hassenzahl & Tractinsky, 2006). This definition fits well with the notion of overall UX. It covers the identity factor ("the way it feels in their hands"), aesthetics ("how they feel about it while they're using it"), utility ("how well it serves their purpose"), and stimulation ("how well it fits into the entire context in which they are using it").

Other definitions which fit the overall UX perspective are:

Kuniavsky believes that a precise definition of user experience is very difficult, because the user interacts with others and the user experience is made omnipresent by the environment (Kuniavsky, 2003). The International Organization for Standardization (ISO) has its own view of user experience. According to the current version, it is defined as "a person's perceptions and responses that result from the use or anticipated use of a product, system or service" (ISO, 2008). Similarly, according to the Usability Professionals Association (2007), it is "every aspect of the user's interaction with a product, service, or company that makes up the user's perceptions of the whole". Logan et al (1994) argued for the importance of a whole range of specific non-pragmatic needs, such as surprise, diversion or intimacy, to be addressed by technology, drawing upon the concept of emotional usability. Hassenzahl asserts that future HCI must be concerned with the pragmatic aspects of interactive products as well as with hedonic aspects, such as stimulation, identification and evocation, so that his multidimensional model explicitly links product attributes with needs and values that could facilitate the evaluation of UX (Hassenzahl, 2003). Similarly, according to Mäkelä et al (2001), UX is the "result of motivated action in a certain context". The user experience considers the wider relationship between the product and the user in order to investigate the individual's personal experience of using it (Hassenzahl, 2006).

Components of User Experience

The aim of measuring user experience is to take a more comprehensive user-centred approach that also takes into account aspects which go beyond usability. One approach to defining the components of user experience is to characterize specific dimensions that are important aspects of the experience of a product. For this purpose, Hassenzahl distinguishes two dimensions of product qualities, namely the perception of instrumental (or: pragmatic) and non-instrumental (or: hedonic) qualities (Jordan, 2000). The importance of these aspects is motivated by their immediate understandability. While usability evaluation depends basically on interaction with the product, the attributes that enable hedonic judgments are inherent in the product appearance itself.

A third important aspect of user experience is emotional user reactions. For example, in his hierarchical model, Jordan distinguishes several types of pleasure with a product, whereby he insists on high functionality and high usability as necessary preconditions (Karapanos et al, 2008).

User experience metrics

UX is a subjective and holistic concept; it is not easy to define a set of criteria against which it could be evaluated, but there are user experience evaluations where participants evaluate the product against their personal criteria (Norman, 2009). This way of setting metrics is very interesting, as it addresses the subjective nature of user experience well.

From a product creation perspective, each product aims at a certain user experience, e.g. fun, trust, or relaxation. In this case, it is useful to define metrics from the product's perspective rather than from each individual's perspective. UX is dynamic, and it is therefore very hard to define its metrics. Some practitioners evaluate aesthetics while others look for emotional aspects. Hence UX can be momentary, episodic or long-term, adding value to product design.

User experience evaluation methods

UX is a complex concept that requires specific evaluation techniques in order to consider all its aspects. These techniques can be very resource-consuming in terms of people, time, and money. Therefore, cost-effective evaluation techniques are of great importance.

Numerous potential criteria for applicable methods are evident from the literature; therefore one method will not serve all purposes (Osgood, 1952). A set of methods should form a user experience evaluation toolkit so that user experience practitioners can select an applicable method for each case. There are methods developed for examining users' momentary emotions while interacting with a system (Chang et al, 2002) or for analysing emotions after interaction (Guo, 2010). In the user experience literature, a movement from momentary emotion assessment towards longer periods of time has also been noticed (Azhari, 2007). This movement means a change in the way user experience is evaluated: one should not only investigate momentary emotions but also examine how users experience a product as a whole during a longer period, which results in satisfaction.

There exist a number of methodologies to evaluate non-experiential aspects of technology use in a particular context with a particular system and mode of use, but no specific method was found in the literature to measure overall user experience. From the literature review we came across various UX evaluation methods (UXEMs), but they can only be used as momentary, episodic or long-term measures. They are highly time-consuming, emotion analysis is difficult to run, they require high skills to evaluate user experience, and results are difficult to interpret. Some of the methodologies found in the literature review are as follows:
Repertory Grid Technique
Experimental Pilots
SAFE Method
Web Based Surveys
Diary Method
Interviews
Heuristic Evaluation Methods
Cognitive Walkthrough
Integrate user experience evaluation into product development

The basic user-centred design principles apply to user experience design as well. We should first consider users' needs and wants in selected contexts, then iteratively design and evaluate the concepts during the product development process. The methods differ across phases of product development: in the very early phases, concept ideas can be evaluated with surveys. Resource-efficient user experience evaluation methods help introduce user studies as new activities into companies that have not followed user-centred design earlier. The goal of user evaluation is to ensure that all products will be valuable and enjoyable for the target users, and this pays back in customer satisfaction, ensuring customer loyalty.
3. Factor analysis
Thurstone (1947) was the originator of factor analysis, which was developed in the area of psychometrics. Nowadays, the method is frequently used as a statistical data reduction technique to explain variability among observed random variables in terms of fewer unobserved random variables called factors. The technique is used in many fields, including economics and sociology. The method can now be implemented easily using convenient software packages, even by users who lack detailed knowledge of the mathematical background. Factor analysis is normally used to identify underlying communalities amongst the scales employed. The most frequently obtained communalities or factors are:
Evaluation, defined by adjectives such as liked-disliked, positive-negative, honest-dishonest.
Potency, defined by adjectives such as heavy-light, strong-weak, hard-soft.
Activity, defined by adjectives such as active-passive, hot-cold, fast-slow.
4. Semantic differential methodology
SDM is recognized as a useful method for measuring a person's semantic images of a concept, and many applications in various areas have been reported. It has proven to be a flexible and reliable instrument for measuring attitudes to a wide range of stimuli. The method normally employs rating of stimuli using bipolar scales. Each bipolar scale is defined by a pair of adjectives with contrasting meanings, such as Fast-Slow or Cheap-Expensive.

Semantic differential is a method used to evaluate overall user experience not only in a short period of time with less resource consumption, but it also produces satisfactory results. Azhari (2007) highlighted the importance of market research and feedback with known demographic profiles for better design, using semantic differential methodology with a multidimensional scaling approach. The study was conducted in two different countries, Australia and Malaysia. It was claimed that the degree of satisfaction towards a particular product lies in the perceived value for users of different age groups.
Users of different age groups behave differently towards the same product, because there are differences in consumer behaviour. In that study the SD method was used to evaluate the preferences and image perceptions of mobile phones. In another study, Green et al (2009) identified the parameters of customer satisfaction and loyalty for mobile phones. The impact of perceived customer value, perceived service quality, and trust on customers, and the influence of satisfaction on loyalty, were investigated. A questionnaire survey was used to collect data on customer preferences regarding the above-mentioned constructs. It was found that customer satisfaction and loyalty are predicted by trust and emotional value. It was further concluded that perceived service quality is a significant factor influencing trust, which affects customer satisfaction and causes changes in loyalty to an extent.

In this paper the author used semantics covering broadly the overall user experience of technology use, such as social, emotional, aesthetic, fun, joy, cool and mood, to measure overall user experience. It is also claimed that SDM can be used to measure overall user experience in order to identify user preferences in a short time and with less resource consumption. Results obtained through SDM that indicate positive experience signal satisfaction, while preferences that result in negative experience indicate that improvements to the product form are needed. Preference indicates how the product form can be improved to achieve satisfaction and loyalty.
5. Research methodology
Research methodology is a plan or strategy to conduct research work in a scientific way and link methods to outcomes (Creswell, 2002). It defines how to develop the research activity and what measurements should be used to advance the research and achieve the research goals.

The work presented in this paper is the confluence of two approaches, and answers to the problems are sought through the following methodologies:
Literature review
Survey
In the first phase, a detailed explorative literature review was conducted; from the literature review, the relevant latent constructs which affect customer loyalty were identified. The identified constructs are based on user experience. The role of SD in UX studies for achieving customer satisfaction and loyalty was identified. The first phase was helpful in establishing the empirical evidence for semantic differential methodology.

In the second phase, the author used a questionnaire (survey) with students of BTH (Blekinge Institute of Technology) to understand their experience of using mobiles and to measure overall UX. The questionnaire is based on semantic differentials of bipolar pairs of adjectives. Initially, all possible pairs of adjectives were selected which best explain the psychological aspects of customers and the pragmatic and hedonic qualities of mobile phone products; they were selected from literature used explicitly for the evaluation of mobile phone products. These pairs of adjectives served as characteristics or attributes which best explain users' perception and the overall UX customers derive from a mobile phone product. A survey was conducted in order to collect user evaluation data on mobile phones using the SD method.

After obtaining the user evaluation data, factor analysis was conducted to identify the best possible factors which show customer preference coming from the overall UX of mobile phone products. Questionnaire reliability was tested statistically through the Cronbach's alpha value (Cronbach, 1951); for research purposes, alpha should be above 0.7 to 0.8. Models were constructed by the author using stepwise regression analysis of customer preferences for mobile phones. The models show a strong relationship between customer preferences and mobile phone experience, and carry an explanation of various parameters of experience which, if implemented in design, mean the future mobile phone product will not only exhibit satisfaction but also increase profitability and ensure loyalty.
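As an illustration of this reliability check, Cronbach's alpha can be computed directly from its definition; the minimal sketch below uses invented ratings, since the paper's data set is not published.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals).
    item_scores: 2-D array with rows = respondents, columns = questionnaire items."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative 5-point ratings from four respondents on three items
ratings = [[4, 5, 4],
           [3, 4, 3],
           [5, 5, 4],
           [2, 3, 2]]
print(cronbach_alpha(ratings))  # 0.9 -> above the 0.7-0.8 threshold cited above
```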
6. Results
From the literature review (LR) results, the author mainly concluded that SDM can be used in conjunction with satisfaction in order to evaluate overall UX and to identify user preference. The literature review results indicate that SDM is used in the evaluation of UX in different contexts. Therefore, the author used this method as a method of investigation to identify the role of SD in evaluating the pragmatic and hedonic aspects of UX along with measures of UX. The author identified satisfaction being used as a measure to evaluate UX in 72% of cases, as shown in Table 1. The selected papers were studied, and 16 out of 22 provide
evidence of satisfaction as a measure to evaluate UX. 13 out of 22 papers (59%) provide evidence of hedonic factors, while 11 out of 22 papers (50%) show pragmatic use. The other measures of UX, emotion and expectation, were each identified in 45% of the papers (10 out of 22).
Table 1: Total number of occurrences of factors/methodology

Factor / Method of Investigation   Number of Occurrences   Percentage
Satisfaction                       16/22                   72
Hedonic                            13/22                   59
Pragmatic                          11/22                   50
Emotion                            10/22                   45
Expectation                        10/22                   45
The second phase is based on conducting the empirical study. This phase mainly served to put the information collected in the first phase into a practical example in order to identify user preferences through SDM. To do so, initially 52 different semantics of factors like aesthetics, emotions, identity, utility and stimulation were selected from the literature for collecting UX data. The semantics were collected from literature used in different studies to evaluate user experience. They were grouped according to Osgood's distribution of bipolar adjectives into Activity, Evaluation and Potency (Osgood, 1952). A pilot survey was conducted in order to define the scale, and finally 31 semantics were selected for the final data collection, as shown in Table 2 below.
Table 2: Distribution of adjectives according to Osgood (1952)

Activity                     Potency                      Evaluation
Practical - Decorative       Unadorned - Splendid         Traditional - Modern
Adoring - Practical          Masculine - Feminine         Popular - Individual
Geometrical - Streamline     Complex - Concise            Hand-made - Hi-tech
Abrupt - Unisonous           Inconvenient - Convenient    Rough - Delicate
Pointed - Rounded            Babyish - Mature             Mediocre - Noble
Hale - Fluid                 Dull - Captivating           Heavy - Light
Cheap - Expensive            Tardy - Streamlined          Clumsy - Clever
Normal - Particular          Sharp-edge - Curvature       Cautious - Bold
Conventional - Inventive     Repelling - Appealing        Coarse - Delicate
Undemanding - Challenging    Indistinct - Distinct        Rejecting - Inviting
                                                          Disagreeable - Agreeable
An SD data collection sheet was used to collect the data. Data were collected from 21 participants, mainly BTH (Blekinge Institute of Technology) students, for further analysis. A statistical technique, factor analysis, was employed in order to form groups of semantics exhibiting particular characteristics. Each group is formed on the basis of the likelihood that semantics of various factors share the same characteristics. There are nine factors whose eigenvalue is greater than one, but only the first four factors were selected, as the remaining factors have too few semantics to be considered for further analysis, as shown in Table 3 and Figure 1 respectively.
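A minimal sketch of this eigenvalue-based selection (the Kaiser criterion) is shown below; the ratings matrix is randomly generated purely for illustration, and a full analysis would also involve factor loadings and rotation, which a dedicated statistics package would provide.

```python
import numpy as np

# Kaiser criterion: retain factors whose eigenvalue of the correlation matrix
# exceeds 1. The ratings matrix is randomly generated purely for illustration;
# the real study used 21 participants x 31 semantics on a 1-5 scale.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(21, 31)).astype(float)

corr = np.corrcoef(ratings, rowvar=False)      # 31 x 31 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending

retained = eigenvalues[eigenvalues > 1.0]
print(len(retained), "factors with eigenvalue > 1")
print("variance explained (%):", np.round(100 * retained / eigenvalues.sum(), 2))
```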

Figure 1: Factor analysis chart
Figure 1 above gives the semantic representation of the factors identified in UX through SDM. The average values of the four factors for the evaluated product are also plotted in the figure. F1 represents positive experience for the Pragmatic factor, while F2 (Practicality) is neutral. F3 (Activity) turned out to be negative from a UX perspective. F4 (Structural) represents positive experience in UX, showing that the product has a good shape but needs improvement.
Table 3: Factors with eigenvalue greater than 1

                  F1      F2      F3      F4      F5      F6      F7      F8      F9
Eigenvalue        .470    .867    .349    .126    .036    .757    .324    1.185   .043
Variability (%)   4.097   2.475   0.803   0.085   .569    .668    .272    .823    3.366
Cumulative (%)    4.097   6.573   7.376   7.461   4.030   69.697  73.969  77.792  81.158
There are 17 factors in total, but only the nine factors whose initial eigenvalue was over 1 and whose extraction sums of squared loadings reached a cumulative 81.158% were selected, as shown in Table 3 above. This can be further interpreted in Figure 2 below.

Figure 2: Eigenvalues plot
Based on factor pattern analysis and the underlying communalities of adjectives in each factor, four factors were selected for the final analysis; the remaining factor categories contained too few variables to be used. Factor loadings with eigenvalues greater than 1 were extracted from the final data set, as shown in Figure 2. The semantic differential chart computed from the final survey is provided in Figure 3. The mean values of the opposite pairs of adjectives are computed. Extreme values are of particular interest: they give an idea of which characteristics are particularly critical and need more improvement, and which are particularly well resolved.
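The computation behind such a chart is straightforward; the sketch below (with invented ratings and only three of the 31 bipolar pairs) centres the 1-5 scale on zero so that positive means point towards satisfaction and negative means point towards needed improvement.

```python
import numpy as np

# Ratings on a 1-5 bipolar scale; three of the 31 pairs, invented for illustration.
pairs = ["Inconvenient-Convenient", "Dull-Captivating", "Clumsy-Clever"]
ratings = np.array([[4, 2, 5],    # respondent 1
                    [3, 2, 4],    # respondent 2
                    [5, 1, 4]])   # respondent 3

profile = ratings.mean(axis=0) - 3  # centre the 1-5 scale on zero
for pair, mean in zip(pairs, profile):
    verdict = "satisfaction" if mean > 0 else "needs improvement"
    print(f"{pair}: {mean:+.2f} ({verdict})")
# Inconvenient-Convenient: +1.00 (satisfaction)
# Dull-Captivating:        -1.33 (needs improvement)
# Clumsy-Clever:           +1.33 (satisfaction)
```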

Figure 3: Semantic differential chart
Figure 3 shows that the overall UX of mobile phones from the usability and functionality points of view is satisfactory and showed positive experience, while from the practicality point of view more negative experience is observed. The experience showed that the mobile phone form should be more practical, convenient, simple and normal. From the activity and structural points of view, more negative experience is also observed.

The semantic differential chart in Figure 3, computed from the SD data, represents customer aesthetics, expectations, emotion and identity. Positive values on the scale represent customer satisfaction, while negative values indicate that improvement is needed.
7. Conclusion
The main purpose of this paper is to propose a methodology to measure overall UX and to explore user preferences suitable for product form improvement. Semantics of various factors (aesthetics, emotions, expectations, fun, joy, etc.) were used to measure overall UX. A set of product semantics along with user preferences was identified. To improve product form, it is necessary that the product not only satisfies functional requirements but also satisfies customers' psychological needs subjectively, guiding users with life experience. Various methodologies and tools used to measure UX were studied in the light of their strengths and weaknesses. No evidence was found in the literature of a specific user experience methodology to measure overall UX. The importance of the non-experiential perspective of technology use and its worth in design was explored. Product-specific semantics used to measure overall UX were identified. It was found that there is no definite set of metrics which can be used to measure UX; rather, it depends on the study type and environment. Furthermore, satisfaction was used as the main metric to measure overall UX. Satisfaction was measured using the semantics of various factors such as emotions, aesthetics and expectations.
8. Future work
SDM was used to measure overall UX in order to identify user preferences. The semantic chart provides an explanation of the weak design form of the product. It is suggested that SDM should be compared with other self-reporting techniques to observe its effectiveness.
References
Azhari bin Mohd Hashim, Raja, A., (2007), The multidimensional scaling: an interactive method for establishing perceptions of the appearance of product.
Cronbach, L. J., (1951), Coefficient alpha and the internal structure of tests, Psychometrika, Vol. 16, pp. 297-334.
Creswell, J.W., (2002), Research Design: Qualitative, Quantitative and Mixed Method Approaches, Sage Publications, 2nd ed., ISBN 0-7619-2442-6.
Chang, C.-C. and Shih, Y.-Y., (2002), A differential study on the product form perceptions of different age group users.
Green, W. et al., (2008), Developing the scale adoption framework for evaluation (SAFE), pp. 49.
Guo, F. Tian., (2010), Consumer demand oriented study on mobile phones form perception design method, IEEE.
Hassenzahl, M., (2006), Hedonic, emotional and experiential perspectives on product quality. In: C. Ghaoui (Ed.), Encyclopedia of Human Computer Interaction, pp. 266-272.
Hassenzahl, M. and Tractinsky, N., (2006), User experience - a research agenda, Behaviour & Information Technology, Vol 25, No. 2, pp. 91-97.
Hassenzahl, M., (2003), The thing and I: understanding the relationship between user and product. In Funology: From Usability to Enjoyment.
ISO DIS 9241-210: (2008), Ergonomics of human system interaction - Part 210: Human-centered design for interactive systems (formerly known as 13407). International Organization for Standardization (ISO), Switzerland.
Jordan, P. W., (2000), Designing Pleasurable Products: An Introduction to New Human Factors, Taylor & Francis, London.
Kuniavsky, M., (2003), Observing the User Experience: A Practitioner's Guide to User Research, Morgan Kaufmann.
Karapanos, E., Zimmerman, J., Forlizzi, J. and Martens, J.-B., (2010), Measuring the dynamics of remembered experience over time, Eindhoven University of Technology, Department of Industrial Design; Carnegie Mellon University, Human-Computer Interaction Institute and School of Design, Pittsburgh, PA 15213, USA.
Logan, R.J., Augaitis, S. and Renk, T., (1994), Design of simplified television remote controls: a case for behavioral and emotional usability. In Proceedings of the 38th Human Factors and Ergonomics Society Annual Meeting (Santa Monica: HFES), pp. 365-369.
Mäkelä, A. and Fulton Suri, J., (2001), Supporting Users' Creativity: Design to Induce Pleasurable Experiences, Proceedings of the International Conference on Affective Human Factors Design, pp. 387-394.
Norman, D. A., (2009), The way I see it: Memory is more important than actuality, Interactions, Vol. 16, No. 2, pp. 24-26.
Osgood, C. E., (1952), The nature and measurement of meaning, Psychological Bulletin, Vol. 49, No. 3, pp. 197-237.
Thurstone, L.L., (1947), Multiple-Factor Analysis, University of Chicago Press, Chicago.
UPA (Usability Professionals Association), (2007), Usability body of knowledge. Last accessed 2011-04-22: http://www.usabilitybok.org/glossary
Challenges in Building a Community Health Information
Exchange in a Complex Environment
Ranjan Kini
School of Business & Economics, Indiana University Northwest, Gary, USA
rkini@iun.edu

Abstract: Economists are projecting that the single most important cost to world economies in the future is healthcare cost. Although most developed economies project healthcare costs to grow faster than their gross domestic product (GDP), the projections made for the US are dramatic enough to cause alarm and begin major national conversations. Unlike in Europe, US healthcare has a larger share of its total healthcare costs privatized. Despite the fact that over half of US healthcare costs are borne by corporations and citizens, the government's share, taking care of seniors, is projected to take a lion's share of the GDP in the future and continue to grow. In the US, this initiated studies of successful European and Asian nationalized healthcare plans to identify best practices. One of the first best practices pushed forward is to encourage the establishment of national-level, state, regional or community-level health information exchanges (HIEs). Adopting appropriate information and communication technology (ICT) infrastructure and creating such HIEs was identified as a strategy to address healthcare cost reduction, quality gains, and safety in service provision. Many HIEs have been started in different regions of the US. Most of them, being early starters, were subsidized by federal agencies through grants. The objective of supporting these exchanges was to identify the key ingredients necessary to develop and maintain sustainable HIEs. Reports from these exchanges have identified many critical success factors that need to be addressed in building sustainable HIEs. However, these factors are influenced by the ICT infrastructure and the community, competitive, and stakeholder characteristics of the region. Consequently, in many parts of the US, the adoption of HIEs and subscription to them has been slow. In this study, the challenges facing the establishment of an HIE in a region with three major competing healthcare entities comprising nine hospitals are studied. Data were collected through semi-structured interviews with executives of the three major healthcare providers (hospital groups). Although there are many stakeholders, such as physicians, laboratories, and pharmacies, in making an HIE highly successful, this research is primarily focused on the central players of the HIE, the hospitals. It is clear from the interview responses that they are skeptical of any regional HIE becoming sustainable, and they believe that even for most HIEs, sustainability in the long run is difficult unless it is made mandatory and a cost of doing business for stakeholders in order to control total healthcare costs. But they do agree that there are opportunities for innovative HIEs to add business services to make themselves viable and sustainable.

Keywords: health information exchange; regional health information exchange; sustainability of health
information exchange; HIE and quality of health; health information technology
1. Introduction
It was in 2010, as I was being admitted to hospital for surgery, that I realized the dire need for transformation in health information technology (HIT). Within a span of 3 hours, three different people came up to my bed, asked me the same set of questions, and wrote the answers on blank forms! The third time, as I was answering the questions, I asked the nurse why I needed to answer the same questions again and again. The answer was obvious: the systems in different departments wanted to collect the data and enter it separately! Again, in 2011, when a routine mammogram of a family member needed to be reinterpreted for a second opinion, the lab had to burn a CD to be taken to another radiologist. I realized at that time that we had a long way to go in transforming healthcare in the US through information and communication technologies (ICT). These types of situations are very common in the US even in 2012. This is one of the key reasons the federal government started incentivizing the use of ICT in healthcare.

In the US, there is no doubt that the skills of physicians and the quality of clinical technologies are of the highest level. But, according to a number of studies, the use of information technology in managing data, both internally and externally, among healthcare stakeholders such as hospitals, physicians, pharmacies, labs, outpatient clinics, and payers is inefficient; improving it can save up to 7% of healthcare spending, i.e. ~100 billion dollars, while increasing the quality of healthcare. Consequently, the federal government, which accounts for over 40% of healthcare spending through Medicare and Medicaid, is incentivizing the use of ICT by all healthcare stakeholders. To encourage the use of ICT, the federal government's first approach involved handing out competitive grants to regions, states and stakeholders with the expectation of long-term sustainability of their projects. As the government witnessed the lack of initiative on the part of many stakeholders, especially among physicians, it
came out with an approach requiring stakeholders to use ICT and file reports electronically to get reimbursement for the services they provided (through Medicare and Medicaid). In fact, by 2014, all physicians are expected to file reports electronically that demonstrate "meaningful use" of information technology; if not, Medicare is not expected to reimburse them for their services. This approach has created some degree of panic and anxiety, and is driving structural changes in the healthcare industry.

Another area the federal government has identified where cost reduction and value can be gained is the standardized health record. The healthcare industry identified that it should have three types of records: the Electronic Medical Record, the Electronic Health Record, and the Electronic Patient Record.
EMR definition: The data in the EMR -- clinical data, clinical decision support, controlled
medical vocabulary, charge entry, computerized provider order entry, pharmacy
information, drug interactions, and clinical documentation applications -- is the legal
record of what happened to the patient during their encounter at the CDO and is owned
by the CDO. (Ericksen, 2009)
EHR definition: A subset of each care delivery organization's (medical office's) record,
presently assumed to be summaries of the patient's Continuity of Care Record (CCR) or
HL7's Continuity of Care Document (CCD), which both can simply be called the patient's
electronic health record (EHR or EPR or PHR). This record is owned by the patient and
has patient input and access that spans episodes of care across multiple CDOs within a
community, region, or state. (Ericksen, 2009)
The expectation is that, with standardized EMRs and EHRs, the health data from a consumer's
healthcare encounter with any stakeholder can easily be exchanged with other stakeholders who
may need it. The EMR, which is yet to be standardized, is typically created by each stakeholder
using proprietary vendor software, whereas an EHR is created by summarizing a variety of EMRs
from different stakeholders; the EHR is then available to all stakeholders according to the service
provider's needs. The EHR, too, is yet to be standardized, and the federal government, using the
National Health Information Network (NHIN) as the base architecture, encourages each state to
define its own EHR standard and establish an EHR repository for Health Information Exchange
(HIE). To encourage states and regions to start such HIEs, the Department of Health and Human
Services (DHHS) initiated grant programs to form cooperatives, alliances, and not-for-profit
organizations as HIEs. Since 2005, some 150 HIEs have been started, but few (West, 2012) have
become very successful, and most face challenges in remaining sustainable. Although DHHS
suggests forming state-level HIEs, the pioneer of the successful IHIE (Indiana Health Information
Exchange), Marc Overhage (Health, 2006; Yasnoff, 2004), suggests that regional HIEs make more
sense than state-level ones. Regardless of whether a HIE is regional or state-level, the business
model that will be appropriate and sustainable for a HIE is the key interest of this study.
2. Literature
Many studies have evaluated and discussed the value and benefits of the NHIN and HIEs, and the
possible cost savings for both the federal government (Medicare and Medicaid) and payers
(employers and insurance companies). Most of them promote the formation of a network of HIEs.

In 2001, Mandl et al. suggested a standard EMR, used by all stakeholders, that would be easily
portable to any system and also available to the patient through the web (securely) for periodic
review, much like personal bank account information or a credit rating. So far, however, that has remained a dream.

Yasnoff et al. (2004) suggested, based on their study results, that ICT is critical for minimizing
medical errors and enhancing quality of care. The study pointed out that, in one institution,
computerized physician order entry (CPOE) alone reduced charges by 12.7%, reduced costs by
13.1%, and decreased serious medication errors by 55%. They estimated that using CPOE
nationally could save 44 billion dollars. They conclude that the quality and efficiency of healthcare in
the US can be improved simply by using CPOE, with readily accessible patient information and
medical knowledge to support the physician at the time of need. They suggest a network of
community HIEs to supply the associated local participating physicians with patient information,
yielding quality outcomes with savings.

Many early studies pointed out that hospitals and other stakeholders were slow in adopting ICT and
in reaching the higher stages of the EMR adoption model (Health, 2006; West, 2012; Grossman,
2008). Some stakeholders pointed out that there was no single EMR standard agreed upon by all
stakeholders. By 2013, providers are expected to migrate to the international ICD-10 codes for
EMRs (Hanson, 2012). Just as many institutions had agreed on and accepted Health Level 7's
Continuity of Care Document (HL7 CCD) as a good EHR standard, ASTM suggested the Continuity
of Care Record (CCR) as a better EHR standard, which sent both vendors and stakeholders back to
the drawing board and into a wait-and-see posture. This further slowed the adoption of EMRs and,
in turn, EHRs. Despite several successful HIEs across the US, many observers, pointing to the
differing HIE models, are still not convinced that the EMR and EHR have been standardized or that
quality of care has improved (Greenemeier, 2009; EMR, 2011).

To incentivize the start of Regional Health Information Organizations (RHIOs), the federal
government supported these initiatives through grant programs. Over 100 such organizations were
started but, over time, many became unsustainable (West, 2012). Even the few that have survived
have found it difficult to make a strong enough case for stakeholders to maintain a viable model.

In Establishing (2005), Top Ten (2007), and Alfreds (2008), the authors emphasize that most of
these RHIOs or HIEs have not become successful because one or more of the following factors
were not properly addressed:
Business Model: A clear understanding of the HIE's business model is needed: how are
revenues generated, given that cost savings may cut revenue for some stakeholders?
Collaboration and Trust: A collaborative environment and trust were not properly
established among all HIE partners.
Participation roles and benefits: The community and stakeholders needed a clear and
open discussion about their contributions to, and gains from, the HIE.
HIE Governance: The governance structure and processes need to be clearly
understood by payers, providers, consumers, and all other stakeholders.
Roadmap for HIE: A clear roadmap needs to be established for the community to
understand, and for stakeholders to engage in, the development.
Value Proposition: The value propositions of the HIE are to be clearly articulated and
communicated to all stakeholder groups.
Communication of Returns: The financial costs and benefits, and the return on
investment of participation, are to be clearly explained to partners.
Technology constraints: Lack of a common HIE architecture/framework; lack of common
definitions and standards; lack of common standards for data creation and exchange.
Security and privacy risk: The perceived difficulty of addressing HIPAA requirements and
data security has to be addressed.
IT adoption: Many stakeholders had limited their adoption of HIT, which in turn delayed
their participation.
Incremental Movement: The community's reluctance to move in the direction of a HIE
initiative, creating difficulty in educating consumers and getting them to participate as
stakeholders.
Market Readiness and Awareness: Most stakeholders, including payers and providers,
were not incentivized to build and/or participate in a HIE.
Some or most of the above factors have been identified by several studies (Grossman, 2008) as the
leading causes of the poor sustainability of HIEs nationally. The few currently surviving HIEs are
constantly re-energizing their models (West, 2012). For example, the Indiana HIE is going across
the border into Illinois to recruit providers, physicians and other stakeholders to increase its
(subscription and transaction) revenue base. To sustain themselves in the long term, surviving HIEs
need to find a viable model for generating revenue while working towards the objective of reducing
the overall cost of healthcare and gaining efficiencies.

Although a HIE typically needs the presence of all stakeholders, as shown in Figure 1, the prospect
of them coming together to form a cooperative, or a joint venture led by the leading partners, is
considered very difficult because of the factors mentioned above (Health, 2006).


Figure 1: Health information exchange
In Indiana, although the statewide IHIE has a presence in the Northwest Indiana region, the authors,
following the suggestions made in several studies and reports (Health, 2006; Grossman, 2008;
West, 2012), were interested in investigating the feasibility of a local HIE in the region.

In Northwest Indiana, there are three hospital groups. The first has four hospitals (1,500 beds) in the
region but is part of a 14-hospital group in Indiana; the second has two hospitals (600 beds) in the
region; and the third has three hospitals (800 beds) in the region. These hospitals serve a region of
about half a million people. They compete for revenue among themselves as well as with many
neighboring metropolitan hospitals of greater Chicago. Many of the physicians in the region are
associated with, and practice in, competing hospitals. In addition to these hospitals, there are
several small hospitals of 40 or 50 beds in the region, a variety of local laboratories and imaging
centers as well as branches of national laboratories, and several national and regional pharmacies
competing with hospital pharmacies. Thus, there are potentially many stakeholders who are in a
position to become part of a regional HIE, if one were made possible and feasible.

The authors' interest in this study is to understand the perspectives of these hospitals' senior
management regarding a HIE and to investigate their views on a Northwest Regional HIE. The
authors therefore interviewed senior hospital administrators, since in most HIEs the hospitals are
the anchor tenants and the primary supporters needed to sustain a HIE. Most of the literature
relating to HIT and HIEs (listed in the references) was studied to generate appropriate interview
questions for the executives. These questions are based on the reported results and on the
perspectives of other healthcare executives regarding the successes and failures of HIEs in the
United States.

Although earlier studies encourage starting a HIE, more specifically a regional HIE, they also warn
that making a HIE sustainable and successful is a complex task. This study was therefore
conducted to investigate the perspectives that influence decision-making in the formation of such a
collaborative HIE (Figure 1).
3. Research methodology
The literature on HIEs indicates that the perspectives of senior executives determine the
formulation, governance, success, and sustenance of a HIE (Establishing, 2005; Health, 2006;
Grossman, 2008; Alfreds, 2008). Consequently, a set of questions was designed and developed
into a semi-structured interview questionnaire (Health, 2006; Kvale, 1996). The expectation was to
obtain candid answers from the senior executives of the three hospital groups, separately. The
interview was designed to be semi-structured so as to allow room for follow-up questions when a
question was unclear or when a comment needed clarification. Furthermore, when respondents
digressed into a topic of their own interest, follow-up questions brought the focus back to the HIE
discussion.

The authors asked each hospital group's main office to suggest an executive for the interview. The
titles of the executives of the three hospital groups are: Manager of Clinical Informatics; Chief
Financial Officer; and Chief Financial Officer & Chief Information Officer. These titles, along with the
edited responses from the three interviews, are included in Appendix 1. (If Appendix 1 is not
included in the main paper for lack of space, it is available upon request.)
4. Results
The first interview was with the Manager of Clinical Informatics of the 14-hospital group, which has
four hospitals in the region. The manager responded that they have been members of IHIE for
some time. They pay their own subscription as well as their physician associates' fees to access
IHIE data. They pay the physicians' IHIE subscription fees for fear that physicians, lacking access to
the data, would stop admitting patients to their hospitals and move to a competitor that provides the
service. This executive does not think there is evidence at this time that quality of care has improved
because of IHIE, but did acknowledge that IHIE's presence in Illinois has helped them retain
patients, since 25 percent of their patients come from Illinois. This manager did not have an opinion
on whether a local, state, or regional HIE would be more helpful one way or the other, and did not
think there were enough patients from other parts of the state to make IHIE much more valuable.
The manager did say that they enter a limited amount of data from their Epic system (PHR) into
IHIE's EHR. The manager thinks that at this point there is no standard for the EHR, so each hospital
decides for itself what data to share and populate. This manager does not think there is going to be
a sustainable model for HIE, and regards their investment as an IHIE subscriber more as a
convenience to physicians and as protection against losing business for not having it.

The second interview was with the Vice President of Finance of the hospital group with two
hospitals. This executive is clearly not a strong supporter of HIE; they are not subscribers to any
HIE. They think participation may eventually be mandated; until then, they believe, they will be able
to support their physicians through the proprietary Epic system. The Epic system is used by all
major hospitals in the region. Thus, as a result of the contractual agreement with Epic, physicians
practicing in hospitals where Epic is used have access to patient data even when treating the
patient in another hospital that also uses Epic. This executive believes that the number of patients
from outside the region is too small to warrant a subscription to IHIE, and puts forward the same
argument on behalf of other stakeholders, such as small hospitals, labs, and pharmacies, regarding
subscribing to IHIE. This executive regards belonging to IHIE as a charity strategy, and believes
that the subscription cost is a sunk cost and that the only stakeholders who benefit from it are
physicians and insurance companies. This executive supported their physicians' access to Epic
data from their private offices by installing equipment and training them to use the system, using
federal grant money. The executive does not anticipate any healthcare quality benefits from HIE at
this time.

The third group of hospitals was represented by both the chief financial officer and the chief
information officer. These executives believed that HIEs are beneficial even if the subscription cost
is high. They have subscribed to IHIE for the same reason the first hospital group did: to enable
their physicians to access the data (EHR) from anywhere. They also believe that membership in
IHIE is beneficial because the reports physicians need can easily be created from the data the
hospital populates in the IHIE EHR repository, saving work for the hospital ICT department.
Although there is no clear standard for the EMR and EHR at this time, they think standardization will
eventually happen. They believed that it does not really matter whether a HIE is at the regional,
state, or national level; it will work the same way, and they are not very concerned about it. They
also believe that a regional HIE in Northwest Indiana would have difficulty sustaining itself in the
long run unless it provided additional services to generate revenue. Like their competitors, these
executives noted that the Epic software provides a HIE type of service to physicians in the region
through its proprietary PHR, although they think a HIE will eventually be mandated by the federal
government. These executives see value in HIE but think a significant amount of work is needed to
standardize the EMR, the EHR, and the middleware for interfacing with the variety of existing
software used by the various stakeholders.

The executives' responses indicated that they are all concerned with the business model of HIEs.
They believe that HIEs clearly have a role, and that the value of a HIE in terms of cost efficiencies
gained, convenience to physicians, and ease of patient data entry and access is as expected. What
they are really not sure about are the benefits in terms of the quality of care delivered and the
revenue gains for one or more stakeholders. They believe that, whether or not all stakeholders
embrace HIE, sooner or later the federal government (the dominant payer) will require every
stakeholder to connect to a HIE at the regional, state, or national level.

In this study, two of the three groups have already subscribed to the state-level IHIE. They have
also subsidized access for physicians both at the hospital and at physicians' offices, so that there is
minimal discontent among physicians. The remaining group indicated that, although it has not
subscribed to IHIE, its clinical system, Epic, is useful for accessing Epic data (the PHR, an Epic
provision for clients) from neighboring hospitals, since both of the other groups use Epic. This
hospital group has no immediate plans to become an IHIE subscriber. According to its senior
executive, the loss of value will be felt only when a patient from a hospital without an Epic presence
is admitted to their hospital. Thus, if Epic or similar clinical systems are in use at all or most of the
hospitals in a region, and if the percentage of patients admitted from outside the region is very
small, then there may be opportunities for an Epic-type product with appropriate market penetration
to be the default value provider rather than a HIE. Extending this logic, and ignoring EMR and EHR
standards for the moment, it is possible for a clinical system vendor to become the de facto HIE
value provider. Indeed, vendors have recently been promoting this value to their clients to boost
their market share.
5. Discussion
People need healthcare for several reasons: they fall sick as they age; they have accidents in the
course of normal living and activities; they are genetically prone to certain types of ailments; their
environmental or living conditions may cause certain ailments; they develop hobbies that lead to
certain types of injuries or accidents; and they develop vices that are detrimental to their health. Of
these, all but the first three are controllable by individuals. However, it is difficult to attribute a given
healthcare cost to any one of these causes. Payers try to manage healthcare expenses while
providing the best care, as defined in the contract, to individual subscribers, while the cost of most
healthcare products and services grows at a higher rate than in any other sector of industry. Thus, it
is logical that the largest healthcare payers, the U.S. government (Medicare and Medicaid) and
large employers, would like to rein in the cost of healthcare. In this process, one of the first items
brought to the attention of all stakeholders in the industry is reducing the cost of managing
healthcare information.

Although the healthcare industry has made tremendous progress in the use of computer
technologies in the clinical area, its use of ICT in information management has lagged behind all
other industries. Information management in healthcare delivery has the potential to yield significant
efficiencies and to lower the cost of healthcare delivery, but the adoption and diffusion of ICT in
hospitals has been very slow. Thus, the largest payer, the federal government, had to intervene and
push for the use of ICT by all stakeholders. The cost efficiencies gained through reduced
paperwork, decreased fraud, an enhanced quality of healthcare delivery, and quality healthcare data
for analysis and research are expected to be some of the major benefits. An NHIN needs to be in
place to derive all these benefits, and there are many ways of designing an NHIN and making it
relevant to all citizens and stakeholders. A distributed network of HIEs is one such alternative, but
the number, size, ownership, governance, and sustainability of HIEs all immediately become
challenges.

The sustainability issue drives HIEs to find ways for stakeholders to cooperate, collaborate, and
govern within a highly competitive industry with declining revenues. If providers and stakeholders
were to decrease their costs by at least the amount of their HIE subscription (or of their investment
in, and governance of, a newly formed HIE), then there would be at least a business case for a HIE.
The IHIE seems to have found a way to make itself reasonably sustainable through value-added
activities such as Quality Health First (QHF), with pay-for-value as the motivating factor for linking
physicians and payers/employers into IHIE, and Health Information Technology for Economic and
Clinical Health (HITECH) programs to engage patients for public health and cost containment.
However, some providers have indicated that IHIE is expensive and are not sure whether it
generates enough value to be worthwhile, especially as IHIE charges for custom consolidated data
analytics reports from its data repository over and above the subscription fees. Regardless, IHIE
has developed a business model viable enough to persuade at least two of this study's hospital
groups to continue with IHIE rather than support the idea of a regional HIE in Northwest Indiana.

If a regional HIE is to be a viable business, in the study region or elsewhere, the investment made in
its ICT infrastructure will probably need to be properly leveraged. There is an opportunity for a HIE
to use its infrastructure to meet many stakeholders' data center needs for both clinical and financial
data. This has the potential for significant savings for most small to medium-sized stakeholders, be
they physicians and physician groups, labs, clinics, imaging centers, or surgery centers.
Independent HIEs could also use the infrastructure to develop educational and knowledge support
services, as WebMD does, and generate revenue through advertisements. But this would demand
aggressive, innovative approaches from stakeholders or entrepreneurs.

Acronyms

ACO - Accountable Care Organization
ASTM - American Society for Testing and Materials
CCD - Continuity of Care Document
CCR - Continuity of Care Record
CDO - Care Delivery Organization
CDR - Clinical Data Repository
CDSS - Clinical Decision Support Systems
CMV - Controlled Medical Vocabulary
CPOE - Computerized Provider Order Entry
CRS - Care Record Summary
EDI - Electronic Data Interchange
EHR - Electronic Health Record
EMR - Electronic Medical Record
EPR - Electronic Patient Record
Epic - Clinical system and vendor name
HIPAA - Health Insurance Portability and Accountability Act
HL7 - Health Level 7 international health information technology standards
NHIN - National Health Information Network
PHR - Patient Health Record
RHIO - Regional Health Information Organization
References
Alfreds S. T., Masters E. T., & Himmelstein J. (2008) Opportunities for Facilitating Electronic Health Information
Exchange in Publicly Funded Programs: Findings from Key Informant Interviews with Public Health Agency
Leadership and Staff, Center for Health Policy Research, University of Massachusetts, Medical School,
Shrewsbury, MA, http://www.umassmed.edu/healthpolicy/HIT/PolicyDevelopment.aspx
EDI Today. HIE Tomorrow, (2008) INGENIX, MN,
http://www.optuminsight.com/content/attachments/IX_PYR_CL_22409_EDItoday_WP.pdf
EMR Companies Holding Practice Data for Ransom (2011) http://www.emrandhipaa.com/emr-and-
hipaa/2011/01/19/emr-companies-holding-practice-data-for-ransom/
Ericksen, Andrew (2009) EMR vs. EHR Difference, http://freeemrsolution.com/emr-articles/emr-vs-ehr-
difference/
Establishing a Business Case for Health Information Exchange, (2005) Findings from the State and Regional
Demonstrations in Health Information Technology Regional Meeting, November 8 9,
http://healthit.ahrq.gov/portal/server.pt/community/ahrq_national_resource_center_for_health_it/650
Garets, Dave & Davis, Mike (2006) Electronic Medical Records vs. Electronic Health Records: Yes, There Is a
Difference, A HIMSS AnalyticsTM White Paper, HIMSS Analytics, Chicago, IL,
http://www.himssanalytics.org.
Greenemeier, Larry. (2009), Will Electronic Medical Records Improve Health Care?
http://www.scientificamerican.com/article.cfm?id=electronic-health-records
Grossman, J. M., Kushner, K. L., & November, E. A. (2008) Creating Sustainable Local Health Information
Exchanges: Can Barriers to Stakeholder Participation be Overcome? Center for Studying Health
System Change, NIHCM Foundation Research Brief No. 2.
Hanson, Wayne. (2012) Will Affordable Care Act Uncertainties Affect Local IT?
http://www.digitalcommunities.com/articles/Will-Affordable-Care-Act-Uncertainties-Affect-Local-IT.html.
Health Information Exchange Projects: What Hospitals and Health Systems Need to Know, An
Executive Brief, (2006), by Manatt Health Solutions for American Hospital Association, Chicago,
IL.
Kvale, Steinar. (1996) Interviews: An Introduction to Qualitative Research Interviewing, Sage Publications,
Thousand Oaks California.
Mandl, K. D., Szolovits, P., Kohane, I.S. (2001) Public standards and patients' control: how to keep electronic
medical records accessible but private, British Medical Journal, (BMJ.com), Vol. 322, pp. 283-285.
Medical Records Privacy, (2011) Published on Privacy Rights Clearinghouse,
http://www.privacyrights.org, Revised January 2011.
Mercuri John J. (2010) The Ethics of Electronic Health Records, http://www.clinicalcorrelations.org/?p=2211
January 15, 2010
Oracle Health Information Exchange: Secure, Seamless Data Sharing, (2010), Oracle Corporation,
http://www.oracle.com/us/industries/healthcare.
Shay, Edward F. (2005) Legal barriers to electronic health records,
http://www.physiciansnews.com/law/505.html
Stafford, Nancy (2010) Who owns the data in an Electronic Health Record?
http://www.ehrinstitute.org/articles.lib/items/who-owns-the-data-in
Top Ten Success Factors for Community HIE, (2007) White Paper, Center for Community Health Leadership,
Raleigh, NC, 2007.
West, D. M. & Friedman, A. (2012) Health Information Exchanges and Megachange, Governance Studies,
Brookings Institution, Washington, DC.
Yasnoff, W. A., Humphreys, B. L., Overhage, M., Detmer, D. E., Brennan, P. F., Morris, R. W., Middleton, B.,
Bates, D. W., Fanning, J. P. (2004) A Consensus Action Agenda for Achieving the National Health
Information Infrastructure, Journal of the American Medical Informatics Association, Vol. 11 No. 4, 2004,
pp. 332-338.
Factors Inhibiting Recognition and Reporting of Losses
From Cyber-Attacks: The Case of Government
Departments in the Western Cape Province of South Africa
Michael Kyobe¹, Sinka Matengu¹, Proske Walter¹ and Mzwandile Shongwe²
¹Department of Information Systems, University of Cape Town, Cape Town, South Africa
²Department of Information Studies, University of Zululand, Kwadlangezwa, South Africa
michael.kyobe@uct.ac.za
mshongwe@pan.uzulu.ac.za

Abstract: The South African government has invested substantially in IT to improve service delivery and to
benefit from the low cost of communication via, for example, the internet. However, cybercrime, a lack of
accountability and a failure to evaluate e-developments remain major concerns for society. The level of
awareness of these risks in this sector appears to be low, despite efforts to address these challenges through
government and private sector initiatives and conferences. The draft position paper on information security
(Dept of Public Service and Administration n.d. p2) and the CSIRT initiative clearly point to the fact that success
in government electronic initiatives depends on effective information security management. The present study
examined some of the factors inhibiting the recognition and reporting of losses from cyber-attacks on
government departments in South Africa. A survey was conducted in the Western Cape Province; forty
responses were received and analysed using mixed methods. The results indicate that a lack of clear guidance
on how to calculate losses; a lack of understanding of the legislation and of how it may assist in curbing
cybercrime; a lack of training and awareness creation around cybercrime; and a lack of the knowledge and
capability to assess risks regularly are the major factors inhibiting departments from recognising and reporting
losses from cyber-attacks.

Keywords: cybercrime, human behaviours, IT, information security, public sector, South Africa
1. Introduction
The government of South Africa, like other governments around the world, has invested in modern
Information and Communication Technologies (ICTs) to improve service delivery in the public sector
(Syväjärvi et al. 2009; Nyanda 2010). Consequently, reliance on information systems (IS) has
increased and placed government operations at higher security risk (News24 2007). However, the
level of awareness of security risks in the public sector appears to be low, and the recognition and
reporting of losses resulting from cyber-attacks is a major challenge (Nyanda 2010; De Tolly,
Maumbe and Alexander 2006).

With increasing attacks on government systems (Kayle 2010), it is critical that such attacks are
effectively detected, identified and mitigated, and that early warnings are raised to potential targets.
There is growing concern, however, that limited resources are devoted to information security and to
accounting for the actual cost of losses. Information security challenges cannot be addressed with
only limited knowledge of the actual costs of breaches (Cashell, Jackson, Jickling and Webel 2004).
In many jurisdictions, for cyber-attacks to be considered statutory crimes, the losses must exceed a
specified amount. Insurance firms also require loss estimates in order to determine damages and
recoveries. Accurate record keeping and proper evaluation of cyber losses are therefore necessary.

Recent studies show, however, that poor record-keeping systems continue to be a major barrier to
institutional, legal and regulatory reform; anti-corruption strategies; poverty reduction; and economic
development (De Tolly, Maumbe and Alexander 2006; Marion 2008). The purpose of this research is
to investigate the factors that impede South African government departments in the Western Cape
Province from recognising and reporting losses due to cybercrime. In the first section, we present
the literature review on cybercrime and the theoretical work explaining the difficulties involved in
recognising and reporting losses from cyber-attacks. The information security challenges in the
public sector are then discussed, and some measures taken to mitigate these challenges are
presented. A research model is then developed, followed by the methodology for the present study.
Finally, the research findings are discussed and recommendations are made.
159

Michael Kyobe et al.
2. Literature review
While no agreed-upon definition exists for the term cybercrime, it can generally be thought of as any
criminal activity involving a computer system (Kshetri 2009). Today the world experiences large-
scale, ever-evolving attacks that are blended, malicious in nature, and involve thousands of
computers. The process of identifying, recognising and reporting losses from cybercrime depends
on many factors, including our understanding of what cybercrime represents, the methods used in
risk identification and analysis, the design of systems, and human attitudes or behaviour (Canhoto
2010). Lionel and Von Frederick (2005) argue that crime involves many things and is committed in
many different forms by different agents; consequently, it is unlikely that the scope and power of its
facets can be captured by a single theory (Lionel & Von Frederick 2005). In the following sections,
we discuss some theoretical perspectives on electronic crime and the challenges involved in its
recognition and reporting.
2.1 Lack of understanding of cybercrime impedes its recognition
In his position paper for the Oxford Internet Institute, Baker (2010) argues that the collection and
analysis of cybercrime data is often hampered by a lack of understanding of what cybercrime
means or represents. This lack of understanding, which is also evidenced by various ambiguous
and conflicting interpretations of the term, impedes its recognition and measurement. Canhoto
(2010) also observed that this lack of understanding is complicated by the fact that the types of
crime organisations monitor differ, and as such common strategies for addressing crime challenges
may be problematic. For instance, crime monitored by financial institutions may not be relevant to
law enforcement agencies. She maintains that the process of detecting cybercrime must be
conducted correctly and that the interpretation of the data objects is conditioned by many factors,
including the observer's position in the organisation, the characteristics of the environment, and
cognitive processes.
2.2 Information asymmetries
Information asymmetry concerns decisions in transactions where one party has more or better
information than the other. This creates an imbalance of power, such that the less-informed party
lacks negotiating power or the ability to retaliate in the event of a breach of agreement.

Asymmetric information can be a strong impediment to effective security. Many institutions collect
data about security breaches but fail to share it with their stakeholders. Banks, for instance, do not
want to share information on fraud and cyber-attacks, resulting in a shortage of the relevant
information needed to fight crime effectively. It has also been reported that some organisations
inflate reported crime statistics (Levi et al. 2007), while others, such as Internet Service Providers,
may understate the crime committed by their customers.

Christensen and Laegreid (2007) discuss several bureaucratic, political and hierarchical structures
and policies that make cross-boundary sharing of information, and decision-making about priorities,
resources and systems, difficult in the public sector. For instance, some privacy acts restrict the
sharing of certain kinds of data between government institutions and business partners. A shortage
of hard data about information security failures therefore becomes a major impediment to
determining losses and containing crime.
2.3 Fragmentation of legislation and law enforcement
The fragmentation of jurisdictions hinders rapid response. A crime committed by someone in a
different country may be difficult to investigate due to differences in, or the absence of, appropriate
legislation and bilateral agreements. Phishers often send hot money through the banks of states
with a relaxed attitude to asset recovery, resulting in money laundering. Traditional mechanisms for
international police cooperation have also been found to be too slow and expensive to cope with the
demands of the Internet age.
2.4 Other economic factors
In his presentation on how economics and information security affect cybercrime, Guerra (2009)
outlines several important economic issues: for instance, that crime can be extremely profitable, has
low overheads, and carries little risk of prosecution. Guerra adds that business and government
have less money to spend on security even as the threat grows. Similar concerns have been
expressed by other writers. Anderson and Moore (2006) discuss the problem of misaligned
incentives, whereby the person responsible for securing a system or reporting security breaches
has no incentive to do so because a breach does not affect them (or they do not stand to benefit
from ensuring security).
2.5 Information systems security challenges in the public sector
2.5.1 Cybercrime in the public sector in South Africa
Managing cybercrime across the 46 public departments is a challenging task. A draft policy on cyber
security was gazetted only in 2009, by the then Minister of Communications, Siphiwe Nyanda.
According to this draft, South Africa does not have a coordinated approach to dealing with
cybercrime; while various structures are in place to deal with cyber security, they are perceived to
be inadequate (Sapa 2010; Von Solms 2012).

Few empirical studies on cybercrime in South Africa have been conducted, so there is much
reliance on news articles and online reports (Kayle 2011). Organised crime syndicates are,
however, increasingly targeting government systems. According to the Information Security Group
(ISG), the Department of Home Affairs is estimated to have lost around R400 million in 2009. In a
fraud case in January 2009, an employee of CIPRO colluded with syndicates to divert R51 million of
tax refunds from the South African Revenue Service (SARS) (Kayle 2010). In September 2008, a
court sentenced two SARS employees and their accomplices to 15 years' imprisonment after each
had defrauded the taxman of nearly R500 000 (Otto, 2008).

Other African countries have also experienced cybercrime, although only a few studies report on it.
Wangwe, Eloff and Venter (2009) reported that cyber-fraud remains a big concern in East Africa.
Heeks (2002) reports that in many African countries data quality and data security are very poor and
there are few mechanisms to address these issues; he adds that digital signatures are not accepted
in some countries. He concludes that while there has been some progress in e-government, much
remains to be accomplished in terms of computing and telecommunications infrastructure in Africa.
These sentiments are shared by Gichoya (2009) in Kenya. Adomi and Igun (2008) reported that
Nigeria is another African country experiencing cybercrime; they state that this has led to Nigerian
email accounts and networks being blocked in other countries. This shows that Africa, too, is not
safe from cyber criminals.
2.5.2 Lack of accounting, legal and IT skills
Some of the problems relating to the escalation of cybercrime and poor security in government
departments could be attributed to a lack of skills in accounting, law and IT (The Auditor-General
2010). According to Marion (2008), the Auditor-General found that municipalities are failing to
account for funds meant for service delivery because of a critical shortage of staff and a lack of
internal controls. Of the 138 municipalities whose books were audited in 2008, only two received
clean audits. Fifty municipalities received adverse or disclaimer opinions, meaning that their financial
statements were fundamentally flawed or that the information submitted could not be corroborated
with any documentation, respectively. The Auditor-General, Nombembe, said most of these
qualifications were due mainly to a lack of adequate internal controls, a lack of discipline in retaining
supporting documents, and a lack of capacity. In KwaZulu-Natal, according to Business Day, only
17 of the 54 audited municipalities received unqualified audit opinions. Three municipalities
improved from a qualified to an unqualified opinion, and the number of disclaimers decreased.
However, the proportion of worst-case opinions remained high at 28%, as two municipalities
received adverse opinions and 13 received disclaimer opinions.

Another problem relates to managers' unwillingness to report cyber-attack incidents; in many cases
the victims withhold reporting (Kyobe 2005). The importance of reporting incidents has been
emphasised in many studies on information security and safety. Gonzalez (2005) maintains that
information security reporting is a quality improvement process that is essential to reducing
incidents. EURIM (2003) outlines several barriers to reporting, including concerns about
confidentiality, disruption to business, and loss of reputation. A lack of monitoring and evaluation,
and inappropriate budgets for information security, have also been identified.

3. Initiatives to address these challenges
Several measures have nevertheless been taken by the government and by researchers to address
the cybercrime challenges. Recently, the Department of Communications published the draft Cyber
Security Policy of South Africa. The Electronic Communications and Transactions (ECT) Act (2002)
is one effective law put in place by the government to govern the use of information technology in
South Africa. The problems relating to cybercrime are addressed in the cybercrime section in
Chapter XIII of the ECT Act, 2002. According to Michalson and Hughes (2005), this chapter
introduces statutory criminal offences relating to unauthorised access to data (e.g., through
hacking), interception of data (e.g., tapping into data flows or denial-of-service attacks), interference
with data (e.g., viruses), and computer-related extortion, fraud and forgery. They also state that a
person aiding those involved in these crimes will be guilty as an accessory. A person convicted of
such an offence is liable to a fine or to imprisonment for a period not exceeding five years. There
have also been efforts toward the establishment of a Computer Security Incident Response Team
(CSIRT), and conferences have been organised to address information security. The CSIRT has,
however, been affected by funding and implementation challenges.
4. Conceptual model
The above review indicates that many factors influence the recognition and reporting of losses from
cyber-attacks on government departments. Those examined in the present study are: human
behaviour (i.e., the attitude of managers toward information security) (Dowland et al. 1999); lack of
understanding of the regulations governing the use of electronic media (Dowland et al. 1999); and
lack of the accounting skills and methods required to prepare loss estimates (Bougaardt and Kyobe
2011). Figure 1 represents the conceptual model and the relationships between its elements. We
propose that human behaviour (e.g. the attitude of top management and staff members towards
information security), understanding of the IT regulation that deals with cybercrime, and possession
of the relevant accounting skills and methods influence the ability to recognise and prepare loss
estimates arising from cybercrime in government departments. This proposition is tested and the
results are presented in the following sections.
[Figure body: boxes for Human Behaviours, Possession of Accounting Skills & Methods, and Understanding of IT Regulations, each linked to Ability to Recognise and Report Losses from Cyber-Attacks]
Figure 1: Factors influencing recognition and reporting of losses from cyber-attacks on government
departments
5. Research methodology
This study was conducted using a questionnaire, which was considered appropriate since the
respondents were located in different areas. The questionnaire was made available either as an
online version or as a paper-based hard copy, for the convenience of the respondents; the online
version was created and hosted on Google Documents. The questionnaire consisted of three
sections. Section 1 contained questions on cyber-attacks and their recognition, using a Likert scale
from 1 (strongly disagree) to 5 (strongly agree). In Section 2, respondents were asked to give any
additional information they felt was relevant to the study. Section 3 captured demographic
information about the respondents. The questionnaire was piloted with one academic and a group
of commerce students, and a few corrections were made.

6. Data sample
The sample population was made up of employees and managers at various public organisations
which have adopted ICT in the provision of services to the public. A list of public organisations
located in Cape Town was obtained from the government website www.gov.za. A stratified random
sampling technique was used to select 28 organisations offering different services (a sketch of such
a selection is given below), and invitations were sent out requesting their participation in the study.
Twenty departments, in Transport Services, Telecommunications, Arts and Culture, the Metro
Police, the Development Corporation, the National Library, the City of Cape Town, the Premier's
Office, the South African Reserve Bank (SARB) and the Auditor-General's office, indicated their
willingness to participate. All of these organisations use ICT in their operations.
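To make the stratified selection step concrete, the following is a minimal sketch of proportional stratified random sampling in Python. The service categories and counts are hypothetical placeholders, not the actual sampling frame used in this study.

```python
import random

# Hypothetical sampling frame: organisations grouped by service category (stratum)
frame = {
    "transport": [f"transport_{i}" for i in range(30)],
    "health": [f"health_{i}" for i in range(45)],
    "finance": [f"finance_{i}" for i in range(25)],
    "culture": [f"culture_{i}" for i in range(20)],
}

def stratified_sample(strata: dict, n: int, seed: int = 42) -> list:
    """Draw ~n organisations, allocating draws to strata in proportion to size."""
    rng = random.Random(seed)
    total = sum(len(orgs) for orgs in strata.values())
    sample = []
    for name, orgs in strata.items():
        k = max(1, round(n * len(orgs) / total))  # at least one per stratum
        sample.extend(rng.sample(orgs, min(k, len(orgs))))
    return sample

selected = stratified_sample(frame, n=28)
print(len(selected), selected[:5])
```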

We focused on public institutions in Cape Town because of their close proximity, which allowed
easier follow-up on the responses. The survey was sent to each of the organisations, to be
completed by IT managers or officers, accounting officers, and general managers. IT managers
were best equipped to answer the majority of the questions and to explain the technology and
processes in place at the organisation. Accounting managers were expected to provide detail on
the procedures and methods used to record any losses resulting from cybercrime, while general
managers could provide general information about the organisation.

In total, 120 questionnaires were sent out and only 40 usable questionnaires were returned, a low
response rate of about 33%. One reason for the poor response was that some public officers did
not have permission from their executive heads to participate in surveys. However, as indicated in
the respondent profiles below, most of the respondents hold senior positions in their organisations
and possess expertise in the fields of IT, accounting and management. The information obtained
from this sample is therefore valuable.
7. Data analysis and results
The data were analysed using quantitative techniques, and the results are presented in the
following sections. The respondents comprised admin/general managers, IT managers, HR
managers, accounting, audit and risk management staff, directors, IT support staff (including those
in health services), a librarian, IT developers, and others. The sample was a fair representation of
the different IT users in government departments. The respondent professions are presented in
Table 1.
Table 1: Profession of the respondents
Profession | No. of Respondents
Admin/General Manager | 6
IT Managers | 10
HR Managers | 5
Accounting, Audit and Risk Management | 5
Directors | 2
IT Support | 3
Librarian | 1
IT developers | 4
Others | 4
Table 2 below presents the respondents' mean responses. On average, the respondents did not
possess appropriate accounting skills, which suggests a lack of skills in preparing and accounting
for losses arising from cyber-attacks. In addition, many indicated unawareness of, and unhappiness
with, the methods or procedures followed in determining the costs of such attacks. Many were also
uncertain about the regulatory requirements and were not sure whether the ECT Act provides a
sufficient remedy to cyber problems. They did not know whether proper security documentation
procedures were maintained, and they disagreed that their behaviours and those of managers
facilitated the recognition and reporting of losses.
Table 2: Mean responses (items measuring constructs)
Construct | No. of items | Mean | Std. Dev.
Possession of accounting skills and methods | 5 | 2.18 | 0.77
Knowledge of Legislation | 5 | 3.23 | 0.82
Human Behaviour | 7 | 2.17 | 0.65
Recognition and reporting of losses | 4 | 2.72 | 0.78
We conducted Cronbach's alpha tests to determine the reliability of the responses. The results are
presented in Table 3 below.
Table 3: Results of the Cronbach's alpha tests
Construct | Cronbach's Alpha
Possession of Accounting Skills & Methods | 0.94
Knowledge of Legislation | 0.78
Human Behaviours | 0.72
Recognition and reporting of losses | 0.67
Except for the Recognition and Reporting of losses construct, the alpha values exceed the
threshold of 0.70, thereby confirming the reliability of the measures. The low reliability obtained for
Recognition and Reporting of losses may be attributed to the few items measuring this construct
and to the small sample size.
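For readers who wish to reproduce this reliability check, Cronbach's alpha can be computed directly from the item-response matrix using the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below uses hypothetical 5-point Likert data, not the study's dataset or tooling.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]                          # number of items in the construct
    item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical responses: 40 respondents, 5 items, scores 1..5, mildly correlated
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(40, 1))        # shared tendency per respondent
noise = rng.integers(-1, 2, size=(40, 5))      # per-item variation
scores = np.clip(base + noise, 1, 5)

print(f"alpha = {cronbach_alpha(scores):.2f}")
```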

Furthermore, factor analysis was conducted to determine the loading of each item on its respective
construct. Table 4 shows the items that loaded 0.50 or more on the factors; a sketch of such an
analysis follows the table.
Table 4: Factor analysis results
Construct | Qn | Factor 1 (Accounting) | Factor 2 (Recognition and Reporting) | Factor 3 (Human Behaviours) | Factor 4 (Regulation)
Possession of Accounting Skills | 17 | 0.512361 | -0.112675 | 0.167432 | 0.432861
| 18 | 0.623493 | 0.121041 | -0.132770 | 0.147611
| 19 | 0.781132 | 0.054068 | 0.112183 | 0.006652
| 20 | 0.613003 | -0.070995 | 0.128150 | -0.001822
| 22 | 0.721991 | 0.256773 | 0.075049 | 0.047416
Knowledge of Legislation | 9 | 0.092453 | 0.114719 | 0.244063 | 0.542167
| 10 | -0.000126 | 0.142647 | -0.014780 | 0.544206
| 11 | 0.401819 | -0.372891 | -0.008176 | 0.500121
Human Behaviours | 1 | -0.144100 | 0.557550 | 0.520341 | -0.064458
| 2 | 0.056432 | -0.054231 | 0.553762 | 0.115432
| 3 | 0.278451 | 0.342772 | 0.532280 | -0.005477
| 7 | 0.258186 | 0.298509 | 0.548337 | -0.040771
| 5 | 0.443221 | 0.100453 | 0.542765 | 0.136745
| 15 | 0.315871 | -0.120605 | 0.883331 | 0.250380
Recognition & Reporting | 4 | 0.116744 | 0.671662 | 0.012634 | 0.016934
| 6 | -0.177945 | 0.549261 | 0.013467 | 0.478221
| 21 | 0.412734 | 0.671608 | -0.253795 | 0.034560
| 22 | 0.628032 | 0.604773 | 0.004249 | 0.122333
* Items with a factor loading of 0.50 or more load on the corresponding factor.
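For readers who wish to reproduce this kind of item-to-factor mapping, the sketch below extracts four factors and reports the items loading 0.50 or more. The data are hypothetical, and scikit-learn is only one of several tools that could be used; the paper does not state which package or rotation the authors used, and the varimax rotation shown here requires scikit-learn 0.24 or later.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: 40 respondents, 22 Likert items driven by 4 latent constructs
rng = np.random.default_rng(1)
latent = rng.integers(1, 6, size=(40, 4))                 # one column per construct
X = np.repeat(latent, [6, 6, 5, 5], axis=1).astype(float)  # assign items to constructs
X += rng.normal(scale=0.7, size=X.shape)                   # item-level noise

# Extract four factors with varimax rotation, mirroring the four constructs in Table 4
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0).fit(X)

# components_ has shape (n_factors, n_items); transpose to get per-item loadings
loadings = fa.components_.T
for item, row in enumerate(loadings, start=1):
    flags = [f"Factor {f + 1}: {v:+.2f}" for f, v in enumerate(row) if abs(v) >= 0.50]
    if flags:  # report only items loading 0.50 or more, as in Table 4
        print(f"Q{item} -> {', '.join(flags)}")
```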
7.1 Discussion of results
Four factors were extracted, as presented in Table 4 above. The items that loaded on Factor 1 were
Q17, Q18, Q19, Q20 and Q22; Q22 was, however, excluded because it loaded on two constructs.
Factor 1 therefore represents the Accounting construct, since most of the items loading on it
measured this construct. The availability of clear guidance on how to calculate losses (Q19) had the
highest loading on this factor. Table 2 confirms that respondents did not agree that they possessed
adequate accounting skills and methods, or that the methods used in their organisations reflected
the true costs of cyber-attacks. Further, the responses to Q18 indicate that there were no clear
procedures on how to report security breaches. It is therefore not surprising that few could report on
security breaches in their departments (Q17). Such a lack of proper guidance and training in
appropriate accounting techniques affects the recognition of losses (Hood and Rothstein, 2000).

Only three of the five items that measured respondents' understanding of IT regulations loaded on
Factor 4 (i.e., Q9, Q10 and Q11); this factor represents the Regulation construct. The perception of
the level of support the ECT Act provides in curbing cybercrime (Q10) loaded highest, followed by
respondents' understanding of how the Act applies to security matters and the existence of
documentation procedures to ensure compliance with the Act. Table 2 (mean responses) indicates
that many were uncertain about the role of the ECT Act. Being uncertain about what the ECT Act
can do might also have contributed to the failure to implement measures to ensure compliance.
Further analysis of Q9 (I understand how the ECT Act applies to security) shows that only the IT
managers understood the requirements of the ECT Act well; 72% of the respondents did not know
how effective the ECT Act is.

The items that loaded on the Human Behaviours construct (Factor 3) were Q1, Q2, Q3, Q5, Q7 and
Q15, of which Q15 (My organisation provides security awareness training) loaded highest. Table 2
(mean responses) suggests that limited training on risk management was done. This lack of training
and understanding of risks means that crime is committed without system users being aware of it,
and as such it cannot be recognised and reported (Fick 2009). The responses to Q3 and Q18 also
indicate that respondents were not certain whether they had a specific IT risk policy and whether
the various methods employed addressed the problem. Without a clear policy on security and risk
management, managers are left to apply different strategies, which can be inconsistent and
sometimes confusing.

The lack of training may also be attributed to poor budget allocation for security. According to
Richardson (2008), inappropriate allocation of the security budget may result in limited funds or
resources for security. It is therefore not surprising that Q1 and Q7, for instance, received negative
responses. The responses to Q1 show that non-managerial staff perceived information security to
be of little importance to top management. In addition, staff members did not know their obligations
towards information security, so the monitoring and evaluation of information security could not be
enforced effectively by management (Q5). Q15 also had the highest loading overall, emphasising
the importance of security awareness and training.

The last factor, Factor 2, represents the construct Recognition and Reporting of losses; items Q4,
Q6 and Q21 loaded on it. Q4 (risks are regularly assessed in the organisation) and Q21 (all
breaches we experience are included in the costs incurred) loaded highly. The requirements for
proper information security documentation are not adhered to by the departments, which do not
appear to keep appropriate security records. This is consistent with the observations of Marion
(2008). A culture of poor documentation may also discourage managers from preparing loss reports
or reporting attack incidents (Gonzalez 2005; EURIM 2003).
8. Conclusion, limitations and recommendations
The present study examined some of the factors inhibiting the recognition and reporting of losses
from cyber-attacks on government departments in the Western Cape Province of South Africa. The
results indicate that a lack of clear guidance on how to calculate losses; a lack of understanding of
the legislation and of how it may assist in curbing cybercrime; a lack of training and awareness
creation around cybercrime; and a lack of the knowledge and capability to assess risks regularly are
the major factors inhibiting departments from recognising and reporting losses from cyber-attacks.

The lack of proper guidance on determining losses means that accurate and reliable data for
analysis cannot be made available, nor can these departments develop the competencies
necessary for assessing risks. The lack of knowledge about the regulation and its implications may
be attributed to the culture of not sharing information that is typical of public institutions. Training in
information security awareness is imperative, and its absence appears to be a worldwide problem
for the public sector: research conducted by PricewaterhouseCoopers (2012) shows that employee
security awareness training, disaster recovery planning and information security management
strategies have degraded in the public sector. The negative responses received for the construct
Possession of Accounting Skills and Methods cause much concern; they imply that very few public
sector departments have accounting methods adequate for correctly estimating cybercrime-related
losses. These findings are consistent with the international findings of Smith (2004) and Richardson
(2008). A recent study by HP and the Economist Intelligence Unit also confirms that public sector
organisations worldwide have historically been slow to share information between and within
organisations (Rodgers 2010).

We strongly recommend that government departments and other stakeholders engage in initiatives
to create awareness of cyber-risks. While the government continues to invest in technology and
related mechanisms for detection and prevention, it is equally important to develop a culture of
sharing information on cybercrime and to develop skills in accounting and in the management of
cyber risks.

Limitations

When evaluating the results of this study, it should be noted that the sample size of 40 responses
was a limiting factor which may have affected the statistical analysis of the data. Another major
setback is that we obtained very few comments from the respondents with which to verify the
responses; in most cases respondents could not provide such comments because they lacked the
authority to report on certain sensitive issues. Therefore not as much qualitative analysis could be
done as initially planned. It should further be noted that the respondents came exclusively from the
Western Cape, possibly giving a skewed view of the public sector and an unrepresentative
demographic makeup of respondents.

Recommendations for future research

The sample size should be increased by targeting other geographic locations to ensure a correct
demographic coverage of the public sector. Interviews and open-ended questions could provide
further insight into the meaning of the data collected and analysed. It would also be interesting to
determine the relationships between the variables, and the strength of each inhibiting factor, using
statistical methods such as regression analysis.
References
Acts online (2002) Electronic Communication and Transaction Act 2002, [Online] Available:
http://www.acts.co.za/ect_act/index.htm [23 August 2010].
Adomi, E.E. and Igun, S.E. (2008) Combating cyber crime in Nigeria, The Electronic Library, vol. 26, no. 5, pp. 716-725.
Anderson, R. and Moore, T. (2006) The Economics of Information Security, Science, vol. 314, no. 5799, pp. 610-613.
Baker, W.H. (2010) Thoughts on Mapping and Measuring Cybercrime.
Oxford Internet Institute Forum Mapping and Measuring Cybercrime, [Online]
Available http://www.sfu.ca/~icrc/content/oxford.forum.cybercrime.pdf [10 March 2011].
Bougaardt, G. and Kyobe, M. (2011) Investigating the Factors Inhibiting SMEs From Recognizing and Measuring Losses From Cyber Crime in South Africa, The Electronic Journal Information Systems Evaluation, vol. 14, no. 2, pp. 167-178. [Online] Available: www.ejise.com [26 June 2012].
Canhoto, A. (2010) What before How, Oxford Internet Institute Forum, [Online], Available:
http://www.sfu.ca/~icrc/content/oxford.forum.cybercrime.pdf [24 February 2011].
Cashell, B., Jackson, W.D., Jickling, M.and Webel, B. (2004) CRS Report for Congress, The Economic Impact of
Cyber-Attacks, [Online] Available:
http://www.cisco.com/warp/public/779/govtaffairs/images/CRS_Cyber_Attacks.pdf [13 March 2012]
Christensen, T. and Lægreid, P. (2007a) Introduction - Theoretical Approach and Research Questions, in T. Christensen and P. Lægreid (eds.): Transcending New Public Management, Aldershot: Ashgate.
Christensen, T. and Lægreid, P. (2007b) The whole-of-government approach to public sector reform, Public Administration Review, vol. 67, no. 6, pp. 1059-1066.
De Tolly, K., Maumbe, B. and Alexander, H. (2006) Rethinking E-government development: Issues, Lessons and
Future Prospects for the Cape Gateway Portal in South Africa, in Cunningham, Paul and Cunningham,
Miriam (Eds): IST-Africa 2006 Conference Proceedings, IIMC International Information Management
Corporation, 2006, [Online] Available: http://www.ist-africa.org/Conference2006/ [24 August 2010].
Department of Public Service and Administration (n.d.) Draft Position Paper on Information Security, [Online], Available:
http://www.dpsa.gov.za/documents/acts&regulations/frameworks/e-
commerce/POSITION%20PAPER%20ON%20INFORMATION%20SECURITY1.pdf. [24 August 2010].
Dowland, P.S., Furnell, S.M., Illingworth, H.M. and Reynolds, P.L. (1999) Computer crime and abuse: a survey
of abuse and awareness, Computers and Security, vol. 18, no. 8, pp. 715-726.
EURIM (2003) IPPR E-Crime study, Partnership policing for the Information Society, Working Paper 1: Reporting
methods and structures, [Online] Available: http://www.eurim.org/consult/e-
crime/dec03/ECS_WP1_web_031209.pdf [24 November 2010].
Gichoya, D. (2009). Facing the challenges of ICT implementation in Government. IST-Africa 2009 conference
proceedings, Paul Cunningham and Miriam Cunningham (Eds). IIMC, 2009.
Gonzalez, J.J. (2005) Towards a Cyber Security Reporting System A Quality Improvement Process,
Conference Proceedings, 24th International Conference on Computer Safety, Reliability, and Security,
Fredrikstad, pp. 368380.
Guerra, P. (2009) How economics and information security affects cybercrime and what this means in the context
of a global recession, [Online] Available: http://www.blackhat.com/presentations/bh-usa-
09/GUERRA/BHUSA09-Guerra-EconomicsCyberCrime-SLIDES.pdf [ 29 August 2010].
Heeks R. (2002). eGovernment in Africa: Promise and Practice, Paper 13, Institute for Development Policy and
Management. http://idpm.man.ac.uk/wp/igov/index.htm
Kayle, A. (2010) Experts Criticise Cyber Policy, [Online] Available:
http://www.itweb.co.za/index.php?option=com_content&view=article&id=31353:experts-criticise-cyber-
policy&catid=159:it-in-banking&tmpl=component&print=1 [23 March 2010].
Kayle, A. (2011) SA Lacks accurate cyber crime statistics, [Online] Available:
http://www.itweb.co.za/index.php?option=com_content&view=article&id=46586 [23 March 2010].
Kshetri, N. (2009) Positive Externality, Increasing Returns, and the Rise in Cybercrimes, Communications of the
ACM, vol. 52, December, pp. 141-144.
Kyobe, M.E. (2005) Addressing e-crime and computer security issues in small organisations in South Africa, EMT
2005 Proceedings, The European Management and Technology Conference on integration of management
and technology, Rome, Italy, June 2005.
Levi, M., Burrows, J. Fleming, M. and Hopkins, M. (2007) The nature, extent and economic impact of fraud in the
UK, London: Association of Chief Police Officers. [Online] Available:
http://www.cardiff.ac.uk/socsi/resources/ACPO%20final%20nature%20extent%20and%20economic%20imp
act%20of%20fraud.pdf [01 May 2011].
Lionel C.M. and VonFrederick, R. (2005). Theories of Crime Causation. [Online], Available:
http://www.vonfrederick.com/pubs/Theories%20of%20Crime%20Causation.pdf. [ 25 August 2010].
Marion, C.C. (2008) Kwazulu Natal Anti Corruption Strategy, [Online] Available:
http://www.kwazulunatal.gov.za/premier/anti_fraud/Presentation_To_Kzn_Provincial_Anti_Corruption_Sum
mit.pdf [20 August 2010].
Michalson, L. and Hughes, B. (2005) Guide to the ECT Act, [Online] Available : http://www.michalson.com [20
August, 2009].
News24 (2007) Huge growth in cybercrime, [Online] Available: http://www.news24.com/SouthAfrica/News/Huge-
growth-in-cyber-crime-20071114 [14 April 2010].
Nyanda, S. (2010) Notice of Intention to Make South African National Security Cybersecurity Policy, Government
Gazette Vol. 536. No 32963, [Online] Available: http://www.pmg.org.za/files/docs/100219cybersecurity.pdf
[30 April 2010].
Otto, H. (2008) SARS Employees get 15 years for fraud, [Online] Available:
http://www.iol.co.za/index.php?set_id=1&click_id=15&art_id=vn20080905054713259C961747 [24 April
2010].
PricewaterhouseCoopers LLP (2012) Eye of the Storm: Key findings from 2012 Global State of Information
Security Survey, [Online] Available: http://www.pwc.se/sv_SE/se/riskhantering/assets/2012-global-state-of-
information-security-survey.pdf [13 March 2012].
Rodgers, G. (2010) Managing information effectively: a necessity for the public sector, Hewlett-Packard
Development Company (2010), Business White Paper. [Online] Available:
http://www.managementthinking.eiu.com/sites/default/files/Government%20Workflow%20and%20Security_
0.pdf [20 December 2011].
Syväjärvi, A., Stenvall, J., Laitinen, I. and Harisalo, R. (2009) Information Management as a Function of Data
Mining and ICT In City Government, [Online] Available:
http://www.egpa2009.com/documents/psg1/Anitti.pdf [27 August 2010].
The Auditor General (2010) 2009-10 PFMA audit outcomes - Provincial consolidated, [Online] Available
http://www.pmg.org.za/files/docs/110208consolidatedreport.pdf [20 June 2011].
Von Solms, B. (2012) Cyber crime challenges, [Online] Available: http://www.eepublishers.co.za/article/cyber-
crime-challenges.html [14 February 2012].
Wangwe, Carina K., Eloff, Mariki M., and Venter, Lucas M. (2009). e-Government Readiness: An Information
Security Perspective from East Africa. IST-Africa 2009 conference proceedings, Paul Cunningham and
Miriam Cunningham (Eds). IIMC, 2009.
The Overall Process Taken by Enterprises to Manage the
IaaS Cloud Services
Alina Mădălina Lonea1, Daniela Elena Popescu2 and Octavian Proștean1
1Automation and Applied Informatics Department, Faculty of Automation and Computers, Politehnica University of Timisoara, Timisoara, Romania
2Computer Engineering Department, Faculty of Electrical Engineering and Information Technology, University of Oradea, Oradea, Romania
madalina_lonea@yahoo.com
depopescu@uoradea.ro
octavian.prostean@aut.upt.ro

Abstract: Small and medium-sized enterprises (SMEs) were the initial focus for cloud services and are likely to continue adopting cloud computing services, given the strong advantage of accessing data from any place in the world over the Internet without concern for the underlying infrastructure or the problems involved in installation and maintenance. However, organizations need to consider both risks and rewards in the decision-making process in order to reach a sound judgement. SMEs are the target group of this study of the process of outsourcing to a Cloud Service Provider (CSP), given that SMEs outnumber large organizations and are therefore at the heart of economies worldwide (Sharma, et al., 2010; Van Hoecke, et al., 2011). The aim of the proposed research is a qualitative analysis of the overall process taken by SMEs to manage the migration of their applications to Infrastructure-as-a-Service (IaaS). We conducted a literature analysis using papers released by both academic and practitioner bodies in order to answer the following two research questions: What are the steps involved in the migration process of SMEs to cloud services? What are the stages required by each step of the outsourcing process? On this basis we produced a theoretical process comprising the following interrelated activities: a data analysis step, a decision-making step, a migration step and a management step. In an IaaS cloud service, the CSP handles the hardware-related issues, whilst the software-related issues must be identified by the enterprises that want to migrate to the cloud. Thus, this paper first addresses an overview of the data analysis step. This constitutes the initial step of the overall process taken by organizations and comprises: the analysis of cloud migration opportunities, the study of cloud adoption barriers and the examination of the current infrastructure used by the organization. A further objective of this paper is to address the decision-making step, which involves the following decisions: what information should be moved into the cloud and who will access it, which CSP the organization will choose, and how the organization will manage the cloud services. These decisions are made on the basis of the analysis step. We assume that the cloud service type (i.e. IaaS) and the cloud deployment model (i.e. public cloud) have already been selected. Furthermore, the migration step is the effective movement of enterprises' assets into cloud services and includes two activities: developing the Service Level Agreement (SLA) and implementing the cloud. Finally, the last step of the overall process is the management step, which is realized using two management functions: business and operational.

Keywords: cloud management, outsourcing, IaaS, SME, cloud risks, cloud benefits, service level agreement
1. Introduction
Information Systems (IS) has a great impact for the business growth of Small and Medium sized
Enterprises (SMEs) and it started with personal computers in order to manage the day-to-day
operations of the enterprises using the basic applications (i.e. word processing and accounting
systems), complex applications (i.e. decisional support systems) and the services produced in the
Internet age (i.e. email, web sites, transaction processing systems) (Levy and Powell, 2004). Today,
enterprises adhere to cloud computing technology, which is subject to a continuous development and
it is considered the future and the improvement of Information and Communication Technology (ICT).
While in 2000 the tendency of Small and Medium sized Enterprises (SMEs) was to migrate to the
Enterprise Resource Planning (ERP) solutions (Adam and O'Doherty, 2000), today ICT assists to a
trend of SMEs to migrate from the traditional SMEs to the SMEs based cloud. The migration of
enterprises to cloud is because of the advantages offered by this technology, defined by Joe
Weinmann (2011) as an acronym: Common, Location-Independent Online Utility on-Demand service,
on the Axiomatic Cloud Theory. However, simultaneously with the increased number of enterprises
that adopt cloud computing, the challenges of enterprises to exploit cloud for their business objectives
are growing as well. Thus, companies go through a holistic process in order to manage the
implemented cloud services.

SMEs are the target group of this study of the process of outsourcing to a Cloud Service Provider (CSP), given that SMEs outnumber large organizations and are therefore at the heart of economies worldwide (Sharma, et al., 2010; Van Hoecke, et al., 2011). SME is a collective term for micro, small and medium enterprises, which qualify on the basis of the following characteristics defined by European Commission Recommendation 2003/361/EC: a maximum number of staff (i.e. fewer than 250) and either an annual turnover (up to 50 million Euros) or an annual balance sheet (up to a total of 43 million Euros) (Begg and Caira, 2012). These characteristics do not, however, determine the amount and nature of organisational data held by SMEs. Despite the fact that SMEs are not as strongly oriented toward ICT solutions as large organizations (Dai, 2009; Levy and Powell, 2004), SMEs have started to explore the benefits of moving to cloud services (Begg and Caira, 2012) under both business models (i.e. companies with existing IT infrastructure and start-up companies). Moreover, Devos, Van Landeghem and Deschoolmeester (2012) state that SMEs have insufficient internal IT expertise and need external IT expertise. Misra and Mondal (2011) observed that start-up SMEs show a particularly strong interest in adopting cloud-based services, and Van Hoecke, et al. (2011) note that SMEs with existing IT infrastructure tend to migrate to hybrid clouds, while start-up SMEs outsource resources to public clouds.

Companies can choose from a wide range of cloud services (i.e. Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service), which can be deployed in four deployment models (i.e. private cloud, public cloud, hybrid cloud and community cloud). The selection of the cloud deployment model depends on the size of the organization and its Information Technology (IT) maturity level. While SMEs generally prefer to outsource their applications to an external cloud provider, large organizations first consider the solution of having a private cloud and may then decide to migrate their non-critical information (e.g. test and development data) to public deployments (CSCC, 2011). Additionally, depending on their IT maturity level, SMEs will move from traditional operation to cloud-based operation in order to gain access to a wide range of advanced IT applications such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Resources (HR) and collaboration tools (KPMG, 2011). Sharma, et al. (2010) demonstrate the benefits for SMEs of using ERP-based cloud services instead of a traditional ERP system.

Nonetheless, the aim of the proposed research is a qualitative analysis of the overall process taken by SMEs to manage the migration of their applications to Infrastructure-as-a-Service (IaaS). We conducted a literature analysis using papers released by both academic and practitioner bodies in order to answer the following two research questions: What are the steps involved in the migration process of SMEs to cloud services? What are the stages required by each step of the outsourcing process? On this basis we produced a theoretical process comprising the following interrelated activities: a data analysis step, a decision-making step, a migration step and a management step. In an IaaS cloud service, the CSP handles the hardware-related issues, whilst the software-related issues must be identified by the enterprises that want to migrate to the cloud.

Thus, this paper first addresses an overview of the data analysis step. This constitutes the initial step of the overall process taken by organizations and comprises: the analysis of cloud migration opportunities, the study of cloud adoption barriers and the examination of the current infrastructure used by the organization. A further objective of this paper is to address the decision-making step, which involves the following decisions: what information should be moved into the cloud and who will access it, which CSP the organization will choose, and how the organization will manage the cloud services. Furthermore, the migration step is the effective movement of enterprises' assets into cloud services and includes two activities: developing the Service Level Agreement (SLA) and implementing the cloud. Finally, the last step of the overall process is the management step, which is realized using two management functions: business and operational.

The remainder of this paper is structured as follows: section 2 discusses the proposed management process of enterprises' migration to IaaS; section 3 reviews the related work in this area; and section 4 concludes the paper.
2. The management process of enterprises' migration to IaaS
From the consumer perspective, the overall process taken by enterprises to manage IaaS cloud services includes the following interrelated activities: a data analysis step, a decision-making step, a migration step and a management step. Figure 1 depicts the proposed overall process; each step is then discussed separately.

Figure 1: The management process of enterprises' migration to IaaS
2.1 Data analysis step
Data analysis constitutes the initial step of the overall process taken by organizations to manage the migration to IaaS and comprises: the analysis of cloud migration opportunities, the study of cloud adoption barriers and the examination of the current infrastructure used by the organization. In this step, enterprises have to assess both risks and rewards, and should also analyse the costs of implementing cloud services.

Cloud rewards analysis

According to the National Institute of Standards and Technology (NIST), the cloud concept is defined by five main characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service (Furlani, 2010), which constitute its technical benefits:
On-demand self-service: Customers can obtain the desired services from cloud providers without any interaction with the provider's employees, because the services are requested online.
Broad network access: The devices consumers use to access the cloud vary (e.g. mobile phones, laptops and PDAs), which adds flexibility for SMEs (KPMG, 2011).
Resource pooling: Customers on a cloud platform are multi-tenant. Even though the exact location of the resources is not known, each customer requests precisely the resources wanted (e.g. storage, processing, memory, network, virtual machines) over the Internet, specifying location only at a higher level of abstraction (e.g. country, state or datacenter).
Rapid elasticity: Any quantity of capability can be purchased and released at any time, which provides the elasticity of the cloud and gives clients flexibility in their options. This capability is very convenient for SMEs whose workload varies by several criteria: the months of the year (Marston, et al., 2011) or the time of day (Van Hoecke, et al., 2011). Marston, et al. (2011) cite the case of the internet photo website Smugmug, which in December and January recorded computing workloads five times higher than usual; Van Hoecke, et al. (2011) describe the internet company Nieuws.be, which distributes national and international news on the web and reports an increased workload during daytime hours compared with a decreased workload at night. (A cost sketch illustrating elasticity follows this list.)
Measured service: Resource usage is monitored, controlled and reported, so customers are informed of what they have to pay for the resources they consume.
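To make the elasticity benefit concrete, the following minimal Python sketch compares fixed peak provisioning with elastic pay-per-use for a seasonal workload such as the Smugmug case above. All prices, server counts and the five-fold seasonal peak are illustrative assumptions, not figures from the cited studies.

    # Hedged illustration: fixed provisioning must carry the annual peak
    # all year, while elastic provisioning pays only for what is used.
    HOURS_PER_MONTH = 730
    PRICE_PER_SERVER_HOUR = 0.10  # assumed on-demand rate (EUR)

    baseline = 4  # servers needed in an ordinary month (assumption)
    # December and January peak at five times the baseline workload.
    monthly_demand = [baseline * 5] + [baseline] * 10 + [baseline * 5]

    fixed_cost = max(monthly_demand) * 12 * HOURS_PER_MONTH * PRICE_PER_SERVER_HOUR
    elastic_cost = sum(m * HOURS_PER_MONTH * PRICE_PER_SERVER_HOUR
                       for m in monthly_demand)

    print(f"fixed peak provisioning: EUR {fixed_cost:,.0f}")   # EUR 17,520
    print(f"elastic provisioning:    EUR {elastic_cost:,.0f}") # EUR 5,840

Under these assumptions the elastic option costs roughly a third of fixed peak provisioning, which is the intuition behind the SME benefit described above.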
Another cloud opportunity that enterprises should identify is the financial benefit (Khajeh-Hosseini, et al., 2011; KPMG, 2011). The reduction of capital expenses is driven by the resource pooling characteristic of the cloud, in combination with the elasticity capability of cloud providers, which together optimize cost usage. In addition, the cost efficiency of cloud utilization is demonstrated by a case study by Khajeh-Hosseini, Greenwood and Sommerville (2010), which calculates the system infrastructure costs over a five-year period for a company that maintains and provides IT solutions for the Oil & Gas industry. The case study showed that the costs of using the Amazon EC2 cloud service are lower than the costs of the traditional IT system; the company also gains the advantage of the cloud's rapid elasticity, and its in-house hosting costs (e.g. electricity, cooling, off-site tape archiving) are minimized as well. Cloud computing thus also delivers substantial energy savings alongside the cost benefit (Marston, et al., 2011; Van Hoecke, et al., 2011). The IT investment for SMEs using cloud services is further reduced because the cloud provider manages the IT infrastructure, replacing the work of an in-house IT support team (KPMG, 2011). Cloud-based SMEs also achieve financial gains by shifting responsibility for application upgrades and compliance to the cloud providers (KPMG, 2011).

Furthermore, another cloud opportunity should be considered: the organizational growth of the enterprise, realized by enabling the sales and marketing departments to create new products/services (Khajeh-Hosseini, Greenwood and Sommerville, 2010).

Cloud risks analysis

Companies that want to migrate to cloud services also have to identify the cloud adoption risks and consider how to manage the cloud adoption barriers. The major concerns of enterprises are the security risks implied by embedding their resources within the cloud computing environment (Rittinghouse and Ransome, 2010). Hence, the migration and integration of existing enterprise applications into IaaS cloud services should follow a business migration plan and, for the case in which the migration process disrupts the business flow, a business disruption plan. Business managers, IT managers and IT vendors should all cooperate to implement the business migration and business disruption plans (Saugatuck Technology, 2010). Disaster recovery addresses the detection and prevention of possible incidents and provides Business Continuity Planning (BCP), which supports the future growth of the enterprise (CSA, 2009; CSCC, 2011). Although the business migration and business disruption plans cover a major area of security measures, the use of encryption and key management in cloud computing is recognized as a core mechanism for protecting resources (i.e. data in transit over networks, data at rest and data on backup media) (CSA, 2009). Even where the data is encrypted and a BCP exists, these measures are not sufficient to secure the cloud services, and Identity and Access Management (IAM), which secures the user identities of cloud computing services, has to be considered as well (RSA, 2009; CSA, 2009).

Besides security, data governance for moving to the cloud should comply with the enterprise's specific regulatory requirements (e.g. physical location of data, data breach, personal data privacy, data destruction, intellectual property, information ownership, law enforcement access, service availability) (CSCC, 2011). The health and financial sectors in particular face many regulatory restrictions on moving their data to the cloud (Khajeh-Hosseini, Sommerville and Sriram, 2010). CSA (2009) recommends the ISO/IEC 27001/27002 certifications for certifying providers' information security management systems, and SAS 70 Type II as a reference for auditors. However, the applicability of data governance to the SME sector should be increased, as it is poorly served at the moment (Caira and Begg, 2012).

Additionally, at the stage when a company considers adopting cloud services, its employees are not yet prepared to deal with them. Organizational issues are thus another challenge that enterprises must recognize (Heinle and Strebel, 2010). The enterprise will have to settle on the type of training activity: internal (i.e. training its personnel to use the cloud services) or external (i.e. temporarily or permanently procuring external services) (CSCC, 2011). Specific training in this area makes employees aware of the changes produced by the cloud transition and reduces their unfounded fear of losing their jobs (Saugatuck Technology, 2010). Only IT departments concerned solely with hardware and network support will see the number of jobs scaled down (Khajeh-Hosseini, Greenwood and Sommerville, 2010); managing this is another part of handling the changes the cloud produces in the IT department (Saugatuck Technology, 2010). Alongside this organizational change, the job satisfaction of support engineers, sales & marketing staff and customer care staff may shrink, because the technical role of support engineers switches to reporting issues, and the satisfaction of the sales and marketing roles, as well as that of customer care, comes to depend upon the cloud-based services (Khajeh-Hosseini, Greenwood and Sommerville, 2010).

Assets examination

Further, the examination of the current infrastructure used by the organization is useful because enterprises should know what bit architecture (i.e. 32 or 64 bit) their hardware and operating systems (OSs) use, and where their applications are deployed. Additionally, the business applications themselves should be investigated. Identifying these assets is required in order to prepare the migration to an IaaS service compatible with the enterprise's current infrastructure (Cisco Systems, Inc., 2010; Universität Osnabrück, 2012).
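A minimal sketch of the inventory described above, using only the Python standard library: it records the bit architecture and operating system of a host so that compatible IaaS images can be selected later. The record layout is our own assumption.

    import platform

    def local_asset_record():
        """Collect the host facts named in the text: bit architecture and OS."""
        bits, _linkage = platform.architecture()  # e.g. '64bit'
        return {
            "hostname": platform.node(),
            "architecture": bits,
            "os": platform.system(),        # e.g. 'Linux'
            "os_release": platform.release(),
            "machine": platform.machine(),  # e.g. 'x86_64'
        }

    for key, value in local_asset_record().items():
        print(f"{key}: {value}")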
2.2 Decision making step
The decision making step involves the following decisions: what information should be moved into the cloud and who will access it, which Cloud Service Provider (CSP) the organization will choose, and how the organization will manage the cloud services. We assume that the cloud service type (i.e. IaaS) and the cloud deployment model (i.e. public cloud) have already been selected.

Choosing information

Enterprises should decide what information should be moved into the cloud. This decision should be made jointly by the IT and compliance departments. The enterprise will establish selection criteria for the data and applications to be migrated to cloud services, in order to assure the confidentiality, integrity and availability requirements of the assets, based on the infrastructure examination and the cloud risks analysis (CSA, 2009; CSCC, 2011).

Define service requirements

Being aware of the current infrastructure and applications used in the company, and of what information should be moved into the cloud, the enterprise can define its service requirements for IaaS (Universität Osnabrück, 2012).

Choosing CSP

Choosing the right Cloud Service Provider (CSP) for the enterprise depends on the following criteria: cost efficiency, product strengths and market credibility. These three decisive factors were listed by Craig (2012) in the Enterprise Management Application Report as criteria for evaluating the Application Performance Management of known solutions in the cloud marketplace, with the difference that Craig used 'vendor strengths', which covers a larger set of features, instead of market credibility. This paper suggests applying these determinants also to choosing the CSP, as discussed and evaluated below (Figure 2).

Figure 2: Choosing CSP
Cost efficiency is one of the decisive factors for choosing the CSP. It is composed of two sub-factors: cost advantage and deployment & administration. In terms of cost advantage, the modelling tool (from www.shopforcloud.com) described by Khajeh-Hosseini, et al. (2011) can help customers to model a cloud deployment by choosing the deployment elements (i.e. servers, storage and databases) from a list of cloud providers (i.e. Amazon Web Services, Microsoft Azure, Rackspace). After defining the system requirements (i.e. the second stage of the decision making step), enterprises may use this modelling tool to model the specified requirements. The tool then produces a cost report based on the selected computational resource usage patterns. This modelling tool is a free web interface, helpful for comparing the pricing schemes of different cloud providers. Nevertheless, the cost report calculates only the costs involved in deploying the system infrastructure, to which additional costs can be added (e.g. a 3rd party plugin to monitor costs such as Cloudability.com, or a 3rd party platform to manage cloud resources such as RightScale cloud management). Additional costs may include licence costs, training/consulting costs, the time spent by employees who migrate to cloud services, etc. (Universität Osnabrück, 2012; Khajeh-Hosseini, et al., 2011). Besides cost advantage, the other sub-factor that determines cost efficiency is the deployment and administration analysis (i.e. ease of deployment, support and services, ease of administration).
Product strength analysis provides information about the architecture and integration features of the CSP, and about its functionality.
Market credibility strengthens the enterprise's decision about choosing the CSP by analyzing the reputation of the CSP in the market. (A weighted-scoring sketch covering these three criteria follows below.)
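The three criteria above lend themselves to a simple weighted-scoring exercise. The Python sketch below is illustrative only: the weights, the 1-10 scores and the provider names are hypothetical assumptions, not drawn from Craig (2012) or any vendor data.

    WEIGHTS = {"cost_efficiency": 0.4,
               "product_strengths": 0.4,
               "market_credibility": 0.2}

    # Hypothetical candidate CSPs scored 1-10 on each criterion.
    candidates = {
        "provider_a": {"cost_efficiency": 8, "product_strengths": 7,
                       "market_credibility": 9},
        "provider_b": {"cost_efficiency": 9, "product_strengths": 6,
                       "market_credibility": 7},
    }

    def weighted_score(scores):
        """Combine the criterion scores using the agreed weights."""
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    for name, scores in candidates.items():
        print(f"{name}: {weighted_score(scores):.2f}")
    best = max(candidates, key=lambda n: weighted_score(candidates[n]))
    print(f"selected CSP: {best}")  # provider_a (7.80) under these weights

The same scheme applies to choosing the management tools discussed next, with the vendor's market credibility substituted for the full vendor-strengths category.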
Choosing management tools

Cloud management is a subject actively approached by the research community, as can be observed from the large number of third party cloud management providers (i.e. RightScale, enStratus, IMOD Kaavo, CloudWatch, Scalr, Tapin, Cloudkick). These third party cloud management tools are commercial products, used especially by organizations that want to manage their cloud infrastructure, and enterprises should select one of them. In the Enterprise Management Application Report, Craig (2012) emphasizes the criteria for evaluating the Application Performance Management of known solutions in the cloud marketplace. According to Craig (2012), three selection criteria are considered in the survey: cost efficiency, product strengths and vendor strengths. The survey's definition of vendor strengths includes several categories, namely: vision, strategy, financial strength, research and development, and the market credibility of vendors. For choosing the management tools, this paper proposes the first two selection criteria from Craig (2012), cost efficiency and product strengths; instead of all the vendor strengths elements, only the market credibility of the vendor will be evaluated.
The appraisal of cost efficiency should include the following objectives: cost advantage and deployment & administration analysis. Whilst the cost advantage is determined by the price, licensing and maintenance costs of the management tool, the deployment and administration analysis demonstrates the ease of deployment (i.e. time to deploy, packaging requirements, staff training, disruption minimization), strong vendor customer support and the ease of administration (i.e. ongoing administration, update process, testing/migration).
The investigation of product strengths is another selection criterion for management tools and should cover an analysis of the architecture and integration categories, as well as an analysis of their functionality.
The vendor's market credibility reviews the reputation of the vendor in the cloud marketplace, with the purpose of strengthening the decision after the cost efficiency and the product strengths have been evaluated.
2.3 Migration step
The migration step is the effective movement of enterprises' assets into cloud services. It includes two activities: developing the Service Level Agreement (SLA) and implementing the cloud.

Developing SLA

The Service Level Agreement (SLA) is a document that must be drawn up between the cloud provider and the customer in order to establish and maintain a clear view of the rights and responsibilities of each party. The document is relevant for avoiding conflicts that could occur during the contract, because it should specify a wide range of issues together with their remedies and warranties (Kandukuri, Paturi and Rakshit, 2009). The content of a typical service level agreement (Kandukuri, Paturi and Rakshit, 2009) is represented in Figure 3.

Figure 3: Typical service level agreement content
Definition of services is the part of the SLA document where the services are defined and described in detail, to create a good understanding of exactly what is being delivered. Once the services are defined in the SLA, another section (performance management) should address the monitoring and measurement of service performance. By including benchmarks, targets and metrics in the SLA requirements, both parties to the agreement become involved in monitoring the performance of the services. Reliable management arises with the agreement, because raising and discussing management problems is also part of the contract: problem management covers the methods for preventing and handling incidents (Kandukuri, Paturi and Rakshit, 2009).

The customer duties and responsibilities section sets out the obligations of the cloud's customers, because in the agreement each party plays its role and has its own responsibilities. Just as the customers have responsibilities, the cloud provider should offer warranties and remedies (Kandukuri, Paturi and Rakshit, 2009).

The SLA should also cover security features, establishing access control to the information as defined by the customers and including the client's security policies and procedures that must be implemented by the supplier. Besides the security features, both parties should include a disaster recovery and business continuity provision in the agreement: if an unplanned disaster happens, the customer needs a guarantee that the data is safeguarded, and the cloud provider, for its part, retains its clients by assuring the disaster recovery plan (Kandukuri, Paturi and Rakshit, 2009; CSCC, 2011).

The final chapter of a typical service level agreement is the termination section, which should cover the following topics: termination at the end of the initial term, termination for convenience, termination for cause, and payments on termination (Kandukuri, Paturi and Rakshit, 2009). (A checklist sketch of these typical sections follows below.)

Implementing the cloud constitutes the effective migration of the enterprise's information to the cloud service. This step deploys the system using the CSP capabilities and the system requirements previously defined (i.e. phase 2 of the decision making step), by migrating the information selected earlier (i.e. phase 1 of the decision making step) to the cloud service.
2.4 Management step
After migrating to cloud services, enterprises will manage the deployed cloud using two management functions: business and operational. The business management function, also called the administrative group by DMTF (2010b), guarantees the following business supports: customer management, contract management, inventory management, accounting and billing, pricing and ratings, metering and SLA management (DMTF, 2010a; Hogan, et al., 2011).

The second management function, called the operational management function or resource management group by DMTF (2010b), handles provisioning/configuration operations and portability/interoperability operations (Hogan, et al., 2011; DMTF, 2010a).
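For clarity, the two management functions and the supports the text assigns to them (after DMTF, 2010a; 2010b; Hogan, et al., 2011) can be captured in a simple structure; the layout below is our own illustrative choice.

    MANAGEMENT_FUNCTIONS = {
        # "administrative group" in DMTF (2010b)
        "business": [
            "customer management", "contract management",
            "inventory management", "accounting and billing",
            "pricing and ratings", "metering", "SLA management",
        ],
        # "resource management group" in DMTF (2010b)
        "operational": [
            "provisioning/configuration",
            "portability/interoperability",
        ],
    }

    for function, supports in MANAGEMENT_FUNCTIONS.items():
        print(f"{function}: {', '.join(supports)}")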
3. Related work
The steps of the proposed process have been approached separately by researchers in the area. The topics most frequently studied in relation to this paper are cloud benefits and cloud barriers. The detailed reports provided by the Cloud Security Alliance (CSA, 2009) and the European Network and Information Security Agency (Catteddu and Hogben, 2009) contain assessments of the security risks and benefits of using cloud services. Kourik (2011) presents an overview of the risk assessment instruments for SMEs developed by the Open Group, CSA, the National Institute of Standards and Technology (NIST) and ENISA. Additionally, white papers produced by CSPs offer decision makers recommendations for enterprises (Chappell, 2009; Varia, 2010), although these also serve as marketing tools.

Moreover, the practical guide provided by the Cloud Standards Customer Council (CSCC, 2011) offers a series of steps recommended for adoption by customers, with the purpose of ensuring a successful cloud deployment and of making visible the differences produced by the size and IT maturity level of the enterprise. This leads to the conclusion that start-up SMEs prefer to adopt the public cloud instead of the private cloud, while SMEs with existing infrastructure prefer to adopt the hybrid cloud (Van Hoecke, et al., 2011). However, the steps described in CSCC (2011) are approached differently from the steps of the overall process discussed in this paper.

Khajeh-Hosseini, et al. (2011) created a benefits and risks assessment tool for the use of public IaaS clouds, identifying the following categories: organizational, legal, security, technical and financial risks, and technical, organizational and financial benefits. Khajeh-Hosseini, Sommerville and Sriram (2010) looked into the research challenges of cloud computing from an enterprise perspective: organizational changes, economic and organizational implications, and security, legal and privacy issues. In addition, the cost efficiency of cloud utilization is demonstrated by the case study by Khajeh-Hosseini, Greenwood and Sommerville (2010), which calculates the system infrastructure costs over a five-year period for a company that maintains and provides IT solutions for the Oil & Gas industry.

The study of cloud risks in our paper emphasizes three issues: security issues debated in
(Rittinghouse and Ransome, 2010), data governance discussed in (Caira and Begg, 2012; CSCC, 2011;
CSA, 2009; Khajeh-Hosseini, Sommerville and Sriram, 2010) and organizational issues reviewed in
(Heinle and Strebel, 2010), with their corresponding measures deliberated in the following papers:
(Saugatuck Technology, 2010; CSA, 2009; CSCC, 2011; RSA, 2009; Khajeh-Hosseini, Sommerville
and Sriram, 2010; Heinle and Strebel, 2010; Khajeh-Hosseini, Greenwood and Sommerville, 2010;
Martens and Teuteberg, 2011).

Cisco Systems, Inc. (2010) presented a plan for migrating enterprise applications to the cloud by analyzing which cloud service is suited to the selected application. Because this paper assumes that the cloud service chosen is IaaS, the relevant part of Cisco Systems, Inc. (2010) is the IaaS service. According to Cisco Systems, Inc. (2010), in order to decide on an IaaS service the enterprise should examine its infrastructure, an idea supported also by Universität Osnabrück (2012).

The decision making step is based on the analysis step. The asset decision was discussed in (CSA, 2009; CSCC, 2011). Our paper suggests applying three determinants for choosing the CSP: cost efficiency, product strengths and market credibility, decisive factors listed by Craig (2012) in the Enterprise Management Application Report as criteria for evaluating the Application Performance Management of known solutions in the cloud marketplace, with the difference that Craig used 'vendor strengths', which covers a larger set of features, instead of market credibility. The same decisive factors were recommended for choosing the management tools of cloud services.

Kandukuri, Paturi and Rakshit (2009) and CSCC (2011) constitute the sources for the stage of
developing the Service Level Agreement (SLA). With respect to the management step, two
management functions were discussed: business and operational (DMTF, 2010a; DMTF, 2010b;
Hogan, et al., 2011).
4. Conclusions
Enterprises are adhering to cloud computing technology, which is subject to continuous development. As the number of enterprises that adopt cloud computing increases, the challenges enterprises face in exploiting the cloud for their business objectives grow as well. Thus, companies go through a holistic process in order to manage the implemented cloud services. This paper discussed the overall process taken by small and medium-sized enterprises (SMEs) to manage the migration of their applications to Infrastructure-as-a-Service (IaaS), which includes the following interrelated activities: a data analysis step, a decision-making step, a migration step and a management step.

The presented process is a comprehensive approach to the migration of SMEs to cloud services, created on the basis of several criteria that SMEs should analyze in order to make a suitable decision. Thus, in order to replace the existing services of the traditional SME with cloud services, the following factors were used in the conception of the outsourcing procedure: the SME's business model, the size of the organization, the IT maturity level of the SME, the SME's sector, the SME's information, the identification of the periods when the SME records intense and regular workloads, the SME's skills and knowledge about cloud services, the Cloud Service Provider (CSP) market and the cloud management tools.

The proposed process improves the efficiency, quality and capacity of enterprises to move their data and applications into the cloud. Furthermore, it decreases the enterprise's expenditure on decision making regarding the transition to the cloud. Thanks to the innovation brought by cloud computing technology, which improves Information and Communication Technology (ICT), cloud adoption helps SMEs maintain their competitiveness. The limitation of this paper is that it is qualitative research and does not provide a case study evaluating the described process; this will be part of our future work.
Acknowledgements
This work was partially supported by the strategic grant POSDRU/88/1.5/S/50783, Project ID 50783 (2009), co-financed by the European Social Fund 'Investing in People', within the Sectoral Operational Programme Human Resources Development 2007-2013.
References
Adam, F., and O'Doherty, P., (2000). "Lessons from enterprise resource planning implementations in Ireland -
towards smaller and shorter ERP projects," Journal of Information Technology (15:4), pp 305-316.
Begg, C. and Caira, T., (2012). Exploring the SME Quandary: Data Governance in Practise in the Small to
Medium-Sized Enterprise Sector. The Electronic Journal Information Systems Evaluation, Volume 15,
Issue 1, pp. 01-12.
Catteddu, D. and Hogben, G. (2009). Cloud Computing: benefits, risks and recommendations for information
security.
Cisco Systems, Inc. (2010).Planning the Migration of Enterprise Applications to the Cloud. [online], Cisco White
Paper,
http://www.cisco.com/en/US/services/ps2961/ps10364/ps10370/ps11104/Migration_of_Enterprise_Apps_to
_Cloud_White_Paper.pdf.
Chappell, D. (2009). Windows Azure and ISVs: A Guide for Decision Makers.
Craig, J. (2012). EMA Radar™ for Application Performance Management (APM) for Cloud Services: Q1 2012. Enterprise Management Associates (EMA).
CSA, (2009). Security Guidance for Critical Areas of Focus in Cloud Computing V2.1, [online], Cloud Security
Alliance, https://cloudsecurityalliance.org/csaguide.pdf.
CSCC, (2011). Practical Guide to Cloud Computing Version 1.2, [online], Cloud Standards Customer Council,
http://www.isaca.org/Groups/ProfessionalEnglish/cloudcomputing/GroupDocuments/CSCC_PG2CCv1_2.pd
f.
Dai, W. (2009). The Impact of Emerging Technologies on Small and Medium Enterprises. Journal of Business
Systems, Governance and Ethics, Vol. 4, No. 4, pp. 53-60.
Devos, J., Van Landeghem, H. and Deschoolmeester, D. (2012). SMEs and IT: Evidence for a Market for
Lemons. The Electronic Journal Information Systems Evaluation, Volume 15, Issue 1, pp. 25-35.
DMTF, (2010a). Use Cases and Interaction for managing clouds. [online], White paper from the Open Cloud
Standards Incubator, Version 1.0.0. Distributed Management Task Force Inc.,
http://www.dmtf.org/sites/default/files/standards/documents/DSP-IS0103_1.0.0.pdf.
DMTF, (2010b). DMTF Architecture for managing clouds, [online], Distributed Management Task Force, Inc.,
http://www.dmtf.org/sites/default/files/standards/documents/DSP-IS0102_1.0.0.pdf.
Heinle, C. and Strebel, J. (2010). IaaS adoption determinants in enterprises. In Proceedings of the 7th
international conference on Economics of grids, clouds, systems, and services (GECON'10), J. Altmann and
Omer F. Rana (Eds.). Springer-Verlag, Berlin, Heidelberg, 93-104.
Hogan, M., Liu, F., Sokol, A. and Tong, J. (2011). NIST Cloud Computing Standards Roadmap Version 1.0. NIST National Institute of Standards and Technology.
Kandukuri, B. R., Paturi, R.V. and Rakshit, A. (2009). Cloud Security Issues. In: IEEE International Conference
on Services Computing. Bangalore, 21-25 September 2009, pp. 517-520.
Khajeh-Hosseini, A., Sommerville, I., Bogaerts, J. and Teregowda, P. (2011). Decision Support Tools for Cloud Migration in the Enterprise. IEEE 4th International Conference on Cloud Computing (CLOUD 2011), Washington DC, USA.
Khajeh-Hosseini, A., Greenwood, D. and Sommerville, I. (2010). Cloud Migration: A Case Study of Migrating an Enterprise IT System to IaaS. IEEE 3rd International Conference on Cloud Computing (CLOUD 2010), Miami, USA.
Khajeh-Hosseini, A., Sommerville, I. and Sriram, I. (2010). Research Challenges for Enterprise Cloud Computing. LSCITS Technical Report.
Kourik, J.L., (2011). For Small and Medium Size Enterprises (SME) Deliberating Cloud Computing: A Proposed
Approach. Proceedings of the European Computing Conference, ISBN: 978-960-474-297-4, pp. 216-221.
KPMG, (2011). The Cloud - Changing the Business Ecosystem. [online]
http://www.kpmg.com/IN/en/IssuesAndInsights/ThoughtLeadership/The_Cloud_Changing_the_Business_E
cosystem.pdf.
Furlani, C.M. (2010). Cloud Computing: Benefits and Risks of Moving Federal IT into the Cloud, [online],
National Institute of Standards and Technology (NIST),
http://www.nist.gov/director/ocla/testimony/upload/Cloud-Computing-testimony-FINAL-with-Bio.pdf.
Levy, M., and Powell, P. (2004). Strategies for Growth in SMEs: The Role of Information and Information
Systems. Elsevier, Oxford.
Marston, S., et al. (2011). Cloud Computing - The business perspective. Decision Support Systems, 51, pp.
176-189.
Martens, B. and Teuteberg, F. (2011). Risk and Compliance Management for Cloud Computing Services: Designing a Reference Model. In Proceedings of the Seventeenth Americas Conference on Information Systems, Detroit, Michigan, August 4th-7th.
Misra, S.C. and Mondal, A. (2011). Identification of a company's suitability for the adoption of cloud computing
and modelling its corresponding Return on Investment. Mathematical and Computer Modelling, Volume 53,
Issues 3-4, pp. 504-521.
Universität Osnabrück, (2012). Total Cost of Ownership Calculator for Cloud Computing Services, [online],
http://www.cloudservicemarket.info/tools/tco.aspx.
Rittinghouse, J. W. and Ransome, J.F. (2010). Cloud Computing Implementation, Management and Security.
Boca Raton: CRC Press.
RSA, (2009). The Role of Security in Trustworthy Cloud Computing, [online], White Paper,
http://www.emc.com/collateral/about/investor-relations/9921_CLOUD_WP_0209_lowres.pdf.
Saugatuck Technology Inc. (2010). Stepping Up to the Cloud: Managing Changes and Migration for Mid-Sized
Business, [online], http://fm.sap.com/data/UPLOAD/files/Saugatuck-Stepping_Up_to_the_Cloud-
Managing_Changes_and_Migration_for_Mid-sized_Business.pdf.
Sharma, M., et al. (2010). Scope of Cloud Computing for SMEs in India. Journal of Computing, Volume 2, Issue
5, ISSN: 2151-9617.
Van Hoecke, S., et al. (2011). Efficient Management of Hybrid Clouds. Cloud Computing 2011: The Second
International Conference on Cloud Computing, Grids and Virtualization, pp. 167-172.
Varia, J. (2010). Migrating your Existing Applications to the AWS Cloud. Amazon Web Services.
Weinman, J. (2011). Axiomatic Cloud Theory. [online],
http://www.joeweinman.com/Resources/Joe_Weinman_Axiomatic_Cloud_Theory.pdf.
Sustainable Enterprise Architecture: A Three-Dimensional
Framework for Management of Architectural Change
Thanos Magoulas, Aida Hadzic, Ted Saarikko and Kalevi Pessi
Department of Applied IT, Gothenburg University, Göteborg, Sweden
thanos.magoulas@gu.se
hadzic@chalmers.se
ted.saarikko@ituniv.se
kalevi.pessi@gu.se

Abstract: Despite advances in information technology, the modern enterprise finds itself struggling to satisfy its
need for pertinent information. Faced with the challenge of matching internal resources to external demands,
more and more organizations turn to Enterprise Architecture (EA) for guidance. Yet all too often change efforts in
relation to EA are approached with a simplistic- or technical perspective that limits future development. We must
therefore seek a more appropriate means to facilitate purposeful change and ensure a sustainable EA that is able
to accommodate the complexities that arise when dealing with complex change efforts. In response to the need
for better understanding of sustainability in Enterprise Architecture, we propose a tentative three-dimensional
framework for change consisting of three dimensions: Perspectives on change, levels of change and types of
change. The perspective on change is a reflection of the paradigm upon which the change effort is based. Two
extreme views are hard systems thinking and soft systems thinking. The level of change describes its delineation.
Depending on scope, the change effort may be considered local, structural or inter-organizational in nature. The
nature of change signifies the extent to which an enterprise departs from existing practices. We may refer to
changes as incremental, transformational or reorientation-based depending on their magnitude. The relevance
and usefulness of these three dimensions has been validated through seminars, workshops or advanced courses
held with over 30 healthcare professionals. While we feel that this framework is a step in the right direction, there
is still much work to be done in this area. We therefore call for a deeper discourse and further research into
sustainability in Enterprise Architecture.

Keywords: enterprise architecture, change, sustainability
1. Introduction
Faced with increasing instability and diversity in terms of operational context as well as internal
structure, many organizations turn to Enterprise Architecture (EA) for guidance. To this end, much
effort has been spent establishing suitable taxonomies (Sowa & Zachman, 1992), describing
enterprise modelling (IFIP/IFAC, 2003), and developing methodologies for modifying architecture (The
Open Group, 2009, p. 49-65). Considerably less has been said in regard to its sustainability: the ability to either absorb contextual changes or adapt in order to accommodate them. Given that the
impetus for adopting EA is to manage change and complexity, it is remarkable that one would
overlook the need to maintain and develop the architecture itself. In a conference moderated by the
Open Group, their vice president of skills and capabilities said in reference to architecture and transformation: "My position is that they're two separate entities" (Gardner, 2012). A clue to the root of this peculiar contradiction may be the technical focus of the topical literature. Normative
approaches to EA, often referred to as frameworks, usually delineate EA into layers such as business,
application, information, data, technology et cetera. Given the focus on IT artefacts, it is perhaps no
wonder that EA is treated much in the same way as hardware or software; use until failure or
obsolescence, then replace. This manner of change has been referred to as discontinuous (Nadler &
Tushman, 1995), radical (Orlikowski, 1993) or episodic (Weick & Quinn, 1999). However, given the
sheer amount of time, money and expertise that goes into establishing an Enterprise Architecture,
applying such an approach would be extremely wasteful. Hence, once the architecture is in place it
often stays in place, gradually diverging from the ephemeral environment until it has become outdated
and more of a burden than a boon to the enterprise. Again, this approach has received its fair share of attention under the guise of incremental (Nadler & Tushman, 1995; Orlikowski, 1993) or continuous change (Weick & Quinn, 1999).

It would seem that we need a more balanced view if we are to understand and discuss change efforts
within the realm of Enterprise Architecture. A sustainable approach that allows the architecture to
develop as needed, when needed by the stakeholders of the enterprise (Kay, 1993).

In this paper, we introduce three dimensions of change which we have derived from extant literature
in the fields of organizational and information systems research. As such, the framework is based on
a deductive approach that has been validated through seminars, workshops or advanced courses
held with more than 30 Swedish healthcare professionals. Most of the participants have worked with
IT in a healthcare context for more than 15 years. We will also utilize their experiences as a means to
exemplify practical instances of different change efforts.
2. Concept of change in enterprise architecture
There are two topics that are relevant to our discourse on sustainability in Enterprise Architecture:
Sustainability and change.

The aptness of utilizing sustainability in a perpetually changing business environment is discussed by
Aier (2004). Noting the general dearth of relevant (i.e. business-oriented) literature, he approaches
the issue by means of mapping extant literature on sustainability onto the construct of EA.
Ascertaining that constant change gives rise to frequent reorganizations, he proposes modularization
as a viable and fiscally rational means to sustain the architecture in the face of environmental
turbulence.

In a subsequent paper, Aier and Schoenherr (2005) further pursued the potential of developing
sustainability through modularization by means of Enterprise Application Integration (EAI). Conducting
an empirical study covering approximately 60 EAI users, consultants and vendors, they conclude that
while EAI shows significant promise, its utilization is presently restricted to artifacts. This, of course,
limits its potential contribution towards sustainability or any other non-technical aspect of EA.

Kluge, Dietzsch and Rosemann (2006) present a case study of EA value based on the DeLone &
McLean model of IS success. In their view, a lack of non-IT stakeholder acceptance is a major
impediment to sustainable implementation of EA. Although limited in scope, their findings suggest that
improving stakeholder awareness serves to promote purposeful architecture change efforts. By
ensuring a perceived value of EA, stakeholder involvement brings about a more holistic
understanding and thus greater conformance to the needs of the enterprise.

Fischer, Aier and Winter (2007) explore the possibilities of utilizing EA modelling in order to effect
purposeful change whilst maintaining IT-business alignment. The primary role of EA is perceived as
one of documentation in order to increase awareness of relevant structures and processes. Hence,
EA may serve enterprise transformation by means of ensuring the availability of pertinent information.
Fischer et al. conclude that integrated and updated models not only provide valuable information, but
also serve to demonstrate the value of EA to stakeholders.

Ren and Lyytinen (2008) as well as Schelp and Aier (2009) propose Service Oriented Architecture
(SOA) as an agent for achieving an agile EA. Ren and Lyytinen scarcely move beyond the technical
domain, content with providing a number of definitions of SOA and pointing out that several among
them acknowledge business requirements and user needs. They conclude that while SOA has the
potential to enable a higher degree of agility, practical implementations are often limited to certain
technical environments, a state of affairs which diametrically opposes the notion of SOA as
independent of platform.

Schelp and Aier (2009) conduct a case-study whereby they ascertain the potential of SOA to improve
enterprise agility. Their findings suggest that SOA does indeed have a positive impact on agility in
terms of shortening time-to-market of architecture components. Other potential benefits, such as
reusability and flexibility, are highly dependent on contextual factors.

We may discern two salient features from the current streams of research into sustainable Enterprise
Architecture. First and foremost, there is a pronounced focus on artefacts as a means to promote
sustainability. This marks a sharp contrast to other, more organization-based views such as Kay's
(1993) notion of architecture as founded upon personal relationships, or Huisman's (2003) view of
architecture as a means to balance stakeholder demands and enterprise capabilities. Furthermore,
change as such is not explored in any great detail but rather mentioned in passing or vaguely alluded
to. Extant literature actively promotes agility and modularisation as means to promote changeability,
but does not give us any insight into the nature of change itself. We wish to promote a deeper
understanding of the complexities of change, since Enterprise Architecture is not, and should not be,
limited to tinkering with technology. At a bare minimum, we perceive that three aspects of change
need to be made explicit: The area of effect, the deviation from past/current practices, and the
mindset with which we move forward.
3. A three-dimensional framework for change
In the following section, we will outline our framework one dimension at a time. We do not assert that
our classification is in any way definitive, but rather a serviceable starting point for a discourse into
change in Enterprise Architecture.
3.1 Perspectives on change
The starting point of any change effort is a perceived dissatisfaction with the gap between what is and what should
be. Our preferences, skill-set and past experiences greatly influence how we perceive said
dissatisfaction and how we seek to resolve it (Langefors, 1973, p. 242-249). As sharing implicit
knowledge is extremely difficult (Boisot, 1995, p. 72-73), these perceptions are usually made explicit
by means of various schools of thought, paradigms and methodologies in order to facilitate
coordinated action. We shall limit ourselves to elucidating two extremes by which we may confront
change efforts: Hard systems thinking and soft systems thinking. In addition to listed references, we
draw upon research by Argyris (1971), Tichy (1983), Bartunek & Moch (1987) and Ross, Weill &
Robertson (2006).

Hard systems thinking is based upon operational research and as such essentially rooted in a
positivist and reductionist world view. The underlying assumption is that each component of a system
has a number of absolute properties that may be objectively ascertained. Building on this logical
assumption, we may then also build a system that we deem desirable to the task at hand by using the
proper components. We refer to this procedure as either optimizing or goal-seeking depending on
whether the system in question is mechanical or human (Checkland, 1985). Hence, when applied in
practice, hard systems thinking tends to be based on deductive reasoning. This puts the problem
solver in the role of an external party to the change effort, perceiving it through codified variables
such as a specification of requirements. Working in this manner, the organization utilizes planning and
strategy as means to match the system's functionality against requirements, present or predicted.
Heavy emphasis is placed upon proper procedure such as normative methods or generic tools in
order to safeguard desirable results (Mackenzie, 1984). Illustrating via healthcare, any change effort
that is intended to improve any existing activity or routine may be characterized as a result of hard
systems thinking.

In contrast, soft systems thinking is based upon relativism and systems theory, which states that
the properties of a component are highly dependent on context and purpose. In other words, the
value, utility, function et cetera of components should primarily be assessed by the efficacy with which
they interconnect to form a cohesive system (Churchman, 1971, p. 66-76). The distinction between
efficiency and efficacy is often important within the realm of social systems. Indeed, there may not be
problems as such but rather a nebulous sense of dissatisfaction or misalignment that has been
present for a long time (Checkland, 2000). As such, the problems are not objectively ascertained but
rather socially constructed based on dialogue and discussion among stakeholders. Given the
subjective, even political nature of establishing a problem situation, the problem solver is not an
external party to the change effort, but rather an actor in elucidating and balancing the different
perspectives. As soft systems thinking extends well beyond technical systems, we must not only ask
ourselves how to do something, but also what and why. Hence, before establishing any plan of action
(such as a strategy), we must first establish goals that motivate stakeholders to take action or at the
very least acquiesce. Only then may tangible steps be taken towards reducing the disparity between
the present and the desired state of affairs. Over the past decade, healthcare has gradually shifted from
being focused on the medical practitioner to being focused on the patient. This necessitates a
fundamental departure from existing values and mindsets as it directly affects the authority of the
medical professional.
3.2 Levels of change
Our second dimension depicts the scope of the change effort undertaken. A wider scope carries with
it a wider and more diverse range of stakeholders. We must therefore balance patterns of authority
and know-how in architectural modifications. In our framework, based upon research by Simon
(1962), Ackoff (1967) and Armenakis & Bedeian (1999), we have chosen to distinguish between three
different levels of change: Local, structural and inter-organizational change.
Local changes are confined to a single organizational unit. In an industrial setting, this would be
described as a technical subunit (Thompson, 1967, p.10-11) whereas a more general description
would be to describe it as an information domain (Magoulas & Pessi, 1998, p. 366-367). The most
salient feature of local changes is the proximity between decision-maker and context. It is entirely
possible that the impetus for change is an external factor, but the change effort is adapted to suit local
conditions. Using healthcare as a means to illustrate, any decision that is made at and limited to a
single primary care unit would be considered a local change.

Structural changes affect several organizational units, creating a change effort that is wider in scope.
We now concern ourselves with a managerial (Thompson, 1967, p. 10-12) effort that impacts the
wider information environment (Magoulas & Pessi, 1998, p. 367-370). Kay (1993, p. 78-80) describes
this as a change effort affecting the internal architecture of an enterprise. As more than
one organizational unit is now involved, there is an inherent diversity in priorities and contextual issues to
address. Hence, there is a very real possibility that political issues impact structural changes
(Davenport, Eccles & Prusak, 1992; Davenport & Prusak, 1997, p.67). A concrete example from
healthcare is the relatively recent (and ongoing) transition from organizational hierarchies to
organizational processes. To some degree, this involves breaking with established structures and
mindsets that have been instilled in generations of medical professionals.

Expanding our scope even further, we find ourselves faced with inter-organizational changes. No
longer limited to a single enterprise, these efforts concern the institutional level (Thompson, 1967, p.
10-12) of the organization, i.e. the societal contract by which the enterprise operates. Kay (1993, p.
80-82) refers to this as the external architecture as it involves separate legal entities. There are
several ways to exemplify inter-organizational changes based on healthcare. One relatively intuitive
illustration would be the use of e-prescriptions transmitted between healthcare providers and
pharmacies. Another, perhaps more contemporary, illustration would be the drive towards
interoperability in healthcare between primary care, private practitioners, specialists and emergency
care. While all of these facilities provide care to the same populace, they do so on different terms and
with different purposes.
3.3 Types of change
In this section we define and exemplify enterprise changes based on their deviation from existing
structures and processes. We delineate these into three types of change: Incremental changes,
transformational changes and reorientation-based changes. This classification is based upon
research by Henderson & Clark (1990), Tushman & O'Reilly (1996) and Newman (2000).

Incremental changes cover enterprise changes limited to improving existing structures using
predominantly existing know-how. This is often referred to as single-loop learning (Argyris, 1977).
These types of changes strive to promote efficiency in terms of quality, cost or time. This manner of
gradual, controlled improvements has been widely adopted in the manufacturing industry under the
guise of Total Quality Management (Oakland, 1993), Six Sigma (Brue, 2005) and Kaizen (Lee,
Dugger & Chen, 1999). Within the context of healthcare, incremental changes take the form of
continuous improvement of care processes, tools or working schedules. While incremental changes
may tend to be localized, this is not always the case. New legislation may force healthcare providers
to amend their practices on a national scale even if the change itself may be described as
incremental and easily accommodated using existing organizational structures.

Transformational changes are more far-reaching than the aforementioned incremental changes in that
existing structures and/or processes must be altered in order to accommodate a new modus
operandi. The ultimate goal is still unaltered, but the means by which one strives to achieve this
change substantially. This is commonly referred to as double-loop learning (Argyris, 1977). The
ends of transformational changes are typically expressed in terms of effectiveness rather than
efficiency, i.e. attaining the desired result rather than economizing on resources. ICT has been treated
as a powerful enabler of transformational changes due to the added ability to process vast quantities
of data as exemplified by Business Process Reengineering (Hammer, 1990). Using healthcare as a
backdrop, instructing the patient to administer his/her own care represents a transformational change.
The fundamental goal of the caregiver is still the treatment (or cure) of the patient, but the patient has
been given a greater degree of responsibility and autonomy. In effect, the patient is now a co-
producer of medical information rather than merely a factor thereof.
An enterprise facing reorientation-based change undergoes a fundamental change of its very identity.
It is no longer a matter of doing things differently, but rather one of doing different things. In other
words, the ultimate goal or mission of the enterprise is altered. In Soft Systems Methodology, this is
expressed as altering the root definitions (Checkland, 1981). It is unlikely that this profound manner of
change is undertaken at any other level than top management, yet given the far-reaching implications
in terms of culture and core values, it is unlikely that such an undertaking is possible without support
from all stakeholders.

Within the context of healthcare, a reorientation is most clearly exemplified by shifting one's focus
from treatment of injury or disease to prevention by means of acting pre-emptively to promote a
healthy lifestyle.
3.4 The three dimensions of change
Having presented the dimensions individually, we now present the framework as we perceive it.

Figure 1: A three-dimensional framework for change
We illustrate the framework in this manner in order to explicate two important issues related to
architectural change efforts. First and foremost, there is no maximum level of complexity. We could
expand our axes ad infinitum if we wanted to, but doing so would be of little use, as the levels
featured serve to cover most realistic situations. Secondly, increasing the scope of change, attempting
a more radical departure from current practices, or incorporating new knowledge are all potent
challenges in their own right. What we wish to illustrate using this framework is that compiling these
exacerbating factors is not tantamount to simply adding another variable: it really is an entirely new
dimension of complexity.
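
To make the framework concrete, the three dimensions lend themselves to a simple data-structure
representation. The following Python sketch is our own illustration rather than part of the authors'
method; the enum members mirror Sections 3.1-3.3, and the positioning of the e-prescription example
is one plausible reading of the text.

from dataclasses import dataclass
from enum import Enum

class Perspective(Enum):
    # Section 3.1: the mindset with which the change effort is confronted.
    HARD_SYSTEMS = "hard systems thinking"
    SOFT_SYSTEMS = "soft systems thinking"

class Level(Enum):
    # Section 3.2: the scope (area of effect) of the change effort.
    LOCAL = "local"
    STRUCTURAL = "structural"
    INTER_ORGANIZATIONAL = "inter-organizational"

class ChangeType(Enum):
    # Section 3.3: the deviation from past/current practices.
    INCREMENTAL = "incremental"
    TRANSFORMATIONAL = "transformational"
    REORIENTATION = "reorientation-based"

@dataclass
class ChangeEffort:
    description: str
    perspective: Perspective
    level: Level
    change_type: ChangeType

# One plausible positioning of the e-prescription example from Section 3.2.
e_prescriptions = ChangeEffort(
    description="e-prescriptions between healthcare providers and pharmacies",
    perspective=Perspective.SOFT_SYSTEMS,    # stakeholder views must be balanced
    level=Level.INTER_ORGANIZATIONAL,        # spans separate legal entities
    change_type=ChangeType.TRANSFORMATIONAL, # same goal, new modus operandi
)
print(e_prescriptions)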
4. Discussion
As we have outlined in the previous chapter, change is a complicated, multidimensional construct. As
we move away from localized, incremental changes that are based upon pre-existing know-how, we
find ourselves with a richer pantheon of stakeholders that must embrace new methodologies and new
paradigms. A similar point has been made by Foster-Fishman, Nowell and Yang (2007) albeit from
the perspective of community change efforts. Yet their discourse holds relevance, as the chief
components of enterprises, like communities, are the people that inhabit them. Another relevant
similarity is that enterprises, like communities, tend to be diverse when viewed as a whole. It is unlikely
that a large gathering of denizens or employees will display homogeneity of background, skill-set or
experience. As it is the purpose of Enterprise Architecture to furnish the enterprise with a means to
organize its constituent components, the architecture must be commensurate to the level of
complexity experienced by the enterprise. This line of reasoning is based upon Ashby's law of
requisite variety, which states that a control mechanism must allow for the same level of complexity as
the system it is to control (Waelchli, 1989). In our case, "control mechanism" is perhaps a somewhat
limited moniker for EA, yet apt insofar as it shapes our understanding of the enterprise and as such
directly influences our actions. In light of this, extant literature on change efforts in EA provides us
with detailed perspectives on how to create flexibility in IT artefacts, but says little with regards to the
enterprise as such. Even less is said on the topic of change itself. Our interpretation of Ashby's law
suggests that adopting a technical perspective when dealing with organizational issues is at the very
least simplistic; an assertion that is supported by IS research (Earl, 1993). The explicit need to
address stakeholders in organizational change efforts has been highlighted by Hedberg (1980) as
well as Davenport, Eccles and Prusak (1992). Kay (1993) takes this perspective one step further
by asserting that stakeholders are the source of architecture, not a factor thereof.
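
Stated formally in its information-theoretic rendering (our notation, not Waelchli's), Ashby's law reads:

H(E) \geq H(D) - H(R)

where H(D) is the variety (entropy) of environmental disturbances, H(R) the variety of responses
available to the regulating mechanism (here, the architecture) and H(E) the residual variety of
outcomes. Outcome variety can only be reduced by increasing the variety of the regulator, which is
precisely why an architecture poorer in variety than its enterprise cannot govern it.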

Expanding architectural design to include non-technical aspects, i.e. stakeholders, adds several
layers of complexity for the architect. Yet it is a necessary step if we are to design architectures that
ameliorate problems rather than ignore them. Realizing the challenges brought on by complex change
efforts, we should seek to design an architecture that allows the enterprise to absorb changes rather
than go through frequent processes of redesign. Should a redesign be considered necessary, then a
fuller understanding of the complexities of change may guide us to design an architecture that
enables us to keep change efforts as simple as possible, restricting as much as possible to localized,
incremental changes that rely on tried and true methods. We believe that the framework presented
in this paper is a first step towards a deeper understanding of sustainable change in Enterprise
Architecture. A greater sense of sustainability allows us to bring about efficacious change that is
suited to the enterprise environment while concurrently promoting resource efficiency, be the
resources temporal, pecuniary or intellectual in nature. This is the challenge that faces academia as
well as practitioners, and it is here that the full value of Enterprise Architecture can come to light.
5. Conclusions
In the course of this paper, we have attempted to ascertain how change efforts are treated in
conjunction with Enterprise Architecture. Our purpose in this undertaking has been to further our
understanding of how architectural changes may be imbued with a sense of sustainability that
enables architectures to evolve based on the needs of the enterprise as well as the experiences of
stakeholders.

When studying change itself, it is readily apparent that change is not a simple phenomenon or act, but
rather a complicated undertaking that cannot be reduced to a single scale or metric. Furthermore,
there is a reciprocal relationship between architecture and change; we need architecture in order to
effect purposeful change. Change, in turn, serves as an impetus for architectural evolution.
Underestimating the inherent complexities by applying similar methods to all manner of change efforts
will at best yield a capricious track record. Sustainable Enterprise Architecture can only be attained
after we have enabled ourselves to better conceptualize and understand change. This is especially
true when studying structures such as complex organizations operating in dynamic environments. If
we are to understand change, we require a construct with sufficient granularity so that we may
ascertain how change efforts impact different avenues of the enterprise. We are then in a far better
position to understand and discuss change among enterprise stakeholders with the aid of
architectural design.

In response to the need for better understanding of sustainability in Enterprise Architecture, we
propose a tentative three-dimensional framework for change. This framework furthers our
understanding of enterprise change based on three essential dimensions. The relevance and
usefulness of these three dimensions has been validated through seminars, workshops and advanced
courses held with over 30 Swedish healthcare professionals. While we feel that this is a step in the right
direction, there is still much work to be done in this area. We therefore call for a deeper discourse and
further research into sustainability in Enterprise Architecture.
References
Ackoff, R. (1967). Management Misinformation Systems. Management Science, Vol. 14 (4), 147-156.
Aier, S. (2004). Sustainability of Enterprise Architecture and EAI. The 2004 International Business Information
Management Conference (Amman, Jordan) 2004.
Aier, S. & Schoenherr, M. (2005). Sustainability of Enterprise Architecture and EAI an empirical study. In
Milutinovic, V. (Ed.) Proceedings of the International Conference on Advances in Internet, Processing,
Systems, and Interdisciplinary Research, IPSI-2005, MIT Cambridge/IPSI, Cambridge, Boston MA.
Argyris, C. (1971). Management Information Systems: The Challenge to Rationality and Emotionality.
Management Science, Vol. 17 (6), 275-292.
Argyris, C. (1977). Organizational Learning and Management Information Systems. Accounting, Organizations
and Society, Vol. 2 (2), 113-123.
Armenakis, A.A. & Bedeian, A.G. (1999). Organizational change: A review of theory and research in the 1990s.
Journal of Management, Vol. 25 (3), 293-315.
Bartunek, J.M. & Moch, M.K. (1987). First-order, second-order, and third-order change and organizational
development interventions: A cognitive approach. Journal of Applied Behavioral Science, Vol. 23 (4), 483-
500.
Boisot, M. H. (1995). Information Space. London: Routledge.
Brue, G. (2005). Six Sigma for Managers. New York: McGraw-Hill Professional Education Series.
Checkland, P. (1981). Systems Thinking, Systems Practice. New York: John Wiley & Sons.
Checkland, P. (1985). From Optimizing to Learning: A Development of Systems Thinking for the 1990s. Journal
of Operational Research Soc. Vol. 36 (9), 757-767.
Checkland, P. (2000). Soft Systems Methodology: A Thirty Year Retrospective. Systems Research and
Behavioural Science, Vol.17, 11-58.
Churchman, C. W. (1971). The Design of Inquiring Systems: Basic Concepts of Systems and Organization. New
York: Basic Books.
Davenport, T.H., Eccles, R.J. & Prusak, L. (1992). Information Politics. Sloan Management Review, Fall 1992.
Davenport, T.H. & Prusak, L. (1997). Information Ecology. New York: Oxford University Press.
Earl, M.J. (1993). Experiences in Strategic Information Systems Planning. MIS Quarterly, Vol. 17 (1), 1-24.
Fischer, R., Aier, S. & Winter, R. (2007). A Federated Approach to Enterprise Architecture Model Maintenance.
Enterprise Modelling and Information Systems Architectures, Vol. 2 (2), 14-22.
Foster-Fishman, P.G., Nowell, B. & Yang, H. (2007). Putting the system back into systems change: A framework for
understanding and changing organizational and community systems. American Journal of Community
Psychology, Vol. 39 (3-4), 197-215.
IFIP-IFAC Task force on Architectures for Enterprise Integration. (2003). The Generalised Enterprise Reference
Architecture and Methodology: Version 1.6.3 (Final). In Bernus, P., Nemes, L. and Schmidt, G. (Eds.)
Handbook on Enterprise Architecture. Berlin: Springer-Verlag.
Gardner, D. (2012). Enterprise architecture and enterprise transformation: Related but distinct concepts that can
change the world. ZDNet, February 22, 2012. http://www.zdnet.com/blog/gardner/enterprise-architecture-and-
enterprise-transformation-related-but-distinct-concepts-that-can-change-the-world/4530
Hammer, M. (1990). Re-engineering Work: Don't Automate, Obliterate. Harvard Business Review, July-August,
104-112.
Hedberg, B. (1980). Using computerized Information Systems to design better organization and jobs. In Björn-
Andersen, N. (Ed.) The Human Side of Information Processing. Amsterdam: North-Holland.
Henderson, R. M. & Clark, K.B. (1990). Architectural Innovation: The Reconfiguration of Existing Product
Technologies and the Failure of Established firms. Administrative Science Quarterly, Vol. 35 (1), 9-30.
Huisman, B. (2003) Information Strategy: An Introduction. In Berg, M. (Ed.) Health Information Management,
Integrating IT in health care work. London: Routledge.
Kay, J. (1993). Foundations of Corporate Success. Oxford: Oxford University Press.
Kluge, C., Dietzsch, A., & Rosemann, M. (2006). Fostering an Enterprise Architectures Value Proposition Using
Dedicated Presentations Strategies. Proceedings of the CAISE '06 Workshop on Business/IT Alignment and
Interoperability (BUSITAL '06), Luxembourg, 2006.
Kluge, C., Dietzsch, A.& Rosemann, M. (2006) How to realise corporate value from enterprise architecture. ECIS
2006 Proceedings. Paper 133.
Langefors, B. (1973). Theoretical Analysis of Information Systems. 4th Edition. Lund: Studentlitteratur.
Langefors, B. (1975). Control Structure and Formalized Information Analysis in Organizations. In Grochla, E. &
Szyperski, N. (Eds.) Information Systems and Organizational Structure. Berlin and New York: Walter de
Gruyter.
Lee, S. S., Dugger, J. C. & Chen, J. C. (2000). Kaizen: An Essential tool for Inclusion in Industrial Technology
Curricula, Journal of Industrial Technology, Vol. 16 (1), 1-7.
Mackenzie, K.D. (1984). A Strategy and desiderata for organizational design. Human Systems Management, Vol.
4, 201-213.
Magoulas, T. & Pessi, K. (1998). Strategic IT-management. (Doctoral Thesis) Gothenburg: Gothenburg
University, Department of Informatics.
Magoulas, T., Hadzic, A., Saarikko, T. & Pessi, K. (2012). Alignment in Enterprise Architecture: A Comparative
Analysis of Four Architectural Approaches. Electronic Journal of Information Systems Evaluation. Vol. 15
(1), 88-101.
Nadler, D. & Tushman, M. (1995). Types of organizational change: From incremental improvement to
discontinuous transformation. In Nadler, D.A., Shaw, R.B. & Walton, A.E. (Eds.) Discontinuous Change:
Leading Organizational Transformation. San Francisco: Jossey-Bass.
Newman, K. L. (2000). Organizational transformation during institutional upheaval. Academy of Management
Review, Vol. 25 (3), 602619.
Orlikowski, W. (1993). Case Tools as Organizational Change: Investigating Incremental and Radical Changes in
Systems Development. MIS Quarterly, Vol. 17 (3), 309-340.
Oakland, J. (1993). Total Quality Management: The Route for Improving Performance. Oxford: Butterworth
Heinemann.
Ren, M. & Lyytinen, K. (2008). Building Enterprise Architecture Agility and Sustenance with SOA.
Communications of the Association for Information Systems, Vol. 22 (1), 75-86.
Ross, J., Weill, P. & Robertson, D. (2006). Enterprise Architecture as Strategy. Boston: Harvard University Press.
Schelp, J. & Aier, S. (2009). SOA and EA: Sustainable Contributions for Increasing Corporate Agility. Proceedings
of HICSS-42. IEEE Computer Society, Los Alamitos (2009).
Simon, H. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, Vol. 106
(6), 467-482.
Sowa, J.F. & Zachman, J.A. (1992). Extending and Formalizing the Framework for Information Systems
Architecture. IBM Systems Journal 31 (3), 590-616.
Thompson, J. D. (1967). Organizations in Action. New York: McGraw-Hill.
Tichy, N.M. (1983). Managing Strategic Change Technical, Political, and Cultural Dynamics. Chichester: John
Wiley & Sons.
Tushman, M.L. & O'Reilly, C.A. (1996). Ambidextrous organizations: Managing evolutionary and revolutionary
change. California Management Review, 38 (4), 8-30.
Waelchli, F. (1989). The VSM and Ashby's Law as illuminants of historical management thought. In Espejo, R. &
Harnden, R. (Eds.) The Viable System Model: Interpretations and Applications of Stafford Beer's VSM.
Chichester: John Wiley & Sons.
Weick, K.E. & Quinn, R.E. (1999). Organizational change and development. Annual Review of Psychology, Vol.
50, 361-386.
Applying Structural Equation Modelling to Exploring the
Relationship Between Organisational Trust and Quality of
Work Life
Nico Martins and Yolandi van der Berg
University of South Africa, Pretoria, South Africa
martin@unisa.ac.za
Yolandi.VanDerBerg@za.sabmiller.com

Abstract: Dissatisfaction with working life is a problem affecting almost all employees during their working
career, regardless of position or status. Although many managers seek to reduce job dissatisfaction at all
organisational levels, they sometimes find it difficult to isolate and identify all of the attributes which affect and
influence the quality of working life. Some researchers proclaim that the success of Quality of Work Life
programmes will depend on the ability of the organisation to reinforce high levels of trust. Quality of work life is
assumed to affect various organisational factors such as job effort and performance, organisational identification,
job satisfaction and job involvement. The aim of this quantitative research, based on theoretical and empirical
research, is to determine the relationship between organisational trust and quality of work life. A validated
organisational trust questionnaire (consisting of Big Five personality constructs, managerial practices and
organisational trust dimensions) and a quality of work life questionnaire (11 dimensions) were used in the
research. Two hundred and three sales representatives of a marketing company participated in the research. An
internet-based survey methodology was used to collect primary data from the 282 sales representatives invited,
yielding a 72% response rate. Responses were analysed using quantitative techniques
and Structural Equation Modelling. Results confirm a positive relationship between the Managerial Practices and
Organisational Trust, and a lower relationship between the Personality dimensions and Organisational Trust.
With regard to the Quality of Work Life, a positive relationship was noted with Managerial Practices but again a
lower relationship with the Personality constructs. The study strengthened and focused attention on the
importance of building good trust relationships within an organisation, as it seems as though the Personality traits
and Managerial Practices of managers will not only influence the trust relationship experienced by employees,
but also their experience of a Quality of Work Life.

Keywords: quality of work life (QWL), organisational trust, structural equation modelling
1. Introduction
The extent and rate of change within organisations has created renewed interest in the quality of
employees' work lives, particularly in South Africa where organisations have to deal with cultural
diversity, the ethnic composition of the workforce, and changes in value systems and beliefs (Kirby
and Harter, 2001; Kotzé, 2005; Sekwena, 2007). Although many managers seek to reduce job
dissatisfaction at all organisational levels, including their own, they sometimes find it difficult to isolate
and identify all of the factors which affect and influence the QWL (Huang, Lawler and Lei, 2007; May,
Lau and Johnson, 1999; Walton, 1973).

According to Kaushik and Tonk (2008), an employee's QWL is determined by the interaction of
personal and situational factors involving both personal (subjective) and external (objective) aspects
of work-related rewards and experiences. According to Kotzé (2005), the changes in the ethnic
composition of the South African workforce, specifically with regard to changes in beliefs and value
systems, as well as the greater importance placed on knowledgeable workers, are factors which may
influence QWL. As affirmed and emphasised by Martins (2000) and Schoorman, Mayer and Davis
(2007), these changes in the workforce may also lead to an increase in the importance of trust in
organisations. QWL is assumed to affect various organisational factors (job effort and performance,
organisational identification, job satisfaction and job involvement) (Ballou and Godwin, 2007), while
Organisational Trust is the employee's expectation of the reliability of the organisation's promises and
actions (Carmeli, 2005; Politis, 2003). Long, Sitkin and Cardinal (2003) urge managers to build trust
between employees and the organisation in order to enhance organisational effectiveness. Martins
and Von der Ohe (2002) also indicate that trust is created by leadership, which in turn influences
relationships and job satisfaction.

Research has further shown that QWL is not only a significant determinant of various enviable
organisational outcomes, but that it also significantly influences the nonworking life of an individual
and is an important predictor of the life satisfaction, health and psychological wellbeing of employees
(Ballou and Godwin, 2007; Kaushik and Tonk, 2008; 2010; Martel and Dupuis, 2006; Sirgy, Efraty,
Siegel and Lee, 2001; Srivastava, 2008; Wilson, DeJoy, Vandenberg, Richardson and McGrath,
2004; Wright and Bonett, 2007).
2. Organisational trust
Trust can be regarded as a multidimensional construct, consisting of a cognitive (belief about
another's trustworthiness), affective (role of emotions in the trust process) and behavioural (relying on
another and disclosing sensitive information) base (Büssing, 2002; Gillespie and Mann, 2004;
Rousseau, Sitkin, Burt and Camerer, 1998; Schoorman et al, 2007). Tschannen-Moran and Hoy
(2000, p 556) consequently proposed a multidimensional definition of trust, namely: "Trust is one
party's willingness to be vulnerable to another party based on the confidence that the latter party is (a)
benevolent, (b) reliable, (c) competent, (d) honest, and (e) open."

Despite the differences in conceptualisation, there are a number of common elements unifying the
many different definitions of trust. In particular there seems to be an agreement that trust is "the
willingness to be vulnerable based on the positive expectations of the intentions or behaviour of
others" (Mayer, Davis and Schoorman, 1995, p 712). Secondly, it seems that interdependence and
uncertainty are necessary conditions for trust to develop. In line with the above, and taking into
account that this research study is done within an organisational context, the authors use the
definition provided by Von der Ohe, Martins and Roode (2004, p 6). Organisational Trust is therefore
defined as "the choice to make oneself vulnerable with the express belief in the positive intent and
commitment to the mutual gain of all parties involved in the relationship."

According to Hay (2002) and Lämsä and Pučėtaitė (2006), the importance of trust in organisations is
likely to increase over the next few years. This is reiterated by Bews and Rossouw (2002) and Martins
(2000), specifically in relation to South Africa, due to the changing composition of the workforce and
the focus on employment equity. In the development of a trust model, Martins (2000) and Martins and
Von der Ohe (2002) also identified the Big Five Personality aspects as significant indicators of trust,
and their results provided support for the claim that Personality characteristics, together with
Managerial Practices (information-sharing, work support, credibility and team management), have
an influence on the trust relationships between managers and employees. This overview links to the
notion that Organisational Trust is not necessarily an interpersonal form of trust, but rather a systems
form of trust deriving from structures and processes within an organisation, such as fairness and
perceived organisational support (Bagraim and Hime, 2007), which in turn relates to the QWL an
employee experiences within the organisation.
3. Quality of work life
Most individuals spend a great deal of their time participating in job- or work-related activities and
even plan their time, living standards and social interaction around the demands of their work. To a
large extent, people define themselves and others in terms of their work, making QWL in
organisations a major component of quality of life in general (Kotzé, 2005; Rathi, 2010). Although
QWL is a term used today in almost every area of organisational activity, definitions of QWL tend to
change focus continually and it has been viewed in various ways: as a movement, as a set of
organisational interventions (approaches to management in organisations) and as a type of working
life experienced by employees (reflecting the affective evaluation of individuals) (Wyatt and Wah,
2001). Extensive research on the definition and measurement of QWL from a range of disciplines has
emerged since it was first introduced in the 1950s, and Hannif et al (2008, p 274) suggest that three
categories of definition are found in the literature: (i) a concept concerned with employees job
satisfaction; (ii) a concept going beyond job satisfaction and encompassing subjective wellbeing; and
(iii) a dynamic, multidimensional construct that incorporates any number of measures objective and
subjective relating to employment quality.

Schneider and Dachler (1978), cited in Kaushik and Tonk (2008, p 36), found that the feelings employees
have about their job tend to be stable over time and might be a product of specific personality traits. As
already mentioned, Personality traits are psychological in nature, relatively stable over time, and
provide the reasons for behaviour (Church, 2000); they seem to be interrelated with trust and QWL by
means of the Big Five Personality factors. Research from various sources has found a link between
the Big Five Personality factors and dimensions relating to QWL such as job performance
(Bozionelos, 2004; Gellatly and Irving, 2001; Hurtz and Donovan, 2000; Rothmann and Coetzer,
2003), job satisfaction (Goodstein and Lanyon, 1999; Judge, Higgins and Cable, 2000; Thoresen,
Kaplan, Barsky, Warren and De Chermont, 2003), emotional intelligence (Salgado, 2002),
organisational engagement (Bozionelos, 2004), job proficiency (Salgado, 2002), organisational
commitment (Thoresen et al, 2003), work and time pressures (Dijkstra and Fred, 2005; Morgan and
de Bruin, 2010; Pienaar, Rothmann and Van de Vijver, 2007), work-life balance (Thomson and de
Bruin, 2007; Wayne, Musisca and Fleeson, 2004) and reaction to change (Vacola, Tsaousis and
Nikolaou, 2004). Research conducted by Kaushik and Tonk (2008) found a positive correlation
between the construct QWL and three of the Big Five dimensions of Personality, namely extraversion,
agreeableness and conscientiousness. In addition, research by Rothmann and Coetzer (2003)
indicated that Personality dimensions were related to management performance and identified
emotional stability, resourcefulness and agreeableness as being significantly related to management
performance. Shaw (2005, p 249) proposes that the success of QWL programmes will depend on the
ability of the organisation to reinforce high levels of trust, which in turn will improve organisational
performance.

Apart from its positive relationships with various dimensions of the QWL construct, as well as findings
directly relating it to the construct (Kaushik and Tonk, 2008), the Big Five Personality aspects are also
significant indicators of trust (Von der Ohe and Martins, 2010). However, it does seem as if there is a
lack of research into the relationship between QWL and Organisational Trust.
From the above the following hypotheses are formulated:

Hypothesis 1: There is a positive relationship between the organisational trust dimensions (the Big
Five and managerial practices) and the trust relationship.

Hypothesis 2: There is a positive relationship between the quality of work life dimensions and
organisational trust (the Big Five and managerial practices dimensions).
4. Research methodology
4.1 Research approach
Structural Equation Modelling (SEM) and correlation analysis were used to test the relationship
between the various factors or dimensions of Organisational Trust and QWL.
4.2 Research method
4.2.1 Research participants
All 282 sales representatives across an organisation were invited to participate in the research. In
total, 203 participants completed the online questionnaire (72% response rate). Of these, 133 were
male (65.5%) and 70 (34.5%) were female. The majority of respondents were African (124 or 61.1%),
below the age of 46 years (175 or 86%), and had a tenure of two to five years (80 or 39.4%).
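
As a quick arithmetic check on the reported figures: 203 / 282 ≈ 0.720, which matches the 72%
response rate quoted here and in the abstract.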
4.2.2 Measuring instruments
A combined Organisational Trust and QWL questionnaire consisting of six biographical questions, 92
Organisational Trust questions and 59 QWL questions was posted on a survey company's website
with an open invitation for sales employees to participate. Organisational Trust was measured using
the Trust Audit survey (Martins, 2000). Section five of the questionnaire encompassed the QWL
construct, which was measured by means of the Leiden Quality of Work Questionnaire (Van der Doef
and Maes, 1999, 2002).
5. Results
Inferential statistics included Cronbach's alpha and confirmatory factor analysis (CFA) to confirm
the reliability of the instruments. The SEM multivariate analysis technique was used to determine the
relationship between the constructs (Organisational Trust and QWL) and the independent dimension
of trust to test the theoretical model. CFA, path analysis and regression analysis within SEM were
used to test the two hypotheses. Two SEM approaches were subsequently followed, namely the
strictly confirmatory approach (to confirm a structural model specified by another researcher) and the
model development approach (to find models into which the data fitted well statistically) (Garson,
2004; Schumacker and Lomax, 2004).

5.1 Reliability analysis
Cronbach's alpha was used to determine the internal reliability of items within each factor (see
Table 1). An acceptable value for Cronbach's alpha is between 0.70 and 0.80, and values substantially
lower indicate an unreliable scale (Field, 2005). For the purpose of this research study, a reliability
coefficient of 0.70 or higher was considered an acceptable score of internal consistency. Based on
each factor's Cronbach's alpha, it was determined that most factors included within the Organisational
Trust dimension had strong internal reliability; the lowest score, 0.602, was obtained for Information
Sharing. The reliability coefficients of the factors which form part of the QWL dimension appear to
vary between -0.179 and 0.908, with five of these coefficients being above 0.7, which can be
regarded as acceptable internal consistency (Kline, 1999). The item analysis based on Cronbach's
alpha suggests that there was a negative relationship between some items, namely within decision
authority and job insecurity, even after recoding took place. Based on these reliability results,
information sharing (0.602) and decision authority (-0.179) were excluded from the SEM model due to
their weak Cronbach's alphas. There is, however, no obvious reason for the negative Cronbach's
alpha obtained for job insecurity, as there did not appear to be any coding error, and it was therefore
decided to include it as part of the model. Overall, it can be concluded that the Organisational Trust
questionnaire and its factors are internally consistent in measuring what they are intended to
measure. With regard to the Leiden Quality of Work Questionnaire and its factors, internal reliability
varies between the factors and can definitely be improved.
Table 1: Results of reliability analysis

Dimension                                 Cronbach's Alpha   N of items   Comments
Organisational Trust
  Personality
    Conscientiousness                     0.954              8
    Extraversion                          0.940              7
    Agreeableness                         0.980              8
    Emotional Stability                   0.952              5
    Resourcefulness                       0.852              7
  Managerial Practices
    Trust Relationship                    0.941              5
    Credibility                           0.944              15
    Work Support                          0.945              4
    Information Sharing                   0.602              4            Not included in SEM
    Team Management                       0.947              8
    Change which has occurred             0.940              11
    Interpersonal Trust                   0.874              9
Quality of Work Life
  Skill Discretion                        0.598              8
  Decision Authority                      -0.179             4            Not included in SEM
  Task Control                            0.536              4
  Work and Time Pressure                  0.354              3
  Role Ambiguity                          0.811              6
  Physical Exertion                       0.596              3
  Hazardous Exposure                      0.852              8
  Job Insecurity                          -0.125             3            Not included in SEM
  Lack of Meaningfulness                  0.613              3
  Social Support Supervisor               0.888              6
  Social Support Colleagues               0.908              11
  Job Satisfaction                        0.843              5            One negative question was recoded
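
For readers who wish to reproduce coefficients of this kind, Cronbach's alpha is straightforward to
compute from item-level responses. The following Python sketch is purely illustrative and uses
randomly generated Likert-type data, not the study's:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # Cronbach's alpha for a (respondents x items) score matrix.
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative call: 203 respondents answering an 8-item, 5-point scale.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(203, 8)).astype(float)
print(round(cronbach_alpha(scores), 3))  # near 0 for uncorrelated random items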
5.2 SEM results
Alternative models were tested on the basis of the theory and changes to the structural and/or
measurement models were made as suggested by the SEM modification indices. In figure 1, the
relationship between the constructs Organisational Trust and QWL is depicted. The path diagram and
parameter estimates are illustrated. Results revealed a significant chi-square of 622.252 based on
196 degrees of freedom with a probability of 0.000. The ratio of chi-square to degrees of freedom
(χ²/df) was equal to 3.175, indicating an adequate fit (a value of between 2 and 5 is believed to be a
good fit) (Bollen and Long, 1993). In contrast to this, an RMSEA value of 0.104 was obtained.
According to Garson (2004), an RMSEA value of 0.05 or less indicates a close approximation and
values of up to 0.08 suggest a reasonable fit of the model in the population. A value of 0.104
therefore suggests a moderate fit within the population.

Figure 1: Relationship between organisational trust and quality of work life

The GFI is 0.754, which also indicates a moderate fit. In addition, the CFI equals 0.910, reflecting a
good fit. The NFI equals 0.875 and the NNFI equals 0.894, both reflecting an adequate fit. It is
therefore believed that, based on these indices, the structural model achieved a moderate fit.
Analysing the SEM correlation coefficients between the various variables (see Table 2), the model
indicates moderate correlations between QWL and Managerial Practices (0.68) as well as between
QWL and Personality aspects (0.54).
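
The cut-offs cited in this section can be gathered into a small helper for reading such output. This is a
rough Python sketch built only from the thresholds quoted above (Bollen and Long, 1993; Garson,
2004), not a general-purpose fit assessor:

def assess_fit(chi2: float, df: int, rmsea: float, cfi: float) -> dict:
    # Label SEM fit indices against the cut-offs quoted in the text.
    ratio = chi2 / df
    return {
        "chi2/df": (round(ratio, 3), "adequate" if 2 <= ratio <= 5 else "poor"),
        "RMSEA": (rmsea, "close" if rmsea <= 0.05
                  else "reasonable" if rmsea <= 0.08 else "moderate at best"),
        "CFI": (cfi, "good" if cfi >= 0.90 else "weak"),
    }

# Values reported for the model in Figure 1.
print(assess_fit(chi2=622.252, df=196, rmsea=0.104, cfi=0.910))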

The Pearson product-moment correlation coefficient was furthermore used to calculate the
correlations between Organisational Trust, QWL, Personality and Managerial Practices (see table 3).
All correlation coefficients were significant at the 0.01 level (2-tailed).
Table 2: SEM correlation coefficients between organisational trust and QWL

Dimension                             Correlation      SE       P
QWL – Personality                     0.541            0.052    ***
QWL – Managerial Practices            0.679            0.039    ***
Personality – Managerial Practices    0.790            0.029    ***
e22 – e23                             0.475            0.055    ***
e6 – e7                               0.488            0.049    ***
e4 – e3                               0.453            0.056    ***
e5 – e9                               0.363            0.063    ***
e7 – e8                               0.358            0.052    ***
e13 – e14                             0.337            0.069    ***
e5 – e10                              -0.316           0.085    ***

SE = standard error; P = probability value; *** = significant at the 0.001 level (Garson, 2004).
Table 3: Pearson product-moment correlation coefficients

Dimension               Trust Relationship    QWL        Personality    Managerial Practices
Trust Relationship      1                     0.545**    0.793**        0.760**
QWL                     0.545**               1          0.502**        0.613**
Personality             0.793**               0.502**    1              0.702**
Managerial Practices    0.760**               0.613**    0.702**        1

Sig. (2-tailed) = .000 for all off-diagonal coefficients; N = 203, except for correlations involving QWL,
where N = 200.
** Correlation is significant at the 0.01 level (2-tailed)
* Correlation is significant at the 0.05 level (2-tailed)
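
Correlations such as those in Table 3 can be reproduced directly from scale scores. A minimal Python
sketch with synthetic data follows; the variable names and values are hypothetical, not the study's:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
qwl = rng.normal(size=200)                     # QWL scale scores
managerial = 0.6 * qwl + rng.normal(size=200)  # positively related scores

r, p = pearsonr(qwl, managerial)               # two-tailed test by default
print(f"r = {r:.3f}, p = {p:.4f}")             # ** would apply if p < 0.01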

Highly significant positive relationships (at a 0.01 level of significance) are evident between the Trust
relationship and Personality dimensions (0.793), Managerial Practices and the Trust relationship
(0.760) and Managerial Practices and Personality (0.702), suggesting that if Managerial Practices are
regarded as positive, the trust employees experience will increase accordingly. Moderate linear
relationships are evident between Managerial Practices and QWL (0.613), Trust relationship and
QWL (0.545) and QWL and Personality (0.502).

Further analysis seems to indicate that Personality aspects had less impact on trust (estimate of 1.51)
than Managerial Practices (estimate of 2.89). Within the Personality dimension, agreeableness had
the highest impact (estimate of 14.79) explaining 93.2% of the variance, followed by
conscientiousness with an estimate of 12.41, explaining 75.9% of the variance. Focusing on the
Managerial Practices, it seemed as though credibility had the highest impact (estimate of 13.11)
explaining 95.3% of the variance, and team management explained 91.5% of the variance with an
estimate of 7.47. Change which has occurred (estimate of 4.98) and interpersonal trust (estimate of
5.18) seem to have the lowest impact on trust, explaining 16% and 41.7% of the variance
respectively. Within the QWL dimension, social support from colleagues has the highest impact
(estimate of 6.75) explaining 97.5% of the variance, followed by social support from the supervisor
with an estimate of 4.01, explaining 93.4% of the variance. Hazardous exposure (estimate of 0.65)
and physical exertion (0.49) seem to have the lowest impact and only explain 2% and 5% of the
variance respectively. This might be due to the specific work environment of a sales representative,
as it seems they are not necessarily exposed to hazardous circumstances and physical exertion.
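
Under the standard CFA convention, the variance-explained figures quoted here correspond to
squared standardized loadings; in symbols (our notation, not the authors'):

R_i^2 = \lambda_i^2

where \lambda_i is the standardized loading of indicator i on its latent factor. A standardized loading of
roughly 0.965 for agreeableness, for instance, squares to approximately 0.932, the 93.2% reported
above; the unstandardized estimates (e.g. 14.79) depend on each indicator's measurement scale and
are not directly comparable.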

This research study therefore indicates that for sales representatives there is a stronger relationship
between QWL and Managerial Practices than between QWL and their Personality constructs.
6. Discussion
This research study can be seen as an exploratory attempt to test an integrated model consisting of
Managerial Practices, Personality aspects and QWL. In particular, the aim of this study was to
investigate the implied theoretical relationship between the dimensions making up the Organisational
Trust construct and those which form the QWL construct. The results of the analysis indicate a
positive relationship between QWL and Managerial Practices (0.68) but a lower relationship with the
Personality constructs (0.54).

Martins (2000) and Von der Ohe et al (2004) found agreeableness to be a significant manifestation of
the Big Five Personality aspects. This was confirmed by the results of this research. Also in
accordance with Martins's (2000) research, it seems that the Personality aspects have a lower impact
on Organisational Trust than Managerial Practices.
7. Conclusions and recommendations
From the empirical results the assumption can be made that if an organisation intends to improve the
satisfaction levels of sales representatives, the focus should be on improving the Managerial
Practices and QWL dimensions. In this environment a focus on the correct personality types will not
have a great influence on Organisational Trust or positively influence QWL. Research results
regarding the Organisational Trust construct have been supported by research carried out by Martins
(2000), Martins and Martins (2002), Martins and Von der Ohe (2002), Von der Ohe et al (2004) and
Von der Ohe and Martins (2010).
References
Bagraim, J. J. and Hime, P. (2007). The dimensionality of Workplace Interpersonal Trust and its relationship to
Workplace Affective Commitment. SA Journal of Industrial Psychology, Vol.33, No 3, pp 43-48.
Ballou, B. and Godwin, N. H. (2007). Quality of "Work Life": Have you invested in your organisation's future?
Strategic Finance, October, pp 41-45.
Barrick, M. R. and Mount, M. K. (1991). The Big Five Personality dimensions and job performance: A meta-
analysis. Personnel Psychology, Vol. 44, No 1, pp 1-26.
Bews, N. and Rossouw, D. (2002). Contemporary organisational change and the importance of trust. SA Journal
of Industrial Psychology, Vol 28, No.4, pp 2-6.
Bollen, K. A. and Long, J. S. (Eds.). (1993). Testing Structural Equation Models. Newbury Park: Sage.
Bozionelos, N. (2004). The relationship between disposition and career success: A British Study. Journal of
Occupational and Organizational Psychology, Vol. 77,No.3, pp 403-420.
Büssing, A. (2002). Trust and its relations to commitment and involvement in work and organisations. SA Journal
of Industrial Psychology, Vol.28, No.4, pp 36-42.
Carmeli, A. (2005). The relationship between organisational culture and withdrawal intentions and behaviour.
International Journal of Manpower,Vol. 26,No 2, pp 177-195.
Church, T. A. (2000). Culture and personality: Toward an integrated cultural trait psychology. Journal of
Personality, Vol.68, pp 651-704.
Dijkstra, E. J. and Fred, W. V. (2005). Separate and Joint effects of medium type on consumer responses: A
comparison of television, print and the internet. Journal of Business Research,Vol. 58, No. 3, pp 377-386.
Esterhuizen, W. and Martins, N. (2008). Organisational justice and employee responses to employment equity.
South African Journal of Labour Relations, Vol.32, No.2, pp 66-85.
Field, A. (2005). Discovering Statistics Using SPSS (2nd ed.), Sage Publications, London.
Garson, G. D. (2004). Structural Equation Modeling. Retrieved September 8, 2010, from
http://www2.uta.edu/sswmindel/S6367/SEM/Principles%20of%20SEM.pdf
Gellatly, I. R. and Irving, P. G. (2001). Personality, Autonomy, and Contextual Performance of Managers. Human
Performance, Vol.14, No.3, pp 231-245.

Gillespie, N. A. and Mann, L. (2004). Transformational leadership and shared values: the building blocks of trust.
Journal of Managerial Psychology, Vol.19,No.6, pp 588-607.
Goodstein, L. D. and Lanyon, R. I. (1999). Applications of Personality Assessment to the Workplace: A Review.
Journal of Business and Psychology,Vol.13, No.3, pp 291-322.
Hannif, Z., Burgess, J. and Connell, J. (2008). Call centres and the quality of work life: Towards a research
agenda. Journal of Industrial Relations,Vol.50, No.2, pp 271-284.
Hay, A. (2002). Trust and Organisational change: An experience from manufacturing. SA Journal of Industrial
Psychology, Vol.28,No.4, pp 40-44.
Huang, T., Lawler, J. and Lei, C. (2007). The effects of Quality of Work Life on commitment and turnover
intention. Social Behavior and Personality,Vol. 35,No.6, pp 735-750.
Hurtz, G. M. and Donovan, J. J. (2000). Personality and Job Performance: The Big Five Revisited. Journal of
Applied Psychology, Vol.85, pp 869-879.
Judge, T. A., Higgins, C. A. and Cable, D. M. (2000). The Employment Interview: A Review of Recent Research
and Recommendations for Future Research. Human Resource Management Review, Vol.10, No.4, pp 383-406.
Kaushik, N. and Tonk, M. S. (2008). Personality and Quality of Work Life. ICFAI Journal of Organizational
Behavior,Vol. VII,No.3, pp 34-46.
Kirby, E. L. and Harter, L. M. (2001). Discourses of diversity and quality of work life. Management
Communication Quarterly, Vol.15,No.1, pp 121-127.
Kline, P. (1999). Handbook of Psychological Testing (2nd ed.), Routledge, New York.
Kotzé, M. (2005). The nature and development of the construct "quality of work life". Acta Academica,
Vol.37,No.2, pp 96-122.
Lämsä, A.-M. and Pučėtaitė, R. (2006). Development of organisational trust among employees from a contextual
perspective. Business Ethics: A European Review, Vol.15,No.2, pp 130-141.
Long, C. P., Sitkin, S. B. and Cardinal, L. B. (2003). Managerial Action to Build Control, Trust, and Fairness in
Organizations: The Effect of Conflict. Paper presented at the International Association of Conflict
Management Conference, pp 1-57. Melbourne, Australia.
Martel, J. P. and Dupuis, G. (2006). Quality of Work Life: Theoretical and Methodological Problems, and
Presentation of a New Model and Measuring Instrument. Social Indicators Research, Vol.77,No 2, pp 333-
368.
Martins, N. (2000). Developing a Trust Model for Assisting Management During Change. SA Journal of Industrial
Psychology, Vol.26,No.3, pp 27-31.
Martins, N. and Von der Ohe, H. (2002). Trust as a factor in determining how to attract, motivate and retain talent.
SA Journal of Industrial Psychology, Vol.28,No.4, pp 49-57.
May, B. E., Lau, R. S. and Johnson, S. K. (1999). A longitudinal study of quality of work life and business
performance. South Dakota Business Review, Vol.58,No.2, pp 3-7.
Mayer, R. C., Davis, J. H. and Schoorman, F. D. (1995). An integrative model of organisational trust. Academy of
Management Review, Vol.20, pp 709-734.
McEvily, B., Weber, R. A., Bicchieri, C. and Ho, V. T. (2006). Can groups be trusted? An experimental study of
trust in collective entities. In R. Bachmann and A. Zaheer (Eds.), Handbook of Trust Research (pp 52-67),
Edward Elgar Publishing Limited, Cheltenham.
Phares, J. E. (1991). Introduction to Personality (3rd ed.). New York: Harper Collins.
Pienaar, J., Rothmann, S. R. and Van de Vijver, F. J. (2007). Occupational Stress, Personality Traits, Coping
Strategies, and Suicide Ideation in the South African Police Service. Criminal Justice and Behavior,
Vol.34,No.2, pp 246-258.
Rathi, N. (2010). Relationship of Quality of Work Life with Employees' Psychological Well-Being. International
Journal of Business Insights and Transformation, Vol.3,No.1, pp 53-60.
Rothmann, S. and Coetzer, E. P. (2003). The Big Five Personality Dimensions and Job Performance. South
African Journal of Industrial Psychology, Vol.29No.1, pp 68-74.
Rousseau, D. M., Sitkin, S. B., Burt, R. S. and Camerer, C. (1998). Not so different after all: A cross discipline
view of trust. Academy of Management Review,Vol. 23,No.3, pp 393-404.
Salgado, J. F. (2002). The Big Five Personality Dimensions and Counterproductive Behaviours. Journal of
Selection and Assessment, Vol.10,No.1/2, pp 117-125.
Schneider, B. J. and Dachler, P. H. (1978). A Note on the Stability of the Job Description Index. Journal of
Applied Psychology, Vol.63, No.5, pp 650-653.
Schoorman, F. D., Mayer, R. C. and Davis, J. H. (2007). An Integrative model of Organisational Trust: Past,
Present and Future. Academy of Management Review, Vol.32,No.2, pp 344-354.
Schumacker, R. E. and Lomax, R. G. (2004). A Beginner's Cuide to Structural Equation Modeling (2nd ed.),
Lawrence Erlbaum Associates Inc,New Jersey,
Sekwena, E. (2007). Interaction between work and personal life: Experiences of Police Officers in the North West
Province. Acta Criminologica, Vol.20,No.4, pp 37-54.
Shaw, W. H. (2005). Business ethics, Thomson Wadsworth, Belmont.
Sirgy, M. J., Efraty, D., Siegel, P. and Lee, D. (2001). A New Measurement of Quality of Work Life (QWL) based
on Need Satisfaction and Spillover Theories. Social Indicators Research, Vol.55,No.3, pp 241-302.
Srivastava, A. K. (2008). Effect of Perceived Work Environment on Employees' Job Behaviour and
Organizational Effectiveness. Journal of the Indian Academy of Applied Psychology, Vol.34,No.1, pp 47-55.
Thomson, L. and de Bruin, K. (2007). Personality as predictor of life balance in South African corporate
employees. Journal of Contemporary Management, Vol.4, pp 68-85.
193

Nico Martins and Yolandi van der Berg

Thoresen, C. J., Kaplan, S. A., Barsky, A. P., Warren, C. R. and De Chermont, K. (2003). The Affective
Underpinnings of Job Perceptions and Attitudes: A Meta-Analytic Review and Integration. Psychological
Bulletin, Vol.129,No.6, pp 914-945.
Tschannen-Moran, M. and Hoy, W. K. (2000). A Multidisciplinary Analysis of the Nature, Meaning, and
Measurement of Trust. Review of Educational Research, Vol.70,No.4, pp 547-593.
Vacola, M., Tsaousis, I. and Nikolaou, I. (2004). The Role of Emotional Intelligence and Personality Variables on
Attitudes toward Organisational Change. Journal of Managerial Psychology, Vol.19,No.2, pp 88-110.
Van der Doef, M. and Maes, S. (1999). The Leiden Quality of work Questionnaire: Its construction, Factor
structure, and Psychometric Qualities. Psychological Reports, Vol.85, pp 954-962.
Van der Doef, M. and Maes, S. (2002). Teacher-specific quality of work versus general quality of work
assessment: A comparison of their validity regarding burnout, (psycho)somatic well-being and job
satisfaction. Anxiety, Stress and Coping, Vol.15.No.4, pp 327-344.
Von der Ohe, H. and Martins, N. (2010). Exploring Trust Relationships during times of change. Journal of Human
Resource Management, Vol.8,No.1, pp 1-9.
Von der Ohe, H., Martins, N. and Roode, M. (2004, Winter). The influence of credibility on employer-employee
trust relations. South African Journal of Labour Relations, Vol.28,No.2, pp 4-32.
Walton, R. E. (1973). Quality of Work Life: What is it? Sloan Management Review, Fall, pp 11-21.
Wayne, J. H., Musisca, N. and Fleeson, W. (2004). Considering the role of personality in the work-family
experience: Relationships of the big five to work-family conflict and facilitation. Journal of Vocational
Behavior, Vol.64, pp 108-130.Wilson, M. G., DeJoy, D. M., Vandenberg, R. J., Richardson, H. A. and
McGrath, A. L. (2004). Work Characteristics and Employee Health and Well-Being: Test of a Model of
Healthy Work Organization. Journal of Occupational and Organizational Psychology, Vol.77.No.4, pp 565-
588.
Wright, T. A. and Bonnet, D. G. (2007). Job Satisfaction and Psychological Well-Being as Nonadditive Predictors
of Workplace Turnover. Journal of Management, Vol.33,No.2, pp 141-160.
Wyatt, T. A. and Wah, C. Y. (2001). Perceptions of QWL: A Study of Singaporean Employees Development .
Research and Practice in Human Resource Management, Vol.9,No.2, pp 59-76.
194
Identification and Governance of Emerging Ethical Issues
in Information Systems: Empirical and Theoretical
Presuppositions
Laurence Masclet and Philippe Goujon
LEGIT, Computer Sciences Faculty, University of Namur (FUNDP)
Namur, Belgium
laurence.masclet@fundp.ac.be
pgo@info.fundp.ac.be

Abstract: Our paper addresses the topic of the conference through a critical approach to the issues in IS design and development and, more generally, through a critical view of the way Information Systems are created and managed, and especially of how ethics is implemented in IS projects. We seek to identify the presuppositions in IS ethical practice and how they relate to presuppositions in current theories of governance. IS are a growing part of the functioning of industry nowadays, and the identification and governance of the ethical issues they raise have become a vital matter for society. This paper describes a study among Information Systems professionals about how they perceive the emerging ethical issues present in new IS projects, how they acknowledge them, and the strategies in place to address them. This is the first part of the research, and it is where most empirical studies stop. The originality of our research is that it combines the empirical qualitative research (conducted through online questionnaires and follow-up Skype interviews) with a critical theoretical perspective on the governance theories that determine ethical strategies in IS. We go back and forth between governance theories and IS practices to find the presuppositions at work on both sides. Most governance theories fail to address the problem of the actual implementation of their prescriptions. Our diagnosis is that current and traditional governance theories (derived from Rawls, Habermas, etc.) rest on presuppositions that do not allow them to address implementation properly. For example, they assume that it is sufficient to come to a consensus between stakeholders during the construction of a norm (following strict procedures to ensure fairness) to reach legitimacy, and to reach as a by-product the application of the norm. However, the last implication is not necessarily true: the reasons why people accept a norm as valid are not necessarily reasons to adopt it as a maxim for action. This presupposition comes from a more general rationalist background. To overcome these presuppositions we propose a more comprehensive governance theory, which takes into account the context of application of the norms within their construction (the presuppositions, values, ways of thinking, etc. found in the interviews).

Keywords: ethics, information systems, emerging technologies, governance, interview, empirical and theoretical
study, links between theory and practice
1. Introduction: Setting the problem (diagnosis of the state of IS and ethical governance)
A focus on ethics has become increasingly necessary in the field of Information Systems (Felt, 2005). As stated by Pearson et al. (1996), there are many reasons for IS professionals to try to progress in the ethical field and to take a reflexive point of view on their own work. The main reason is probably the growing impact of their jobs on society. Indeed, IS professionals have a responsibility in selecting the information that will lead decision makers to their decisions. The selection of information is one of the most crucial skills required of IS professionals. With the Internet, the situation has reversed: it is no longer the lack of information that leads to major problems, or induces decision makers to take wrong decisions, but the excess of available information (Wurman, 2000) and the variability of its quality. Quality is often difficult to assess, even if there are some rules and clues, such as the existence and reliability of references. The Information Systems professional, and in fact the Information System that he or she manages, is the link between the flow of information and the decision maker. The mechanism of information selection is in his or her hands, and any bias in that mechanism will lead to trouble.

As a basis for decision, the selection of information needs to be ethical: not only to produce good decisions for the sake of society in general, but also to produce useful decisions for the firm, decisions that will not cause trouble in the process of acceptance by the community of users and society at large. Ethical decisions at the level of the IS professional prevent bad decisions for society and bad decisions for the company, because the two are linked. In the absence of acceptance from society, which includes the employees of the firm and its stakeholders, a decision will provoke negative reactions, even if it is totally legal, well thought out, and actually useful for the users.

So, implementing ethics in Information Systems management and development seems to be a good
and necessary thing for both civil society and industry. As a consequence, it needs to be encouraged
by policy makers.

But the will to implement ethics in IS does not seem to be sufficient. Misunderstandings and ineffectiveness of ethical assessment and ethical awareness seem to be commonplace in the process. As we will see, there are many presuppositions that restrain and even stop the process of a good and fully comprehensive ethical governance of IS. Our approach is to determine the presuppositions present in both ethical theories and IS practices, through an analysis of interviews with IS professionals from around the globe. An empirical study was carried out by De Montfort University, the partner of the University of Namur in the IDEGOV project (IDEntification and GOVernance of emerging ethical issues in Information Systems), via qualitative interviews by telephone or Skype with IS professionals representing every continent and every scale of business (contact with the IT businesses was made possible with the help of the IMIS, the Institute for the Management of Information Systems). Online questionnaires were also distributed at ETHICOMP 2011, 'The Social Impact of Social Computing', at Sheffield Hallam University in the UK.

Our analysis relies on a grid of analysis designed before the interview process started, so that the analysis of the interviews began from a fully grounded theoretical background. The grid of analysis helped us to find the best questions to ask the interviewees, according to parameters identified through our analysis of the limits of the theoretical trends in governance. By making the grid of analysis and describing our parameters and our findings as clearly as possible, we aim to make our own framing apparent, which seems to be the only way to deal with the classic epistemological problem of applying in our own research the same scheme we criticise in others.
2. Presuppositions in ethical theories
There seems to be a problem with the way ethics is implemented in IS. Our hypothesis is that this problem does not come only from a lack of concern in the IS field; it raises more theoretical issues. It is a problem within ethical theories themselves, and in particular in how they take into account the possibility of their own implementation. Our point of view is to take ethics not as a solution handed to professionals, who would only have to be kind enough to implement it in their work. Ethics is in itself a question. It is in itself a problem.

A presupposition has long been at work in the area of ethics and governance: that the only way to do ethics in highly differentiated societies is to rely on rational procedures. Philosophers like Habermas, Apel or John Rawls took the rationalisation of the world as a way of resolving value conflicts in an increasingly heterogeneous world. Max Weber, who famously diagnosed the rationalisation of the world and, as a consequence, the disappearance of ethics, did not foresee the possibility of a radical change in ethics: the possibility for ethics itself to become formal (Ferry, 2002).

This change in ethics took place during the twentieth century. The central question of ethics remains 'what ought we to do?', but with an inflection: what procedure can we use to guarantee that what we do is legitimate? This question has led to the construction of theories in which a norm that guarantees legitimacy and avoids value conflicts is built by means of rational argumentation and consensus (Habermas, 1981).

The predominance of rationality and its ability to resolve any question have already been put in question (Simon, 1972), but trust in rational procedure and the use of procedural theories are still very much alive, especially in discussions of the construction of the norm and its application to technical fields. Why this is the case cannot be discussed here. What is relevant to us is the impact of that prevalence of procedural thinking on the question of ethical implementation.

The problem of implementing ethical theories lies in the theory itself, which fails to take that problem into account in its very construction. The application of the norm is supposed to follow naturally from the legitimacy of the norm.
Procedural ethical theories, in particular, first set themselves the task of indicating a
procedure through which norms and modes of action can be rationally grounded or
criticized, as the case may be. Because they must deal with this task separately, the
impartial application of valid principles and rules arises only as a subsequent problem.
(Habermas, 1991)
The important part of the problem for philosophers like Habermas and Rawls, who are the main influences on contemporary political theory, is not the application of the norm. That application is a subsequent problem, a problem that can be fixed afterwards.

The separation of the application level from the norm-construction level seems to be one of the sources of the lack of communication between ethics and technology. This separation is due to a number of presuppositions (Lenoble and Maesschalck, 2003). The first presupposition is to assume that the intention to adopt a norm is sufficient to make the norm effective. This presupposition is called 'intentionalist' by Maesschalck and Lenoble. However, it is well known that the rational will is very often overcome by other instincts, other preoccupations. The Greeks called this akrasia; Christian thinkers wrote entire treatises on the weakness of the will (Saarinen, 1994). Why, then, do contemporary political theories assume that once norms are rationally constructed and accepted by rational consensus, people will act according to them?

Another presupposition found in most governance theories is the 'mentalist' presupposition: that the conditions determining the effectiveness of norms are linked to rules presupposed within the mind, and consequently are a function of mental capacities. Since mental capacities are independent of the external context of the subject, most procedural governance theories ignore the question of the effectiveness of the norm. In other words, they treat the effective implementation of norms as a non-question, because it is held to depend not on external governance but on the internal workings of the norm itself. The mentalist presupposition thus holds that the existence of norms is enough to activate mechanisms in the mind that will ensure effective implementation.

The third presupposition isolated by Maesschalck et al. is the 'schematising' presupposition. It assumes that the mind has a set of rules (or schemes, in Kant's words) that predetermines the effect of a norm and does not depend on any context exterior to that of the thinker. This is commonly seen when participants in a participatory approach 'come to the setting with their own particular ethical framing, or with some preconceptions as to what ethical issues might arise' (Rainey and Goujon, 2009).

Another presupposition, identified by another author, Jean-Marc Ferry, is that most ethical theories and governance arrangements ignore that the conditions under which we accept a norm are not the same as the conditions of its justification (Ferry, 2002). This means that we can accept a norm for reasons other than its rational justification, and that we can accept a norm without agreeing with its justification.

Those presuppositions can be found in traditional ethical governance theories. However, the more contemporary ones have embraced them as well, even if they shape them in another way. We can say that the disregard in ethical theories for the problem of the resolution of ethical issues is related to those presuppositions. There is still, and maybe more than ever, a separation between the problem of norm construction and the problem of norm application, which leaves a blind spot concerning the context of application of the norms in governance theories (including that of Maesschalck et al.). A further presupposition is that the determination of ethical issues and the construction of a norm are sufficient to resolve every ethical issue. This presupposition is commonplace in ethical governance projects concerning technology. Indeed, many projects focus only on the identification of ethical issues in a particular technology or technical field, and stop there, assuming that once the ethical issues have been identified they will naturally be taken in charge and resolved. This is obviously related to the intentionalist presupposition. Our research has shown that awareness of an issue is indeed not sufficient to resolve it (Masclet and Goujon, 2011). Identification can be a first step, but it is often misleading, because it can be used as an alibi by industries and policy makers.

Knowing where the problem is, where there is space for a potential ethical problem, does not in itself resolve the problem. It may raise awareness and carefulness, but that consequence cannot be taken for granted and, above all, is not necessarily sufficient to avoid the issues. Some issues require more than awareness to be dealt with. Awareness is a good first step, but it cannot be assumed that, because people are aware of an issue, they will necessarily take care of it and make sure the potential issue is not actualised. This is all connected with the theory of bounded rationality.

We cannot expect people to act fully rationally. It is not because one is aware of an issue that he or she will act to resolve it. There are too many interests, too many contextual incentives acting on policy makers, on technology developers and on every stakeholder, for us to assume that it is enough to point out an issue to reach a solution. As we will see, this presupposition is shared by many IS professionals, as well as by ethics professionals. This, in a way, makes the problem of the ineffectiveness of norms worse because, if issue identification is indeed the only area shared by IS professionals and philosophers, it can be seen by everybody of good will as a nice and easy way of implementing ethics. However, the identification of ethical issues does not resolve them (Van den Hoven, 2008), and can even hide a more general problem in the field and lead to contextual blindness. Indeed, giving professionals a list of issues to beware of makes them less aware of other contextually induced issues. It takes the responsibility off professionals' shoulders and makes them unreactive to a possible ethical disaster that is not on the list because it could not be predicted. This is the same critique that can be made of codes of conduct and ethical guidelines.

Before discussing the best ways to overcome those presuppositions and the negative impact they have on the relationship between ethics and technology, we would like to pass to the other side of the gap and look at the presuppositions about ethics in the IS field. This section relies on the interviews conducted for the IDEGOV project by De Montfort University (Wakunuma and Stahl, 2011) and on the analysis of the interviews that we carried out in collaboration (Masclet and Goujon, 2012).
3. Presuppositions in IS
We said in our introduction that ethics is (or at least should be) profitable for enterprises, and that IS, like any technology, would be better implemented and safer, not only on the human side but also from the economic point of view, if ethical behaviour were embedded in it. Now we have to see what is really being done in IS concerning ethics.

As shown in the interviews we conducted for IDEGOV (Wakunuma and Stahl, 2012), Information Systems professionals are aware of ethics to a certain extent. They usually implement a top-down approach to ethical issues. The main solution given to ethical issues seems to be awareness, as we mentioned earlier: it is assumed that information is the best way to deal with ethical issues. For example, to deal with the ethical issues that the installation of cameras in shops or banks can provoke, it seems sufficient to explain to the users why the cameras are installed.

This is a remarkable example of how the principles of bioethics have migrated into every technological field, owing to the development of bioethics. It seems that information and free consent are now seen as what ethics is about (even if, in our example, consent is not even invoked). This leads to top-down management of ethical questions, which are treated by the chief or at least the person responsible (personnel director or IT manager). Society and users are not involved per se in the mechanism for treating an ethical issue. Reducing ethical strategies to information and consent is also a way for IS professionals not to change their systems or their technologies, and so not to change their point of view: the fault is put on the ignorance of the user and of society. This prevalence of awareness and information is also due to the very positive connotation that the notion of information carries nowadays (a notion that came out of the development of cybernetics and mathematical theories of information, invaded every sphere of society within years, and became part of the new ideology of the modern world (Wiener, 1950)). Even where we found reflexivity in the ethical behaviour of the IS professionals, it was always restricted to first-level reflexivity (Argyris and Schön, 1978), that is, thinking about the reasons for one's own actions, and never extended to second-level reflexivity, which would involve critical thinking about the presuppositions and constructions behind one's own actions.

Moreover, the governance strategies in place seem never to use co-construction or even consultation. They usually use the standard model (Joly, 2001), an expert-driven model in which normativity is provided from a (supposedly) objective point of view, or the revised standard model, an extension of the standard model that considers the social construction of the problem, but from a very external point of view, as in risk management, for example. In this revised model, public influence and participation in risk management are regarded with great suspicion. In other words, the context is considered, but reduced to risk assessment according to the main framing. We do not say that ethical problems are not discussed in the IS field, in IS companies, or in companies that use IS. But when companies want to implement ethics in their practices, they very often use ethical tools that are highly decontextualised, and that take into account neither the social construction of the ethical problem nor the question of the acceptance of their product by the users in the construction of their norms of behaviour. IS professionals very often rely on their own experience, and when they do discuss ethics, it is very often with other members of the team. They do not seem keen on drawing on the experience of people from civil society, or even from other professions related to theirs. There is, however, a noticeable exception for legal professionals, who are often consulted. Nevertheless, law is distinct from ethics, because there will always be a difference between what we ought to do and what is allowed. The reduction of ethics to law is quite a popular presupposition. It leads to misunderstandings about ethics and can have bad consequences. The reduction may have been induced by the proceduralisation of ethics, which has become more and more a matter of compliance. However, the question is not the same. Lawyers can say what is allowed or not in a certain context; being ethical involves something else. It is not only a matter of compliance, and it cannot be externally imposed. That particularity is what makes it so difficult to implement: the very term 'implementation' is misused. Ethics cannot be implemented; it has to come from the persons themselves (Masclet and Goujon, 2012).

But that is not to say that ethical theories have no impact. Even if ethics somehow requires a will, that will is a necessary condition but not a sufficient one. That is where ethical governance theories can help.
4. Hints for a comprehensive solution
How can ethics help? In our research, we found various closures and presuppositions among ethics people and technology people. There is also a gap between the two communities, due to various reasons: differences of jargon, philosophers' lack of interest in implementation, IS professionals' lack of interest in ethics and their assimilation of ethics to law, the separation between the human sciences and the pure sciences, notably in education, and so on. The role of ethics nowadays seems to be to open framings and allow full reflexivity (Schön, 1983) for everybody, in order to build bonds between the two disciplines. This approach involves reflexivity about the trends of ethics and governance research itself. Such a task has always been prevalent in the work of philosophers: reflection on one's own activity is at the heart of philosophy. Putting that task back at the centre of research in ethics is a good first step. However, the presuppositions in ethics are not really about reflexivity per se. The problem might be that ethics researchers are too focused on the theories they are elaborating. A good balance has to be found between reflexivity in ethics and a renewed interest in the context of application of the theories. Our diagnosis, after our research for EGAIS and IDEGOV, is that ethical theories have to take into account their own context of application within the theory itself; that is, they must include an opening to challenges from the field to which they are to be applied, and from society in general. The validity of a theory, ensured by ethical procedures, does not necessarily mean that the theory will actually be applicable. There is more to take into account than the legitimacy of the procedure used to create norms. Within the procedure, it should be acknowledged that the validity of a norm is not always sufficient to ensure its acceptance and, furthermore, that somebody who rationally accepts a norm, and even helps to create it within a discussion framed by ethical procedure (taking into account every argument, the law of the best argument, and so on), will not necessarily take that norm as a maxim for action (Ferry, 2002).

There are many reasons for that gap in ethical theories. One is the rejection of values from the field of norm elaboration. Ejecting values from the discussion, however, does not seem sufficient to keep them out of the field of application of the norms. On the contrary, values, beliefs, life experiences, individual points of view, and everything that influences behaviour without complying with the exigencies of rational discussion can disrupt the application of the norm. Opening theories of norm construction to context is one solution that ethics can offer. What does that mean exactly? First, it implies overcoming the gap between ethical theories and so-called field implementation. As we said, this cannot be about the implementation of ethics into a field (any field). The process of reaching ethical behaviour and ethical innovation and technological development has to embrace ethics as a collective task, where everybody is responsible for, and a contributor to, every step of the process, from the first idea to the launch of a new technology in society and the analysis of its impacts, its acceptance and its use.

There is a political call, notably from the European Commission, to improve the mechanisms of consultation and participation at the development stage of a new technology. However, we believe that, even if this is of course necessary, the framework within which they want to improve the procedure does not achieve full reflexivity from every stakeholder and, as a consequence, stays very external. The framework used keeps the process of norm construction outside the technology project, which has to apply the rules at some point and show that the ethical procedures have been respected before the ethical review, as seen in Figure 1 (to some extent, and allowing for the necessary simplification of a schema). There is, in this process, a mechanism of participation by the technological community in the construction of the norm, but this participation is highly decontextualised. It does not involve a particular project and does not, as a consequence, rely on the particular knowledge of the scientists who participate; each is a stakeholder like anybody else (except to the extent that he or she may raise better arguments, which can be taken into account). The problem is that, even if the ethical process involves scientists and members of technology projects as stakeholders and partners in the discussion that will lead to the norms, the norms will still be applied in a very external way, at a very specific point in the development of the technology (usually at the end). This is related to the limits of the use of experts and their cognitive closure (Masclet and Goujon, 2011).
[Figure 1 (diagram): the ethics community runs the process of norms construction, in which the technology community participates as a stakeholder; the technology project proceeds from first idea to an ethical review that is external to the project.]
Figure 1: Relation between ethics and technology projects, as it is now
What we are trying to design in our project is a way of involving ethical thinking at every step of the development of a project. For that, we cannot rely only on ethical experts. We have to involve the scientists and the developers in the process from the beginning (Figure 2).
[Figure 2 (diagram): the ethical and technological communities, working in constant collaboration, conduct a process of norm construction that integrates the particular technology project and its context of application.]
Figure 2: Relation between ethics and a technology project, as it should work
This has to involve training and learning mechanisms; it is also necessary to reconnect the communities of philosophy and science, and to overcome the presuppositions at work on both sides.
5. Conclusion
We have to find ways to open the framings of the developers and the users of technologies, so that they act ethically at every step of their projects, but we also have to find a way to open the framings of the philosophers of governance.

The way of doing ethics has an impact on ethical theories. Of course, we do not say that ethics has to come straight from the scientists and project developers alone and that ethical theories do not matter. The input of ethics into technological development would be to create theories that can back up ethical discussion and provide processes for reaching fairness and reflexivity; on the other hand, the input of professionals could be to teach ethical theorists how to take into account the context of application of their theories. This implies not only the specificities of a particular field or project, but also the particularity of the people who conduct those projects and of the people who will use the technology. A way of doing that is to open ethics and governance theories to levels of discourse other than pure argumentation and reason. We have to think about a way to include narration, interpretation and reconstruction, values and context in the procedure of governance. This is currently being tested as a theory called comprehensive proceduralism, which develops some of the hints towards a solution presented in this article into a more elaborate theory.

Our approach involves constant reflexivity from every actor involved, in order to better understand the framings for action, the contextual incentives, and the background theories that inform practices. Opening framings can also be done by involving other actors. This is why we are currently analysing the impact that civil society organisations (CSOs) can make on technological projects, using our method of exploring the background presuppositions that inform notions like participation.
Acknowledgments
This article relies partially on studies done for the IDEGOV project by the University of Namur. IDEGOV stands for Identification and Governance of emerging ethical issues in Information Systems. The project was funded by the CIGREF Foundation between 2011 and 2012 and involves the Laboratory for Ethical Governance of Information Technology (LEGIT) at the University of Namur, Belgium, and the Centre for Computing and Social Responsibility at De Montfort University, Leicester, UK.
References
Argyris, C. and Schön, D. A. (1978), Organisational Learning, Vol. 1: A Theory of Action Perspective, Addison-Wesley, Reading, MA.
Brey, P. (1999), "Method in Computer Ethics: Towards a Multi-Level Interdisciplinary Approach", Ethics and Information Technology, 2:3, 1-5.
Felt, U. (2005), Science, Technology and Democracy, Report to the European Commission, Science in Society Forum, Brussels, March 9-11, 2005, Session 2.
Ferry, J.-M. (2002), Valeurs et Normes: la question de l'éthique, Éditions de l'Université de Bruxelles, Brussels.
Habermas, J. (1981), The Theory of Communicative Action.
Habermas, J. (1991), Erläuterungen zur Diskursethik, Suhrkamp, Frankfurt am Main.
Honneth, A. (1996), The Struggle for Recognition: The Moral Grammar of Social Conflicts, Polity Press.
Joly, P.-B. (2001), "Les OGM entre la science et le public? Quatre modèles pour la gouvernance de l'innovation et des risques", Économie Rurale, 266.
Laudon, K. and Laudon, J. (2011), Management des systèmes d'information (11e édition), Pearson Education.
Lenoble, J. and Maesschalck, M. (2003), Toward a Theory of Governance: The Action of Norms (trans. J. Paterson), Kluwer Law International.
Masclet, L. and Goujon, P. (2011), IDEGOV D.1.1: Grid of Analysis, CIGREF Foundation.
Masclet, L. and Goujon, P. (2011), IDEGOV D.3.2: Model of Current and Emerging Governance Strategies, Map of Governance and Ethics, CIGREF Foundation.
Pearson, J. M., Crosby, L. and Shim, J. P. (1996), "Modeling the relative importance of ethical behavior criteria: A simulation of information systems professionals' ethical decisions", Journal of Strategic Information Systems, 5 (4), 275-291.
Rainey, S. and Goujon, P. (2009), EGAIS 4.1: Existing Solutions to the Ethical Governance Problem and Characterisation of their Limitations.
Rawls, J. (1971), A Theory of Justice, Harvard University Press, Cambridge, MA.
Saarinen, R. (1994), Weakness of the Will in Medieval Thought: From Augustine to Buridan, E. J. Brill, the Netherlands.
Schön, D. (1983), The Reflective Practitioner: How Professionals Think in Action, Temple Smith, London.
Simon, H. A. (1972), "Theories of Bounded Rationality", in C. B. McGuire and R. Radner (Eds), Decision and Organization, North-Holland Publishing Company.
Van den Hoven, J. (2008), "Moral Methodology and Information Technology", in K. Himma and H. Tavani (Eds), The Handbook of Information and Computer Ethics, Wiley, Hoboken, NJ, pp. 49-68.
Wakunuma, K. and Stahl, B. (2011), IDEGOV D.1.2: Data Collection Strategy Document, CIGREF Foundation.
Wiener, N. (1950), The Human Use of Human Beings: Cybernetics and Society, The Riverside Press (Houghton Mifflin Co.).
Wurman, R. (2000), Information Anxiety 2, Pearson Education.
Breaking Consensus in IS Evaluations: The Agitation
Workshop
John McAvoy, Tadhg Nagle and David Sammon
Business Information Systems, University College Cork, Ireland
j.mcavoy@ucc.ie
t.nagle@ucc.ie
dsammon@afis.ucc.ie

Abstract: As researchers evaluate organisations, there is a desire for a consensus from those within the organisations who are participating in the research. A common consensual perspective from a team appears to reflect an optimal state where those being studied have a common understanding of the current state of events within the context of their environment. The question arises, though, whether an evaluation finding consensus reflects reality: there are a variety of reasons why a common understanding may be a false consensus. This paper proposes an evaluation method where, when symptoms of problems such as groupthink are identified, a consensus of perspectives is challenged before it is considered valid. This is achieved in a workshop where participants reflect on their own perception of reality and represent this reality in a matrix of influencing and relevant factors. The individual matrices are then combined and used to highlight disparities in the participants' perspectives through a single matrix visualisation. Discussion in the workshop then focuses on the areas, highlighted by the matrix, where differences of perspective are identified. In effect, the common understanding presented by those being evaluated will be challenged, and a new common understanding will have to be created.

Keywords: common understanding, consensus, workshops, groupthink, evaluation
1. Introduction
Organisations are faced with increasing demands to deliver, and evaluations are used to determine opportunities for improvement. When evaluating organisations, there is a desire for consensus from the participants, as it is assumed that consensus represents the reality of the organisation because all participants agree on this reality. This aligns somewhat with Richardson's (2003, p.1625) description of psychological constructivism, where 'if the individuals within a group come to an agreement about the nature and warrant of a description of a phenomenon or its relationship to others, these meanings become formal knowledge'. Others have noted the importance of consensus both in research and in practice: Bjorn and Morton (2005) describe how individual perspectives can have a negative impact which makes agreement difficult, while McMahon (2003) notes the importance of consensus in Agile software development teams. So while it is acknowledged that consensus within a team can give a good representation of reality, there are times when this may not be so.

Questions must be raised as to whether consensus is necessarily the optimal goal that evaluators of organisations should strive for. For example, Pfeffer and Sutton (2000) illustrate the existence of a knowing-doing gap, where decisions are not made rationally on the basis of related known facts, but are shaped by normative or political-cognitive influences. Additionally, the discrepancies between knowledge as justified true belief and the actions taken by a social actor are captured by Argyris and Schon (1978) in their conceptualisation of espoused theories versus theories in use. Espoused theories are the beliefs individuals profess as guiding their behaviours and decision making, while theories in use guide actual behaviour. When evaluations are conducted in groups, further problems arise, such as where various participants 'attempt[ed] to justify their own position or to persuade others to that opinion' (Love, 2000, p.431).

One of the major causes of false consensus within organisations is groupthink, which has been shown to impact organisations of differing sizes and goals (cf. Esser, 1998; Leana, 1985; Turner and Pratkanis, 1998): from small Information Systems development teams (cf. McAvoy and Butler, 2009) to the United States government (cf. Janis, 1972). Groupthink is defined by its originator (Janis, 1972, p.9) as 'a deterioration of mental efficiency, reality testing, and moral judgement that results from in-group pressures'. This is further refined as 'the psychological drive for consensus at any cost' (Ottaviani and Sorensen, 2001, p.394) or as extreme concurrence seeking (Levine and Moreland, 1990; Turner and Pratkanis, 1998). When evaluating organisations, this leads to problems, as those participating in the evaluation, especially if it is a group-based evaluation, may be under pressure to conform to the group's views or experience a 'pull towards the group' as described by Asch (1952, p.483). What the researcher (or practitioner) evaluating the organisation is presented with is consensus from the participants; this, though, may only be the illusion of unanimity: if all agree then it must be true (Argyle, 1989; Manz and Sims, 1982; Von Bergen and Kirk, 1978).

The remainder of this paper is organised as follows. The next section describes one possible method of deriving real consensus (and avoiding problems such as groupthink), while also highlighting flaws with the approach. This is followed by a description of a proposed new evaluation method involving a workshop, and a case study of the use of this workshop is then presented. The results of the workshop are presented, followed by recommendations for researchers and practitioners involved in evaluating organisations.
2. Devil's advocate
Solutions to the false consensus seen in groupthink generally involve creating a climate where decisions and perspectives are questioned and critically evaluated, disagreement is encouraged, and external perspectives are sought (cf. Janis, 1972; Von Bergen and Kirk, 1978). A proposed method of doing this is through the use of a devil's advocate.

The use of a devil's advocate, where a member of the team has the task of deliberately opposing or critiquing the group's decision, can provide benefit in creating confrontation within the group (Thomas, 1988). This technique was used effectively by President Kennedy's team during the Cuban missile crisis (Janis, 1972; Thomas, 1988): by using a devil's advocate (in this case the President's own brother Robert), the team avoided the errors of an initial superficial analysis by creating conflict in the team. The use of a devil's advocate has been described in a variety of research papers, notably Nemeth and Goncalo (2004), Schweiger et al. (1989), Herbert and Estes (1977), and Schwenk (1998). Those who argue for the use of a devil's advocate assume that any decision or perspective that can withstand critique is good, where critique, or conflict, reduces the likelihood of a false consensus (Cosier, 1981). The use of a devil's advocate has also been shown to have benefits beyond just the avoidance of groupthink: in decision making (Hammond, Keeney and Raiffa, 2006), in strategic planning (Boland, 1984; Mason, 1969), and, specifically for Information Systems, in ERP projects (Sammon and Adam, 2007).

While the benefits of the use of a devil's advocate have been noted above, there is no universal agreement as to its effectiveness. While Schweiger et al. (1989) argue that the use of a devil's advocate did not impact on a group's satisfaction, this is not an uncontested argument. For example, Nemeth et al. (2001) found that antipathy can arise when the devil's advocate approach is used, and that problems can be created for and within cohesive teams (Nemeth and Goncalo, 2004). While Herbert and Estes (1977) argue that this antipathy can be reduced, and Sambamurthy and Poole (1992) argue that conflict can be beneficial, they also acknowledge that the problems exist and must be dealt with. Further, Sammon and Adam (2007, p.1071) note that 'traditionally, the devil's advocate approach, while useful in exposing underlying assumptions, has a tendency to emphasise the negative'. This aligns with the argument of Turner and Pratkanis (1998) that solutions to groupthink may exacerbate problems in the group if the group regards them as intrusions that question its ability to deal with problems.

There are further issues for a researcher or practitioner evaluating an organisation. Typically, the evaluator will be external to the organisation or team being evaluated. Intervention by an outsider is not the ideal way of trying to deal with problems such as groupthink, as this can itself be part of the problem. Outsiders' perspectives are rejected by teams subject to groupthink (Furst, Blackburn and Rosen, 1999), and an evaluator would be regarded as an outsider. Wastell (1999) talks about a paranoid view of the world outside the group, where any complaints against the team are 'incorrect and unnecessary' (Manz and Sims, 1987), with 'sloganistic thinking' about the immorality of outgroups (Oberschal, 1978, p.239). In fact, Janis (1972), the originator of the term groupthink, lists advice from outsiders not being sought as one symptom of groupthink.

For someone conducting an evaluation, therefore, there are problems when dealing with teams and organisations where false consensus may be impacting on their perspective of the reality within the organisation, and therefore on the evaluation of this reality: for example, if a round table or group discussion held as part of an evaluation is showing symptoms of groupthink. The idea of the devil's advocate creating conflict through critique could have benefits, but if done by an outsider it could exacerbate the problem: ultimately this would not only lead to an incorrect evaluation but actually worsen the problems that led to the incorrect evaluation. How then can an evaluator, external to the team or organisation being evaluated, create the necessary critique and conflict required to ensure that the evaluation is a valid one? The answer appears to be that the critique and conflict must be created by those being evaluated. The question, though, is how to get a team or organisation to do this. In the next section, our agitation workshop is proposed and the method of enabling such a solution is presented.
3. The agitation workshop
The individual conducting the evaluation, as described above, must facilitate and encourage the
critique and conflict necessary to overcome initial perspectives expressed by the participants, which
may be incorrect and restricted by problems such as groupthink. If the team appear to have a
consensual perspective of their current situation, and appear to be showing signs of groupthink, then
they must be encouraged to challenge these perspectives: the steps are described below and
illustrated in Figure 1.

In the agitation workshop the role of the evaluator is to get the team to challenge their perspectives and to break apart their consensual perspectives. Only then can the evaluation be trusted, in so far as it can be assumed to be free of groupthink (and other issues which impact negatively on an individual's perspective). The term 'breaking apart' is deliberately used above, as it will involve conflict. As the evaluator is an outsider, though, the conflict must come from the team and not the evaluator. The evaluator uses a set of factors which are relevant to the focus of the evaluation (in this study, the factors used were measures of agility, used to examine a team's suitability for Agile methods adoption: see McAvoy and Sammon (2005)). Each workshop participant provides an individual assessment of the area being evaluated, using a simple binary yes (1) or no (0) as to the presence or absence of a factor. Which factors are used is not of importance; they are chosen based on what is being evaluated. The critical element of this stage of the workshop is that each participant provides their own assessment of the factors without any group discussion. If a group discussion on the factors were to take place, it is likely that participants would be influenced by others, and the picture presented to the evaluator would be what Furst et al. (1999) describe as an illusion of consensus and cohesion.

Once each participant has provided their assessment, these are amalgamated into a single matrix visualisation. This amalgamated assessment is then presented to the group for discussion. As this represents the perspectives of the participants, the possibility is removed of the group feeling that it has been influenced by the 'incorrect and unnecessary' (Manz and Sims, 1987) or 'immoral' (Oberschal, 1978) views of the outsider, in this case the evaluator. The evaluator can now lead a discussion based on the matrix (see Table 2 for a sample single matrix visualisation). To facilitate discussion, and to further ensure that it is those being evaluated (as opposed to the evaluator) who critique their perspectives, the evaluator needs to concentrate on certain elements of the matrix to promote discussion (and/or conflict). The factors where there is disagreement between the participants need to be highlighted in order to promote discussion of those factors: this is especially relevant where there had been a consensual perspective on such factors before the workshop (i.e. in prior round table or group discussions). It is the disagreements that will provide the evaluator with the most opportunity to facilitate the critique and conflict necessary to generate a true picture of the reality under investigation. Within the single matrix visualisation, a simple count of the 1s and 0s will suffice to show workshop participants that there is no universal agreement on these factors.
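This amalgamation and disagreement count are simple to compute. The sketch below is a minimal illustration of the idea in Python; the participant labels, factor names and scores are hypothetical, introduced only to show the mechanics, and the paper itself prescribes no particular implementation.

def amalgamate(assessments):
    """Combine {participant: {factor: 1/0}} into a factor-by-participant matrix.
    Assumes every participant has scored every factor."""
    participants = sorted(assessments)
    factors = sorted({f for scores in assessments.values() for f in scores})
    return participants, {f: [assessments[p][f] for p in participants] for f in factors}

def disputed(matrix):
    """A factor is disputed whenever its yes-count is neither zero nor unanimous."""
    return [f for f, row in matrix.items() if 0 < sum(row) < len(row)]

# Hypothetical assessments, not data from the study.
assessments = {
    "P1": {"customer involvement": 1, "small team": 1, "collective ownership": 0},
    "P2": {"customer involvement": 0, "small team": 1, "collective ownership": 0},
    "P3": {"customer involvement": 0, "small team": 1, "collective ownership": 1},
}
participants, matrix = amalgamate(assessments)
for factor, row in matrix.items():
    print(factor.ljust(22), row, f"yes={sum(row)}/{len(row)}")
print("Disputed factors:", disputed(matrix))

Run on this sample, the count singles out 'customer involvement' and 'collective ownership' as the rows to put before the group, while the unanimous 'small team' row needs no discussion; the highlighted rows, rather than the evaluator's own views, then set the agenda for the workshop.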

The discussion and critique of the differences uncovered are still a team discussion, as opposed to a critique by the evaluator (an outsider). As such, the participants are less likely to reject the different perspectives that they are confronted with than if it were an outsider trying to give advice (as per Janis, 1972). Whereas groupthink creates a false consensus, the differences in perspective between participants cannot be ignored or easily reconciled into a single (false) consensual perspective: this therefore works against the pressure to conform to the group's perspective (as per Asch, 1952). Discussions now take place where the workshop participants must confront their differences and critique the differentiated perspectives being presented. Again, the conflict and critique are based on the perspectives expressed by the participants through their assessments (without the potential for the corrupting influence of outsiders). The term 'agitation workshop' comes from the premise that the (group-influenced) collective consensual perspective has been agitated, through the use of the single matrix visualisation, to the point that there are now differing perspectives. A consensual perspective no longer exists, and a new perspective must be created by the participants through discussion and critique of the differing perspectives. In the next section, a case study is presented showing the agitation workshop in practice and the benefits that it brought to an evaluation of a global software development organisation.

Figure 1: Agitation workshop protocol
4. The agitation workshop in practice
The objective of this exploratory study is to determine the feasibility and potential benefits of the agitation workshop: for this case study research, TexunaTech was the organisation in which the workshop was run. TexunaTech has established itself in the global market as a trusted service provider of web-based data management applications, serving a range of government, healthcare and private sector organisations. Coupling geographical location with specific segments of the ISD lifecycle, the organisation's structure is defined as follows: (i) London (UK) incorporates business analysis, project management and business development; (ii) Cork (Ireland) incorporates call centre operations, after-sales service and first-line support; (iii) Moscow (Russia) incorporates software development, technology infrastructure maintenance and software testing. The research objective was in line with the CEO's requirement, emerging from a strong organisational necessity, for an external analysis and evaluation of TexunaTech's ISD lifecycle, and with his belief that it could be more efficient. This was the point of departure for the three-person research team, from which the case study research protocol was developed (see Table 1 below).
Table 1: Case study research protocol (after Kelliher (2005))
Objective: To determine the feasibility and potential benefits of the agitation workshop
Approach: Case study
Motivation: The CEO's desire for an evaluation of the ISD lifecycle
Case Selection Process: A software development organisation where the CEO sought an evaluation of the organisation's ways of working
Case Access: A unique openness to share information and a willingness to make personnel available for the research, to the extent that operations were suspended for three days to enable workshops to be carried out
Instrument: The research team (3 researchers)
Boundary Device: ISD lifecycle
Data Gathering Techniques: Round table discussions, on-site agitation workshops, and group-based interviews
Data Analysis Techniques: The agitation workshop single matrix visualisation is used to present and analyse data

The evaluation commenced with round table discussions with the various groups in the organisation. The workshops took place in both the Cork office and the Moscow office, with London staff travelling to one or other of the locations. In these round table discussions, the different teams within TexunaTech were asked to provide their perspectives on TexunaTech's software development process. It was during these round table discussions that the researchers noted that symptoms of groupthink appeared to be impacting on the perspectives expressed. The symptoms of groupthink (from Janis, 1972) that were noted were a pressure to conform to the group's views (where there appeared to be a team perspective and a team answer rather than individual perspectives) and a stereotyping of outsiders (where other groups were described negatively and collectively, e.g. 'the developers don't take ownership'). This subjective opinion of the researchers led to concerns that an evaluation based on the perspectives expressed might not be accurate, as it appeared to be corrupted by the norming and cohesive power of the team. Because of this, it was necessary to explode, or break apart, the consensual perspective in order to get an accurate portrayal of each individual's real perspective.

For the agitation workshop itself, using the protocol described in Figure 1, each participant performed an individual assessment of the organisation's suitability for Agile. As can be seen in the single matrix visualisation in Table 2, there was a large variation in assessments between the participants: due to space restrictions, only a portion of one of the workshop assessments is presented, and it is reflective of the other assessments.
Table 2: A sample of the participants' differing perspectives of the Agile factors
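The single matrix visualisation places each participant's individual rating of every Agile suitability factor side by side, so that variation which a single consensual answer would hide becomes visible. As an illustrative sketch only (the factor codes and ratings below are invented; CI, customer involvement, is the one factor named in the text), such a matrix could be assembled as follows:

# Illustrative sketch of a single matrix visualisation: Agile suitability
# factors (rows) against each participant's individual rating (columns).
# Factor codes other than CI, and all ratings, are invented for illustration.
ratings = {
    "CI": {"P1": 1, "P2": 5, "P3": 2, "P4": 4},  # customer involvement
    "F2": {"P1": 3, "P2": 3, "P3": 1, "P4": 5},  # hypothetical factor
    "F3": {"P1": 4, "P2": 2, "P3": 4, "P4": 1},  # hypothetical factor
}
participants = ["P1", "P2", "P3", "P4"]
print("Factor  " + "   ".join(participants))
for factor, row in ratings.items():
    print(f"{factor:6}  " + "    ".join(str(row[p]) for p in participants))

Laying the individual ratings out in one matrix, rather than reporting a single team answer, is what exposes the differences in perspective discussed below.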
When the single matrix visualisation was presented to the group, it became clear to the workshop participants that, rather than having a consensual perspective, there were marked differences in perspectives within the group. They now realised that there were areas where the group did not have
consensus and that these issues needed to be addressed: additionally, the team had originally been
in agreement on many of these factors before the workshop (during the round table discussions).
From this point on, there was a noticeable increase in discussion, to the extent that a degree of
conflict entered the discussions. In effect, the group had moved from presenting a consensual
perspective of the organisation to a situation where the group was discussing and disagreeing with
each other as to the reality within the organisation. It was also noted that the group were more open
to accepting that fault existed within their own team (geographic location) as opposed to with
outsiders (other teams).
5. Evaluating the evaluation
For the evaluator, the question arises as to whether the new perceptions of the participants in the
workshop are now valid: are they still corrupted by the norming and cohesive power of the group (e.g.
is groupthink still influencing the results?). There are two methods of determining whether the
evaluation is now free of corrupting influences from the group. The first is subjective while the second
is a more quantifiable determination. In the first case, the evaluator can see for themselves whether
the perspectives of the workshop participants are sufficiently different as to create a debate and
critique. In TexunaTech, it was clear that the participants were no longer expressing the group's
perspective; rather they were specifically expressing their own perspective. The level of discussion
and disagreement among participants showed that there were differences between the participants
that had not been expressed before the agitation workshop in the round table discussions. As an
example, in the round table discussions before the workshop, business analysts (BAs) had all agreed
that they were good at dealing with customers. After seeing the single matrix visualisation (customer
involvement (CI) factor in Table 2), they came to realise that, despite the previous agreement,
customer involvement was problematic: several BAs then commented that they felt "helpless" when
dealing with the customer. This would be highly unlikely to have occurred if groupthink was
influencing them: if groupthink was influencing the evaluation then these differences in perspective
would be absent from the discussions. Additionally, in the TexunaTech case, it became very clear that
individuals no longer solely blamed other groups for problems and were more willing to blame their
own group and themselves. For example, the developers had originally blamed the BAs for the late
clarification of requirements. After the workshop, the developers were more understanding of the
difficulties involved for the BAs and also accepted that part of the fault existed with themselves for not
addressing the issue. For years, the developers handled late requirements without adequately
explaining the impact to the BAs: they now accepted that they should have challenged the status quo.
The second method involved the use of a mindfulness instrument to evaluate the state of the organisation before and after the agitation workshop. Mindfulness is a process that promotes "self-regulation of attention and a posture of acceptance" in individuals (Hayes and Shenk, 2004, p. 249),
providing better process awareness and stronger accountability, and is a feature of High Reliability
Organisations (Weick, Sutcliffe and Obstfeld, 1999). For more details on mindfulness assessments
and the components of mindfulness see Mu and Butler (2009) and Nagle et al. (2011).
Mindfulness assessments were conducted by each participant, and the standard deviation in mindfulness assessments was used to quantify the degree of difference in perspectives, and how much the differences increased or decreased between the pre- and post-agitation workshop assessments. These mindfulness measures were taken before and after the workshop to determine the impact of the agitation workshop, where mindfulness was a measure of the workshop's effectiveness; it was not the focus of the evaluation. The differences in assessments were noticeable, with the group lowering their evaluation of the organisation (and in a small number of cases raising it). Overall, the difference in assessment (the assessment returns a numeric assessment of mindfulness) was negative: for example, one element of the assessment dropped by 24% between the pre and post workshop. If the standard deviation of the participants' mindfulness assessments has increased, then the level of disagreement has also increased. In some cases the standard deviation may decrease for parts of the evaluation: this shows that the debate and discussion has actually led to more agreement within the team. This would not be unexpected, but the evaluator would need to be cautious with the perspectives expressed if all the values came closer to agreement (as this implies that normative pressures, such as groupthink, may be corrupting the discussions). For the TexunaTech case, some values had a decrease in standard deviation (more agreement between individuals) but the majority of standard deviations increased (more disagreement within the team). It is argued herein that the increase in standard deviation shows that the individual values (perspectives) are less likely to have been corrupted by normative pressure, and thus are more likely to be an accurate reflection of reality. This more accurate version of reality can now be used, with more confidence, in the evaluation of the team/organisation. Table 3 illustrates the differences in standard deviation in TexunaTech, between the individual perspectives before the agitation workshop (represented as "Pre STDEV") and the perspectives after the agitation workshop (represented as "Post STDEV").
Table 3: Measuring differences in perspectives caused by agitation workshop
            MDRS1   MDRS2   MDRS3   MDRS4   MDPF1   MDPF2   MDPF3   MDSO1   MDSO2   MDSO3   MDSO4   MDSO5
Pre STDEV   1.191   1.095   1.044   1.834   0.786   1.342   1.272   0.505   1.036   0.688   1.000   0.647
Post STDEV  1.598   1.246   0.991   1.506   1.389   1.506   0.835   1.414   1.642   1.669   1.309   1.356
Difference  0.407   0.151  -0.053  -0.328   0.603   0.164  -0.437   0.909   0.606   0.981   0.309   0.709

            MDCR1   MDCR2   MDCR3   MDCR4   MDEX1   MDEX2   MDEX3   MDEX4
Pre STDEV   0.647   0.674   1.567   0.944   0.522   0.688   0.944   0.786
Post STDEV  0.535   1.302   1.165   1.356   1.414   0.835   1.553   1.188
Difference -0.112   0.628  -0.402   0.412   0.892   0.147   0.609   0.402

(MDRS = Reluctance to simplify interpretations; MDPF = Preoccupation with failure; MDSO = Sensitivity to operations; MDCR = Commitment to resilience; MDEX = Deference to expertise)
To determine the effectiveness of the agitation workshop, there are two relevant points to be taken
from the differences in the standard deviations above. Firstly, the count of values that showed an increase in standard deviation was three times greater than the count with a decrease in standard deviation. This clearly shows that perspectives are much more differentiated across values post-workshop.
workshop. Secondly, the size of the increase versus the size of the decrease is noticeable. As the
mindfulness measurement values ranged from 1 to 7, a standard deviation change of 0.5 or more was
regarded as significant. Nearly half of the increases in standard deviation were greater than this
value, showing that there was a significant increase in the differences of perspectives between the
workshop participants; there were no decreases in standard deviation (coming together of
perspectives) of significance. Even ignoring the use of a significant value (0.5 in this case), the
mathematical sum of all changes in standard deviation is positive (more disagreement than
agreement) and the average increase in difference was twice the size of the average decrease in
difference. Taking all of these numerical calculations together, it is clear that the workshop created
more differences in perspectives between the participants in their assessment of the organisation's
software development process.
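The analysis above can be reproduced directly from the Table 3 data; the following sketch (variable names are ours, not the authors') recomputes the counts, the significance test at 0.5, the sum of changes and the average increase/decrease:

# Reproducing the Table 3 analysis: pre/post standard deviations of the
# participants' mindfulness assessments, keyed by mindfulness item.
pre = {"MDRS1": 1.191, "MDRS2": 1.095, "MDRS3": 1.044, "MDRS4": 1.834,
       "MDPF1": 0.786, "MDPF2": 1.342, "MDPF3": 1.272,
       "MDSO1": 0.505, "MDSO2": 1.036, "MDSO3": 0.688, "MDSO4": 1.000,
       "MDSO5": 0.647, "MDCR1": 0.647, "MDCR2": 0.674, "MDCR3": 1.567,
       "MDCR4": 0.944, "MDEX1": 0.522, "MDEX2": 0.688, "MDEX3": 0.944,
       "MDEX4": 0.786}
post = {"MDRS1": 1.598, "MDRS2": 1.246, "MDRS3": 0.991, "MDRS4": 1.506,
        "MDPF1": 1.389, "MDPF2": 1.506, "MDPF3": 0.835,
        "MDSO1": 1.414, "MDSO2": 1.642, "MDSO3": 1.669, "MDSO4": 1.309,
        "MDSO5": 1.356, "MDCR1": 0.535, "MDCR2": 1.302, "MDCR3": 1.165,
        "MDCR4": 1.356, "MDEX1": 1.414, "MDEX2": 0.835, "MDEX3": 1.553,
        "MDEX4": 1.188}
SIGNIFICANT = 0.5  # threshold on the 1-7 mindfulness scale, per the text

diff = {item: post[item] - pre[item] for item in pre}
increases = [d for d in diff.values() if d > 0]
decreases = [d for d in diff.values() if d < 0]
print(f"increases: {len(increases)}, decreases: {len(decreases)}")
print(f"significant increases: {sum(1 for d in increases if d > SIGNIFICANT)}")
print(f"sum of all changes: {sum(diff.values()):+.3f}")
print(f"average increase: {sum(increases) / len(increases):.3f}, "
      f"average decrease: {sum(decreases) / len(decreases):.3f}")

Run against the Table 3 values, this yields 15 increases against 5 decreases, 8 significant increases and no significant decreases, a positive overall sum, and an average increase roughly twice the magnitude of the average decrease.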
6. Conclusions
The protocol for the agitation workshop demonstrated the desired effect: breaking up the common
consensual perspective, which was influenced by groupthink, and giving the participants the
opportunity to create a new (more accurate) perception of reality. It is clear, from the example in the
case above, that the values taken after the workshop are free (or more free) of corrupting normative
pressures such as groupthink. This should give the evaluator more confidence in the perspectives
expressed by the participants after the workshop. Ultimately this should ensure a more accurate
representation of the reality being evaluated and the problems (or positive aspects) seen.
It is not relevant to the researcher whether the new reality represented by the participants is a more positive or a more negative perspective of the team/organisation. What is important is that
the evaluator has more confidence in the picture of reality presented by those being evaluated i.e. the
evaluator has, through the agitation workshop, minimised the level of normative group corruption of
the participants' individual perceptions. This can only lead to more accurate evaluations.
References
Argyle, M. (1989) The social psychology of work, Penguin Books, Middlesex, UK.
Argyris, C. and Schon, D. (1978) Organizational learning: A theory of action perspective, Addison-Wesley, MA,
USA.
Asch, S. (1952) Social Psychology, Prentice-Hall, NJ, USA.
Bjørn, P. and Hertzum, M. (2005) Proactive behaviour may lead to failure in virtual project-based collaborative
learning, International ACM SIGGROUP conference on Supporting group work (Eds, Pendergast, M.,
Schmidt, K., Mark, G. and Ackerman, M.) ACM, Florida.
Boland, R. (1984) Sense-making of accounting data as a technique of organisational diagnosis, Management
Science, 30 (7), pp. 868-882.
Cosier, R. (1981) Dialectical inquiry in strategic planning: a case of premature acceptance?, The Academy of Management Review, 6 (4), pp. 643-648.
Esser, J. (1998) Alive and well after 25 years: A review of groupthink research, Organizational Behaviour and
Human Decision Processes, 73 (2/3), pp. 116-141.
Furst, S., Blackburn, R. and Rosen, B. (1999) Virtual team effectiveness: a proposed research agenda,
Information Systems Journal, 9 (4), pp. 249-269.
Hammond, J., Keeney, R. and Raiffa, H. (2006) The hidden traps in decision making, Harvard Business Review,
Vol. January, pp. 118-126.
Hayes, S. and Shenk, C. (2004) Operationalizing mindfulness without unnecessary attachments, Clinical
psychology: Science and practice, 11 (3), pp. 249-254.
Herbert, T. and Estes, R. (1977) Improving executive decisions by formalizing dissent: the corporate devil's advocate, The Academy of Management Review, 2 (4), pp. 662-667.
Janis, I. (1972) Victims of groupthink, Houghton Mifflin Company, MA, USA.
Kelliher, F. (2005) Interpretivism and the pursuit of research legitimisation: An integrated approach to single case
design, The Electronic Journal of Business Research Methodology, 3 (2), pp. 123-132.
Leana, C. (1985) A partial test of Janis' groupthink model: effects of group cohesiveness and leader behaviour on
defective decision making, Journal of Management, 11 (1), pp. 5-17.
Levine, J. and Moreland, R. (1990) Progress in small group research, Annual Review of Psychology, 41 (1), pp.
585-634.
Love, K. (2000) The regulation of argumentative reasoning in pedagogic discourse, Discourse Studies, 2 (4), pp.
420-451.
Manz, C. and Sims, H. (1982) The potential for groupthink in autonomous work groups, Human Relations, 35
(9), pp. 773-784.
Manz, C. and Sims, H. (1987) Leading workers to lead themselves: The external leadership of self-managing
work teams, Administrative Science Quarterly, 32 (1), pp. 106-128.
Mason, R. (1969) A dialectical approach to strategic planning, Management Science, 15 (8), pp. 403-414.
McAvoy, J. and Butler, T. (2009) The role of project management in ineffective decision making within Agile
software development projects, European Journal of Information Systems, 18 (4), pp. 372-383.
McAvoy, J. and Sammon, D. (2005) Agile methodology adoption decisions: An innovative approach to teaching
and learning, Journal of Information Systems Education, 16 (4), pp. 409-420.
McMahon, J. (2003) 5 lessons from transitioning to eXtreme Programming, Control Engineering, Vol. 50, pp. 59-
60.
Mu, E. and Butler, B. (2009) The Assessment of Organizational Mindfulness Processes for the Effective
Assimilation of IT Innovations, Journal of Decision Systems, 18 (1), pp. 27-51.
Nagle, T., McAvoy, J. and Sammon, D. (2011) Utilising Mindfulness to analyse Agile global software
development, European Conference on Information Systems, Helsinki.
Nemeth, C., Brown, K. and Rogers, J. (2001) Devil's advocacy versus authentic dissent: Stimulating quantity and
quality, European Journal of Social Psychology, 31 (6), pp. 707-720.
Nemeth, C. and Goncalo, J. (2004) Influence and persuasion in small groups, In Persuasion: Psychological insights and perspectives (Eds, Shavitt, S. and Brock, T.) Allyn and Bacon, MA, USA, pp. 171-194.
Oberschall, A. (1978) Theories of social conflict, Annual Review of Sociology, 4, pp. 291-315.
Ottaviani, M. and Sorensen, P. (2001) Information aggregation in debate: Who should speak first?, Journal of Public Economics, 81 (3), pp. 393-421.
Pfeffer, J. and Sutton, R. (2000) The Knowing Doing Gap: How Smart Companies Turn Knowledge Into Action,
Harvard Business School Press, MA, USA.
Richardson, V. (2003) Constructivist pedagogy, Teachers College Record, 105 (9), pp. 1623-1640.
Sambamurthy, V. and Poole, M. (1992) The effects of variations in capabilities of GDSS designs on management of cognitive conflict in groups, Information Systems Research, 3 (3), pp. 224-251.
Sammon, D. and Adam, F. (2007) An Extended Model of Decision Making for a Mindful Approach to IT Innovations (Enterprise-Wide ERP Project Implementation), European Conference on Information Systems (Eds, Osterle, H., Schelp, J. and Winter, R.) St Gallen, Switzerland, pp. 1064-1076.
Schweiger, D., Sandberg, W. and Rechner, P. (1989) Experiential effects of dialectical inquiry, devil's advocacy, and consensus approaches to strategic decision making, Academy of Management Journal, 32 (4), pp. 745-772.
Schwenk, C. (1988) Effects of devil's advocacy on escalating commitment, Human Relations, 41 (10), pp. 769-782.
Thomas, H. (1988) Policy dialogue in strategic planning: Talking our way through ambiguity and change, In Managing ambiguity and change (Eds, Pondy, L., Boland, R. and Thomas, H.) John Wiley and Son, Chichester, UK, pp. 51-77.
Turner, M. and Pratkanis, A. (1998) A social identity maintenance model of groupthink, Organizational Behaviour
and Human Decision Processes, 73 (2/3), pp. 210-235.
Von Bergen, C. and Kirk, R. (1978) Groupthink: When too many heads spoil the decision, Management Review,
67 (3), pp. 44-49.
Wastell, D. (1999) Learning dysfunctions in information systems development: Overcoming the social defenses with transitional objects, MIS Quarterly, 23 (4), pp. 581-600.
Weick, K., Sutcliffe, K. and Obstfeld, D. (1999) Organizing for high reliability: Processes of collective mindfulness, Research in Organizational Behavior, 21, pp. 81-123.
Drivers and Challenges for Biometrics in the Financial Services
Karen Neville
Business Information Systems, University College Cork, Ireland
KarenNeville@UCC.ie
Abstract: Mobile banking and biometrics are currently making profound changes to the Financial Services
landscape. Anyone with access to a cell phone has a place to keep his or her savings without needing a
traditional bank account. The Mobile payment value chain has various roles, all of which need to be addressed
and managed. There is potential for mobile operators and security technology organizations to target FS organizations and provide the technical solution, business experience and collaborative forum necessary to overcome barriers for banks such as security (biometric identity assurance), regulations/standards (collaborative forum) and partnerships. This opportunity for mobile payments has already been verified through preliminary analysis by blue chip FS organisations. Additionally there is the potential of piggybacking on the national ID in emerging markets to allow payments functionality. This would allow poorer countries without a FS infrastructure capability to leapfrog to financial inclusion. As The New York Times Magazine noted in a recent cover story, last year migrants across the globe sent home $300 billion. The potential of tapping this market by Financial Services and Mobile operators is enormous. In addition to the risk of an un-standardised market, banks that do not facilitate such a market in developing countries risk new competitors siphoning off potential customers. Therefore the remittance and unbanked markets are where m-Banking will be world-changing. The objective of this study is two-pronged: to determine and to highlight the potential of Biometrics for the Financial Services market.
Keywords: biometrics, financial services, banks, regulation
1. Introduction
Due to the risk of an un-standardised market, banks that do not facilitate such a market in the US, Europe and developing countries are at risk of new competitors siphoning off potential customers. Mobile
banking is considered a lucrative option for the majority of the banks interviewed with the
acknowledgement that partners will be needed. To ensure security as well as reliability and
trustworthiness in electronic transmission of data, certain issues need to be taken into account and
resolved, including: authentication, integrity, non-repudiation and confidentiality. To this end, a
number of technologies and paradigms can be brought to bear on activities such that risk factors are
reduced to an acceptable level, problems can be tracked seamlessly and business continuity
principles are upheld. Amongst the technologies that are currently being investigated, this report
presents a business case for biometrics, specifically biometrics in action, identity challenges and case
studies of current deployments. In 2001 MIT identified biometrics as one of the "top 5 technologies to watch out for" and in 2012 this foresight has been proven accurate with the commercialisation of
the technology into everyday life. The American, European Union, Australian and Japanese
governments have all adopted and introduced the technology to nearly 250 million unique users as
entry safeguards.
Biometrics enables the identification of individuals by specific biological traits, thus assuring transactions of varied proportions. This assurance has propelled its acceptance by the public, offering unparalleled security advantages, convenience (as "you are your password") and robustness in meeting whichever challenge is posed, which is critical to any deployment. The technology is not merely an add-on to existing security systems; it is essentially integrated into business processes, allowing a seamless secure layer to verify the different stakeholders in large-scale deployments such as mission critical security systems. This paper attests to the value of the technology in assuring security to customers,
merchants, content providers, financial institutions, governments and mobile operators to name but a
few.
2. Biometrics
Biometrics facilitates the automatic authentication of a living person based on his/her unique
physiological or behavioural characteristics. Common physical biometrics include: fingerprints, hand
or palm geometry, and retina, iris, ear shape or facial characteristics. Behavioural characteristics include
signature, voice, keystroke pattern, and gait. Of this class of biometrics, technologies for signature
and voice are the most developed. There are two main reasons that biometrics are used in security systems: (1) to verify or (2) to identify users. Verification involves confirming or denying a person's
claimed identity. In identification, one has to establish a person's identity, which tends to be the more
difficult of the two uses because a system must search a database of enrolled users to find a match,
which is a one-to-many search whereas verification is a one-to-one search. Biometrics, as a form of
identification, is preferred over traditional methods involving passwords and PIN numbers as the
person to be identified is required to be physically present at the point-of-identification. Biometrics
removes this disadvantage and therefore the cost incurred in being physically present for verification.
In addition to this convenience, customers no longer have to remember passwords or carry a token.
This control is utilised in every industry, as serious identity challenges and poor online authentication are threatening an invaluable channel to customers. Additionally, insider fraud is enabled through poor
access control. The drive to protect identity has led to increased security measures; yet restrictive
security measures can reduce efficiency for an organisation and can be an inconvenience to
customers. However, identity management, properly developed and executed, can mitigate risk, enable transactions, and act as a source of competitive differentiation and therefore advantage. Central to achieving this is combining security and convenience. Identity management can be moved from a "must do" to a "must have". In fact, identity management is cited in the Deloitte 2007 Global Security Survey as one of the primary security issues for organisations to solve (www2). Biometric technologies
provide identity management solutions through the automatic authentication of individuals. Biometrics
has an inherent ability to assure security and provide convenience and mobility through its uniqueness, universality, permanence and acceptability. Biometrics enables authentication for the world's major
border security systems such as the US and Japanese VISIT authentication systems. These mission
critical systems are large scale deployments of the technology proving its scalability and robustness in
meeting security threats. Ultimately the technology can counteract whichever challenges have driven
its acceptance.
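The one-to-many versus one-to-one distinction can be made concrete in a short sketch. The matcher and threshold below are placeholders rather than any particular vendor's API (real systems compare biometric templates with specialised algorithms), so this is illustrative only:

# Sketch of verification (one-to-one) vs identification (one-to-many).
# similarity() is a toy stand-in for a real biometric template matcher.
from typing import Optional

enrolled: dict[str, list[float]] = {}  # user id -> stored template

def similarity(a: list[float], b: list[float]) -> float:
    # Toy score: negative squared distance; higher means more alike.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

THRESHOLD = -0.05  # acceptance threshold, tuned per deployment

def verify(claimed_id: str, sample: list[float]) -> bool:
    # One-to-one search: compare the sample against one enrolled template.
    template = enrolled.get(claimed_id)
    return template is not None and similarity(sample, template) >= THRESHOLD

def identify(sample: list[float]) -> Optional[str]:
    # One-to-many search: scan the whole database for the best match.
    best_id, best_score = None, float("-inf")
    for user_id, template in enrolled.items():
        score = similarity(sample, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= THRESHOLD else None

enrolled["alice"] = [0.12, 0.80, 0.33]
print(verify("alice", [0.11, 0.81, 0.32]))  # True: close to the template
print(identify([0.90, 0.10, 0.50]))         # None: nothing matches well

The identification path grows with the size of the enrolled database, which is why large-scale identification systems are the harder engineering problem.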
2.1 Drivers and challenges for biometrics
Incidents such as online attacks, insider fraud and data breaches have damaged institutions' brands and reduced confidence in, for example, online banking. Attack trends have grown more sophisticated as banks have sought to counter fraud with improved security and customer education. The perpetrators of cybercrime have become organized; far from it being a case of keeping one step ahead of criminals, many companies are struggling to keep pace. Figure 1 illustrates examples of some well-known incidents/threats, which are ultimately drivers for the adoption of secure identity management/biometric technologies. The rampant spread of identity theft, as well as the above challenges, has been a significant driver in the acceleration of biometrics adoption. Consumers are welcoming the convenience, privacy and security advantages associated with enrolling their biometrics with merchants and organizations.
Figure 1: Biometric drivers and challenges
2.2 Biometrics industry in action
The biometrics industry has evolved into the most secure and convenient approach to identity
management as a biometric cannot be borrowed, stolen, or forgotten, and forging is practically
impossible. Token-based methods of identification, like passports and driver's licenses, may be forged, stolen, or lost, which is why biometrics are so important for increased security. Complex passwords
are easy to forget, while simple passwords can be easily guessed by unauthorised people. In large
scale mission critical deployments across the world biometrics have assured security and
convenience. Some possible applications and advantages of biometrics include: (1) Physical access; (2) Virtual access; (3) E-commerce applications; (4) M-commerce applications; (5) Covert surveillance; and (6) Smart-cards.
Physical access
The primary application of biometrics is to control access to secure locations such as rooms or
buildings. Unlike photo identification cards, which a security guard must verify, biometrics permits
unmanned access control. Disney World, for example, uses a fingerprint scanner to verify season-pass
holders entering the theme park.
Virtual access
Biometrics can increase a company's ability to protect its data by implementing a more secure key than a password, and employees are freed from the burdens of password-based logons to operating systems and applications. Biometrics carries this out automatically, and forgotten passwords are a thing of the past, which not only reduces the costs of hotline/system administration in large-scale environments, but also increases productivity because employees no longer need to sit around waiting for the administrator to reset their forgotten password. Users can log on to a Windows operating system, for example, by using a biometrically-activated smart-card.
E-commerce Applications
E-commerce developers are exploring the use of biometrics and smart-cards for the purpose of
verifying a trading party's identity, to lessen fraud and ensure non-repudiation. Point-of-Sale (POS)
system vendors are working on the cardholder verification method, using smart-cards and biometrics
to replace signature verification.
M-commerce Applications
There are over 3 billion mobile phone users globally and the use of mobile devices for payments is
steadily increasing. Combining biometrics with mobile phones for authentication is an obvious solution
to binding the user to a transaction. Additionally, mobile phones are equipped with voice-capture capability, with the added potential of using inbuilt cameras. The combination of these two
technologies is changing the payments landscape. Mobile banking is also expected to drive the
utilisation further with numerous banks (Bank of America, Citibank, and America First) offering mobile
banking products (www3) and forming partnerships with Mobile carriers and trusted third parties.
Covert surveillance
One of the more challenging research areas involves using biometrics for covert surveillance. Using
facial and body recognition technologies, researchers hope to use biometrics to automatically identify
known suspects entering buildings or traversing crowded security areas such as airports.
Biometrics & Smart-cards
A smart-card is a "...credit card sized conventional plastic card, containing an integrated circuit chip allowing significant amounts of information to be stored, accessed and processed either online or offline". The following are the advantages of combining smart-card technology with biometrics: security
(two-factor authentication which considerably reduces fraud), memory size (allowing them to carry
biometrics and encryption), portability (not limited to a particular desktop), convenience (enabling an
effective transition for Financial Services), multiple applications (allows companies to collaborate with
partners), cost savings (by eliminating paper and handling) and micro-charging (a cheap alternative to
cheques).
The combination of biometrics and smart-cards is a familiar application of biometrics. Biometrics offers simplicity and strong authentication and has also proven robust and scalable in a range of large-scale
deployments. In addition to the applications outlined above the integration of biometrics with smart-
cards has also been identified, in industry, as a convenient transition for Banks into the biometrics
field or as a legacy issue in the case of existing use.
3. Solving identity issues in financial services
Biometrics has proven its longevity, scalability, performance and reliability due to its adoption by
governments throughout the world. The financial services have been exploiting the technology and it
is one of the key markets positioning itself for explosive growth in niche areas such as mobile
banking. Fully integrated biometrically enabled financial service systems that efficiently and
conveniently serve millions are also evolving. Biometric technology can achieve significant business
process improvement and enhance end user convenience. Current biometrics penetration in financial
services will continue on its current path of successful implementations and continued growth. The
development and deployment of fully integrated biometric-enabled financial systems has resulted
from improvements in technology (accuracy, reliability, processing speed, durability, and usability)
despite the dynamics of the financial services market. Numerous deployments have provided
identification & authentication solutions where biometrics demonstrated reliability, quantifiable return
on investment and operational performance.
Customer acceptance of biometrics has been significant. This is especially true at the point-of-sale
(POS). Several vendors have taken biometrics directly to consumers in retail applications and much
to the surprise of many industry pundits and privacy advocates alike, consumers are responding with
curiosity and enthusiasm. From iris scan ATMs to finger-scan supermarket checkout, intrigue
outweighs fear. In one early 1998 bank pilot in Texas, 80 percent of customers who could use iris
scan ATMs did. Ninety-five percent of these customers said they were satisfied and 35 percent said
they had opened accounts specifically because of the biometric ATMs. However, the primary obstacles to rapid uptake of biometrics are the cost and complexity of deployments. These issues
include legacy systems integration, enrolment, infrastructure, database storage, identification and
authentication management, interoperability and standards. As with most industry sectors evaluating
biometrics, the most successful financial service pilots and deployments have proven that the benefits
far exceed the costs.
The drive within financial services towards biometrics is based on the urgent need to leverage
potential efficiencies of existing IT infrastructure investments particularly for E-commerce and the
Internet while decreasing costs. In order to drive down operation costs, financial service organizations
must reduce the cost of each customer interaction. Customer self-service is the key and strong
authentication is essential to this process. As self-service increases, strong authentication becomes
increasingly cumbersome for each individual, and biometrics as an alternative to PINs, passwords and tokens becomes ever more appealing.
Financial industry analysts suggest that widespread internal use of biometrics for employee-facing
applications will likely precede roll-outs of large-scale, customer-facing deployments. It is possible, however, that effective deployment of biometrics at the point-of-sale (POS) may pressure some
sectors of the industry to deploy customer-facing applications sooner. The following section illustrates
some examples of biometrics in action, particularly emphasising customer facing applications in
Financial Services.
3.1 Financial services cases
Established vendors and new entrants have identified niches where biometrics provide a unique solution; the following are some examples of current deployments:
Banking group Deploys Voice-verified Password Reset-service
Allied Irish Banks (AIB) has rolled out VoiceVault's Password Reset Service with initial
implementations in head office locations. The bank has enrolled more than 5,700 of its employees to
optimise IT helpdesk resources and reduce costs. Users of AIB's IT systems speak a few words into the VoiceVault-powered system, and these are used to biometrically voice-verify the staff member's
identity over the phone. Once verified, the user's password is automatically reset and spoken back to
them. As well as being simple and quick to use, VoiceVault enhances password security and
substantially reduces the volume of IT helpdesk calls so that support teams can focus on other user
issues. Another key benefit to AIB is that the service is available 24 hours a day, seven days a week.
AIB plans to further deploy this solution to another 15,000 staff across its head office, branches and capital markets operations.
Identity Management System for Colombian Banking Industry
A financial institution in Colombia has rolled out a multi-biometric identity management system. The
solution will enable the enrolment, management and verification of personal identities to eliminate
identity fraud and financial loss. The system will utilize face, fingerprint and signature biometric
identifiers as well as provide real-time duplicity check upon enrolment to avoid multiple enrolment
attempts and confirm that account holders are who they claim to be.
Citibank Singapore Rolls Out Biometrics
Citibank Singapore has rolled out Pay By Touch's biometric payment services to its Citibank Clear
Platinum cardholders. Cardholders are invited to enrol in the biometric credit card service, and pay for
goods and services with the touch of a finger. As the credit card is very much a part of daily life, consumers want greater flexibility in making payments. This is the ultimate in convenience: the ability to make credit card transactions with just a finger scan. The new biometric payment service is
available to Citibank Singapore's Clear Platinum cardholders as part of the launch of its new Clear
Platinum card. Platinum cardholders will be able to make cardless credit card transactions at retail
outlets such as music and IT stores, as well as clubs, restaurants and cinemas. Enrolment takes
minutes, and requires only a government-issued photo ID, a Citibank Clear Platinum credit card and a
secure finger scan. Cardholders also create a seven-digit Personal Search Number, which facilitates
the use of their scanned finger image to authenticate payment transactions.
The Bank of Currituck Utilises Fingerprints for Physical Access
The Bank of Currituck recently purchased and implemented US Biometrics' AccessQ system for
controlling access to various doorways with fingerprint biometric technology. US Biometrics worked
with The Bank of Currituck to specify and implement an access control system, including hardware
and software, with the goal of biometrically controlling strategic doorways in their facility. The
installation of these devices provides an effective biometric perimeter around the bank, but it also adds a new level of convenience for employees. They no longer have to carry cards or badges. The
solution is designed to be a secure, convenient, and cost-effective alternative to passwords, badges,
swipe cards and PINs. It allows for the storing of credentials including fingerprint profiles, names,
addresses, and employee information as well as providing a mechanism for scheduling access based
on authorization levels and the time of day. Additionally, reporting makes it possible, through a web browser, to view who accessed which devices at what time, as well as providing some basic statistics about the system's usage.
First Facial Recognition System for Online Banking
Las Vegas, NV - Cogneto, with Cognitec Systems, has integrated facial biometrics into UNOMI, Cogneto's consensus-based risk adaptive authentication solution, to bring facial recognition to the
online banking market. Cogneto will offer facial recognition as an additional layer of authentication,
increasing the security level for UNOMI users. Using their own webcams, users can receive a higher
level of identity assurance for their online bank accounts. Cognitec's software matches images of
users with photos taken by UNOMI during the normal log-in process. This application will be
integrated into UNOMI's consensus model, which uses information from multiple factors to determine
a user's security and risk rating. As webcams are now standard on many PCs, this is an easy way for
banks to manage risk and increase the level of security on certain customer accounts. The integration
of facial recognition allows for potential expansion of the product into ATMs and in-branch verification.
Using an individual's online photo verification records, a bank teller or ATM will have an additional
method to accurately identify people and further avoid identity fraud.
Paycheck Secure Service for Underbanked Consumers
Paycheck Secure is a biometric check-cashing service that can help banks and credit unions
generate additional non-interest fee-based income while decreasing the risk of fraud and meeting
regulatory compliance requirements. The service lets people cash checks using a simple finger scan
to authenticate their identity. Already used by more than three million consumers in 1,700 retail
locations nationwide, Paycheck Secure is a popular and widely adopted biometric check cashing
service. The service is aimed at helping banks and credit unions reach the more than 45 million
'underbanked' consumers who are presently underserved by financial institutions. According to the Center for Financial Services Innovation (CFSI), this group represents 40 million households and
spends $10.9 billion per year on 324 million alternative financial transactions. A 2006 study by
BearingPoint and Visa revealed that the underbanked generate $1.1 trillion in income. For instance,
the historically underbanked Hispanic community will drive 50 percent of the growth in retail banking
in the decade to come.
Payment Service for Supermarket Retailers
Thriftway Supermarket in Seattle, Washington was the first retailer to install a new point-of-sale
application that provides a turnkey solution for Pay By Touch(TM), a popular biometric payment
service that lets shoppers make purchases with the touch of a finger. The application was
implemented in 13 checkout lanes. It enables Thriftway to handle biometric payments faster,
enhancing customer service. A supermarket can quickly deploy Pay By Touch to support the growing
demand for reduced costs and increased security in electronic payment transactions. For simplicity
and added security, Thriftway is using IBM e-business Hosting Services. As part of an IBM Software
as Services solution, shoppers' Pay By Touch digital wallet information is securely stored off-site at
IBM data centres. IBM Software as Services offers clients lower costs that are aligned with usage,
minimal upfront expense, rapid implementation and reduced risk. Shoppers at Thriftway Supermarket
can now purchase groceries by providing a simple fingerprint image that is linked to their financial
accounts and loyalty programs. The shopper selects which account they want to use, the transaction
is processed as if a card or check has been presented, and rewards points are automatically
recognized and awarded. The checkout routine is faster, helps protect customers from identity theft
and eliminates the need for shoppers to carry cash, multiple credit cards, or a check book.
Kiosk-Based Banking
Real-Time Data Management Systems and SAFLINK have a long-standing relationship to provide
credit unions with full-service kiosks that essentially act as self-service branch facilities. Credit Union
customers can open accounts, cash checks, apply for loans and make CD purchases and deposits. The kiosks
enable Credit Unions to leverage opportunities for new membership at locations that are lucrative but
which cannot cost-justify the establishment of a new branch. They also provide 24-hour service for
customers who cannot access branches during regular business hours. Customer satisfaction levels
have been extremely high for Credit Unions that have deployed these kiosks.
Retail Point Of Sale (POS)
In the US more than 500 million checks are forged each year. Herndon, Virginia based BioPay addressed this issue with a biometric finger-scan system that allows retail merchants to share negative customer financial transaction information in real time, significantly reducing the risk of bad check acceptance. The company's initial focus has been in the convenience store market, where check cashing offers a lucrative business opportunity as long as fraud can be contained. BioPay is
beginning to see momentum in this market. More than 4.5 million check-cashing transactions worth
more than $2 billion have been completed using their systems.
3.2 Biometrics acceptance
Innovation is defined as "an idea, practice or object perceived as new by the individual" (Rogers 1962). For an organisation or institution, it is any product, input, process, service or technology that the organisation perceives as new. In the case of the Financial Services industry, biometrics are regarded as innovative: not merely a form of identity management but a vehicle for targeting new markets such as mobile payments/banking, in-house control and niche markets such as the
underbanked. Figure 2 illustrates the variables necessary for the successful acceptance of an innovative technology, which helps technology implementers (developers, banks, organizations) advance the diffusion of selected technologies. The model is based on the perceived characteristics of adoption as identified by practitioners through deployments. The following are the eight characteristics recognised to enhance the rate and effectiveness of diffusion: (1) relative advantage, (2) compatibility, (3) trialability, (4) ease of use, (5) visibility, (6) result demonstrability, (7) image and (8) voluntariness.
The first characteristic is the relative advantage of the innovation over the idea it replaces (in, for example, Financial Services), including economic profitability, convenience and/or other benefits. A
technology is more likely to be accepted if it is perceived as bringing advantages. The second
characteristic is the compatibility of biometrics (innovation) with the existing values, past experiences
and needs of adopters.
Figure 2: Modified model of intention to adopt using PCI measures
People (organizations/customers/banks/merchants) are more likely to adopt technology if it is functionally compatible with those previously adopted and is consistent with the existing values, needs and past experiences of adopters. Therefore ease of transition from one service/product (for example
the integration of biometrics with smart-cards and mobile phones in banking) to another is vital in
acceptance. The third characteristic relates to the level of complexity or the ease with which an
innovation can be understood. As biometrics have been prevalent in government identity systems for years, the technology is relatively familiar to millions of people. The fourth and fifth related characteristics are described as trialability, or the degree to which adopters can implement an innovation on an experimental basis, and observability, or the extent to which the results of an innovation are visible to
others. Section 3.1 described deployments in the Financial services which have proven advantageous
in performance, flexibility, scalability and in counteracting the challenges. These factors are not just
dependent on the specific nature of the innovation but also on the specific characteristics of the
adopting group. Ease-of-use is vital for end-user acceptance, but in the case of biometrics customer acceptance is generally high. Visibility is the extent to which the innovation is perceived to have
diffused. Result demonstrability is how well the benefits and results are recognised by the potential
adopter (financial institution). Image is the status or prestige that the adopter thinks they will gain from
the innovation. Voluntariness is the degree to which an individual has the choice whether or not to
adopt the innovation, which ultimately depends on the competitiveness of the adopter. The value of innovative applications is dependent upon their adoption and acceptance by relevant parties (users, organizations). Critical mass, the point at which "...enough individuals have adopted an interactive innovation to cause the perceived cost-benefit of adoption to change from negative to positive so that the innovation's rate of adoption becomes self-sustaining", is therefore vital so that a new technology is economically viable (Rogers 1991). One can therefore argue that biometrics is accepted, given the level of critical mass currently achieved.
4. Conclusions
The adoption of biometrics within financial services to support existing and target new markets is
inevitable as the volume of successful deployments increases. This will result in industry comfort and
consumer acceptance. Additionally the number of applications (and their uses) will expand as
biometrics will become a ubiquitous component of the financial services infrastructure. The application
of biometrics ranges from authenticating inter-bank transfers and accessing local savings bank
accounts to the weekly purchase of groceries at the supermarket, personal identification and
transaction processing. Figure 2 illustrates the characteristics necessary for adoption as identified in
research, while emphasising the importance of a Trusted Third Party for authentication and assurance and a collaborative body to interface with the banks and content providers to provide guidance for both deployment and standardisation. Ultimately biometrics will become as familiar and trusted as displaying a driver's license or making a purchase with an ATM card. Additionally, Figure 2 illustrates
the numerous types of biometric deployments categorised according to the type of biometric used. All
of these implementations highlighted the numerous advantages associated with the adoption of the
technology for the industry. Financial services, like any of the other industries availing of the technology, are driven and challenged by key issues such as interoperability and costs. However, it is TTPs (Trusted Third Parties) which can provide the technology, expertise and interoperability necessary to successfully meet the requirements of any potential customer. As illustrated in Figure 3,
a TTP will not only provide the ultimate approach to identity management but also the characteristics necessary for acceptance of biometrics by any and all stakeholders.
Figure 3: Providing expertise in identity management
References
WWW1: 10 Emerging Technologies That Will Change the World, MIT Technology Review (January/February 2001, www.techreview.com).
WWW2: 2007 Global Security Survey: The Shifting Security Paradigm, Deloitte, 2007 (www.deloitte.com/assets/Dcom-Shared%20Assets/Documents/dtt_gfsi_GlobalSecuritySurvey_20070901.pdf).
2007 Payments System Research Briefing: Complex Landscapes in Japan, South Korea and the United States,
Federal Reserve Bank of Kansas City.
Rogers, E. (1962) Diffusion of Innovations, New York: The Free Press.
Rogers, E. (1991) The Critical mass in the diffusion of interactive technologies in organisations, Harvard
Business Research Colloquium, Publishing Division, Harvard Business School, Boston.
Did you get Your Facebook Session Completed?
Markku Nurminen
University of Turku, Turku, Finland
markku.nurminen@utu.fi
Abstract: Traditionally, most information systems were used at work. These systems offered means and tools for people that enabled them to perform their work more effectively, more easily, or with a higher quality outcome. One of the most important criteria for the evaluation of such systems is derived from the contribution of the system to the work objectives of its users. Such evaluation can be grounded by embedding the IS actions and operations as inherent parts of the users' work processes. This means that the ultimate criteria of the system come from outside the system itself. It is interesting that the absence of quality is easier to observe than its presence. Today, many users spend their time surfing the Internet, playing games, or participating in social media. Electronic services give one more use situation where this kind of traditional goal-oriented evaluation no longer seems to be sufficient. In all of these (and many other) use situations the objectives are not necessarily clearly or explicitly defined, and therefore it is difficult to evaluate to what extent the objectives have been fulfilled. Such activities are said to be "weakly purposeful". The added value created cannot be observed in the external object of the work; the change is often likely to take place inside the actor him/herself, for example as a use experience or an improvement of competence through learning. This paper addresses the problem characterised above, which goes deep into the core problems of evaluation. First the generic concept of purposeful activity will be discussed. Work lends itself to be analysed in terms of three modalities: individual work, collective work and services. Then electronic services and IT-services are analysed in terms of this generic concept, paying special attention to self-services. Finally, the main problem of evaluation of IT-artefacts used for weakly purposeful activities is discussed and some guidelines for evaluation are derived. The contribution of the paper is in its conceptual analysis, and it is only indirectly based on the author's own empirical work and material.
Keywords: self-service, evaluation criteria, purposeful activity, added value, use experience
1. Introduction
Evaluation of an Information System is about success and failure. Because the introduction of an IS
always carries a cost, it should also create some benefit in order to be justified. The comparison of
costs and benefits happens typically both before and after the introduction. In the ex ante evaluation
the motivation is in the desire to make good and well-informed decisions about the future investment,
and in the ex post evaluation the rationale comes from the need for follow-up: did the expectations
come true? In the best case this will lead to better decisions in the future investment rounds.
In both ex ante and ex post evaluation it is crucial to base the evaluation on a thorough understanding of the mechanism of the creation of the benefits by means of the information technology. Otherwise the information system remains a black box that is expected to produce added value due to a mythical belief. There is no generally accepted understanding of such a mechanism. One
frequently occurring concern is that the framework of expectations and follow-up is too narrow, in
particular too technically determined. It is justified to call such failures "expectation failures" (Lyytinen
1988).
Jan L. Andresen (2001) has selected four evaluation methods that are representative of a large set of approaches, or perhaps serve as an abstraction or aggregation of the otherwise long list of them. The four methods are:
Net Present Value (NPV)
Information Economics (IE)
Critical Success Factors (CSF)
Measuring the Benefits of IT Innovation (MBITI)
The first one (NPV) is obviously derived from general investment evaluations. The monetary interest
is open, but the underlying mechanism is entirely bracketed. The second one (IE) allows a broader
spectrum of factors to be considered on the side of traditional cost-benefit analysis, such as strategic
match and competitive advantage. The model needs, however, all factors to be quantified and does
not directly address the problem of articulating the value-creating mechanism. The measuring needed
for quantification is extremely difficult and must often be based on people's opinions rather than real
changes in the business processes. The Critical Success Factor (CSF) approach allows deeper
participation of the key persons in the evaluation process. A detailed model of the effecting
mechanism is, however, difficult to combine with the critical success factors, because such factors
necessarily are defined at a high level of abstraction and for large business units. As in the previous approach, the reality, as the object of evaluation, is often conveniently replaced by opinions about the reality. The fourth approach, MBITI, comes perhaps closest to our idea of addressing the mediating
mechanism. This approach analyses all the main business processes and identifies benefits in terms
of efficiency, effectiveness, and performance. The straightforward approach also aims at finding
means of measuring the benefits and a person responsible for the achievement of these benefits. Yet
it seems that the investment is regarded as something that is separate from the basic activities of the
company. This is exactly the problem in the vast majority of all evaluation approaches known to the author.
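As a reminder of what the first of these methods actually computes, a minimal sketch of NPV for an IT investment follows; the figures are invented purely for illustration:

# Minimal sketch of the Net Present Value (NPV) method for an IT investment.
# cash_flows[t] is the net benefit (benefits minus costs) in year t;
# year 0 normally carries the (negative) up-front investment.
def npv(rate: float, cash_flows: list[float]) -> float:
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Invented example: 100k up-front cost, 40k net benefit per year for
# four years, discounted at 8%; a positive result favours investing.
print(round(npv(0.08, [-100_000, 40_000, 40_000, 40_000, 40_000]), 2))

Note how the mechanism by which the 40k annual benefit arises is left entirely outside the calculation; this is exactly the 'bracketing' criticised above.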
2. Approach to evaluation
2.1 Appropriation
When information technology artefacts and information systems are implemented, they should be
embedded in the activities of the organisation that uses them. This means that each user should
appropriate (Baillette and Kimble) the artefact so that it becomes her "property" in terms of control and mastery, so that it is practically impossible to observe or analyse it separately from the rest of her work. We decide to call this characteristic the "Inseparability principle": IT use and other work tasks constitute an inherent whole. For the evaluation interest this means that whenever the evaluator is able to observe and analyse the IS as a separate entity, this proves that the implementation has failed and the evaluation therefore will necessarily give a poor grade. This statement does not imply that evaluation of IT would be a mission impossible by its nature. But it means that an evaluation
attempt makes sense only when it is integrated with the evaluation of all the activities that the system
is supposed to support.
2.2 Work-centred evaluation
The criteria of evaluation now come from the criteria of the work activities, which (paradoxically enough) reside outside the information system as a technical construct. In order to extract such criteria, we
have to describe the contents of the work activities in terms that are independent of the technical
solution used for its performance. Often it is useful to refer to the abstract functions of IT such as data
storage, communication, and processing information instead of actual concrete means and tools.
From this perspective, the previous situation (with the old IS or without any) and the future (current)
situation appear as two alternative ways to solve the same problem of job design. In other words, they
are directly comparable. In an ex ante evaluation, alternative future systems may be compared pairwise in order to find the most promising candidate. This kind of approach is extremely useful,
since the origins of the expected added value are made visible.
2.3 Relationship between IS and work activities
The argumentation above indicates that in this work we see the connection between work and
information technology according to the inseparability principle. One or another conceptualisation of
this relationship is necessary, if we want to perform any evaluation that pays attention to the
objectives of the work and business activities.
2.4 Work role
The basic building block of this conceptualisation of the relationship between IT and work is the Work
Role. In my early conceptualisations I was striving after a richer notion of work than process thinking
alone can give and therefore I started in opposition to business processes. Recently, I have tried to
find a new understanding of work that is not a mere antithesis of business processes, but is
something more generic, giving some kind of synthesis. The new formulation acknowledges the huge
power of business processes and their representations, but does not accept the claim that the
process view is the whole truth.
2.4.1 Construction of the work role
The skeleton of the work role consists of one or more business processes or parts of them assigned
to the work role. As a part of this, related IS tasks are regarded as an inherent part of the process.
This offers the key to the mechanism of the creation of the benefit of IT, which is crucial for the
evaluation interest.
The work role is not only processes, but the employees also have other duties that can be collected
under the general notion of responsibility. Some more detailed issues on such responsibility can be
identified:
The successful performance alone is not always enough. The actor should also take care of the state of the work situation after the process. The IT tasks are normally expected to leave the databases in the correct state, but the responsible actor is supposed to keep his eyes open and detect any errors or other deviations that may require attention and perhaps correction.
The responsibility of the actor does not concern the primary object of the work only; the work environment and the tools also have to be observed, and any action of maintenance or cleaning should be taken if it is likely to promote future performance. The knowledge and skill of the actor himself is naturally one object of maintenance, occasionally requiring learning or education.
The actor is supposed to maintain a sufficient level of awareness in his work environment. Any factor that may lead to an exceptional routing or performance of the work should be acknowledged and the necessary rearticulation done. One important object of awareness is the recognition of errors and exceptions.
The stereotypical notion of business processes typically assumes that a process is triggered by an event coming from outside, such as a customer entering and requiring attention. The responsible actor may, however, himself identify the need for, say, a maintenance action and initiate an appropriate process, or improvise a new and relevant one.
All these points assume that the actor is trusted and is familiar with the best practices in and around his work role; only then can he take the responsibility given to him.
2.4.2 The three modalities of work
The mechanism of creation and collection of the added value enabled by information systems varies according to the actual modality of work. People work in one or more of the three modalities; indeed, all three are probably present in most work situations, even if one of them has a dominant role. The set of modalities lends itself to the analysis and understanding of the structure and functioning of even complicated settings in working life and, hopefully, also of private spheres of work-like purposeful activities. The three modalities are:
Individual work
Collective work
Services
In the individual work modality the actor works on an individually assigned work process. Some tasks are performed by means of IT. Before starting the action the actor articulates the current task and tailors it to fit the situated factors and his own competence. Requirements (specifications) for the task, as well as the reporting of the completed task, are often mediated by means of the IT. The information system is seen as a tool for the actor. In the collective work modality the actors share their work tasks jointly. Thus they also have to articulate the work jointly: not only does each individual articulate his own lot, but the collaboration itself has to be articulated. This collective aspect is the most frequently used notion of articulation work (Schmidt and Bannon, 1992), following the introduction of the term by Anselm Strauss. The information system typically mediates the collaboration within the collective and can be understood through the metaphor of a Medium.

Individual and collective work can be seen as two poles that span a field of dialectical tension between them. This implies that neither of the two can be thoroughly understood without reference to the other. The third modality, service, is not equally obvious. It has been suggested that service could be seen as a specific form of collaborative work in which the producer and the customer jointly create the added value that is the ultimate purpose of the service. This suggestion ignores, however, the fundamental gap between the two parties of the service. In collective work the team (or group, if you prefer) is characterised by shared objectives to be strived after. In services the two parties are not sitting on the same side of the counter but on opposite sides of it. This asymmetry is further emphasised by the fact that the added value created will typically be the benefit
of the customer or his processes. The two parties may even have conflicting interests: the producer may want to sell the service at a high price whereas the customer is often willing to pay only a lower one.

The character of services is thus distinct from the character of collective work. The distinctive feature is probably the exchange. The difference is more clearly visible in B2B services than in B2C services. In B2B services the customer has externalised some parts of its own processes to be ordered from the provider. In other words, the customer has decided to BUY these parts rather than MAKE them itself (the two main options of exchange presented in Transaction Cost Theory (Williamson, 1981)). In services, two otherwise distinct processes (or practices) of the two parties meet each other to enable the delivery of the service to happen. Successful delivery integrates the service as a part of the customer's processes. After the delivery the two processes of the parties continue their own life cycles. Consequently, the customer is the side that is supposed to be the beneficiary of the added value created in the service (the provider receives the payment). Due to the tension between the parties, the articulation in services takes the form of a contract.
2.4.3 Evaluation
Each modality of work is based on the notion of purposeful activity of the employees working in the organisation or with their service customers. When the IS functions are embedded in the processes and activities of the users, we are able to evaluate the degree to which the IT artefact contributes to the objectives of the activity under study. What is important is that success or failure is evaluated in terms of the activity's goals, not in IT-specific terms of usability. The measurement clearly boils down to a pairwise comparison between the old practice (the old system or a manual one) and the new or future practice.
2.5 IT as service and self-service
In the era of electronic services we have to make clear for ourselves how we understand the role of IT in such services. It is important to notice that the use of computers is a service in itself. Many of us still remember those days in the 1950s and 1960s when organisations built around Computer Centres performed computing services for their customers, who delivered their input data coded on punched cards (later magnetic tapes) and received sheets printed by high-speed printers. Many specialised supporting subservices, such as key-punching and transportation, flourished as parts of fluent service chains.

Electronic data transmission and on-line real-time systems operated by time-sharing operating systems turned this whole service chain into a sequence of self-services. One bank clerk could perform the coding of the transaction and its transformation into machine-readable form, and transfer these data for processing with practically no intervention by a computer operator. The bank clerk produced the IT services in a self-service mode. The IT artefact had become a tool that was fairly well under her control.

The next step in the development of e-services happened when the bank's customer started to work as the operator in entering the details of the desired transaction, first in a corner of the bank hall or vestibule, later at home over broadband networking. Here the entire banking service was moved to self-service. At first glance this seems to contradict our preliminary analysis of the service as one of the three modalities of work. The emerging situation has sometimes been misinterpreted by saying that the customer takes over the work role earlier held by the bank clerk. This is necessarily wrong, because the bank cannot afford to give its customers the privileges of its employees (which are part of the work role). Rather, it is crucial to maintain a clear distinction between the two parties: the service provider and the customer. What the customer does when entering the details of the desired transaction on his screen while doing his home banking is write and deliver a service request to the bank. The bank (or the software representing it) then analyses the request, gives remarks if needed (e.g. too low a balance on the account for paying the requested bill), and finally executes the transaction and gives feedback about it (e.g. a receipt). The fact that there is no identifiable person of flesh and blood on the other side of the desk does not mean that the work role is entirely missing. Whenever an error or unclear transaction occurs, an actor with sufficient expertise and privileges will show up and clear the mess. But for most of the operating hours the responsible clerk has left the desk to the automated processing of routine tasks. This is not much more complicated than the operation of an automatic washing machine, but the point is extremely important for the phenomenon of electronic services. The work role in such services can be left to be performed by
computer software, even if we know quite well that, if something unexpected occurs, a role-keeper in person will enter and solve the emerging problems.
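This division of labour lends itself to a small illustration in code. The Python sketch below is an assumption-laden illustration, not a description of any actual banking system: the account data, messages and escalation mechanism are invented. It shows the pattern described above of analysing the request, giving remarks if needed, executing, giving feedback, and escalating unexpected cases to a human role-keeper.

# Illustrative sketch (not from the paper): the bank side of a self-service
# payment request. All data and messages are hypothetical.
accounts = {"IE12-0001": 120.00}   # account id -> balance
escalation_queue = []              # unexpected cases handed to a human role-keeper

def handle_payment_request(account_id, amount):
    # Analyse the request, give remarks if needed, execute, give feedback.
    try:
        balance = accounts[account_id]
        if amount <= 0:
            return "Remark: amount must be positive."
        if balance < amount:
            return "Remark: balance too low for the requested payment."
        accounts[account_id] = balance - amount
        return "Receipt: paid %.2f, new balance %.2f." % (amount, accounts[account_id])
    except KeyError:
        # The automated role-keeper steps aside; a person with sufficient
        # privileges will later clear the mess.
        escalation_queue.append((account_id, amount))
        return "Your request has been forwarded to a clerk."

# The customer at home writes and delivers service requests:
print(handle_payment_request("IE12-0001", 45.50))
print(handle_payment_request("IE12-0001", 500.00))
print(handle_payment_request("XX-404", 10.00))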

Electronic services in some cases lead to the delivery of physical objects (as in a book store), but the real power of e-services is visible whenever the delivery itself can also be electronic. This happens, for example, when one downloads software or music directly from an Internet site.
3. Odyssey of the individual
The decision to anchor the evaluation to the objectives of work or other purposeful activity is an obvious strength, since it addresses the core issue of the introduction of IT and its legitimation: why should we make these investments? This decision turns out to be a source of problems as soon as we start analysing applications that are not directly connected to such activities. For example, listening to music or playing games is most often done during leisure rather than working hours and lacks clearly articulated objectives that could be used as a reference for evaluation.

The conceptualisation of the three modalities of work presented above helps us to make sense of these activities. We can now regard them as services, more accurately self-services, produced by the customer himself. In these examples, the customer is also the beneficiary of the added value created through the production of the service. He obviously is excited or stimulated by his experience to the extent that he is willing to pay a relevant price for it. We must be aware that the object of an activity is not always outside the actor. For example, learning, training, entertainment and many other leisure activities aim at changing the state of the actor himself.

It seems to be irrelevant that these use situations happen outside work organisations. We just enter the area of Everyday Informatics (EI) (Stringer et al.). This does not mean that we must give up the framework that is based on purposeful activity. Our customer wants to get his stimulation and acts purposefully in order to get it. The main difference lies in the observation of the goals and of goal achievement. This time the criteria of success are internal to the customer; it is, indeed, justified to talk about use experience, since bad music may indicate a failure of this use.

One of the important questions of Everyday Informatics concerns the modelling of human life. In a work environment, it seems appropriate to accept that social practices be described in terms of externalised and purpose-oriented representations, such as process models. The situation is not necessarily equally straightforward in private areas of life. Of course, there are some areas of activity that lend themselves to being conceptualised as projects or similar purposeful activities. Many pathways of care in the citizen's health care are good examples of first identifying the current state (diagnosis) and then designing and deciding a stepwise action plan that aims at reaching the desired state. In a recent paper (Lahtiranta and Nurminen, 2012) I have presented the metaphor of a Health Navigator that, in the spirit of a GPS navigator, supports the patient-citizen in proceeding along the route specified in the care plan.

Even if many activities can be regarded as purposeful activities that can be supported by means of EI,
we should remember that human life probably cannot be reduced to a set of processes.
3.1 Surfing the Internet
There are further uses of information technology that appear even less goal-oriented than many applications of Everyday Informatics. We will discuss two of them: Internet surfing and social media (Facebook).

Websites are designed by their publishers. The structure and contents of a site determine to a great extent what kinds of needs it can meet: its service portfolio. This portfolio is not always visible to the user, who often starts surfing without a clearly articulated set of objectives. There is then no direct way to compare the objectives and their fulfilment. Sometimes it is not easy to decide whether and when the surfing session is completed. It is only natural, then, that most approaches to website evaluation are derived from rather technical usability characteristics, such as graphics, stability, compatibility, speed, reliability and accuracy. I am not saying that these factors are irrelevant, but the mechanism of their impact can be observed as reduced quality of goal attainment. Yet, I do not believe that the ending condition of surfing is entirely subjective, as it was in our discussion of gaming and other excitement as motivation.
The website of an organisation is an intelligent visiting card, telling its visitors essential information about the organisation. As an example, university websites give answers to many relevant questions posed by different stakeholders, students and teachers in the first place. The website can thus be seen as a communication platform between primary stakeholders. If a user has a specific question in mind, he can tell whether his surfing session was successful with respect to this specific need. Another outcome may be the answer that this website does not address this particular question.

The degree of purposefulness can be increased by adding interactive characteristics to the website. For example, a student could take more responsibility for the administration and performance of his studies. He could register and download his personal, tailored study plan and the schedules of the courses in this plan. Such use could support his navigation through the studying space, provided that he has his study navigator. The connection between his own navigator and the website would add purposefulness, because he could then really carry out both the planning and the performance of his studying activities (e.g. deliver exercises electronically). The evaluation could then be based on the success of the goals of the activity.

A similar direction of development could be suggested for many websites serving e-Government. It seems that too often the interface is based on the assumption that the customer has only a dumb terminal with practically no storage or processing capacity. With this equipment, he is supposed to register with the service provider's website. But unfortunately, he is sometimes not even given the opportunity to store his intermediate results in order to continue his work later.
3.2 Social media
The use of social media can be structured in terms of input and output. Each visit to Facebook connects the user to the community: he can see messages delivered by his friends that tell about their state and significant events. He is also expected to leave some messages about himself in order to express his current situation and experiences to the community of his friends. On top of these two reciprocal modes of action there is the opportunity to give and receive feedback and thus maintain the active structure within the community.

Many of these regular visits have the same function as repeated phone discussions with 'How are you doing?' type issues on the agenda. The maintenance of awareness of the state of the community and its individual members also performs a hand-shaking function: the connections are reliably functioning.
3.3 Weakly purposeful activities
The principle of anchoring the evaluation of an IS tightly to the purposeful activity that the IT artefact is supposed to support proved useful, because it enabled the evaluator to identify the point and the mechanism of the creation of the added value. This approach cannot be directly applied to activities that are only weakly purposeful. Above we found this to be problematic in two weakly purposeful use situations: Internet surfing and social media (Facebook). The term weakly purposeful describes the situation by indicating that the purpose is not entirely absent. This can be seen in the fact that the user must decide to initiate the activity. The trigger often comes from the actor himself, and the objectives for a session are perhaps not articulated explicitly. Therefore, it is not easy to determine when the activity is completed and can be finished. It may be still more difficult to give an evaluation of the degree of success (the number of 'like' thumbs on Facebook).

Yet, there obviously is a reason to initiate a session, since otherwise it would never be done. And sooner or later most sessions will end, even if this may happen because the actor gets tired rather than feels the goals fulfilled. The incentives may lie in experienced social pressure rather than in the actor's true need to get something accomplished. These observations lead us to psycho-social analyses of IT uses and their motivations. Nevertheless, something is done during every session; it would obviously be misleading to regard such action as meaningless. On the other hand, traditional usability characteristics (speed, consistency, easy navigation, etc.) alone are not sufficient to tell anything essential about these weakly purposeful actions. This leaves us with a challenge to study this use of IT in more depth. One hint for such studies might be to select the entire session as the basic unit of analysis.
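If the entire session is taken as the basic unit of analysis, the first practical step is to delimit sessions in logged interaction data. The Python sketch below is only an illustration of that step under invented assumptions (the event timestamps and the 30-minute inactivity threshold are placeholders); session-level measures such as event count and duration can then be computed per session.

# Illustrative sketch (not from the paper): grouping logged events into
# sessions by an inactivity gap, so the session becomes the unit of analysis.
from datetime import datetime, timedelta

GAP = timedelta(minutes=30)  # hypothetical threshold that starts a new session

def sessionize(timestamps):
    # Group a list of event timestamps into sessions.
    sessions, current = [], []
    for t in sorted(timestamps):
        if current and t - current[-1] > GAP:
            sessions.append(current)
            current = []
        current.append(t)
    if current:
        sessions.append(current)
    return sessions

events = [datetime(2012, 9, 13, h, m) for h, m in
          [(9, 0), (9, 5), (9, 20), (13, 0), (13, 40), (13, 42)]]
for s in sessionize(events):
    print(len(s), "events, duration", s[-1] - s[0])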

4. Conclusions
The evaluation of information systems cannot ignore the objectives of the activity they are intended to support. The
usability characteristics of the system are not sufficient alone if the user cannot perform his work with
acceptable quality and efficiency. This statement calls for two frontiers of future research. The first is
the generation of the representation of the objectives of work in a form that supports the evaluation
process better than today. The second is the conceptualisation of the evaluation criteria for situations
of weakly purposeful activities. In the subsequent discussion some preliminary ideas for both are
suggested.
5. Discussion
During the requirements specification the objectives for the system are specified, often in detail. These objectives are not always identical to, or even consistent with, the objectives of the activity to be supported. At best, the system objectives are derived from the activity objectives, but with too little consideration of the mechanism that transforms the system benefits back into activity benefits. In this paper I have suggested the notion of the work role, which is richer than the process model, as a point of departure for creating a representation of human work that, together with the three modalities of work, lends itself to operational use also for evaluation purposes.

The weakly purposeful activities pose a serious challenge that is too many-sided to be solved here. Since in this paper I have been in favour of a representation of objectives in strongly purposeful activities, I would be unhappy to end up with a fully psychologised approach to the evaluation of weakly purposeful activities. Quite the contrary: I believe that there is an implicit structure of objectives. Perhaps we could use some of the approaches suggested in the Knowledge Management school for making actors' tacit knowledge visible to others. The most promising strategy for future research on this problem is, in my thinking, based on backward tracking of the activity. In other words, after a finished session the actor could be asked about his satisfaction and the reasons for it. Such interviews could make the implicit objectives visible to both the actor interviewed and the interviewer.
References
Andresen, Jan L. (2001) A Framework for Selecting IT Evaluation Method in the Context of Construction. PhD Thesis, Danmarks Tekniske Universitet. ISBN 87-7877-069-6.
Baillette, Paméla and Kimble, Chris. "The Concept of Appropriation as a Heuristic for Conceptualising the Relationship between Technology, People and Organisations", http://arxiv.org/pdf/0804.2847.pdf (reviewed April 22, 2012).
Lahtiranta, Janne and Nurminen, Markku I. (2012) "PHR Revisioned: navigating in the personal health space". Manuscript delivered for publication.
Lyytinen, Kalle (1988) "Expectation Failure Concept and Systems Analysts' View of Information System Failures: Results of an Exploratory Study", Information & Management, Vol 14, pp 45-56.
Schmidt, Kjeld and Bannon, Liam (1992) "Taking CSCW Seriously: Supporting Articulation Work", Computer Supported Cooperative Work, Vol 1, No 1-2, pp 7-40.
Stringer, Mark, Halloran, John, Hornecker, Eva and Fitzpatrick, Geraldine. "Situating Ubiquitous Computing in Everyday Life: Some Useful Strategies", http://www.informatics.sussex.ac.uk/research/groups/interact/publications/stringer_ubicomp05.pdf (reviewed April 22, 2012).
Williamson, Oliver E. (1981) "The Economics of Organization: The Transaction Cost Approach", The American Journal of Sociology, Vol 87, No 2, pp 233-.
Infusion of Mobile Health Systems in the NHS: An
Empirical Study
Yvonne O'Connor¹, John O'Donoghue² and Phillip O'Reilly¹
¹Department of Accounting, Finance and Information Systems, University College Cork, Ireland
²Health Information Systems Research Centre, University College Cork, Ireland
y.c.oconnor@umail.ucc.ie
john.odonoghue@ucc.ie
phillip.oreilly@ucc.ie

Abstract: Frequently criticised as a technological laggard, the healthcare industry is now beginning to appreciate the benefits which can be obtained from adopting Mobile Health Systems at the point-of-care. As a result, healthcare organisations are investing heavily in mobile health initiatives with the expectation that individual users will employ the systems to enhance performance. However, researchers argue that such benefits can only be fully realised if the technological innovation is infused within an individual's work practice. A synopsis of the state of the field in mobile system implementation research reveals that little is known about Mobile Health Systems infusion. Infusion is a distinctive feature of the Cooper and Zmud (1990) model, reflecting the extent to which a technological innovation is fully embedded in an individual's work system through comprehensive and integrative use. However, a review of the extant literature reveals that infusion is inconsistently defined and under-investigated, with a lack of literature focusing on Mobile Health Systems infusion. This paper makes a number of contributions to the literature. It provides a comprehensive definition of infusion and presents a conceptual model exploring infusion of Mobile Health Systems. Through an exploratory study of Mobile Health Systems implementation in Britain's National Health Service, the presented model is empirically investigated. By identifying and highlighting issues affecting infusion, future research efforts can focus on how such issues can be overcome. The paper concludes with a checklist of critical success factors which healthcare organisations should consider in order to successfully infuse Mobile Health Systems within their organisation.

Keywords: mobile health, infusion, critical success factors, NHS
1. Introduction and theoretical grounding
Over the last decade the application of Information Technology (IT) in health care has grown greatly
and its potential to improve effectiveness and efficiency has been recognised by governments
globally (Institute of Medicine, 2001). This is reflected in organisations worldwide investing heavily in
the implementation of technological innovations. For example, engagements in Swedish e-health
initiatives cost the healthcare sector approximately 700 million annually (Ministry of Health and
Social Affairs, 2010). Recent developments in healthcare have witnessed the emergence of
ubiquitous computing, namely Mobile Health Systems (MHS). MHS is defined for the purpose of this
paper as any mobile handheld device running medical applications which are used as part of clinical
practice. The rise in implementing Mobile Health Systems is reflected in the marketplace whereby the
m-Health industry is valued between $50 billion and $60 billion globally (McKinsey, 2010).

It is well established in the Information Systems (IS) literature that long-term success of a
technological innovation depends upon its adoption and continued use (Bhattacherjee, 2001).
However, it is argued (Cooper and Zmud, 1990) that the true potential of technological innovations
can only be achieved through infusion. Yet, despite substantial research on the adoption and continued use of MHS, there remains a dearth of literature focusing on how infusion can be achieved by individuals (O'Connor et al., 2012).

In order to address this gap in the literature, this paper explores individual infusion in order to identify and highlight issues affecting infusion and to present a checklist of critical success factors which healthcare organisations should consider in order to successfully infuse MHS. The paper is structured as follows. MHS infusion is discussed (section 1.1), whereby a comprehensive definition for the concept of infusion is presented. The subsequent section (section 2) focuses on an Individual Mobile Health Infusion Model. This conceptual model, which draws upon and extends extant literature, is operationalised using a case study approach (section 3). Section 4 presents the findings, leading to seven critical success factors for individual infusion of MHS. Section 5 presents the key implications for theory and practice of this study and discusses the potential for future research within individual m-health infusion.

1.1 Mobile health system infusion: Definition
Cooper and Zmud (1990) proposed a six-phase model of IT implementation. These stages comprise initiation, adoption, adaptation, acceptance, routinization and infusion. As a dearth of research exists which focuses on MHS infusion, this study primarily focuses on the final phase of the Cooper and Zmud (1990) model, namely infusion.

The concept of IT infusion has been studied by numerous authors at various levels of analysis in diverse academic disciplines. Although renowned as the final stage in Cooper and Zmud's (1990) diffusion model, no comprehensive definition of IT infusion has been established. Prior studies have defined IT infusion at two levels: organisational and individual. Initially, when the concept of IT infusion emerged in the IS literature it was studied by many scholars (for example, Cooper and Zmud, 1990; Zmud and Apple, 1992; Saga and Zmud, 1994) at the organisational level. As a result it was often defined as "increased organisational effectiveness obtained by using the IT application to its fullest potential" (Cooper and Zmud, 1990, pp. 124-125). Others have defined infusion as the degree to which IT has penetrated a company in terms of importance, impact, or significance (Sullivan, 1985), as the integration of technology with existing business processes (Eder and Igbaria, 2001, p. 234), or as the extent to which an innovation is used completely and effectively and improves the organisation's performance (Wynekoop and Senn, 1992).

It is argued (Fadel, 2006) that organisational infusion of any technological innovation can only be achieved as individuals infuse the technology into their own work practices. This is further reinforced by Sundaram et al. (2007), who argue that before an organisation can optimise an Information System's potential it should first optimise the potential of individual users. This rationale led to the modification of existing definitions of infusion in the extant literature to reflect the individual user rather than the organisation. As a result, infusion at an individual level is defined by Fadel (2006) as the extent to which an information system is used completely and effectively and improves the individual's performance. Similarly, Jones et al. (2002) define individual IT infusion as the extent to which an individual user (a salesperson in their study) uses technology (Sales Force Automation) to its fullest extent to enhance their productivity. A careful examination of these definitions suggests that comprehensive, integrative and inclusive use are the defining features of IT infusion (Yu et al., 2009; O'Connor et al., 2011). Therefore, as this study investigates infusion at an individual level, it is defined herein as: the degree to which individual users employ the full potential of a mobile health system within their work practices through comprehensive and integrated use (adapted from O'Connor et al., 2012).

The remainder of this paper is structured as follows. A model of the determinants associated with MHS infusion is presented (section 2). This Individual Mobile Health Infusion Model, which draws upon and extends extant literature, is utilised as the basis for identifying technological issues impacting infusion (section 4). Seven critical success factors are presented and described for healthcare practitioners to follow if they wish to infuse MHS as part of their work practices. Section 5 presents the key implications for theory and practice of this study and discusses the potential for future research within individual MHS infusion.
2. Mobile health system infusion: A conceptual model
In order to enhance studies on IT infusion many researchers (Jones et al., 2002; Wang and Hsieh,
2006; Hsieh and Wang, 2007; Ramamurthy et al., 2008; Ng and Kim, 2009; Wu and Subramaniam,
2009) turn to existing theories/models in the IS field to identify antecedents to infusion. However,
analysis of existing infusion models revealed their unsuitability for investigating MHS infusion among
individuals, with such models primarily focused on infusion of stationary technologies.

Acknowledging that MHS differ from their static counterparts, O'Connor et al. (2012) proposed an Individual Mobile Health Infusion (IMHI) model (Figure 1). These authors argue that successful infusion is determined by three characteristics, namely technology, user and task characteristics. However, this paper focuses on three dimensions (i.e. Availability, Maturity and Portability) associated with technological characteristics. Given the important role that healthcare technology plays in the delivery of quality healthcare services (S-Mohamadali and Garibaldi, 2012), it is important to investigate technology characteristics. The authors recognise that there are a number of other categories presented in the conceptual model which could have been explored. However, at an
individual level, infusion is not likely to occur unless the user of the technology engages with the
technology (Fadel, 2007).

Figure 1: Individual mobile health infusion (Source: O'Connor et al., 2012)
Technology characteristics refer to specific features, functionality, or usability of a technology that can affect its infusion by target users (adapted from Agarwal and Venkatesh, 2002). This characteristic has the following dimensions: availability, maturity and portability. Availability is the ability to access the mobile health system when required. Maturity implies the existence of a level of system quality that is perceived as satisfactory (Triandis, 1980) and a perceived need for system improvement (Gebauer, 2008) on the part of the user; it is therefore conceptualised as the perceived need for system improvements by an individual (Gebauer, 2008). Portability refers to the degree of ease associated with transporting the mobile health system (Hoehle and Scornavacca, 2008).
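As a reading aid only, the three dimensions could be operationalised as a simple coding structure when analysing case evidence. The Python sketch below is an illustrative assumption by way of example, not an instrument used in this study; the field names and the example record are invented.

# Illustrative sketch (not from the paper): coding the three technology
# dimensions of the IMHI model for one interviewee's account.
from dataclasses import dataclass

@dataclass
class TechnologyCharacteristics:
    available: bool          # could the MHS be accessed when required?
    needs_improvement: bool  # did the user perceive a need for improvement?
    portable: bool           # was the device easy to transport during work?

def infusion_issues(t):
    # Return the dimensions flagged as problematic for infusion.
    flagged = []
    if not t.available:
        flagged.append("availability")
    if t.needs_improvement:
        flagged.append("maturity")
    if not t.portable:
        flagged.append("portability")
    return flagged

record = TechnologyCharacteristics(available=False, needs_improvement=True, portable=False)
print(infusion_issues(record))  # ['availability', 'maturity', 'portability']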
3. Research methodology
The objective of this research is twofold: (1) to identify and highlight issues affecting infusion and (2) to present a checklist of critical success factors which healthcare organisations should consider in order to successfully infuse MHS. This is achieved through an exploratory study of MHS infusion in Britain's National Health Service, whereby the conceptual model highlighted previously is empirically investigated. Given the exploratory nature of this study, case study methods are an appropriate approach with which to investigate the research objective. A single qualitative case study method is favourable given the research objective and the current gap in the literature. Marshall and Rossman (1989) indicate that when the state of knowledge in a field is at an early stage of investigation, a need exists for the research purpose to focus on discovery and to be exploratory in nature. Galliers (1992) states that for such an exploratory approach a case study is a valid research method. The case study approach enables the researcher to investigate and capture the reality of the phenomenon (Yin, 1994).

The National Health Service case was chosen as it represents a critical case with regard to understanding the determinants of infusion of MHS in a healthcare environment. Data were gathered over a one-month period in October 2011. University Hospital Birmingham National Health Service Foundation Trust (UHBFT) is one of the most consistently high-performing trusts in the National
Health Service and has been rated "excellent" for the quality of its clinical and non-clinical services by the Healthcare Commission. UHBFT began using tablet technology ten years ago and currently has over 500 tablets in operation within the Trust. Over ten hours of interviews were conducted onsite with a broad spectrum of healthcare practitioners, ranging from the clinical lead in pharmacology, nurses and PICS (Prescribing Information and Communication System) training personnel to pharmacy technicians. Applying case study techniques, the primary sources of empirical data consisted of interviews and the collection of numerous documents pertaining to MHS in the National Health Service. The MHS used in UHBFT comprised a Mobile Clinical Assistant (MCA) running the Prescribing Information and Communication System.
4. Findings
This section identifies technological issues affecting infusion of MHS and proposes a checklist of critical success factors which healthcare organisations should consider in order to successfully infuse MHS within their organisation. Healthcare practitioners indicated that technological dimensions such as availability, technology maturity and portability of MHS are pertinent to individual infusion. Each dimension is discussed in more detail below.
4.1 Issues affecting infusion - availability
MHS are expensive (approximately 2,000 each), so the MCAs represent a relatively large investment. As a result, some healthcare practitioners are required to share the equipment. Although most wards within the hospital were assigned 15 MHS, there were often reports that some were missing from a ward. This issue is reflected in comments by various healthcare practitioners, who stated that there have been problems with "wandering tablet PCs. I don't know where they go. I did a ward last week that should have had 15 and they had 7." It was revealed that some healthcare practitioners who work on various wards carry one MCA with them to different wards. As a result, some healthcare practitioners had to find an available Mobile Health System to work with, which was considered a time-consuming activity.

Moreover, there were frequent reports of malfunctions associated with MCAs. Our findings reveal that this is more a social issue than a technological one. This is exemplified in a comment by one medical practitioner, who stated: "we have more out of action because people haven't bothered to report any problems with battery life or stylus missing". While observing individual users of MCAs it became apparent that the styluses were being removed by a small minority of healthcare practitioners and stored in their pockets. As a result, other users would have access to the MCA but would have no method of inputting data, thus making the device redundant. As revealed in the previous comments, there are also concerns about battery performance. Similarly, this is a social rather than a technical issue. The findings show that the MCA is often not put into the charger correctly and that no one is willing to take accountability for the charging of such devices; as one interviewee revealed, "they are not a priority for the nurses, they are flying around all day".

MHS are ubiquitous in nature; therefore, such technologies require a stable underlying infrastructure (i.e. Wi-Fi) to operate. It was reported that initially the Wi-Fi was "a bit flaky" with relatively few black spots. This affected the use of MHS within the hospital. However, this issue was rectified, and the majority of the individuals interviewed agree that the Wi-Fi environment is currently stable, with less than 0.07% downtime running PICS (Prescribing Information and Communication Systems) over the last 8 years.
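To put the reported figure in perspective, a rough back-of-envelope calculation (assuming continuous 24/7 operation, which the interviewees do not state) translates 0.07% downtime into hours:

# Back-of-envelope check (not from the paper) of what 0.07% downtime means.
HOURS_PER_YEAR = 365.25 * 24   # about 8766 hours
downtime_fraction = 0.0007     # 0.07%
per_year = downtime_fraction * HOURS_PER_YEAR
print("about %.1f hours/year, about %.0f hours over 8 years" % (per_year, per_year * 8))
# -> about 6.1 hours/year, about 49 hours over 8 years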

Moreover, in a healthcare environment mobile devices are often locked down. Locking down mobile devices refers to limiting access to the applications and features of the mobile artefact. Due to privacy and security concerns the management team decided to lock down some features of the MHS. Therefore, users cannot use the MCA for other purposes. This hindered high infusion of the mobile artefact, as individual users were limited in terms of exploring the system. As a result, some interviewees stated: "I don't think we use them enough". It is noteworthy, however, that certain individuals have access to a testing environment in which they are free to explore the features of the PICS application. This is exemplified in comments made by various interviewees (a nurse, pharmacists and the clinical lead in pharmacology). For example, one pharmacist stated that he explored using the "training domain".

4.2 Issues affecting infusion - technology maturity
It is evident from the last comment that the technology has been in place within the healthcare organisation for a long period of time (over 10 years). However, some individuals within the organisation indicated that there was a need for system improvements. Comments from one pharmacist indicate the need to update the system to meet their daily work practices. According to this individual, "the software we feel from a pharmacy point of view needs a lot of changing to it, a lot more up to date things on there for example". Acknowledging that the system did initially meet her needs ("when we first started using the PICS, having the electronic prescribing, it was brilliant"), the interviewee stated that "the way we work has changed and will change again" and that this is not reflected in the MHS. Although changes are frequently made to the MHS, the analysis revealed that the process of getting anything changed takes time. Moreover, although the Mobile Health System identifies some of the changes that have been made, it is the responsibility of the user to notice that they have changed. These findings reveal the need to adapt the MHS to accommodate changing work practices and the need for change management.
4.3 Issues affecting infusion - portability
In a healthcare environment clinical work is highly mobile. Health professionals move frequently among wards, clinics, offices and other locations, and require patient-related information at each of these locations. Comments from various healthcare practitioners indicate that the ergonomics associated with MHS are a concern. MHS must be designed to accommodate an individual's work practices. Some people (junior doctors, pharmacists and nurses) feel that the device itself is "too unwieldy", that "it is too heavy", and that this is having a negative impact on existing users, with complaints of "having problems with their neck and shoulders" and that it "causes a lot of wrist pain". As a result, some healthcare practitioners resorted to the use of COWs (Computers-On-Wheels). However, due to ward space limitations, those users who resorted to COWs were also restricted in their use. These findings clearly indicate that the portability of such devices impacts their use.
4.4 Critical success factors (CSF) for mobile health infusion by individuals
This section concludes by leveraging the discussion presented in the previous section to derive a set
of seven macro-CSFs which require particular attention from managers in order to ensure MHS are
infused by individuals. These seven CSFs for individual infusion of MHS are described below:
CSF 1: Ensure the Mobile Health System adapts to the user's work/task practices
This ensures that users of MHS will continue using the MHS. Evidence supports the view that when MHS provide alternative or supplementary products or services, and little effort is required to learn new operations or change behaviour, users are likely to continue using the systems (Chen and Adams, 2005). In our findings the majority of interviewees indicated that they did not have to change their work practices and thus were happy to utilise the MHS. Nonetheless, it was also evident that the work practices of certain individuals within the organisation changed over time. Initially the MHS met the needs of these healthcare practitioners, but eventually their needs changed and the MHS did not adapt to the changes in the users' work/task practices. Therefore, MHS must be continuously adapted to the users' work/task practices.
CSF 2: Establish a Change Management Protocol
This ensures that any changes made to the MHS will be communicated to the relevant parties. Any technological innovation which has been in place for a long period of time commonly involves some element of change. Therefore, it is important to control change. Having a dedicated team to promote and communicate changes to individual users is pertinent for infusion into work practices. The evidence suggests that communication between the dedicated team and the individual users of mobile health systems is critical in controlling change management.
CSF 3: Ensure the Mobile Health System is fit for purpose and a stable infrastructure exists
This ensures that the MHS will be used in a comprehensive manner. In our findings interviewees stipulated that the ergonomics of the MHS were an issue. Some interviewees believed that the handheld mobile devices were heavy and that their portability was thus insufficient for daily use. As a result, they often resorted to COWs. Therefore, in selecting mobile hardware devices, consideration should be given to the individuals who will be using them. Although selecting the hardware devices is pertinent, it is imperative that healthcare practitioners can access relevant patient data via the devices when required. Therefore, it is integral to have a stable, underlying Wi-Fi infrastructure in
place. Without a stable infrastructure the MHS cannot be used, thus impacting upon individuals' infusion of the device.
CSF 4: Train users of Mobile Health Systems to maintain the Mobile Health System
This ensures that MHS will be available when required. It is evident from the findings that key issues surrounding the physical attributes of the MHS (namely, poor battery life and missing device accessories, i.e. styluses) restrict users from infusing mobile health systems as part of their clinical practice. The findings revealed that these issues were social rather than technical. Therefore, it is pertinent to provide adequate training to users on maintaining the MHS in order to overcome such issues.
CSF 5: Ensure that there is a dedicated team to support Mobile Health Systems
This ensures total commitment to the MHS at an organisational level. It is important to realise that infusion of mobile artefacts will not occur overnight. In this case study, the MHS (i.e. the MCA with PICS) was implemented initially back in 1999, with some key players championing the implementation. According to the clinical lead in pharmacology, having key players involved in the implementation process is integral: "without that it would have been terribly difficult so that's why it survived". A key reason underlying the success of the MCA running PICS to date is the establishment of a PICS development team. This team is responsible for organising weekly meetings with various medical practitioner groups (for example, the PICS steering group and the PICS nursing group) to discuss any updates or amendments made to the electronic prescribing systems.
CSF 6: Saturate the organisation with Mobile Health Systems
This ensures that individual users have the possibility to interact with the MHS. Evidence supports the view that users who have a propensity to spend more time on a system will learn new ways of exploiting the system's capabilities or become more adept at discovering more efficient ways of using systems outside their original use (Jain and Kanungo, 2006). However, some healthcare organisations do not have a sufficient number of MHS available to end users. It is therefore imperative to have enough MHS for end users to interact with.
CSF 7: Promote comprehensive and integrative use of Mobile Health Systems by creating a safe environment in which to exploit the system
This ensures that users go beyond routine and standardised usage of MHS. As a result, these individuals achieve a higher level of usage that may allow them to exploit the fullest potential of the system. Evidence exists which highlights that individual users are reluctant to explore any system in a healthcare organisation due to the ill effects their actions may have on the delivery of healthcare services to patients at the point-of-care. A safe environment (e.g. a training environment with dummy results) where individuals can go beyond routine and standardised use should therefore be established. The existence of a safe environment is critical to ensuring that individuals have the opportunity to explore the system.
5. Concluding remarks: Implications and further research
Achieving individual infusion of any technological innovation is a difficult process. As a result, this
paper focuses on individual infusion of Mobile Health Systems (MHS). This paper describes the
issues associated with an individuals infusion of MHS. By identifying and highlighting issues affecting
infusion, future research efforts may focus on how such issues can be overcome.

This study offers several implications for both theory and practice. The first contribution to theory is the presentation of a comprehensive definition of infusion. Analysis of the literature pertaining to infusion revealed inconsistencies in what constitutes the term infusion. Based on this analysis, infusion refers to the degree to which individual users employ the full potential of a mobile health system within their work practices through comprehensive and integrated use. Secondly, this paper is among the first to explore MHS infusion by individuals, more specifically in a healthcare organisation. In doing so, the research findings presented in this paper contribute to theory development in IS by adding to the current, limited understanding of individual infusion of MHS.

This study has potentially significant implications for organisations looking to invest in m-health technologies and for those seeking to infuse MHS as part of their daily work practices. A set of seven macro-CSFs is derived from our findings which, if followed, should in theory result in the successful infusion of MHS at the individual level. Whilst this study has contributed to the domain of individual infusion of MHS, it has its limitations and requires further research. Firstly,
the findings are derived from the use of a single case study. It is argued that single case studies limit
the generalizability of the results. Secondly, the seven derived CSFs are primarily associated with
technological characteristics affecting infusion. As a result, the seven CSFs are not an exhaustive list
of CSFs for achieving individual infusion. Further research is required to investigate CSFs associated
with user and task characteristics. It is only then that a full, exhaustive list of CSFs can be presented
to management to ensure individual infusion of MHS is achieved.
Acknowledgements
This research was partially funded by Business Information Systems, Conference Travel Support Scheme, University College Cork, the Health Information Systems Research Centre (HISRC), and by Science Foundation Ireland (SFI) grant SFI 11/RFP.1/CMS/3338.
References
Agarwal, R. and V. Venkatesh (2002) "Assessing a Firms Web Presence: A Heuristic Evaluation Procedure for
the Measurement of Usability", Information Systems Research, 13(2): 169.
Chen, J.J. and C. Adams (2005) "User Acceptance of Mobile Payments: A Theoretical Model for Mobile Payments", Proceedings of the Fifth International Conference on Electronic Business (ICEB), Hong Kong, December 5-9, 2005.
Cooper, R.B. and Zmud, R.W. (1990) "Information Technology Implementation Research: A Technology Diffusion Approach", Management Science, 36(2), pp. 123-139.
Eder, L. B. and M. Igbaria (2001) "Determinants of intranet diffusion and infusion", Omega 29(3): 233-242.
Fadel, Kelly, (2006) "Individual Infusion of Information Systems: The Role of Adaptation and Individual
Cognitions". AMCIS 2006, Proceedings Paper 38.
Fadel, K.J. (2007). Infusion of Information Systems: The Role of Adaptation and Individual Cognitions, Ph.D.
Dissertation, The University of Arizona Graduate College, 2007.
Gebauer, J. (2008) "User requirements of Mobile Technology: A summary of Research Results." Information,
Knowledge, Systems Management 7(1): 101-119.
Hoehle, H. and E. Scornavacca (2008) "Unveiling Experts' Perceptions towards the Characteristics and Value Propositions of Mobile Information Systems", 7th International Conference on Mobile Business, IEEE, pp. 334-343.
Hsieh, J. J. P. A. and W. Wang (2007) "Explaining Employees' Extended Use of Complex Information Systems",
European Journal of Information Systems 16(3): 216-227.
Institute of Medicine (2001) Crossing the Quality Chasm: A New Health System for the 21st Century.
Washington, DC: National Academy Press.
Jain, V. and S. Kanungo (2006) "IS-enabled performance improvement at the individual level: evidence of complementarity", Proceedings of the 2006 ACM SIGMIS CPR conference on computer personnel research, ACM, New York, NY, USA.
Jones, E., Sundaram, S., and Chin, W. (2002) "Factors Leading to Sales Force Automation Use: A Longitudinal
Analysis," Journal of Personal Selling & Sales Management 22(3), pp. 145-156.
McKinsey and GSMA (2010) mHealth: A new Vision for m-health, McKinsey & Company, Inc. and GSMA, 2010. Available: http://www.gsmaembeddedmobile.com/upload/resources/files/GSMA%20McKinsey%20mHealth_report.pdf
Ministry of Health and Social Affairs (2010) National eHealth the strategy for accessible and secure information
in health and social care Available online: http://www.sweden.gov.se/content/1/c6/16/79/85/8d4e6161.pdf
Ng, E.H. and H.W. Kim, (2009) Investigating Information Systems Infusion and the Moderating Role of Habit: A
User Empowerment Perspective, Proceedings of International Conference of Information Systems, ICIS
2009 Proceedings. Paper 137
O'Connor, Y., J. O'Donoghue and P. O'Reilly (2011) "Understanding Mobile Technology Post-Adoption Behaviour: Impact upon Knowledge Creation and Individual Performance", Tenth International Conference on Mobile Business (ICMB), 2011.
O'Connor, Y., P.J. O'Raghailligh and J. O'Donoghue (2012) "Individual Infusion of M-Health Technologies: Determinants and Outcomes", ECIS 2012, in press.
Ramamurthy, K., A. Sen and A.P. Sinha. (2008). Data Warehousing Infusion and Organizational Effectiveness.
Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions, 38(4): 976-994.
S-Mohamadali, N. A. K. and J. M. Garibaldi (2012) Understanding and Addressing the 'Fit' Between User,
Technology and Organization in Evaluating User Acceptance of Healthcare Technology, HealthInf 2012
International Conference on Health Informatics: 119-124.
Saga, V. L. and R. W. Zmud (1994). "The nature and determinants of IT acceptance, routinization and infusion",
Diffusion Transfer and Implementation of Information Technology 45(A-45): 67-86.
Sullivan, C. H. (1985) "Systems planning in the information age", Sloan Management Review 26(2): 3-12.
Sundaram, S., Schwarz, A., Jones, E. and W. Chin (2007) "Technology use on the front line: how information technology enhances individual performance", Journal of the Academy of Marketing Science, 35(1), pp. 101-112.
Triandis, H. C. (1980) Values, Attitudes, and Interpersonal Behavior, in H.E. Howe (ed.), Nebraska Symposium
on Motivation, 1979: Beliefs, Attitudes and Values, Lincoln, NE: University of Nebraska Press, pp. 195-259
Wang, Wei and Hsieh, Po-An (2006) "Beyond Routine: Symbolic Adoption, Extended Use, and Emergent Use of Complex Information Systems in the Mandatory Organizational Context", ICIS 2006 Proceedings, Paper 48.
Wu, X. and C. Subramaniam. (2009) Understanding RFID Adoption in Supply Chain: An Empirical Study,
Proceedings of Forty Second Hawaii International Conference on Systems Science, January 5-9, 2009.
Wynekoop, J. L. and J. A. Senn (1992) Case Implementation: The Importance of Multiple Perspectives,
SIGCPR '92 Proceedings of the 1992 ACM SIGCPR conference on Computer personnel research, ACM.
Yu, S., A. N. Mishra, et al. (2009) IT Infusion and its Performance Impacts: An Empirical Analysis of
eProcurement in the Service Industry, ICIS 2009 Proceedings. Paper 121.
Zmud, R. W. and L. E. Apple (1992) "Measuring technology incorporation/infusion." Journal of Product Innovation
Management 9(2): 148-155.
An Exploratory Study of Innovation Intermediation in IS Education
Brian O'Flaherty and Joe Bogue
Departments of Business Information Systems, and Food Business and Development, University College Cork, Cork, Ireland
boflaherty@afis.ucc.ie
j.bogue@ucc.ie

Abstract: The importance of innovation to economies across the world has been widely documented, and this has been particularly true in Irish terms in the development of a knowledge-based economy (SSTI, 2006). One of the ways of facilitating an innovation culture is through third-level education and students taking innovation and entrepreneurship programmes or modules. Innovation education is a central part of third-level education globally (O'Gorman and Fitzsimons, 2007; Streeter and Jaquette, 2004) and manifests itself at different undergraduate and postgraduate levels and across different disciplines, such as ICT, Engineering and Food. There is clear evidence of significant innovation and entrepreneurship activity in Ireland (Cooney and Murray, 2008) and worldwide (World Economic Forum, 2009), yet there is limited research into the organisational or process roles of participants in this area, and even less cross-disciplinary comparative reflection. The objective of this paper is to investigate the innovation intermediation role of IS student enterprise teams, identify the intermediation processes adopted, explore sources of innovation, examine the practice and effectiveness of the process, and consider the comparative cross-disciplinary implications in relation to the food sector. Innovation intermediation (Howells, 2006) is the theoretical sensitising lens that underpins this research study. This perspective is widely cited and has been applied to numerous research areas, such as technology transfer, innovation and networks. Innovation intermediation can be viewed in terms of organisational roles, such as bridge builders, technology brokering, surrogate ties and diffusion facilitation. It can also refer to a range of functions in innovation processes, such as scanning, knowledge creation, testing, validation and commercialisation. The innovation intermediation model is therefore an ideal device for articulating this comparative case study research.
Keywords: innovation intermediation, IS education, entrepreneurship, student enterprise
1. Introduction
This paper presents research from a study of IS student enterprise projects and looks at the
intermediation role they play in the innovation process. In addition, this intermediation role is analysed
in comparison to similar student enterprise processes undertaken by Food Business students.
1.1 Entrepreneurship and innovation
The role of entrepreneurship education in the development of entrepreneurial societies across the
world is central to economic development. According to Hisrich et al. (2005), entrepreneurship education has been endorsed by educational institutions and has never been so important to societies. Goodbody Economic Consultants (2002) found that entrepreneurs saw the Irish educational system as having played a very limited role in practice, and that there was very little direct focus on entrepreneurship and innovation within the Irish educational system. In addition, Fitzsimons and O'Gorman (2005: 4) in the Global Entrepreneurship Monitor 2005 recommended that: "The education sector should be harnessed in a systematic way across all disciplines to increase entrepreneurial mindsets and to enhance the capacity of those who decide to become entrepreneurs."

Hisrich et al. (2005) noted that many universities across the United States had courses in entrepreneurship and that the courses could be found in liberal arts colleges, business schools and engineering schools. Some US universities also offer majors in entrepreneurship, and entrepreneurial skills can be classified into three main categories: technical skills, management skills, and personal skills. Hisrich et al. (2005) attribute this relatively new phenomenon to changes in technology, computing and competition. They also noted that few students in college expect to pursue entrepreneurship as their major life goal; although that figure had increased, graduates generally did not start a business directly after their university studies (Hisrich et al., 2005). Education is also a central aspect of entrepreneurship and, contrary to popular opinion, entrepreneurs are not less educated than the general population (Hisrich et al., 2005). In fact, both male and female entrepreneurs are more educated than the general populace. This highlights the importance of entrepreneurship education and also the need for research of this nature.
The importance of entrepreneurship to the Irish economy is well documented, but this has not been mirrored in the education system. Fitzsimons and O'Gorman (2005) noted that there was a need to develop the entrepreneurship agenda in the education system, to develop creativity within students in terms of the innovation process, and to then link this creativity with the concept of entrepreneurship. They also suggested that, at third level, there should be more emphasis on developing entrepreneurship and innovation modules across all degree programmes in science, technology and business studies (Fitzsimons and O'Gorman, 2005).
1.2 Innovation intermediation
The role of intermediaries in entrepreneurship and the innovation process is significant in terms of the
role they play in facilitating the innovation process and ultimately in the commercialisation of new
technologies. Howells (2006) noted that the different actors who perform a variety of tasks along the innovation process are called intermediaries. Innovation intermediation is defined as a role between creators and users of inventions (Hoppe and Ozdenoren, 2005). Howells (2006) defines the innovation intermediary as "an organisation or body that acts as an agent or broker in any aspect of the innovation process between two or more parties". Interest in the role of intermediaries has emerged in various areas, such as: 1) the technology transfer literature; 2) the general innovation management of intermediation and the organisations that deliver this function; 3) the innovation systems literature; and 4) service and Knowledge Intensive Business Services (KIBS) firms (Howells, 2006). Many intermediaries exist, such as University Technology Transfer Offices, which assess the commercial value of technologies and seek investors with the resources and interest to further exploit the technology. Similarly, venture capitalists, who allocate resources to entrepreneurs, take on an intermediary role, typically in industries with high-risk information gaps between technology and markets. Innovation intermediation has also been in existence for some time: although intermediaries can be traced back to middlemen in the agricultural, wool and textiles industries of the sixteenth century, the role as it exists in the innovation process has only been significantly recognised in recent years (Howells, 2006).
The Innovation Intermediation literature has been applied in numerous ways and across many
industry and domain areas (Howells, 2006). As a conceptual phenomenon it underpins research
explaining quite an extensive range of inter-organisational structures, such as: innovation
intermediaries and internet market places (Lichtenthaler and Ernst, 2008); knowledge intermediation
between Universities and businesses (Yusuf, 2008); academic inventors as brokers (Lissoni, 2008);
intermediaries in cross-industry innovation processes (Gassmann et al., 2011); the role of incubators
as intermediaries in knowledge transfer processes in product development in small technology parks (Saari and Haapasalo, 2012); and intermediation roles between Universities and businesses
facilitating knowledge creation in an innovation system (Metcalfe, 2009).
A number of authors have applied intermediation to the agricultural and food business areas, with
regional development outcomes, by considering the bridging function between supply and demand
side of agricultural knowledge infrastructures (Klerkx and Leeuwis, 2008), as well as innovation brokers that orchestrate agri-food innovation networks (Batterink et al., 2010). More specific to education, innovation intermediation concepts have been used to assess the effectiveness of a Global Access Program (GAP), a continuing professional development support programme that helps Finnish firms gain access to global markets (Dalziel and Parjanen, 2012). A study of the impact of
entrepreneurship education in Higher Education Institutions (HEI) and Small to Medium Sized
Enterprises (SME) recommends that more research is required into intermediaries, their network
relationships and the impact on entrepreneurship education (Gordon et al., 2010). It is legitimate
therefore to consider innovation intermediation activity in the context of this study of technology and
food entrepreneurship education.

Innovation intermediation research has significant characteristics: there is an overlap between conceptual approaches, and there is a distinction between studies that focus on intermediaries as organisations and those that focus on intermediation as a process. However, there is a low level of cross-referencing across the research, and the exploration of intermediaries in the innovation process has not been well grounded theoretically (Howells, 2006). In synthesising the literature, Howells (2006) makes a distinction between the perspectives of intermediaries as organisations and intermediation as a process. It is legitimate to consider Student Enterprise Projects from the perspective of organisational roles (Howells, 2006), which are summarised in Table 1. Intermediation is also considered as an organisational role within a network topology (Lissoni, 2008). A categorisation of a new range of
technology market intermediaries is described by Tietze (2010), demonstrating how the innovation intermediation literature can explain emerging new roles.
The intermediation process as defined by Howells (2006) is a broad ranging typology that is too
comprehensive for assessing Student Enterprise Projects within an academic environment. This
present study uses Howells' role typology as a sensitising framework for examining the intermediary roles of student teams in the innovation process. The research questions posed in this cross-domain research
are: 1) what are the intermediary roles of student enterprise teams and 2) what are the comparative
characteristics of student enterprise projects?
Table 1: Summary of intermediary roles (adapted from: Howells, 2006)

Intermediaries: Intermediary agencies supporting technology transfer to small firms; the role of intermediaries in technology exploitation; the role of intermediaries in effecting change within science networks and local collectives; adapting solutions available in the market to the needs of the individual user; public and private organizations that act as agents transferring technology between hosts and users; the role of mission agencies in formulating research policy; the proactive role that certain types of service firms play as intermediaries within innovation systems; helping orient the science system to socio-economic objectives; actors filling gaps in information and knowledge in industrial networks; organizations that facilitate a recipient's measurement of the intangible value of knowledge received.

Third parties: Persons or organizations that intervene in the adoption decisions of others.

Brokers: Agents facilitating the diffusion into a social system of new ideas from outside the system.

Consultants as bridge builders: The role of independent consultants as bridge builders in the innovation process.

Bricoleurs: Agents seeking to develop new applications for new technologies outside their initial development field.

Superstructure organizations: Organizations that help to facilitate and coordinate the flow of information to substructure firms.

Regional institutions: Provide surrogate ties by serving as functional substitutes for a firm's lack of bridging ties in a network.

Boundary organizations: The role of boundary organizations in technology transfer and the co-production of technology.

Knowledge brokers: Agents that help innovation by combining existing technologies in new ways.

2. Research methodology
The objectives of this research were: 1. to investigate the innovation intermediation role of Information
Systems (IS) student enterprise teams; and 2. to explore sources of innovation, examine the practice
and effectiveness of the process and consider the comparative cross disciplinary implications in
relation to the food sector.
The research methodology adopted was a comparative case study of two separate innovation
programmes: the MBS Business Information Systems and the BSc in Food Business with historic
longitudinal data collection. This qualitative study reflects on the experience of two innovation programmes (the level of analysis), which have been running separately, and in isolation, for between 8 and 12 years. These two programmes in the ICT and Food sectors have resulted in the creation of on average twenty student business plans per annum. Forty projects were selected for analysis, with an equal split between IS and Food projects, and they were selected to give maximum variation (Miles and Huberman, 1994). IS and Food enterprise projects were selected in order to compare innovation intermediation across a high-technology sector (Information Systems) and a low-technology sector (the food sector being generally regarded as low technology). The data sources used in this research included: business plans, learning journals, cross case comparisons and interviews with students. The data was transcribed, organised, stored and analysed using the software package QSR NVivo (QSR International, 2002). This computer programme aided in the identification of codes and the preparation of codebooks in line with best practice (Cohen et al., 2000; Gilchrist, 1992). Iterative
investigation and systematic coding progressed until a comprehensive interpretation was reached
(Miles and Huberman, 1994).
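
The bookkeeping behind this coding step can be illustrated with a minimal sketch; the codebook entries and text excerpts below are invented purely for illustration (the study itself used NVivo rather than custom code):

```python
# Illustrative sketch of the code-tallying that qualitative analysis software
# such as NVivo automates; codes and excerpts are invented examples.
from collections import Counter

codebook = {
    "university_ip": ["university ip", "research group", "licensing"],
    "market_need": ["customer", "market research", "consumer"],
}

excerpts = [
    "the team worked with a research group to exploit university ip",
    "ideas came from market research and consumer testing",
]

counts = Counter()
for text in excerpts:
    for code, keywords in codebook.items():
        if any(keyword in text for keyword in keywords):
            counts[code] += 1

print(counts)  # Counter({'university_ip': 1, 'market_need': 1})
```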
3. Research results
The results presented in this paper are from an analysis of the roles and functions of student
enterprise projects over a 5-year project period across two domains: Information Systems and Food.
The two programmes combined generate approximately twenty enterprise projects per year, which vary in terms of the novelty of the technologies used by the student enterprise teams and the commercial potential of the products/services developed by the student entrepreneurs.

The business groups engaged in student enterprise in both programmes were drawn randomly at the beginning of the year, so that students who were friends were not all in the same group. This meant that the groups tended to have students of mixed abilities and also students from different disciplines, as Food Science and Technology students, in the case of the food programme, were also linked into
the groups. This innovative part of the process simulated the linking of marketing and technical
aspects of new product development. The objective of the projects is to develop a technical blueprint
and commercialisation strategy for a new idea or the upgrading of an existing product whilst the
students learn about the process of product innovation.
The scope of the food and software projects varied from those using new technologies (those using
University IP such as in the software, dairy or brewing sectors) to those developing improved versions
of familiar products (continuous innovation in terms of new ingredients, new uses or new packaging).
The Food Business and IS students go through the various stages of new venture creation from
ideation to product launch. An analysis of the data revealed that the students, as innovation intermediaries, played an important role in linking the idea with the development of a technical product specification and ultimately in identifying routes to market for the product. The students across the
different projects played different roles and performed different functions. Although the Food Business
students were exposed to the entrepreneurial process there were no actual entrepreneurship modules
available to the students. However, they did receive support in relation to the innovation process and
were obliged to make presentations to staff to highlight key learning outcomes as they progressed
through the innovation funnel.
3.1 Intermediary sources and roles
For this research an analysis was conducted of the sources of ideas across the forty projects from
both the IS and Food domains. Across both domains, IS and Food, contemporary issues were the
most significant source of project ideas (see Table 2), while market research was seen as a more
useful source of ideas in the food domain. The academic project supervisors liaised closely with other research groups across the University; they were intermediaries in their own right in terms of the innovation process. Across the Food domain there were multiple ideas generated from University
sources/IP which was indicative of the close link between the Food Business project supervisor and
Food Science research groups. The ideas were identified from the research groups and postdoctoral
researchers also played an important role in project supervision. The IS groups also relied on their
project supervisors for liaison with cross university research teams. The research data revealed that
the IS liaisons with research groups were less consistent and more diversified than those in the Food
domain. Where the ideas were sourced from University sources/IP then academic project supervisors
(from IS and Food) played an important gatekeeping/liaising role during the innovation process.
Table 2: Sources of innovation in student enterprise. For each domain (Information Systems and Food), each source of innovation/ideas was rated from no evidence to considerable evidence: University sources/IP; External company IP; Perceived industry need; Contemporary issues; Novelty; Market research; External firms.
The intermediation role of the Food Business students depended to a great extent on the
project/product idea and where it was sourced from by the students. In some instances where the
technology/IP was provided to the students by the University or University research groups then the
role they played was to apply the technology to a new situation and to identify a route to market for
that technology. The route to market identified by the students in this instance was heavily influenced
by the supervisory staff within the University. However, where the students themselves identified a
market need, or identified a novel usage, then the students had a stronger influence or sense of
ownership on the route to market and also the commercialisation strategy. Where students worked
directly with Food Firms then the Food Firm had a strong influence on the outcomes of the innovation
process. Where the students worked directly with firms then the students felt that their independence
in the project and its direction was somewhat restricted. In each case the learning outcomes for the
students varied depending on the sources of ideas for the enterprise projects.
3.2 Outputs of the innovation projects
Table 3 outlines the output of the student enterprise projects from a sample of forty projects. In the
Food sector there were no patents or licenses developed from the projects. However, on the IS side
there was limited evidence of patents and licenses as outputs and across both domains there was
evidence of students becoming entrepreneurs as a result of their enterprise experience. The key
outcome was business plans or a commercial proposition, which considered the market and resource requirements in exploiting a technology.
Table 3: Outputs of student enterprise. For each domain (Information Systems and Food), each innovation output was rated from no evidence to considerable evidence: Spinouts; Patents; Licenses; Enterprise grants; Business plans; Student employment; Academic papers; Winning enterprise competitions; Student entrepreneurs.
3.3 Intermediary roles
The analysis of the student enterprise projects identified the source of innovation or idea, rather than
the outcomes, as a defining indicator of the intermediary roles. The sources are quite broad and well defined, but the outcomes consist predominantly of business plans with a scattered range of outliers. The analysis showed that a typology (Tietze, 2010) of three clear intermediary roles emerged
across both the IS and Food entrepreneurship programmes, which were closely aligned to the
sources of innovation (Figures 1, 2, and 3). The student enterprise intermediary roles were: 1)
External scanning intermediaries, 2) University/Technology IP Liaison intermediaries and 3) Creative
scanning intermediaries. This student enterprise typology is defined in terms of network topologies, in line with Lissoni (2008).
The main source of innovation for student teams acting as External Scanning Intermediaries (Figure 1) is contemporary or popular issues, normally centred on a technology or social issue that was receiving
publicity at the time. Across both samples, student teams converged on topical technologies, such as
social media, radio frequency identification (RFID), probiotic ingredients or ingredients for gluten-free
foods targeted at coeliacs. The student teams exhibited a herding or groupthink mentality, which led to predictable outcomes. In the food-oriented projects, market research and consumer behaviour testing were more commonplace and led to stronger hidden-need or evidence-based commercial propositions, while the IS projects tended to use market research evidence retrospectively to justify initial choices rather than to shape them. Both analysed segments had examples of innovation sources from industry needs and/or external firms. In some cases these could be developed by the student teams themselves or through industry relationships with the academic mentors. Examples of
student teams as External Scanning Intermediaries were food industry projects targeted at the health and wellness sector and at the super-premium indulgent market, while novel IS projects developed alternative uses for RFID technology and for derivatives trading software donated by the respective manufacturers.
Figure 1: External scanning intermediaries
An interesting and consistent intermediary role, evident in both the IS and Food programmes, was student teams as University/Technology IP Liaison Intermediaries (Figure 2). In the case of IS projects, similar technology was given to different teams, which produced numerous commercial propositions or product and service ideas underpinned by the same technology. These commercial proposition outcomes could occur in one academic year or over a number of years. Again the academic mentors played a significant role in sourcing, brokering and gatekeeping (Lissoni, 2008) the relationship with the research team or other academic that provided access to the University intellectual property.
Figure 2: University technology/IP liaison intermediary
Examples of student teams as University Technology/IP Liaison Intermediaries were teams in the
Food sector that exploited IP in probiotics or the broad functional foods area. IS student teams
applied wireless sensor network (WSN) technology, developed in the Tyndall Institute, UCC, to a wide variety of medical and energy applications, such as geriatric remote living support, wireless vital signs monitoring in emergency settings and building energy performance monitoring for the retrofit home market.

The curriculum for both academic programmes included significant creativity and opportunity recognition content, and both programme directors strove to encourage the student teams to consider fresh and novel project ideas. Despite this effort, systematising creativity, or just being creative, can be difficult to achieve. This is where the cultivation of the entrepreneurial mind-set is really evident, and this intermediary role is the most rewarding, but the most difficult to achieve. Examples of student teams as Creative Scanning Intermediaries (Figure 3) were found in both sectors. Many of the Food students identified their ideas from being on internships within food firms or from spending time abroad during their placement programme. Examples of these ideas were the adaptation of a flavour unique to one part of the world (the Caribbean) to a new situation (the Irish market) or the development of a product for a new market (bubble tea from China adapted to the Irish market). Examples of creative ICT projects were a smartphone application for car diagnostics and the use of gesture technology for interaction with advertising.

Figure 3: Creative scanning intermediary
3.4 Ownership of project IP
A major issue for both sets of students was the ownership of IP when the student enterprise project
was completed. For Food Business students this was not always clear, particularly when they were
working with an external firm or University IP. The question arose many times of what ownership of the IP, or what percentage of it, rests with the students, given that they had identified a route to market for the IP and also developed a commercialisation strategy. For example, a student enterprise group worked with an external firm who funded the project and aimed to exploit University IP generated by research
staff. The Food Science students then further developed the technology while the Food Business
students developed the commercialisation plan including the development of a marketing strategy,
brand, packaging and putting together the market research case. In this scenario, the students felt
that they had developed a significant part of the project, from a technical and marketing perspective,
and would need to be compensated in some way if the external firm decided to commercialise the
idea.
3.5 Student enterprise final event
An innovation showcase was held for both IS and Food Business students to introduce the final
project outcomes to industry. This provided the students with the opportunity to display their
endeavours to industry personnel, academic researchers and those who may be able to
commercialise the research outcomes. Both programmes organised separate Innovation Showcases
as an annual exhibition of new food and software products developed by the respective students as
part of their final-year research projects. The showcases were attended by many visitors from industry and other invited guests, as well as staff and students of the University, who viewed the selection of products. The
Showcases allowed the students to highlight their entrepreneurial talents to an assembled audience
and also allowed them to conduct market research by getting consumers to try samples or just give
feedback on the product technology.
The students from both programmes (IS and Food Business) found the showcase to be a great way
to show off their work and also to interact with industry and Technology Transfer Office personnel. In
addition, where the general public were invited to such exhibitions, particularly in the food area, it was
seen by students as a first trial run of the product/service and also gave the students a good idea of
how the product would be received in the marketplace, albeit from limited feedback.
4. Research conclusions
This exploratory study revealed that the university programmes played an important role in innovation intermediation and fostered a culture of innovation within the students. In addition, the innovation activity contributed to a number of business start-ups and to the development of IP and patents. The original research questions focused on student enterprise teams in terms of 1) their intermediary roles and 2) the intermediation process. The students' endeavours as intermediaries have also added value to local companies by enhancing existing, or developing new, products or services.

Innovation intermediation underpinned this research, which led to the identification of three student enterprise intermediary roles: 1) External Scanning Intermediaries, 2) University/Technology IP Liaison Intermediaries and 3) Creative Scanning Intermediaries. Surprisingly, these roles existed in two distinct and independent entrepreneurship programmes, which could be categorised as high-tech versus low-tech. The defined roles were common across both programmes with little dissimilarity; the subtle differences that did exist can be attributed to different emphases in the curriculum, such as consumer behaviour in the Food area leading to more needs-driven commercial propositions.

The outcomes of this study provide guidelines for other academics who are currently active in, or considering developing, entrepreneurship courses. The two programmes evolved over nearly a decade in tandem, but independently, so these findings will allow other academics to fast-track their endeavours. The key focus should therefore be on cultivating diverse external sources of innovation, by cultivating relationships with external companies and mentors, and also encouraging the student teams to do likewise. The brokering role of the academics in managing relationships with colleagues across the University and adding value to research outcomes is a surprising finding, and one that is consistent across both programmes examined. This role can also provide a rich source of innovation and an untapped resource for exploring commercial outcomes of academic research, since few scientific academics have the skill set to consider and develop business propositions. The main risk of this type of role relates to the ownership of IP and the extent to which the contribution of the intermediating team is recognised. The final role identified centres on encouraging student teams to be creative and think outside the box; despite the efforts of the academics in encouraging this, it is difficult to consistently achieve novel and creative projects. Future research is required to better understand the student experience relative to the different intermediary roles, and future cross case analysis will validate and possibly expand the understanding of these intermediary roles. The intermediation process applied to student enterprise also warrants further exploration.
References
Batterink, M. H., Wubben, E. F. M., Klerkx, L. and Omta, S. W. F. O. (2010). Orchestrating Innovation Networks:
The Case of Innovation Brokers in the Agri-food Sector. Entrepreneurship Regional Development, Vol. 22,
No. 1, pp 47-76.
Cohen, L., Manion, L. and Morrison, K. (2000). Research Methods in Education, 5th Ed., London, RoutledgeFalmer.
Cooney T. and Murray T. (2008). Entrepreneurship Education in the Third-level Sector in Ireland, The National
Council for Graduate Entrepreneurship (U.K.).
Dalziel, M. and Parjanen, S. (2012). Measuring the Impact of Innovation Intermediaries: A Case Study of Tekes.
In Melkas, H. and Harmaakorpi, V. (eds.) Practice-based innovation: Insights, applications and policy
implications, Springer.
Fitzsimons, P. and O'Gorman, C. (2005). The Global Entrepreneurship Monitor: The Irish Report, Dublin: The
Global Entrepreneurship Monitor.
Gassmann, O., Daiber, M. and Enkel, E. (2011). The Role of Intermediaries in Cross-industry Innovation
Processes, R&D Management, Vol. 41, No. 5, pp 457-469.
Gilchrist, V.J. (1992). Sampling in Qualitative Inquiry. In: Doing Qualitative Research: Research Methods for
Primary Care Vol. 3 (Crabtree, B.F. and Miller, W.L., Eds.), pp 31-44, Newbury Park, California, Sage.
Goodbody Economic Consultants. (2002). Entrepreneurship in Ireland, November Dublin: Goodbody Economic
Consultants.
Gordon, I., Jack, S. L. and Hamilton, E. E. (2010). A Study of the Regional Economic Development Impact of a
University-led Entrepreneurship Education Programme for Small Business Owners, Entrepreneurship and
Enterprise Development Working Paper Series, Lancaster University Management School Working Paper
series No. 2010/02.
Hoppe, H. C. and E. Ozdenoren. (2005). "Intermediation in Innovation." International Journal of Industrial Organization, Vol. 23, No. 5-6, pp 483-503.
Hisrich, R.D., Peters, M.P. and Shepherd, D.A. (2005). Entrepreneurship, 6th Edition, New York, McGraw-Hill.
Howells, J. (2006). "Intermediation and the Role of Intermediaries in Innovation." Research Policy, Vol. 35, No.
5, pp 715-728.
Klerkx, L., and Leeuwis, C. (2008). Balancing Multiple Interests: Embedding Innovation Intermediation in the
Agricultural Knowledge Infrastructure. Technovation, Vol. 28, No. 6, pp 364-378.
Lichtenthaler, U. and Ernst, H. (2008). Innovation Intermediaries: why Internet Marketplaces for Technology
have not yet met the Expectations. Creativity and Innovation Management, Vol. 17, No. 1, pp 14-25.
Lissoni, F. (2008). "Academic Inventors as Brokers: An Exploratory Analysis of the KEINS Database," KITeS
Working Papers 213, KITeS, Centre for Knowledge, Internationalization and Technology Studies, Universita
Bocconi, Milano, Italy.
Metcalfe J. S. (2009). University and Business Relations: Connecting the Knowledge Economy, Centre for
Business Research, University of Cambridge Working Paper No. 395, December 2009.
Miles, M.B. and Huberman, A.M. (1994). Qualitative Data Analysis, 2nd Ed., London, Sage.
O'Gorman, C. and P. Fitzsimons. (2007). Entrepreneurial Activity in Ireland: Evidence from the Global
Entrepreneurship Monitor. Irish Journal of Management, Vol. 28, No. 2.
QSR International. (2002). N6 (non-numerical Unstructured Data Indexing Searching and Theorizing) Qualitative
Data Analysis Program. Australia: QSR International Pty Ltd. Version 6.0.
Saari, S. and Haapasalo, H. (2012). Knowledge Transfer Processes in Product Development - Theoretical Analysis in Small Technology Parks, Technology and Investment, Vol. 3, pp 36-47.
SSTI. (2006). Strategy for Science, Technology and Innovation 2006-2013, DETE, Ireland, June.
Streeter, D. H. and Jaquette, J. P. (2004). University-wide Entrepreneurship Education: Alternative Models and Current Trends, Southern Rural Sociology, Vol. 20, No. 2, pp 44-71.
Tietze, F. (2010). A Typology of Technology Market Intermediaries, Technology and Innovation Management Working Paper No. 60.
World Economic Forum. (2009). World Economic Forum Report on Entrepreneurship Education, Educating the
Next Wave of Entrepreneurs - Unlocking entrepreneurial capabilities to meet the global challenges of the
21st Century, April 2009.
Yusuf, S. (2008). "Intermediating Knowledge Exchange Between Universities and Businesses," Research Policy,
September, Vol. 37, No. 8, pp 1167-1174.
Bringing Some Order to the Black Art of Innovation
Measurement
Paidi O'Raghallaigh, David Sammon and Ciaran Murphy
Business Information Systems, University College Cork, Cork, Ireland
paidioreilly@gmail.com
dsammon@afis.ucc.ie
cmurphy@afis.ucc.ie

Abstract: Measurement of innovation is critical to management but unfortunately it is an extremely tall order, which results in it being referred to as a "black art". It is particularly troublesome for firms that operate in highly complex and turbulent environments. Extant literature is characterized by a diversity of approaches,
prescriptions, and practices that are more often than not confusing and contradictory. We require good theory
both to suggest which metrics are needed and to interpret the resulting data. In this paper we return to the
literature in order to build a conceptual framework to guide the measurement of innovation. In addition we
perform an initial validation of the framework against an online repository of content on innovation measurement
and in so doing we arrive at a taxonomy of innovation metrics. While useful in its own right, the taxonomy also
highlights both the strengths and weaknesses in the current approach to innovation measurement. Finally we use
the framework to draw out four key questions that should be addressed by management before choosing
appropriate metrics. We foresee management using both the taxonomy and the guiding questions to evaluate
their own measurement activities.
Keywords: innovation, innovation measurement, innovation metrics, innovation model
1. Introduction
Measurement of innovation is critical for both practitioners and academics and is where the rubber
meets the road. It makes organizations aware of where they are in relation to their goals, in which
direction they are travelling, and any corrective actions they may need to take. But for most organisations innovation measurement is an extremely tall order, with some going as far as describing it as a "black art". Even for those that do measure innovation, it is common for doubts to persist as to whether they
are using appropriate metrics and for the right reasons. Unfortunately, the extant literature on
innovation measurement is flung far and wide and in any case is characterized by a diversity of
approaches, prescriptions, and practices that are more often than not confusing and contradictory
(Adams et al. 2006). Innovation measurement has always been a thorny issue for researchers
(Archibugi and Planta 1996; Becheikh et al. 2006; Hagedoorn and Cloodt 2003). But the focus of our
attention here is on the measurement of innovation within management practice, which has received little scholarly attention (Kerssens-van Drongelen and Bilderbeek 1999). We attempt to address this shortcoming by examining the research question of how organisations should choose appropriate innovation metrics. While it may be easy to define metrics, it is more difficult to identify meaningful ones (Böhme and Freiling 2008).
Before we begin our journey it is important that we define the concepts of measurements, metrics and
models - an understanding of which is fundamental to the topic of this paper. A measurement is the
use of an appropriate method to collect data from an observation of an object of interest. To measure means to attach a number or category to an attribute that represents some aspect of the object (Böhme and Freiling 2008). Metrics are the rules that assign the resulting data onto a scale in order to appropriately represent the attribute of interest. For example, the temperature of a room can be observed from a thermometer, which codes the observed mercury level using a metric, such as degrees Fahrenheit. But the metric need not comprise numbers; it could equally consist of categories. For example, the temperature in the room could be categorised as hot, warm, or cool. A model is a formal representation of the object of interest in terms of concepts that are necessary and sufficient to describe the object. Models are used in informing the choice of metrics and are particularly required when there are nontrivial relationships between possible measurements and the attributes.
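
To make the distinction concrete, the following minimal sketch maps one observation onto both a numeric and a categorical metric; the function names and category thresholds are illustrative assumptions rather than part of any cited framework:

```python
# Minimal sketch of the measurement/metric distinction described above.
# Thresholds and names are illustrative assumptions.

def observe_temperature_celsius() -> float:
    """Measurement: collect data from an observation of the object of interest."""
    return 22.5  # e.g. a thermometer reading

def to_fahrenheit(celsius: float) -> float:
    """Numeric metric: a rule assigning the observation onto a numeric scale."""
    return celsius * 9 / 5 + 32

def to_category(celsius: float) -> str:
    """Categorical metric: the same observation assigned to categories."""
    if celsius >= 25:
        return "hot"
    if celsius >= 18:
        return "warm"
    return "cool"

reading = observe_temperature_celsius()
print(to_fahrenheit(reading), to_category(reading))  # 72.5 warm
```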
The remainder of this paper is structured as follows. We begin by outlining the importance of
measurement to management. We follow this by suggesting a framework to assist management in
choosing metrics. Using the framework we next analyse the state of the art of innovation
measurement. We finish by offering guidelines to assist management in measuring innovation.
2. Importance of measurement to management
According to Fayol the functions of management include: forecasting and planning, organizing,
commanding, coordinating, and controlling (Carroll and Gillen 1987). We take as our starting point managerial control, which Mockler (1970 p. 14) defines as "a systematic effort by business management to compare performance to predetermined standards, plans, or objectives in order to determine whether performance is in line with these standards and presumably in order to take any remedial action required to see that human and other corporate resources are being used in the most effective and efficient way possible in achieving corporate objectives". This cycle of control includes strategic planning, measuring activities to determine whether targets are being achieved, and correcting or changing activities as needed (Daft 1998).
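
Read programmatically, this cycle of control is a simple feedback loop. The sketch below is purely illustrative: the target, the stubbed measurement and the correction rule are invented for the example and are not drawn from the cited authors:

```python
# Minimal sketch of the plan-measure-correct control cycle described above.
# Target, measurement and correction rule are illustrative assumptions.

target_share = 0.30  # planning: a predetermined standard (30%)

def measure_actual_share() -> float:
    """Measuring: observe current performance (stubbed for illustration)."""
    return 0.22

actual = measure_actual_share()
if actual < target_share:
    # Correcting: take remedial action when performance falls short.
    print(f"Shortfall of {target_share - actual:.0%}: corrective action needed")
else:
    print("On target: no corrective action needed")
```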
Traditionally control systems emphasise execution rather than exploration (Davila et al. 2009). But
innovation efforts prosper in environments that promote experimentation (Amabile 1999).
Management control, through imposing constraints on behaviour, would, therefore, be expected to
reduce the creativity that may be necessary when innovating (Amabile 1999; Davila et al. 2009; Ditillo
2004). In recent years, however, changing business environments have been challenging organisations to improve the management of their innovation efforts (Davila et al. 2009). More prosaically, this means organizations need, for example, to provide "sufficient freedom to allow for the exploration of creative possibilities, but sufficient control to manage innovation in an effective and efficient fashion" (Adams et al. 2006 p. 32).
But when managed poorly, measurement can be a double-edged sword. While metrics can drive
positive change throughout an organization, they can also generate unintended negative
consequences that hinder organisational performance (Collins and Smith 1999). Dysfunctional
behaviour can arise from the design and use of inappropriate measurements (Jaworski 1988). When
faced with inappropriate measurements, managers may cope by resisting or circumventing controls, behaving in a fashion that is in their own (rather than the organisation's) best interest (Jaworski 1988). So how should we ensure that measurements are chosen to make a positive impact on the
process of innovation?
3. Framework for choosing innovation metrics
Innovation management is somewhat of a black art and managers currently lack the requisite metrics to make informed decisions about their innovation programs (Muller et al. 2005). Indeed innovation does not lend itself to measurement. Shapiro (2006 p. 42) states that "The essence of innovation is novelty... It may even be that the most effective innovation is that which so changes the scheme of things that it makes the old measuring scheme obsolete!" This does not mean that measurement approaches cannot be developed, but such approaches require us to reduce the innovation phenomenon to some simplified and stable conceptualisation (Smith 1998). An accepted conceptual framework is, therefore, necessary to guide measurement activities and to move innovation management beyond an art to a scientific basis. Osterwalder et al. (2005 p. 21) suggest that a framework "makes it easier to identify the relevant measures to follow to improve management. This ability would facilitate the choice of the indicators for monitoring strategy implementation."

Despite decades of academic attention, the innovation process in organisations remains imperfectly understood by both academia and management (Becheikh et al. 2006). Yet we require good theory to suggest which indicators are needed, to interpret the resulting data, and to inform an effective response to identified problems (Arundel et al. 1998 p. 4). We, therefore, find ourselves returning to the literature in order to build a conceptual framework of the innovation process. We draw
on the management control literature, briefly introduced above, to identify three categories of
measurements - each distinguished by the timing of management intervention (i.e., input to output) in
the innovation process. Resource controls are measurable actions taken by management usually
prior to implementation of a process in order to allocate resources (Jaworski 1988). Process controls
are measurable actions taken by management usually during the process in order to influence the
behaviour of actors and activities involved in the process (ibid). Result controls are measurable
actions taken by management usually after execution of the process in setting, monitoring, and
evaluating performance standards (ibid). Using this classification we now develop a framework of the
innovation process.
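
As a concrete illustration of this three-way classification, the following minimal sketch tags metrics by the timing of management intervention; the example metrics are invented assumptions, not a recommended set:

```python
# Minimal sketch of classifying metrics as resource, process or result
# controls by the timing of intervention, as described above.
# The example metrics are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum

class Control(Enum):
    RESOURCE = "before the process (allocating inputs)"
    PROCESS = "during the process (influencing actors and activities)"
    RESULT = "after the process (evaluating outcomes)"

@dataclass
class Metric:
    name: str
    control: Control

portfolio = [
    Metric("R&D spend as % of revenue", Control.RESOURCE),
    Metric("ideas entering the selection funnel per quarter", Control.PROCESS),
    Metric("% of revenue from new products", Control.RESULT),
]

for metric in portfolio:
    print(f"{metric.name} -> {metric.control.name} control: {metric.control.value}")
```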
3.1 The process of innovation
Various scholars (e.g. Hansen and Birkinshaw 2007; Kandybin and Kihn 2004; Koen et al. 2001),
albeit sometimes using slightly different terminology, view the innovation process as consisting of the
generation, selection, development, and commercialisation of knowledge - see Table 1. We don't intend to imply a well-behaved single, orderly, and linear process; instead the process, especially in its early stages, is more often than not chaotic, unpredictable, and unstructured, before becoming more structured in its later stages (Koen 2004). This journey is often marked by dead-ends, re-births, and reversals as the knowledge is rejected, re-introduced, or reworked (Cagan and Vogel 2002).
While there is considerable value in viewing innovation as a staged process it hides the variety and
complexity of the individual activities that occur both within and across stages. We now take a look at
the characteristics of these activities.
Table 1: Stages of knowledge progression (after: Hansen and Birkinshaw 2007; Kandybin and Kihn 2004)

Generation: The process starts with the generation of good ideas. Viable ideas are usually ignited when fragments of knowledge come together from different sources, both internal and external. Idea generation may be either market-pull (through market needs) or technology-push (through technology advances).

Selection: The pool of ideas generated in the previous stage is funneled into a smaller number of funded projects. Not only is it possible to miss some good ideas, but accepting too many ideas is also an issue, resulting in too many bad ideas being funded and resources being wasted on projects that never reach market.

Development: The funded projects are developed into revenue-generating deliverables. Expediting development lowers the costs (both the direct costs of producing the deliverable and the opportunity costs of not being able to produce other deliverables).

Commercialisation: Attention now begins to switch from the innovation value-chain to the product supply-chain, to ensure that the deliverables are where they need to be when they're needed, and to promote and market them intelligently. This depends on getting buy-in both internally and externally.
The challenge for organisations is to turn knowledge from both internal and external sources into
exploitable knowledge (Kogut and Zander 1996). But the ability of the organisation to do this depends
on the proximity of the knowledge sources relative to the existing stock of knowledge in the
organisation (O'Raghallaigh et al. 2010). Two critical dimensions of proximity are territorial and
cognitive distances, where the former is the geographical distance of the knowledge source to the
focal firm and the latter is the familiarity of the knowledge source to the focal firm (ibid). The
innovation activities of the organisation are essentially knowledge search activities that can be
distinguished in relation to these two dimensions. Territorial proximity is important as smaller
distances facilitate more intense interactions, reduce institutional and cultural differences, and thereby
promote knowledge transfer - especially of sticky tacit knowledge (ibid). On the other hand, cognitive
diversity is important in promoting novelty, which can be critical to innovation performance (ibid).
Search activities along the territorial dimension can be divided into internal, dynamic, and market
categories (O'Raghallaigh et al. 2010). Internal search activities focus on the acquisition of knowledge
within the boundaries of the organisation, such as through R&D and daily operations (ibid). Market
search activities focus on acquiring external knowledge through market-driven transactions, such as
inward-licensing, purchasing of patents, purchasing of equipment, etc. (ibid). The knowledge may be
acquired in either an embodied (e.g. in machines and equipment) or disembodied (e.g. through
licensing agreements or R&D outsourcing contracts) state (ibid). It is also possible to acquire external
knowledge without engaging with the market. Dynamic activities refer to efforts at acquiring external
knowledge through cooperating with external organisations (i.e. suppliers, customers, and
universities) and scanning external information sources (i.e. by attending conferences, reading
scientific publications, and reading technical reports) (ibid). Unlike market activities, which are
essentially arm's-length market transactions, dynamic activities require the active participation of the
to-be recipient in the process of knowledge acquisition (Cassiman and Veugelers 2002). An
organisation can, therefore, innovate by producing knowledge through its own inventive activities but
also through obtaining it from sources outside of the firm. The latter may involve no intellectual, inventive, or creative effort whatsoever on the part of the focal organisation, while the former does (Arundel et al. 1998 p. 31). These two aspects of innovation are referred to as innovation through inventive effort and innovation through adoption (ibid). The ways in which the two approaches are managed are likely to differ greatly.
To foster innovation within organisations, resources must be distributed deliberately based on pre-
defined goals (Alves et al. 2007). Christensen (2006 p. xvii-xviii) argues that organizations successfully tackle opportunities "when they have the resources to succeed, when their processes facilitate what needs to get done, and when their values allow them to give adequate priority to that particular opportunity in the face of all other demands that compete for the company's resources". Innovation resources are widely portrayed as including funding, human, facilities, and tools resources (Adams et al. 2006) - see Table 2.
Table 2: Types of innovation resources (after: Adams et al. 2006)

Funding: Adequate funding is clearly a critical input into the innovation process, and funding may need to be designated to specific activities.

Human: People factors include the number and mix (with respect to their cosmopolitanism, propensity to innovate, skills, experience, and education) of people committed to the innovation tasks. Members with higher levels of education and self-esteem from diverse backgrounds increase the effectiveness of innovation project teams.

Facilities: Facilities or physical resources are a broad category that ranges from buildings to computer equipment. Slack resources or unused capacity can in some cases be an important catalyst for innovation, whereby slack provides the opportunity for diversification, fosters a culture of experimentation, protects against the uncertainty of project failure, and allows failures to be absorbed. However, in other cases slack becomes synonymous with waste and is a cost that must be eliminated.

Tools: Use of systems and tools is an important support for innovation in organizations. Tools can be of various sorts, including tools and techniques for promoting creativity and systems of quality control ranging from informal methods to specific techniques such as total quality management (TQM).
Innovation can take place on any dimension of the organisation's business model (Sawhney et al. 2006). Innovation is measured in terms of the value it adds to the organisation rather than the novelty of its results. For instance, a new product that is technologically superior to its competitors and meets the needs of a customer segment can still fail because it lacks an effective sales and distribution channel. When innovating, an organisation must consider all dimensions across which innovation can take place - see Table 3.
Table 3: Types of innovation results (after: Sawhney et al. 2006)

Offerings: These are the goods and services offered by the organisation that are valued by its customers.

Platform: This is a set of common components, assembly methods, or technologies from the organisation that serve as building blocks for a wider portfolio of offerings.

Solutions: This is the customized, integrated combination of offerings from the organisation that solves a customer problem.

Customers: This is the discovery by the organisation of new customer segments or the uncovering of unmet (and sometimes unarticulated) customer needs in existing segments.

Customer Experience: This includes everything a customer sees, hears, feels and in general experiences while interacting with the organisation and its offerings.

Value Capture: This is the mechanism that the organisation uses to capture revenue streams from the value it creates.

Processes: These are the configurations of business activities that the organisation uses to conduct internal operations.

Organization: This is the way in which the organisation is structured, its partnerships, and its employee roles and responsibilities.

Supply Chain: This is the sequence of agents, activities and resources required by the organisation to move its offerings from source to the customer.

Points of Presence: These are the channels and the outlets that the organisation employs from which its offerings can be bought or used by the customer.

Networking: This is the network through which the organisation and its offerings are connected to the customer.

Brand: This is the set of symbols, words or marks through which the organisation communicates a promise to the customer.
The resulting synthesis of the stages, activities, resources, and results of the innovation process is
depicted in the framework shown in Figure 2.
Figure 2: Conceptual framework of innovation process
4. Initial evaluation of the state of the art in innovation metrics
Year after year, results from surveys by the Boston Consulting Group show how most executives are
aware of the importance of measuring innovation rigorously, consistently, and effectively but few of
them follow through in terms of execution. In order to sample the state of the art in innovation
measurement, we reviewed the extensive content available in the Innovation Metrics section of the Business Exchange [1], which is a repository containing links to articles, news, blogs, and general resources about innovation metrics from myriad top online sources. A group of seven postgraduate students from a Masters Programme in Innovation Studies initially mined the repository for details of innovation metrics used by organisations. A blind review of the same material was simultaneously performed by the corresponding author and the results were validated against those derived by the students. The author overlaid the uncovered metrics onto the framework described in the previous section. In all, 175 sources were reviewed and each of the resulting metrics was successfully incorporated into the framework - see Figure 3. In the first instance this validated the ability of the framework to handle all uncovered metrics. However, reflection on the results leads to a number of interesting observations, which we discuss briefly here. These observations are mainly drawn from the perspective of software-producing firms, for which the measurement problem is particularly acute.

As may have been expected, the majority of the uncovered metrics are concentrated in the results category and, to a lesser extent, the resources category. When measuring results the emphasis is very much on financial measures of product performance. This somewhat corroborates the finding that "percent of revenue from new products" is the most commonly used measure of innovation in organisations (Shapiro 2006). While this metric is widely used, it (and similar result-metrics) is not without its problems. A general problem is the question of what is new: would even the most basic change (such as colour, size, packaging, raw materials, etc.) to an existing product result in a new product (Shapiro 2006)? Secondly, it does not reflect the size of the original investment in the development of the innovation, and neither does it indicate the profitability of the investment (Linder 2006). Thirdly, it is backward focused, "capturing the impact of past innovations but [not] current investments and whether or not they will pay off in the future" (ibid p. 38). Fourthly, it does not indicate how the organisation is performing relative to, say, others in the industry (ibid). Fifthly, it does not reflect the value of those types of innovations that do not directly result in sales (ibid). While customer experience metrics were uncovered, all other forms of innovation (e.g. process, organizational, supply-chain, etc.) hardly register at all. Finally, speed to market for software firms is paramount, and market leadership is maintained by actively removing opportunities for competitors to jump into a market leadership position (Hoch et al. 2000). It may be necessary for organisations to "cannibalise" their own products, with the result that maximisation of revenue from existing products may be
secondary to ensuring and increasing market share (ibid). The uncovered metrics can reflect this scenario through metrics dealing with rates of product uptake and market share.

[1] The service is offered by BusinessWeek, a top-ranking financial and business magazine (with a print circulation of circa 1 million copies), whose editors vet all links for appropriateness to the topic. See http://bx.businessweek.com/innovation-metrics/
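
To illustrate how sensitive the headline "percent of revenue from new products" metric is to the definition of "new" (the first problem noted above), a minimal sketch with invented figures:

```python
# Minimal sketch of the "percent of revenue from new products" metric and its
# sensitivity to what counts as "new". All figures are invented.

revenue = {
    "genuinely_new_product": 2.0,   # launched this year
    "repackaged_product": 3.0,      # existing product, new packaging only
    "established_products": 15.0,
}
total = sum(revenue.values())  # 20.0

strict = revenue["genuinely_new_product"] / total          # packaging change excluded
lenient = (revenue["genuinely_new_product"]
           + revenue["repackaged_product"]) / total        # packaging change included

print(f"strict definition:  {strict:.0%}")   # 10%
print(f"lenient definition: {lenient:.0%}")  # 25%
```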
Figure 3: Map of innovation metrics
Our results show that organisations devote considerable attention to measuring funding and human resources - both being critical inputs into the innovation process. But other resource types, such as facilities and tools, receive little or no attention, yet both are crucial to innovation in software firms. A disadvantage of focusing too closely or exclusively on resources is that they tell us nothing about the process by which they are converted into results. Management of knowledge, in terms of the acquisition, generation and exploitation of knowledge, is critical to the success of software firms (Mathiassen and Vainio 2007). While there is breadth in the process-metrics uncovered, the depth of the metrics varies from stage to stage, with far more metrics focusing on the idea generation stage than any other stage. Surprisingly, there is little emphasis on measuring the role of collaboration in the innovation process of organizations. No matter how large, no one software company can achieve market leadership by itself, as gaps in technological, marketing and R&D expertise spring up all the time (Nambisan 2002). Leading software firms increasingly realize the critical importance of partnering in filling gaps in expertise, speeding up time to market, increasing market penetration, and supporting R&D efforts (Hoch et al. 2000). Our evidence shows that, outside of some internal R&D metrics, the uncovered metrics are not designated at the level of specific activity-types, such as collaborations, scanning, and market acquisitions.
5. Discussion
Case studies of innovation have consistently demonstrated the considerable complexity and diversity of innovation processes across firms and industries (Smith 1998), which leads one to expect great difficulty in the measurement of innovation. When it comes to choosing metrics, one size does not fit all; instead, the optimal selection of metrics will vary between organisations and industries (Muller et al. 2005). Unfortunately, when faced with measurement issues, management generally just add more metrics (Linder 2006). But measurement is not a matter of using as many metrics as possible: there is a cost associated with measuring and, in any case, there is no guarantee that added metrics solve the underlying measurement issue. We recommend a more scientific approach to measurement, which demands an in-depth understanding of the context in which innovation takes place. Metrics should not be mistaken for what is being measured; they are no more than proxies for an abstract reality. No measurement problem can be adequately addressed without answering the questions in Table 4, the answers to which inform both the choice of metrics and the balance of metrics within the overall portfolio used by the focal organisation. The questions are derived from the previously discussed framework and in particular from its four elements: Goals, Resources, Results, and Process. The metrics chosen must be aligned to these elements in order to ensure their appropriateness. In addition, the resulting metric portfolio must be sufficiently broad to adequately cover resources, process, and results. Finally, the metrics may need to be designated at the level of distinct activity categories (e.g. collaboration, scanning, market acquisitions, etc.). This paper has sought to make a number of key contributions. Most importantly, we acknowledge the difficulties that management are experiencing when measuring innovation. We expect that this paper will give both managers and academics firmer grounds on which to (1) understand and appraise their measurement activities, (2) choose individual innovation metrics, and (3) build a portfolio of metrics.
Table 4: Questions to guide the choice of innovation metrics
Why (the Goals of innovating): Are we doing the right things? What are the organisation's goals for innovation? Ultimately the metrics are required to ensure that goals are being met. This is a question of aligning metrics with the goals.

What (the Resources required and the anticipated Results): Are we using the correct resources in order to produce the right results? What are the resources required in order to achieve each goal, and what are the individual results necessary to achieve each goal? This is a question of aligning metrics with the inputs and outputs.

How (the Process of innovating): Are we doing things right? How does the organisation intend to achieve each goal? This is a question of aligning metrics with the process.

Where/When/Who (the Process of innovating): And in the right place? Within which context does the organisation innovate? This is a question of aligning the metrics with the structure and context.
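
To make this alignment concrete, the following minimal sketch (our own illustration, not part of the framework, and with hypothetical metric names) shows how a metric portfolio might be tagged with the four framework elements and checked for breadth of coverage:

# Illustrative sketch only: metric names and categories are hypothetical,
# not a prescribed implementation of the framework.
from collections import defaultdict

FRAMEWORK_ELEMENTS = {"goals", "resources", "process", "results"}

# Each metric is aligned to one framework element and, where needed,
# designated at the level of a distinct activity category.
portfolio = [
    {"name": "% revenue from new products", "element": "results"},
    {"name": "R&D spend as % of sales", "element": "resources"},
    {"name": "ideas entering pipeline per quarter", "element": "process",
     "activity": "scanning"},
    {"name": "joint development projects with partners", "element": "process",
     "activity": "collaboration"},
]

def coverage_report(metrics):
    """Count metrics per framework element and flag uncovered elements."""
    counts = defaultdict(int)
    for metric in metrics:
        counts[metric["element"]] += 1
    uncovered = FRAMEWORK_ELEMENTS - set(counts)
    return dict(counts), uncovered

counts, uncovered = coverage_report(portfolio)
print(counts)     # {'results': 1, 'resources': 1, 'process': 2}
print(uncovered)  # {'goals'}: the portfolio is not yet sufficiently broad

A portfolio that leaves an element uncovered (here, metrics aligned to the goals of innovating) would prompt the organisation to revisit the balance of its portfolio before simply adding more metrics.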
References
Adams, R., J. Bessant and R. Phelps. 2006. "Innovation management measurement: A review." International
Journal of Management Reviews 8(1):21-47.
Alves, J., M. J. Marques, I. Saur and P. Marques. 2007. "Creativity and innovation through multidisciplinary and
multisectoral cooperation." Creativity and Innovation Management 16(1):27-34.
Amabile, T. M. 1999. "How to Kill Creativity." Harvard Business Review 76(5).
Archibugi, D. and M. Planta. 1996. "Measuring technological change through patents and innovation surveys."
Technovation 16(9):451-468.
Arundel, A., K. Smith, P. Patel and G. Sirilli. 1998. "The future of innovation measurement in Europe. Concepts,
problems and practical directions." Step Group.
Becheikh, N., R. Landry and N. Amara. 2006. "Lessons from innovation empirical studies in the manufacturing
sector: A systematic review of the literature from 1993-2003." Technovation 26(5-6):644-664.
Böhme, R. and F. Freiling. 2008. "On metrics and measurements." Dependability Metrics:7-13.
Cagan, J. and C. M. Vogel. 2002. Creating breakthrough products: innovation from product planning to program
approval: FT Press.
Carroll, S. J. and D. J. Gillen. 1987. "Are the classical management functions useful in describing managerial
work?" Academy of Management Review 12(1):38-51.
Cassiman, B. and R. Veugelers. 2002. "Spillovers and R&D cooperation: some empirical evidence." American Economic Review 92(4):1169-1184.
Christensen, C. M. 2006. Using Disruptive Innovation to Create New Growth: J. Willard Marriott Library,
University of Utah.
Collins, J. and D. Smith. 1999. "Innovation metrics: A framework to accelerate growth." Prism (Cambridge, Massachusetts):33-48.
Daft, R. 1998. "Organisation Theory and Design International." 6th Edition. Cincinnati, Ohio, US: Thompson Publishing.
Davila, A., G. Foster and M. Li. 2009. "Reasons for management control systems adoption: Insights from product
development systems choice by early-stage entrepreneurial companies." Accounting, Organizations and
Society 34(3-4):322-347.
Ditillo, A. 2004. "Dealing with uncertainty in knowledge-intensive firms: the role of management control systems as knowledge integration mechanisms." Accounting, Organizations and Society 29(3-4):401-421.
Hagedoorn, J. and M. Cloodt. 2003. "Measuring innovative performance: is there an advantage in using multiple indicators?" Research Policy 32(8):1365-1379.
Hansen, M. T. and J. Birkinshaw. 2007. "The innovation value chain." Harvard Business Review 85(6):121.
Hoch, D. J., C. Roeding, S. K. Lindner and G. Purkert. 2000. Secrets of software success: Harvard Business
School Press Boston.
Jaworski, B. J. 1988. "Toward a theory of marketing control: environmental context, control types, and
consequences." The Journal of Marketing:23-39.
Kandybin, A. and M. Kihn. 2004. "Raising your return on innovation investment." Strategy And
Business(Spring):38-49.
Kerssens van Drongelen, I. and J. Bilderbeek. 1999. "R&D performance measurement: more than choosing a set
of metrics." R&D Management 29(1):35-46.
Koen, P. A. 2004. "The Fuzzy Front End for Incremental, Platform, and Breakthrough Products." The PDMA Handbook of New Product Development, John Wiley & Sons, New York:81-91.
Koen, P., G. Ajamian, R. Burkart, A. Clamen, J. Davidson, R. D'Amore, C. Elkins, K. Herald, M. Incorvia and A. Johnson. 2001. "Providing clarity and a common language to the 'fuzzy front end'." Research-Technology Management 44(2):46-55.
Kogut, B. and U. Zander. 1996. "What firms do? Coordination, identity, and learning." Organization Science
7(5):502-518.
Linder, J. C. 2006. "Does innovation drive profitable growth? New metrics for a complete picture." Journal of
Business Strategy 27(5):38-44.
Mathiassen, L. and A. M. Vainio. 2007. "Dynamic capabilities in small software firms: A sense-and-respond
approach." Engineering Management, IEEE Transactions on 54(3):522-538.
Mockler, R. J. 1970. Readings in management control: Appleton-Century-Crofts and Fleschner Publishing
Company.
Muller, A., L. Välikangas and P. Merlyn. 2005. "Metrics for innovation: guidelines for developing a customized suite of innovation metrics." Strategy & Leadership 33(1):37-45.
Nambisan, S. 2002. "Software firm evolution and innovation-orientation." Journal of Engineering and Technology
Management 19(2):141-165.
O'Raghallaigh, P., D. Sammon and C. Murphy. 2010. "Theory-building using Typologies - A Worked Example of
Building a Typology of Knowledge Activities for Innovation." In 15th IFIP WG8.3 Conference on DSS, ed.
Various. University of Lisbon, Portugal: IOS Press.
Osterwalder, A., Y. Pigneur and C. L. Tucci. 2005. "Clarifying business models: Origins, present, and future of
the concept." Communications of the association for Information Systems 16(1):1-25.
Sawhney, M., R. C. Wolcott and I. Arroniz. 2006. "The 12 different ways for companies to innovate." MIT Sloan
Management Review 47(3):75.
Shapiro, A. R. 2006. "Measuring innovation: beyond revenue from new products." Research-Technology
Management 49(6):42-51.
Smith, K. 1998. Science, technology and innovation indicators: a guide for policymakers: STEP Group.

Using Focus Groups to Evaluate Artefacts in Design
Research
Paidi O'Raghallaigh, David Sammon and Ciaran Murphy
Business Information Systems, University College Cork, Cork, Ireland
paidioreilly@gmail.com
dsammon@afis.ucc.ie
cmurphy@afis.ucc.ie

Abstract: Evaluation in design research continues to be ad hoc and poorly performed. It is one of the single
biggest weaknesses in existing design research. Part of the problem is undoubtedly disagreement around the
nature of design research and the highly complex process around evaluating its scientific claims. These issues
demand our collective attention. This paper proposes that evaluation in design research must answer two key questions regarding the artefact under consideration: does the artefact work, and why does it work? This paper
moves beyond the traditional approach to evaluation of artefacts and instead describes an interactionist
approach. Focus groups are proposed as an appropriate method for answering the above questions. Guidelines
for the use of focus groups as an interactionist approach to evaluation are provided. Up to now the use of focus
group methods to evaluate and refine design artefacts has remained relatively new to the IS field.

Keywords: design science, design research, design, evaluation, focus groups
1. Introduction
March and Smith (1995) identify the purposeful building of artefacts and the subsequent evaluating of
those artefacts as two main tasks in design research. The artefacts have to be evaluated in order to
conclude if any progress has been made. While evaluation remains a headline issue in design
research and there continues to be a need for rigorous evaluation methods (Hevner et al. 2004), there
remains little guidance in the literature concerning what is rigorous evaluation and how to choose and
design a rigorous evaluation strategy (Pries-Heje et al. 2008). There are few widely-shared
frameworks to guide how to make reasoned choices when planning an evaluation design (Mark
1999). Instead evaluators from different camps bring vastly different views of what type of evaluation
should be done in a given situation (ibid). This results in a seemingly unlimited number of approaches, and evaluators have trouble knowing what is what (Mark 1999).

Despite calls for more rigorous design research, evaluation continues to be ad hoc rather than a
systematic practice (Avgerou 1995). The result is that scholars struggle with evaluation, which is
poorly performed (Pries-Heje et al. 2008). Venable (2010) observes how evaluation is one of the
single biggest weaknesses in existing design research. Part of the problem is undoubtedly the highly
complex process that results from the various interrelated factors that have to be considered when
carrying out an evaluation (Cleven et al. 2009, np). This complexity demands a structured
proceeding towards designing an evaluation strategy (Cleven et al. 2009; Avgerou 1995). The
objective of this paper is to investigate some of the issues facing the design research scholar and to
propose an interactionist approach to evaluation with design theory at its core. This is followed by a detailed look at focus groups as an appropriate interactionist method.

While focus groups are now one of the most widely used research tools in the social sciences
(Tremblay et al. 2010), relatively little was published about them until more recent years (Rezabek
2000). Indeed the use of focus group methods to evaluate and refine artefacts remains relatively new
to the IS field (Tremblay et al. 2010). In addition, the majority of the literature that is available tends to
be written by non-designers (McDonagh-Philp et al. 2000). The objective of this section is, therefore,
to investigate the use of face-to-face and computer-mediated focus groups as approaches to
interactionist evaluation for design research.
2. Background to design research
It is necessary to distinguish between the practice of design and the science of design. Both are
problem-solving activities but the difference lies in their contributions to the body of design knowledge.
Artefact construction through applying existing knowledge is the prime focus of design practice while
knowledge generation through artefact construction is paramount for design research (Niehaves
2007). However, there are serious misgivings among scholars as to the required nature of this
knowledge. This paper argues that design science must generate knowledge of a theoretical nature
that explains both the how and why of artefact construction. The design knowledge makes
contributions to the academic knowledge base in the form of design theories (Walls et al. 1992).
Design theories give explicit prescriptions on how to design an artefact and in addition, they draw on
kernel theories in order to explain why a design should work (ibid). Iivari (2007, p. 49) considers "the existence of a kernel theory to be a defining characteristic of a design theory [and] without a sound kernel theory it is not justified to speak about design theory". In summary, the primary role of
design research is, therefore, the generation of design theories and the corresponding kernel
theories, while the emerging artefact is no more than a secondary (albeit a necessary) output of the
research. We now look at the implications this has for how design research is evaluated.
2.1 Evaluation in the design research literature
Design research consists of activities to design an innovative artefact for a specific purpose and to
subsequently evaluate how well it performs in relation to this purpose (March and Smith 1995). These
activities are typically iterated a number of times before the final artefact is arrived at (Markus et al.
2002). Here evaluation provides "evidence of how well the artifact supports a solution to the problem" (Peffers et al. 2007, p. 56). March and Smith (1995, p. 258) express this state of affairs as follows: "We build an artifact to perform a specific task. The basic question is, does it work? ... We evaluate artifacts to determine if we have made any progress."

But this paper claims that on its own this is insufficient for a scientific contribution. A clear distinction
must be made between, on the one hand, evaluation focused on the artefact and its utility, and, on the
other hand, evaluation focused on the resulting theory and the explanations it provides. The former addresses the questions of "does the artefact work?" and "how well does it work?". The latter addresses the questions of "why does the artefact work?" and "why does it work so well?". In other words, evaluation of an
artefact should contribute observation-based insights that improve the explanatory function of the
design theory but it should also contribute observation-based justifications for the explanations offered
by the design theory. This dovetails nicely with the work of Goldkuhl and Lind (2010), who advocate
that design theories should be justified through both empirical and theoretical groundings. Now we
turn our attention to addressing an issue related to how evaluation is performed.
2.2 An interactionist approach to evaluation
Evaluation, including that in the field of IS, has been dominated by overly positivistic and scientific
paradigms (Serafeimidis and Smithson 2003). For instance, March and Smith (1995, p. 258) state
that: "Evaluation requires the development of metrics and the measurement of artifacts according to those metrics. Metrics define what we are trying to accomplish. They are used to assess the performance of an artifact. Lack of metrics and failure to measure artifact performance according to established criteria result in an inability to effectively judge research efforts." The result is a highly
formal and rational approach that views evaluation (for the most part) as reaching an objective
judgement on the techno-elements of a discrete system while ignoring the considerable socio-
elements that are part of the wider conception of many systems (Symons, 1991). As a result, many
scholars (Barrow and Mayhew 2000; Guba and Lincoln 1989; Iivari 1988; Smithson and Hirschheim
1998; Symons 1991) are moved to argue that evaluation should instead adopt a more interactionist
approach that strives to incorporate various stakeholder interests and perspectives when determining
the value of socially embedded systems. Avgerou (1995) advocates that evaluation should involve
extensive stakeholder participation in a dialectic process, where the aspects of the artefact to be
evaluated, as well as the criteria to be applied in the evaluation, are those deemed important and
emerge from the concerns, claims and views expressed by the stakeholder groups. Such qualitative
evaluation has a better possibility to describe why goals and criteria are fulfilled . An interactionist
approach, therefore, seems sensible when evaluating the contribution of design research to
answering questions of why. Next we look at how an interactionist approach might be implemented.
3. An evaluation approach using focus groups
Much of the emphasis in qualitative research during the 20th century was on participant observation
and individual interviews (Rezabek 2000). But falling in the continuum of qualitative research between
these methods, focus groups consist of semi-structured question sessions in which a moderator
promotes interaction among a collection of participants that have been brought together to discuss
and shed light on a particular topic, issue or concern (Hansen and Hansen 2006; McDonagh-Philp et
al. 2000; Powell and Single 1996; Rezabek 2000; Tremblay et al. 2010). For example, Tremblay et al. (2010, p. 600) state that focus groups are valuable to gain shared understandings but yet allow for individual differences of opinion to be voiced. The key characteristic which
distinguishes focus groups from other approaches is the level of interaction between participants and
the synergy within the group (Gibbs 1997; Kitzinger 1995). The unit of analysis is, therefore, mainly
(but not exclusively) at the group level rather than the individual participant level (Krueger and Casey
2000).
3.1 Strengths and weaknesses of focus groups
Drawing on this discussion, as well as the work of Gibbs (1997), McDonagh-Philp and Bruseberg (2000), Bruseberg and McDonagh-Philp (2002), Kontio et al. (2004), Mazza and Berre (2007), and Tremblay et al. (2010), the strengths of focus groups as a research method are outlined in Table 1.
Table 1: Main strengths of focus groups
Fast and cost-effective method: Because several subjects can be interviewed at the same time, focus groups are a fast and cost-effective means of obtaining attitudes, feelings, and beliefs.

Provides broad content: Focus groups allow for an open format and they are flexible enough to handle a wide range of topics.

Provides in-depth content: Focus groups allow in-depth exploration of the reasons why the participants think the way they do and often provide insights that can be difficult, time-consuming, or expensive to capture using other methods.

Builds new content through interaction: Focus groups provide participants with the opportunity to react to, reflect on, and build on the experiences of others. This can generate new ideas that might not have been uncovered in individual interviews. It also provides the scholar with the opportunity to clarify responses, ask follow-up questions, and receive contingent answers to questions.

Empowers participants: The opportunity to work collaboratively with scholars can be empowering for many participants. In addition, participants build on their own knowledge through gaining from other participants, benchmarking experiences and practices between companies, and increasing networking contacts.
Drawing from the same sources, the weaknesses of focus groups as a research method are outlined in Table 2.
Table 2: Main weaknesses of focus groups

Difficult to assemble: Focus groups can be difficult to assemble, and it can be hard to get representative samples to work with. In addition, certain people, such as those who are not very articulate or confident, and those who have communication problems or special needs, may be discouraged from participating.

Loss of control over the data produced: Other than keeping participants focused on the topic, the scholar has less control over the data produced, as participants must be allowed to talk to each other, ask questions, and express doubts and opinions.

Responses from group members are not independent of one another: It cannot be assumed that participants are expressing their personal views; instead they may be expressing those that they perceive to be most acceptable to the group. In addition, a limited number of participants may dominate proceedings and bias the result, while more reserved participants may be hesitant to express their views.

Results are not fully confidential or anonymous: The discussions taking place in focus groups are never fully confidential or anonymous because the content is being shared with others in the group. Some participants may, therefore, decide to withhold some relevant information because of confidentiality concerns.

Limited generalisability: Because of difficulties in putting together a representative sample and getting all participants heard, and also due to some participants possibly having hidden agendas, the results may be biased and it may be difficult to generalise from them.

Analysis of the data can be difficult: The open-ended nature of the questions and the wide-ranging responses can make the analysis of the data difficult.

Time constraints and complexity of discussions: The time available for discussions in a focus group is limited, and this means that some complex issues may not receive sufficient time and may not be covered appropriately.

Requirement for a skilled and experienced moderator: A skilled and experienced moderator is needed for an effective research study.
A review of the role of focus groups in design research is now provided.
4. Focus groups within design research
Focus groups are suited to design research in that they offer designers a flexible range of techniques that can be applied at the various stages of an iterative design process, thereby providing a consistent and encompassing research method supporting the full design process (Nielsen 1997). Two distinct forms of artefact evaluation are generally performed within a design initiative: exploratory evaluation, which takes place during the build/evaluate design cycle to clarify the stakeholder needs and to refine the design of the artefact, and confirmatory evaluation, which takes place after the design cycle for the field testing of an instantiation of the artefact in a particular environment (Hevner 2007). Tremblay et al. (2010), therefore, propose two distinct types of focus groups for design research: exploratory focus groups (EFGs), which are used for the rapid incremental design and refinement of an artefact, and confirmatory focus groups (CFGs), which are used to provide confirmatory evidence of an artefact's utility in the field.

But there is little clarity in the literature regarding an appropriate number of focus group sessions. At the minimum, Tremblay et al. (2010) suggest that there should be one pilot focus group, two EFGs, and at least two CFGs. The pilot session is informal and used to understand timing issues and any kinks in the questioning route. There should be at least two design cycles, each driven by the findings from an EFG. Finally, as the unit of analysis is the focus group, it would be difficult to make a compelling argument for the utility of the designed artifact with just one CFG, and they, therefore, recommend at least two. There is also a lack of transparency in the literature regarding the appropriate number of focus group participants. The lower boundary for the number of participants is about four and the upper boundary is twelve (Tremblay et al. 2010). While it can seem easier (and less expensive) to divide participants into fewer but larger focus groups, this lowers the sample size, as there are then fewer groups across which to compare results. In addition, the dynamics of larger groups tend to be very different to those of smaller groups, in that less interaction is required from each participant, with the result that larger groups can lead to social loafing. On the other hand, a diversity of participants usually triggers more creative and broad discussions (and perhaps more conflict), but segregation of participants based on skills and knowledge may provide more in-depth discussions.
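
As a minimal illustration of these guidelines (the plan structure and check are our own, not from Tremblay et al. 2010), a planned series of sessions can be screened against the suggested minimums and participant boundaries:

# Illustrative sketch of the session and participant guidelines above;
# the plan format is invented, while the thresholds encode the suggestions
# discussed in the text.
MIN_PARTICIPANTS, MAX_PARTICIPANTS = 4, 12
SUGGESTED_MINIMUMS = {"pilot": 1, "EFG": 2, "CFG": 2}

def check_plan(sessions):
    """sessions: list of (kind, participant_count) tuples; returns warnings."""
    warnings = []
    for kind, minimum in SUGGESTED_MINIMUMS.items():
        count = sum(1 for k, _ in sessions if k == kind)
        if count < minimum:
            warnings.append(f"only {count} {kind} session(s); at least {minimum} suggested")
    for kind, n in sessions:
        if not MIN_PARTICIPANTS <= n <= MAX_PARTICIPANTS:
            warnings.append(f"{kind} session has {n} participants; 4 to 12 is the usual range")
    return warnings

plan = [("pilot", 5), ("EFG", 6), ("EFG", 8), ("CFG", 7)]
for warning in check_plan(plan):
    print(warning)  # flags the missing second CFG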

Unfortunately, the procedural literature about focus groups is light on detail regarding their conduct, and it is not tailored to the specific needs of designers or design research (Bruseberg and McDonagh-Philp 2002). We now examine how focus groups may be planned and run.
4.1 Planning and organising focus groups
The workload and responsibility on the moderator and facilitator can be onerous and the pre-planning
of focus groups is, therefore, critical to their success. Such planning can take weeks and even months
of preparation effort. Due to the open-ended nature of focus groups, moderation can be complex and
Krueger and Casey (2000) identify the following attributes that they deem important when moderating a
focus group: (1) presenting a friendly manner and a sense of humour, (2) involving and allowing all
participants the opportunity to express their views, (3) challenging participants to draw out differences
in opinions and to tease out a diverse range of meanings, (4) communicating clearly, both orally and
in writing, and (5) listening to the views of others, while controlling personal views. In addition, the
moderator should have a clear understanding of various aspects of the artefact being evaluated and
be comfortable presenting it to focus group participants (Tremblay et al. 2010).

The questioning route is also central to the success of the focus groups and ought to be closely
aligned with the research problem being addressed. Tremblay et al. (2010) recommend that there be
no more than twelve questions for a two-hour session and that the questions be ordered, firstly, from
the most general to the more specific and, secondly, by the relative importance of the question to the
research agenda. So for a given artefact, they suggest beginning with an explanation of its purpose,
followed by an outline of different scenarios in which it could be utilised, a description of its design,
training on its use, and finishing with a task where focus group participants are asked to utilise and
evaluate the artefact. A promising evaluation approach can be to use an exercise within the focus
group, whereby participants are asked to collectively complete a task without the artefact and again
with the artefact. The ensuing discussion should revolve around how the artefact was used and how
the completion of the task was altered by its use.
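
The ordering advice above can be expressed as a simple sort. The sketch below (with hypothetical questions and scores of our own) orders a questioning route from the most general to the most specific and, within that, by importance to the research agenda, warning if the route exceeds the suggested twelve questions for a two-hour session:

# Illustrative only: the question texts and scores are hypothetical.
MAX_QUESTIONS = 12  # suggested ceiling for a two-hour session

# specificity: 0 = most general; importance: 1 = most important.
questions = [
    {"text": "How did the artefact change how you completed the task?",
     "specificity": 2, "importance": 1},
    {"text": "What do you understand the artefact's purpose to be?",
     "specificity": 0, "importance": 1},
    {"text": "In which scenarios could you see yourself using it?",
     "specificity": 1, "importance": 2},
]

route = sorted(questions, key=lambda q: (q["specificity"], q["importance"]))
if len(route) > MAX_QUESTIONS:
    print(f"Warning: {len(route)} questions may be too many for a two-hour session")
for number, question in enumerate(route, 1):
    print(number, question["text"])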

Drawing from this discussion, Table 3 summarises the steps that should be considered when
organising focus groups for design research purposes.
Table 3: Steps for organising focus groups in design research (after: Gibbs 1997; Gibson and Arnott 2007; Tremblay et al. 2010)

Defining the research problem and the type of focus group: The appropriateness of the two types of focus group - exploratory focus groups (EFG) and confirmatory focus groups (CFG) - depends on the research goals of the design initiative.

Determining the number and the duration of focus groups: The series of focus groups should continue until nothing new is being learned from further sessions. But this is an extremely difficult and arbitrary decision, and instead the scholar may need to accept that the time has come to move forward even though something new can always be learned from further sessions.

Determining the number and type of participants in each focus group session: The identification of representative, insightful and motivated participants is critical to the success of focus groups. Putting a group together should not involve a random selection, but should instead be based on the characteristics of the participants in relation to the research problem and the solution being evaluated. The lower boundary for the number of participants is about four and the upper boundary twelve.

Identifying the moderator: Due to the open-ended nature of focus groups, moderation is complex and places demands on the personality, nature, and abilities of a moderator. In addition, the moderator should have a clear understanding of various aspects of the artefact being evaluated and be comfortable presenting it to focus group participants.

Identifying a questioning route: The questioning route is the agenda for the focus group and sets the broad direction for the group discussion; it should, therefore, be closely aligned with the research problem.
Once all the planning and organising of the focus group has been completed, the session itself must be run.
4.2 Running the focus group sessions
A typical focus group session can last about two hours and generally consists of the moderator
guiding anywhere between four to twelve people through a focused discussion of a specific topic.
During this time the moderator must ensure that the focus-group session feels free-flowing and relatively unstructured but, in reality, must follow a preplanned script of specific issues and set goals for the type of information to be gathered. During the group session, the moderator has the difficult job of keeping the discussion on track without inhibiting the flow of ideas and comments. The moderator also must ensure that all group members contribute to the discussion and must avoid letting one participant's opinions dominate (Nielsen 1997, p. 95). The seating
arrangements for (face-to-face) focus group sessions can be important. Tremblay et al. (2010) recommend getting to know the participants before the questioning route begins and placing the participants in a U-shaped arrangement, with the most assertive and expert participants next to the moderator, while the least talkative are seated directly across from the
moderator. During the session new topics may emerge requiring the moderator to think on his or her
feet and ask further probing follow-up questions of the participants, while at the same time not losing
sight of the focus of the session (Rezabek 2000). In addition, the moderator may need to encourage
some participants to contribute to the session, while at the same time safeguarding against assertive
participants dominating the conversation.
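
The seating heuristic above can be sketched as follows (the names, assertiveness scores, and seat model are hypothetical; this is our illustration, not a prescription from the cited authors):

# Seats run around the U: both ends of the returned list are adjacent to
# the moderator, and the middle is directly across from the moderator.
def seat_in_u_shape(participants):
    """participants: (name, assertiveness) pairs."""
    ranked = sorted(participants, key=lambda p: p[1], reverse=True)
    seats = [None] * len(ranked)
    left, right = 0, len(ranked) - 1
    for i, person in enumerate(ranked):
        if i % 2 == 0:
            seats[left] = person   # fill seats nearest the moderator first
            left += 1
        else:
            seats[right] = person
            right -= 1
    return seats

print(seat_in_u_shape([("A", 9), ("B", 2), ("C", 5), ("D", 7)]))
# [('A', 9), ('C', 5), ('B', 2), ('D', 7)]: the most assertive (A, D) sit at
# the ends nearest the moderator; the least talkative (B) sits mid-U,
# directly across from the moderator.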

Focus group sessions can be recorded using video and/or audio tapes. At other times an observer might be used to take notes of exchanges and also to record any strong reactions, facial expressions, and/or the general tone of the interactions (Tremblay et al. 2010). In some cases "data analysis can be as simple as having the moderator write a short report summing up the prevailing mood in the group, illustrated with a few colorful quotes. You can also do more detailed analyses, but the unstructured nature of the groups make this difficult and time consuming" (Nielsen 1997, p. 95). When analysing and reporting the results, the contents of the discussions should be examined for their meanings and their implications for the research questions. Scholars should look for common themes and variations within the transcripts that provide rich descriptions of the participants' reactions to design features.




Short quotes may be used to support specific points of interpretation, and longer passages of quotation can be used to give a flavour of the original discussions. Summary tables can be very helpful, displaying both evidence and counter-evidence of the utility of the solution by focus group.
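
A minimal sketch of such a summary table follows, assuming transcript excerpts have already been hand-coded as evidence or counter-evidence of the artefact's utility (the excerpts and coding scheme are hypothetical):

# Illustrative sketch: the hand-coded excerpts per focus group are invented.
coded_excerpts = [
    {"group": "CFG-1", "code": "evidence", "quote": "The artefact made gaps obvious."},
    {"group": "CFG-1", "code": "counter", "quote": "Too cumbersome to maintain."},
    {"group": "CFG-2", "code": "evidence", "quote": "We reused it in our planning."},
]

def summary_table(excerpts):
    """Tally evidence versus counter-evidence of utility, by focus group."""
    table = {}
    for excerpt in excerpts:
        row = table.setdefault(excerpt["group"], {"evidence": 0, "counter": 0})
        row[excerpt["code"]] += 1
    return table

for group, row in sorted(summary_table(coded_excerpts).items()):
    print(f"{group}: evidence={row['evidence']}, counter-evidence={row['counter']}")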
Drawing from this discussion, Table 4 outlines the steps that should be considered when moderating,
recording and reporting on focus groups.
Table 4: Steps for moderating, recording, and reporting focus groups in design research (after: Gibbs 1997; Gibson and Arnott 2007; Tremblay et al. 2010)

Conducting the focus group session: The focus group session must be carefully managed for time while still ensuring that all main contributions can be made during the allocated time.

Capturing and recording the focus group session: Relying on moderator notes may not be sufficient, as being a moderator is a full-time job in a focus group session. Focus groups may be video and/or audio taped.

Analysing and reporting the focus group session: The methods for analysing focus group data face many of the same challenges in demonstrating rigour that all qualitative research shares. Techniques that are used for qualitative data analysis and that emphasise the reliability and replicability of the observations and results should be considered and used where appropriate.
While much of this discussion has assumed that focus groups involve all participants being located in
the same physical space and contributing at the same time, this is not always the case. An example is
computer-mediated focus groups, which demand changes in how the focus groups are moderated and facilitated.
5. Conclusions
Despite the recent emphasis on the importance of evaluation in design research, it continues to be an
ad hoc rather than a systematic practice. Evaluation in design research becomes even more complex
when the position of theory is taken into consideration. This study adopts the position that creating
novel or effective artefacts without complementary theory-building is neither rigorous nor scientific.
According to this view, design research must generate abstract design knowledge about artefacts in the form of design theories, and artefacts are no more than tests of design theories. Evaluation must fill two roles, in that it must concurrently address the questions of "does the artefact work?" and "why does it work?". But to date, evaluation has been dominated by overly positivistic paradigms.
Instead, this study advocates focus groups as an interactionist approach that strives to incorporate
various stakeholder interests and perspectives in order to achieve a consensus on these two
questions. Focus groups consist of semi-structured question sessions in which a moderator promotes
interaction among a collection of participants that have been brought together to discuss and shed
light on a particular topic, issue or concern. While focus groups are now one of the most widely used
research tools in the social sciences, their use to evaluate and refine design artefacts remains
relatively new to the IS field and, in addition, most of the literature that is available tends to be written
by non-designers. Focus groups do allow participants to express a range of opinions that may not
have been obvious if simply observing their behaviour. Focus groups are suited to design research in
that they can offer designers a flexible range of techniques that can provide a consistent and
encompassing research method supporting the full design process, including the determination of
user needs (at the pre-concept stage), evaluation of prototypes (during the design stage), and finally
the testing of the final solutions (at the post-design stage). But the difficulty of facilitating and moderating focus groups should not be underestimated, and planning can take weeks and even months of effort. To assist with this, the paper provided the steps that should be followed when planning and organising focus groups, as well as those that should be followed during and after a session.
References
Avgerou, C. 1995. Evaluating Information Systems by Consultation and Negotiation. International Journal of
Information Management 15 (6):427-436.
Barrow, P. D. M., and P. J. Mayhew. 2000. Investigating Principles of Stakeholder Evaluation in a Modern IS
Development Approach. Journal of Systems and Software 52 (2-3):95-103.
Bruseberg, A., and D. McDonagh-Philp. 2002. Focus Groups to Support the Industrial/Product Designer: A
Review Based on Current Literature and Designers' Feedback. Applied Ergonomics 33 (1):27-38.
Cleven, A., P. Gubler, and K. M. Hüner. 2009. Design Alternatives for the Evaluation of Design Science Research Artifacts. In The 4th International Conference on Design Science Research in Information Systems and Technology (DESRIST). Malvern, PA, USA, 19.
Cronholm, S., and G. Goldkuhl. 2003. Strategies for Information Systems Evaluation - Six Generic Types.
Electronic Journal of Information Systems Evaluation 6 (2):65-74.
Gibbs, A. 1997. Focus Groups. Social research update 19 (8).
Gibson, M., and D. Arnott. 2007. The Use of Focus Groups in Design Science Research. In The 18th Australasian Conference on Information Systems (ACIS), edited by M. T. et al. The University of Southern Queensland, Toowoomba, Australia, 327-337.
Goldkuhl, G., and M. Lind. 2010. A Multi-Grounded Design Research Process. In The 5th International Conference on Design Science Research in Information Systems and Technology (DESRIST), edited by R. Winter, J. L. Zhao and S. Aier. St. Gallen, Switzerland: Berlin: Springer, 45-60.
Guba, E. G., and Y. S. Lincoln. 1989. Fourth Generation Evaluation. California, USA: Sage Publications, Inc.
Hansen, K., and R. S. Hansen. 2006. Using an Asynchronous Discussion Board for Online Focus Groups: A Protocol and Lessons Learned. In The 2006 College Teaching & Learning Conference. Orlando, FL, USA, 1-8.
Hevner, A. R. 2007. A Three Cycle View of Design Science Research. Scandinavian Journal of Information
Systems 19 (2):87-92.
Hevner, A. R., S. T. March, J. Park, and S. Ram. 2004. Design Science in Information Systems Research. Mis
Quarterly 28 (1):75-105.
Iivari, J. 1988. Assessing IS Design Methodologies as Methods of IS Assessment. In Information Systems Assessment: Issues and Challenges, edited by N. Bjørn-Andersen and G. B. Davis. Amsterdam, The Netherlands: North-Holland, 59-78.
Iivari, J. 2007. A Paradigmatic Analysis of Information Systems As a Design Science. Scandinavian Journal of Information Systems 19 (2):39-64.
Kitzinger, J. 1995. Qualitative Research: Introducing Focus Groups. British Medical Journal 311 (7000):299-302.
Kontio, J., L. Lehtola, and J. Bragge. 2004. Using the Focus Group Method in Software Engineering: Obtaining
Practitioner and User Experiences. In The International Symposium on Empirical Software Engineering
(ISESE). Redondo Beach, USA.
Krueger, R. A., and M. A. Casey. 2000. Focus Groups: A Practical Guide for Applied Research. Thousand Oaks,
CA., USA: Sage Publications, Inc.
March, S. T., and G. F. Smith. 1995. Design and Natural Science Research on Information Technology. Decision
Support Systems 15 (4):251-266.
Mark, M. M. 1999. Toward an Integrative Framework for Evaluation Practice. American Journal of Evaluation
20:177-198.
Markus, M. L., A. Majchrzak, and L. Gasser. 2002. A Design Theory for Systems That Support Emergent
Knowledge Processes. Mis Quarterly 26 (3):179-212.
Mazza, R., and A. Berre. 2007. Focus Group Methodology for Evaluating Information Visualization Techniques and Tools. In 11th International Conference of Information Visualization. Zürich, Switzerland, 74-80.
McDonagh-Philp, D., H. G. Denton, and A. Bruseberg. 2000. The Use and Evaluation of Focus Group Technique:
The Undergraduate Industrial Designer Experience. Journal of the National Association for Design
Education 8:17-26.
Niehaves, B. 2007. On Epistemological Diversity in Design Science: New Vistas for a Design-Oriented IS Research? In The 28th International Conference on Information Systems (ICIS). Montreal, Canada, 133.
Nielsen, J. 1997. The Use and Misuse of Focus Groups. Software, IEEE 14 (1):94-95.
Peffers, K., T. Tuunanen, M. A. Rothenberger, and S. Chatterjee. 2007. A Design Science Research
Methodology for Information Systems Research. Journal of Management Information Systems 24 (3):45-77.
Powell, R. A., and H. M. Single. 1996. Focus Groups. International Journal for Quality in Health Care 8 (5):499.
Pries-Heje, J., R. Baskerville, and J. R. Venable. 2008. Strategies for Design Science Research Evaluation. In
The 16th European Conference on Information Systems (ECIS) Galway, Ireland.
Rezabek, R. J. 2000. Online Focus Groups: Electronic Discussions for Research. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research 1. Available from http://www.qualitative-research.net/fqs-texte/1-00/1-00rezabek-e.htm.
Serafeimidis, V., and S. Smithson. 2003. Information Systems Evaluation as an Organizational Institution -
Experience from a Case Study. Information Systems Journal 13 (3):251-274.
Smithson, S., and R. Hirschheim. 1998. Analysing Information Systems Evaluation: Another Look at an Old
Problem. European Journal of Information Systems 7 (3):158-174.
Symons, V. J. 1991. A Review of Information Systems Evaluation: Content, Context and Process. European
Journal of Information Systems 1 (3):205-212.
Tremblay, M. C., A. R. Hevner, and D. J. Berndt. 2010. Focus Groups for Artifact Refinement and Evaluation in
Design Research. Communications of the association for Information Systems 26 (1):27.
Venable, J. 2010. Design Science Research Post Hevner et al.: Criteria, Standards, Guidelines, and
Expectations. In The 5th International Conference on Design Science in Information Systems and
Technology (DESRIST). St. Gallen, Switzerland, 109-123.
Walls, J. G., G. R. Widmeyer, and O. A. El Sawy. 1992. Building an Information System Design Theory for
Vigilant EIS. Information Systems Research 3 (1):36-59.

Realizing the Business Value of Service-Oriented
Architecture: The Construction of a Theoretical Framework
Ronan O'Sullivan, Tom Butler and Philip O'Reilly
University College Cork, Ireland
ronan.osullivan@umail.ucc.ie
TButler@afis.ucc.ie
Philip.OReilly@ucc.ie

Abstract: Service-oriented computing (SOC) has emerged over the past decade as an alternative and powerful
approach to application development and has sparked an increasing shift from inflexible proprietary software to
more open service-oriented computing environments. These service-oriented environments focus on harnessing
the power of the Internet and delivering business functionality through services. Organizations in many industries
have turned to service-oriented computing environments through the adoption of service-oriented architecture
(SOA). Services are the fundamental elements of SOA and are based on Internet standards and represent
specific business functions. SOA is transporting organizations from the old world of inflexible and expensive
traditional IT architecture to a brave new world where applications are provided in the form of standardized
services. Despite the increasing adoption of SOA within academia and practice, an analysis of the extant
literature by this study reveals a clear lack of research on the business perspective of SOA and in particular on
the business value of SOA. Indeed, the business value of IT (BVIT) research area - a fundamental area of
research within the IS discipline - is considered by many as being under-researched and in need of an expanded
research agenda. This study constructs a theoretical framework and develops a set of propositions and
hypotheses to investigate how the business value of SOA is realized. It illustrates that the combination of a SOA
implementation and complementary resources enable the creation of SOA-enabled resources via multiple
enablers. These SOA-enabled resources produce emergent SOA capabilities which realize business value at the
process level and at the level of the organization.

Keywords: service-oriented architecture, business value of IT, business value of SOA, dynamic capabilities,
complementary resources
1. Introduction and rationale
Over the past decade, a fundamental shift in the way applications are developed and deployed has
originated with the emergence of SOA which is at the heart of the e-business revolution (Chen 2008)
and is hailed by Maurizio et al (2008) as a key disruptive technology in the pursuit of competitive
advantage from IT. In a 2010 survey, TechTarget and Forrester Research (2010) describe SOA as being "entrenched in today's business world" and indicate that almost half of respondents are working in organizations where SOA projects are underway.

Despite the increasing rate of SOA adoption, there is a significant lack of research on the business
perspective of SOA and on the business value of SOA. A heavy focus on the technological
perspective of SOA has rendered the business perspective under-researched (Biemborn et al, 2008;
Luthria and Rabhi 2008; Luthria et al, 2007). Luthria and Rabhi (2008) capture the essence of the gap
by stating that the technological perspective has been appropriately addressed but the business or
practical use of SOA has not. Most organizations that have adopted SOA are using it as a technological initiative rather than as a business transformation tool (Merrifield et al, 2008) and, as a
result, most organizations that adopt SOA do not fully understand the business potential (Luthria and
Rabhi 2008).

Scratching beneath the surface, a significant body of researchers laments the lack of research on
the business value of SOA and call for it to be investigated (e.g. Luthria and Rabhi 2008; Viering et al,
2009; Biemborn et al, 2008; Kontogiannis et al, 2008). Viering et al (2009) call for theories to be
applied and extended to understand how SOA improves an organization's capabilities to realize
business value, while both Kontogiannis et al (2008) and Biemborn et al (2008) call for the
development of a comprehensive framework for understanding the business value of SOA.

This study addresses these gaps by proposing a theoretical framework to understand how the
business value of SOA is realized. The key objective of the study is to embark on a theoretical
trajectory and leverage previous work in the BVIT research area in order to construct a theoretical
framework for understanding how SOA realizes business value.

This study begins with an analysis of the SOA and BVIT literature and develops thorough
conceptualizations of both research areas. The next section is devoted to the construction of a
theoretical framework and the development of a set of hypotheses to explain how SOA is leveraged to
realize business value. In the first part, the theoretical perspective which comprises the resource-
based view of the firm (RBV), dynamic capabilities and Net-Enabled Business Innovation Cycle
(NEBIC) theory is explained and justified. The second part of the section presents the framework
which focuses on the creation of SOA-enabled resources from the combination of a SOA
implementation and complementary resources, and the emergence of SOA capabilities. Finally, a
conclusion is presented which also provides the findings from a preliminary analysis and an agenda
for future research.
2. An analysis of the SOA literature
The recent rise in popularity of SOA has fuelled an explosion of academic and practitioner literature.
Various conceptualizations of SOA have been produced and key similarities exist throughout. First,
there is consensus that SOA represents an IT architecture that presents an alternative approach to
application development through the utilization and interaction of standardized services (e.g.
Merrifield et al, 2008; Baskerville et al, 2005; Haki and Forte 2010). For instance, Merrifield et al
(2008) and Baskerville et al (2005) describe it as a new way of designing and deploying software that supports business activities, while Haki and Forte (2010) label it a new way of developing systems that promotes a shift from writing software to assembling and integrating services.

The second similarity contends that services, which play an integral role in service-oriented
architecture, are based upon business functions. SOA allows business functions to be developed as
modular services which can be called upon and reused when they are needed within business
processes (Maurizio et al, 2008). This provides firms with the opportunity to fundamentally redesign their operations (Merrifield et al, 2008) and to develop new functionality to satisfy the diverse requirements of potential service consumers (Chen 2008).

Thirdly, the SOA literature highlights the key characteristics of SOA (e.g. Hau et al, 2008, Merrifield et
al, 2008). Table 1 describes each characteristic.
Table 1: The characteristics of SOA

SOA is based on Internet standards: SOA is a net-enabling technology, as services are built upon Internet standards such as HTTP, SOAP, WSDL, and UDDI, which allow services to be easily integrated and reconfigured. (Baskerville et al 2005; Hau et al 2008; Merrifield et al 2008)

Service integration: The standardized services which comprise SOA facilitate platform- and language-independent integration with each other. (Baskerville et al 2005; Hagel and Seely Brown 2001; Papazoglou et al 2007)

Service modularity: Services are modular (or autonomous) and can be combined and rearranged in a flexible and timely manner without any unpredictable effect on other services. (Hau et al 2008; Maurizio et al 2008; Chen 2008)

Service reusability: SOA allows and encourages service consumers to reuse services in different business contexts. (Baskerville et al 2005; Hau et al 2008; Merrifield et al 2008)

Contract-orientation: Service contracts publicly establish the terms of engagement between services and the collaborating partners, such as specifying the functionality of a particular service. (Hau et al 2008; Maurizio et al 2008; Chen 2008)

From the SOA descriptions and characteristics in Table 1, this study defines SOA as an architectural
style that offers an alternative approach to application development and delivery through the
utilization and combination of standardized, modular and reusable services. These services represent
specific business functions which are the primary focus for service-oriented development.
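
As a loose illustration of this definition (and not a model of any particular SOA product or standard), the sketch below treats business functions as modular, reusable services that can be assembled into different service-oriented applications; the service names are hypothetical:

# Loose illustration of modular, reusable services; names are hypothetical.
from typing import Callable, Dict, List

# Each "service" exposes a specific business function behind a stable name,
# loosely analogous to a published service contract.
services: Dict[str, Callable[[dict], dict]] = {
    "check_credit": lambda order: {**order, "credit_ok": order["amount"] < 5000},
    "reserve_stock": lambda order: {**order, "stock_reserved": True},
    "issue_invoice": lambda order: {**order, "invoiced": True},
}

def compose(service_names: List[str]) -> Callable[[dict], dict]:
    """Assemble reusable services into a new application (a simple pipeline)."""
    def application(order: dict) -> dict:
        for name in service_names:
            order = services[name](order)
        return order
    return application

# The same services are recombined into two different business solutions.
standard_sale = compose(["check_credit", "reserve_stock", "issue_invoice"])
prepaid_sale = compose(["reserve_stock", "issue_invoice"])
print(standard_sale({"amount": 1200}))

The point of the sketch is reusability and recombination: the same services are composed into different applications without modification, mirroring the definition above.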

The benefits of SOA have been a subject of interest for SOA authors (e.g. Merrifield et al, 2008;
Baskerville et al, 2005). Table 2 describes each of the key business benefits of SOA.


Table 2: The business benefits of SOA

Reduced IT costs: Reduced integration costs - SOA provides the ability to easily, cost-effectively and ubiquitously share software modules with internal and external clients. Reduced application development costs - SOA reduces development costs through the reuse of modular services in separate applications and in different contexts. (Merrifield et al 2008; Baskerville et al 2005; Maurizio et al 2008; Papazoglou et al 2007)

Enhanced innovation: SOA enables new, innovative service-oriented business solutions to be developed through the combination of existing and different services and the opportunities for new functionality. (Hagel and Seely Brown 2001; Merrifield et al 2008; Hau et al 2008)

Increased flexibility: Service-oriented applications can be quickly and cost-effectively assembled and reassembled to respond to changing market conditions or demand. (Bieberstein et al 2005; Merrifield et al 2008)
3. An analysis of the BVIT literature
Since the late 1980s a significant amount of literature has explored the contribution of IT to business performance, so much so that by the mid-to-late 2000s BVIT had solidified its place at the core of the IS research field (Kohli and Grover 2008; Nevo and Wade 2010; Agarwal and Lucas 2005). The importance of the research area is emphasized by Barua et al (1995), who contend that measuring the economic contribution of IT investments is a key activity that can shape the very nature of business through its influence on corporate strategies and future investments in technology. Nevo and Wade (2010) agree with this sentiment as they refer to the integral and strategic role of IT in modern organizations. In general, the importance of IT for business is underlined by the consensus among authors and practitioners that IT improves business performance and creates business value (e.g. Kohli and Grover 2008; Brynjolfsson and Hitt 2000).

The BVIT literature highlights several characteristics that define the BVIT research area. Table 3
indicates these characteristics and provides a description for each.
Table 3: Characteristics of the BVIT research area

The area is under-researched: The IS field is not doing enough to explain how organizations are realizing business value through IT (Kohli and Grover 2008; Nevo and Wade 2010).

The domination of firm-level research: Firm-level research is considered to be the most effective way of demonstrating a positive relationship between IT investments and organizational performance (e.g. Brynjolfsson and Hitt 2000).

The difficulty of measuring BVIT: BVIT measurement represents an ever-present problem for the research area, as it is very difficult to attribute value to IT (Melville et al, 2004; Barua et al, 2004; Brynjolfsson and Hitt 2000). Research has typically focused on traditional accounting measures; however, these measures are now considered inadequate in isolation, as indirect or intangible factors are difficult to measure through traditional accounting techniques (Kohli and Grover 2008; Melville et al, 2004).

The importance of complementary resources: Complementary resources are central to the realization of BVIT, as the IT asset cannot be considered in isolation in the pursuit of improved business performance (Melville et al, 2004; Nevo and Wade 2010; Barua et al, 2004).

The dominant application of the RBV in the research area: The importance of complementary resources to BVIT research is a key factor in the widespread application of the RBV in the research area (Barua et al, 2004; Melville et al, 2004; Kohli and Grover 2008).
This study broadly conceptualizes BVIT as the contribution of IT to business performance and adopts the BVIT definition of Melville et al (2004): "the organizational performance impacts of information technology at both the intermediate process level and the organization wide level, and comprising both efficiency impacts and competitive impacts".

The next section is devoted to the construction of a theoretical framework in order to understand how SOA realizes business value.

4. Toward a theoretical framework for understanding the business value of
SOA
This section begins with a detailed rationale for the theoretical perspective that encompasses the
RBV, dynamic capabilities, and NEBIC theory. The rationale for each is based upon their relationships
with each other, their connections to the concept of SOA, and the contentions that SOA is a net-
enabling technology and net-enablement is a dynamic capability. A theoretical framework is then
presented which applies tenets from each of the three theories and builds primarily upon the work of
Nevo and Wade (2010) who contend that the combination of an IT asset and complementary
resources can ultimately create business value.
4.1 The rationale for the theoretical perspective
The rationale for this study's theoretical perspective, which encompasses the resource-based view of
the firm, dynamic capabilities and Net-Enabled Business Innovation Cycle (NEBIC) theory, is
explained below.
4.1.1 The RBV
The RBV is a theory which emphasizes that distinctive resources are the basis for achieving and retaining
competitive advantage (Barney 1991; Wernerfelt 1984), and it has proved extremely useful in the BVIT research
area (e.g. Melville et al, 2004; Kohli and Grover 2008). The RBV is considered to be the base
theoretical perspective for the study for three key reasons. First, SOA is considered by many to be
conceptually close to the RBV: SOA sees the leveraging of existing applications into services which
can be rapidly recombined into new business solutions, while the RBV contends that competitive
advantage can be attained through the combination of heterogeneous resources (e.g. Chen 2008;
Choi and Ramamurthy 2011). Second, this study asserts that the firm's resources are of central
importance, as the combination of SOA and complementary resources creates SOA-enabled resources
which are proposed as sources of business value (see Figure 2). Finally, both dynamic
capabilities and NEBIC theory are inherently based on the RBV.
4.1.2 Dynamic capabilities
Dynamic capabilities provide the ability to integrate, reconfigure, add and dispose of internal and
external resources, and enable firms to perform these actions in response to continuously changing
business environment conditions (Eisenhardt and Martin 2000; Teece et al, 1997). The development
of the dynamic capabilities approach stems from two key limitations of the RBV: its failure to (a)
explain how resources are developed, integrated, and released, and (b) explain how firms achieve
competitive advantage in unstable or dynamic business environments. Table 4 identifies and explains
the different roles of dynamic capabilities in enabling firms to increase competitiveness. Examples of
dynamic capabilities include net-enablement (Wheeler 2002) and product development routines
(Eisenhardt and Martin 2000).
Table 4: The roles of dynamic capabilities
Role Description
Integration The efficient and effective integration of internal and external resources, skills and
activities allows firms to develop such dynamic capabilities which enable them to
increase competitiveness in dynamic business environments (Teece et al, 1997;
Eisenhardt and Martin 2000).
Learning Learning processes are vital to the dynamic capabilities approach as they enable tasks
to be performed better and more quickly, create knowledge, and identify new
opportunities (Teece et al, 1997; Eisenhardt and Martin 2000).
Reconfiguration The reconfiguration of a firm's resources and capabilities is vital because it enables
the firm to change its bundle of resources in order to adapt to change (Teece et al, 1997;
Eisenhardt and Martin 2000).
The dynamic capabilities approach is regarded as the most critical theoretical perspective for two key
reasons: first, there is a consensus that SOA is heavily influencing and enabling the development of
dynamic capabilities (e.g. Luthria et al, 2007; Choi and Ramamurthy 2011; Luthria and Rabhi 2008).
For instance, Luthria et al (2007) describe SOA as "the technology infrastructure required to
implement a firm's dynamic capabilities" (Luthria et al, 2007). Second, the central IT artifact, the SOA
implementation, is a net-enabling technology, and net-enablement is declared by Wheeler (2002) as a
dynamic capability. It is for these reasons that the dynamic capabilities approach can be considered
extremely suitable for studying SOA.
4.1.3 NEBIC theory
The explosion of Internet technologies and in particular, net-enabled business transactions (e.g. Weill
and Vitale 2001) has fuelled the development of NEBIC theory - an applied dynamic capabilities
theory which integrates the fields of strategic management and IS research through the dynamic
capability of net-enablement (Wheeler 2002). Net-enablement enables organizations to leverage
pervasive digital networks (i.e. the Internet) in order to reconfigure their resources and exploit
business opportunities in dynamic business environments. NEBIC theory contends that enabling
technologies create or reveal economic opportunities which can be transformed through business
innovation into customer value (Wheeler 2002). Figure 1 demonstrates the thesis of NEBIC theory.

Figure 1: Wheeler's (2002) NEBIC theory
NEBIC theory is leveraged within the theoretical perspective for three key reasons. First, NEBIC was
specifically developed for the IS research field and, in particular, is suited to the BVIT research area,
as its objective is creating value. Second, net-enablement is declared a dynamic capability and, third,
SOA is a net-enabling technology, as one of its key characteristics is that it is based on Internet
standards (Baskerville et al, 2005; Merrifield et al, 2008).
4.1.4 Applying the theoretical perspective to the theoretical framework
The theoretical perspective is applied to the framework in the following way. The RBV stresses the
importance of resources to firm competitiveness; this is recognized and applied in the framework, as
resources are vital in association with IT assets and are key sources for realizing BVIT. NEBIC
theory and dynamic capabilities are applied in conjunction with each other: because NEBIC theory
contends that net-enablement is a dynamic capability, and because SOA is deemed net-enabling, the
framework applies the dynamic capabilities approach through its key roles of integration, learning and
reconfiguration in the development of specific resources.
4.2 A theoretical framework for understanding the business value of SOA
Based on a literature review of SOA and BVIT and the theoretical perspectives explained above, a
theoretical framework has been constructed as illustrated in Figure 2. In this framework, the
implementation of SOA and complementary resources are combined to create SOA-enabled
resources via multiple enablers. Complementary resources constitute organizational and
environmental resources which are applied in conjunction with an IT asset in order to achieve
organizational performance impacts (Melville et al, 2004; Barua et al, 2004; Nevo and Wade 2010).
Examples of complementary resources and factors include: training (Kohli and Grover 2008),
business processes (Melville et al, 2004), and IT management skills (Mata et al, 1995). An example of
a SOA-enabled resource is the partnership between SOA and BPM (Business Process Management)
with SOA providing the capabilities for manipulating standardized, modular and reusable services,
and BPM providing the ability to optimize business processes in a service-oriented manner (Bajwa et
al, 2009; Brahe, 2007).

The framework then proposes that SOA-enabled resources produce emergent SOA capabilities which
realize business value at the process level and at the level of the organization. Emergent SOA
capabilities refer to outcomes or actions which are classed as either predictable or unpredictable, and
as technical or strategic in nature (Nevo and Wade 2010). Examples of emergent SOA capabilities
include reduced application development and maintenance time and costs, the reuse of services, and
the ability to rapidly and easily reconfigure information systems to adapt to change (e.g. Merrifield et al, 2008).

This description of the theoretical framework is based upon an analysis of the SOA literature and
Nevo and Wade's (2010) argument that the full extent of IT assets' business value may not become
apparent until they are placed in a relationship with organizational resources and used to create IT-
enabled resources (Nevo and Wade 2010).

Figure 2: A theoretical framework for understanding the business value of SOA
4.2.1 The creation of SOA-enabled resources
An IT asset cannot be considered in isolation but rather in combination with complementary resources
in the pursuit of improved business performance (Brynjolfsson and Hitt 2000; Nevo and Wade 2010;
Kohli and Grover 2008; Barua et al, 2004). Through the combination of technological, organizational
and environmental resources, a set of capabilities can be developed to improve operational and
financial performance (Barua et al, 2004). Similarly, Nevo and Wade (2010) claim IT assets can be
combined with organizational resources to create synergistic IT-enabled resources and consequently,
emergent capabilities in order to achieve and retain competitive advantage. This study contends that
SOA-enabled resources are the result of the combination of the SOA implementation and
complementary organizational and environmental resources.

In order to create IT-enabled resources, Nevo and Wade (2010) emphasize two specific enablers:
compatibility and integration. Compatibility refers to the ability of an organizational resource to apply
an IT asset in its regular activities and routines (Nevo and Wade 2010), and integration refers to
activities taken by the organization's management to support, guide and assist the implementation of
the IT asset within the organizational resource (Nevo and Wade 2010). This study contends that the
key roles of dynamic capabilities (see Table 4) are potential enablers of SOA-enabled resources, as
SOA is a net-enabling technology and net-enablement is a dynamic capability. Similar to compatibility
and integration, these roles are made up of routines and activities and enable firms to increase
competitiveness. Therefore, the following proposition is developed:

Proposition 1 - The creation of SOA-enabled resources through the combination of a SOA
implementation and complementary resources is enabled through the compatibility, integration and
reconfiguration of resources, and through learning.

We refine Proposition 1 by specifying four hypotheses as illustrated in Figure 2:

H1a - The creation of SOA-enabled resources through the combination of a SOA implementation and
complementary resources is enabled through the compatibility of resources.

H1b - The creation of SOA-enabled resources through the combination of a SOA implementation and
complementary resources is enabled through the integration of resources.

H1c - The creation of SOA-enabled resources through the combination of a SOA implementation and
complementary resources is enabled through the reconfiguration of resources.

H1d - The creation of SOA-enabled resources through the combination of a SOA implementation and
complementary resources is enabled through learning.
4.2.2 The realization of business value of SOA through emergent SOA capabilities
Capabilities emerging from the relationship between complementary resources and IT are
regarded as sources of business value (Barua et al, 2004; Nevo and Wade 2010). According to Nevo
and Wade (2010), specific capabilities emerge from synergistic IT-enabled resources which can be
used to attain and sustain competitive advantage, while Barua et al, (2004) claim these emergent
capabilities have the ability to realize business value. Furthermore, the definition of BVIT provided by
Melville et al, (2004) contends that the realization of business value occurs at the process-level and at
the level of the organization. Therefore, this study proposes the following propositions:

Proposition 2 - SOA-enabled resources produce SOA capabilities which directly realize business
value.

Proposition 3 - The SOA capabilities which emerge from the SOA-enabled resources realize business
value at the process level and at the level of the organization.

We refine Proposition 3 by specifying two hypotheses as illustrated in Figure 2:

H3a - The SOA capabilities which emerge from the SOA-enabled resources realize business value at
the process level.

H3b - The SOA capabilities which emerge from the SOA-enabled resources realize business value at
the level of the organization.
5. Conclusion and future research
Despite a significant body of research on service-oriented computing and SOA, there is a dearth of
research on the business perspective of SOA, and particularly on the business value of SOA. The
objective of the study is to address this gap in the literature by constructing a theoretical framework for
understanding the business value of SOA.

This paper makes a number of theoretical contributions to the extant literature. Firstly, it presents a
theoretical framework for understanding the business value of SOA. In developing this framework, the
RBV, dynamic capabilities and NEBIC theory are used in conjunction with previous work that focuses
on the combination of the IT asset and complementary resources (e.g. Nevo and Wade 2010; Barua
et al, 2004). The result is a framework which suggests that the combination of a SOA implementation
and complementary resources enables the creation of SOA-enabled resources via multiple enablers.
Since net-enablement is a dynamic capability, the framework extends Nevo and Wade's (2010)
enablers from integration and compatibility to include reconfiguration and learning (Teece et al, 1997;
Eisenhardt and Martin 2000). These SOA-enabled resources produce emergent SOA capabilities
which realize business value at the process level and at the level of the organization. Furthermore,
the theoretical value of the paper is that it applies a different type of resource-based theoretical
perspective to the work carried out primarily by Nevo and Wade (2010). The RBV is heavily
incorporated into Nevo and Wade's (2010) work; however, the theoretical perspective adopted by
this study is expanded and is based on the RBV, dynamic capabilities and NEBIC theory. Dynamic
capabilities are most prominent, as the framework transforms their key roles (see Table 4) into enablers
for the creation of SOA-enabled resources, which produce emergent SOA capabilities that are direct
sources of business value.

This paper also has great potential value for practitioners. Through utilization of the theoretical
framework, it potentially offers an insight into how organizations can optimally manage their SOA
implementation in order to maximize value from their SOA. It can also be used to inform practitioners
of the nature of the business value which arises as a result of a SOA implementation.

The researchers now call for further research to empirically test and validate the presented
model. One potential approach would consist of a two-stage multi-method design, comprising a
combination of case studies and an organizational survey of SOA practices.
References
Agarwal, R. and Lucas, H.C. (2005) "The Information Systems Identity Crisis: Focusing on High-Visibility and
High-Impact Research", MIS Quarterly, Vol. 29, No. 3, September, pp 381-398.
Bajwa, I.S. et al (2009) SOA and BPM Partnership: A Paradigm for Dynamic and Flexible Process and I.T.
Management, International Journal of Human and Social Sciences Vol. 4, No. 7, pp 554-560.
Barney, J.B. (1991) Firm Resources and Sustained Competitive Advantage, Journal of Management, Vol. 17,
No. 1, March, pp 99-120.
Barua, A., Kriebel, C. and Mukhopadhyay, T. (1995) "Information Technologies and Business Value: An Analytic
and Empirical Investigation", Information Systems Research, Vol 6, No. 1, March, pp 3-23.
Barua, A. et al (2004) An Empirical Investigation of Net-Enabled Business Value, MIS Quarterly, Vol. 28, No. 4,
December, pp 585-620.
Baskerville, R. et al (2005) Extensible Architectures: The Strategic Value of Service-oriented Architecture in
Banking, Proceedings of the 13th European Conference on Information Systems, Regensburg, Germany,
Vol. 341, No 1-2, May, pp 106-111.
Beimborn, D., Joachim, N. and Weitzel, T. (2008) "Drivers and Inhibitors of SOA Business Value -
Conceptualizing a Research Model", AMCIS 2008 Proceedings, Paper 227.
Bieberstein, N. et al (2005) Impact of Service-oriented Architecture on Enterprise Systems, Organizational
Structures, and Individuals, IBM Systems Journal, Vol. 44, No. 4, pp 691-708.
Brahe, S. (2007) BPM on Top of SOA: Experiences from the Financial Industry, Proceedings of the 5th
International Conference on Business Process Management, Germany, pp 96-111.
Brynjolfsson, E. and Hitt, L.M. (2000) "Beyond Computation: Information Technology, Organizational
Transformation and Business Performance, Journal of Economic Perspectives, Vol. 14, No. 4, Autumn, pp
23-48.
Chen, H.M. (2008) Towards Service Engineering, Service Orientation and Business-IT Alignment, Proceedings
of the 41st Hawaii International Conference on System Sciences, USA, January.
Choi, J. and Ramamurthy, K. (2011) Service-Oriented Architecture and IT-Business Alignment, Proceedings of
the 2011 International Conference on Industrial Engineering and Operations Management, Kuala Lumpur,
Malaysia, January pp 480-485.
Eisenhardt, K.M. and Martin, J.A. (2000) "Dynamic Capabilities: What Are They?", Strategic Management
Journal, Vol. 21, No. 10-11, October-November, pp 1105-1121.
Hagel III, J. and Seely Brown, J. (2001) Your Next IT Strategy, Harvard Business Review, Vol. 79, No. 9,
October, pp 105-113.
Haki, M. and Forte, M. (2010) Proposal of a Service Oriented Architecture Governance Model to Serve as a
Practical Framework for Business-IT Alignment, Proceedings of the 4th International Conference on New
Trends in Information Science and Service Science, May, pp 410-417.
Hau, T. et al (2008) Where to start with SOA: Criteria for Selecting SOA Projects, Proceedings of the 41st
Hawaii International Conference on System Sciences, USA, January.
Kohli, R. and Grover, V. (2008) Business Value of IT: An Essay on Expanding Research Directions to Keep up
with the Times, Journal of the AIS, Vol. 9, No. 1, January, pp 23-37.
Kontogiannis, K., Lewis G.A. and Smith D.B. (2008) A Research Agenda for Service-oriented Architecture,
Proceedings of the 2nd International Workshop on Systems Development in SOA Environments, May,
Leipzig, Germany.
Luthria, H. and Rabhi, F.A. (2008) Service Oriented Computing in Practice - An Agenda for Research into the
Factors Influencing the Organizational Adoption of Service Oriented Architectures, Journal of Theoretical
and Applied Electronic Commerce Research, Vol. 4, No. 1, April, pp 39-56.
Luthria, H., Rabhi, F.A. and Briers, M. (2007) Investigating the Potential of Service Oriented Architectures to
Realize Dynamic Capabilities, Asia-Pacific Service Computing Conference, The 2nd IEEE, December, pp
390-397.
Mata, F.J., Fuerst, W. L. and Barney, J. B. (1995) Information Technology and Sustained Competitive
Advantage: A Resource-Based Analysis, MIS Quarterly, Vol. 19, No. 4, December, pp 487-505.
Maurizio, A. et al (2008) Service Oriented Architecture: Challenges for Business and Academia, Proceedings of
the 41st Hawaii International Conference on System Sciences, USA, January.
Melville, N., Kraemer, K. and Gurbaxani, V. (2004) "Review: Information Technology and Organizational
Performance: An Integrative Model of IT Business Value", MIS Quarterly, Vol. 28, No. 2, June, pp 283-322.
Merrifield, R., Calhoun J. and Stevens D. (2008) The Next Revolution in Productivity, Harvard Business Review,
June, pp 73-80.
Nevo, S. and Wade, M.R. (2010) The Formation and Value of IT Enabled Resources: Antecedents and
Consequences of Synergistic Relationships, MIS Quarterly, Vol. 34, No. 1, March, pp 163-183.
Papazoglou, M.P. et al (2007) Service-oriented Computing: State of the Art and Research Challenges,
Computer 2007, Vol. 40, No. 11, November, pp 38-45.
TechTarget and Forrester Research (2010) State of SOA 2010,
http://cdn.ttgtmedia.com/searchSOA/downloads/TTAG-State-of-SOA-2010-execSummary-working-
523%5B1%5D.pdf
Teece D., Pisano, G. and Shuen, A. (1997) Dynamic Capabilities and Strategic Management, Strategic
Management Journal, Vol. 18, No. 7, August, pp 509-533.
Viering, G., Legner, C. and Ahlemann, F. (2009) "The (Lacking) Business Perspective on SOA - Critical Themes in
SOA Research", Wirtschaftsinformatik Proceedings 2009, Paper 5.
Wernerfelt, B. (1984) "A Resource-based View of the Firm", Strategic Management Journal, Vol. 5, No. 2,
April-June, pp 171-180.
Wheeler, B. (2002) "NEBIC: A Dynamic Capabilities Theory for Assessing Net-enablement", Information
Systems Research, Vol. 13, No. 2, June, pp 125-146.
The Identification of Service Oriented Architecture-Specific
Critical Success Factors
Ian Owens and John Cunningham
Cranfield Defence and Security, Shrivenham, UK
i.owens@cranfield.ac.uk

Abstract: This paper reports on a research project that sought to determine whether it is possible to identify
Service Oriented Architecture (SOA) specific critical success factors (CSFs). SOA is an approach to designing
interoperable information systems based on a set of design principles and the concept of loosely coupled
services. In recent years SOA has become the de facto method for designing distributed interoperable
information systems. Despite the widespread use of SOA design principles, it remains not only technically difficult
to implement, but also presents a substantial challenge to systems architects and managers. Our hope is that
SOA-specific CSFs will enable project managers involved in SOA implementations to best allocate resources to
those areas that are critical to the success of SOA-based projects. We conducted a comprehensive systematic
review of the SOA literature, and identified five SOA-specific CSFs which we believe may be critical for realizing
the benefits of SOA. To externally validate the CSFs identified from our literature review, we surveyed project
managers and implementers in a department in a large defence-related world-wide organization which is
currently implementing SOA-based systems. The results of our research confirmed the validity of the SOA
specific CSFs identified through our literature review. We recommend that project managers use both our SOA-
specific CSFs and generic project CSFs in combination to manage SOA projects.

Keywords: SOA, CSFs, project management, enterprise architectures
1. Introduction
In the past, large organizations developed or commissioned the development of a broad range of
software applications from a variety of vendors. The majority of these applications were procured for a
specific purpose, often in isolation from other concurrent developments. Over time, heterogeneous
information systems (IS) and applications increased in both number and complexity; the result was an
enormous amount of duplication and, consequently, a great deal of unnecessary and excessive
investment (Tsai 2005). Much of that duplication of capability was attributable to the fact that many of
these applications were incompatible with other IS and thus the number of users of the application
was restricted to those within that department or using the same hardware or operating system. The
phrase tightly coupled was often used to describe applications that were highly reliant on a particular
system. Conversely, the term loosely coupled implies no or very little reliance on a particular system
(Kaye 2003, Tsai 2005).

Given the cost and commitment of resources required for developing bespoke software, re-use and
systems interoperability have been primary goals of many organizations, especially those that rely
heavily on computer networks (Lim and Wen, 2003). Service Oriented Architecture (SOA) is an
architectural pattern that can be used to build distributed IS that allows the use, and most importantly
the re-use, of existing applications. SOA not only leverages existing assets but, by virtue of its
loosely coupled applications, also facilitates the adding or removing of individual applications, thus
making changes to support changing business requirements simple, quick and relatively
inexpensive.
1.1 Aims of the research
The aims of this research were twofold. Firstly we wanted to determine whether it is possible to
identify SOA specific critical success factors (CSFs). CSFs are an important tool for those planning,
managing and implementing projects. There has been much published work on identifying CSFs that
influence IS project management outcomes, notably the work of Pinto and Prescott (1988).
Our second aim was to build on the work of Pinto and Prescott and empirically test SOA-related CSFs,
derived from theory, on a real SOA implementation project.
1.2 Structure of the paper
The paper is structured as follows. We begin by describing SOA and critical success factors. This is
followed by an identification of SOA-specific CSFs. We then describe our empirical study and discuss
our results. The concluding section discusses the relevance of our work and makes some
recommendations based on our findings.
2. Service oriented architecture (SOA)
SOA is a well-established approach to designing information systems based on concepts, such as
reuse and the loose coupling of services, that promote and enable good IT management. The three main
components of a SOA system are service providers, service consumers and service
brokers/registries. Service providers are the hosts of services, service consumers are the users of
services, and service brokers/registries help to facilitate the discovery and advertisement of services
by utilising a registry (Foster et al 2002; Papazoglou and Georgakopoulos 2003; Bucur and
Bardram 2007; Yang and Joy 2010). Figure 1 illustrates the relationship between the three
components:

Figure 1: Three components of a SOA-based system.
Van Halteren and Pawar (2006) define SOA as "essentially a collection of services that communicate
with each other to achieve a common goal". OASIS (2006) defines SOA as "a paradigm
for organizing and utilizing distributed capabilities that may be under the control of different ownership
domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce
desired effects consistent with measurable preconditions and expectations".
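
To make the three-component pattern concrete, the following minimal sketch models a broker/registry with which a provider registers a service endpoint and through which a consumer discovers and invokes it. It is an illustrative abstraction of the pattern described above rather than any particular SOA product, and all class, method and service names are hypothetical.

    class ServiceBroker:
        """Hypothetical broker/registry: advertises and resolves services by name."""
        def __init__(self):
            self._registry = {}  # service name -> callable endpoint

        def register(self, name, endpoint):
            # Provider side: advertise a service under a given name.
            self._registry[name] = endpoint

        def discover(self, name):
            # Consumer side: look a service up by name.
            return self._registry[name]

    # A provider exposes an existing function as a named service.
    broker = ServiceBroker()
    broker.register("invoice.total", lambda items: sum(items))

    # A consumer discovers the service through the broker and invokes it
    # without knowing anything about the provider's implementation.
    service = broker.discover("invoice.total")
    print(service([100, 250, 49]))  # -> 399

In a production SOA the endpoint would be a network-addressable service described by a published contract (e.g. WSDL), but the register/discover/invoke roles are the same.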
2.1 Critical success factors
The emergence of CSFs as a concept can be traced back to the work of Daniel (1961), who
introduced the idea of success factors into the management literature. In the early 1970s, Anthony et al
(1972) introduced the idea of tailoring critical success factors, and Rockart (1979) combined the work
of these authors to describe a study of three organizations in the same sector. Rockart's study
confirmed that organizations in the same industrial sector shared similar CSFs. Rockart describes
CSFs as:
Critical success factors thus are, for any business, the limited number of areas in which
results, if they are satisfactory, will ensure successful competitive performance for the
organization. They are the few key areas where "things must go right" for the business to
flourish. (Rockart, 1979)
Rockart's definition is particularly relevant as it has stood the test of time. Even today, Rockart is
routinely quoted in numerous articles related to CSFs, for example Jenster (1987), Chen (1999), and
Fortune and White (2006). The 1990s saw the development of CSFs from strategic decision making to
the specific domain of Project Management (PM). Slevin and Pinto (1986) developed the Project
Implementation Profile (PIP), which identified 11 CSFs relevant to a multitude of projects within the
business arena. These CSFs were later tested empirically by Pinto and Prescott (1988); Pinto's work
was subsequently incorporated into the Project Management Institute's (1996) Body of Knowledge
(PMBOK) publication. Slevin and Pinto's PIP has been tested empirically by other authors, including
Finch (2003), who evaluated the PIP in an information systems project.

Our contention is that CSFs are an established concept for achieving successful outcomes to
complex problems and, with SOA being a significant undertaking for any organization, it is perhaps
also fair to assume that there will be CSFs associated solely with implementing a SOA. The next
section describes how we examined existing theory relating to SOA-based IS implementations to
derive a set of SOA-specific CSFs.
3. The discovery of SOA-specific CSFs
In order to establish as definitive a list as possible of CSFs specific to SOA, we followed the advice of
Webster and Watson (2002) and created a concept matrix to identify recurring concepts relating to
SOA implementations. We conducted a search using standard databases such as SCOPUS and Web
of Science. In order to decide which of the concepts are critical, a mechanism for organizing them
into priority order was required. To achieve this, we subjected the findings from the literature search to
a process of content analysis and frequency analysis, and from this applied weightings to the results.
Michael and Lewis (1994) describe content analysis as seeking to draw valid reasoning from text
based on data and context. Seaman (1994) commends frequency analysis as a method of drawing
quantitative data from qualitative data, which affords empirical evidence a degree of statistical
analysis.

This analysis took the form of awarding two scores out of five for each of the candidate CSFs, based
on the number of occurrences they had and the degree of importance they were given in a number of
articles pertaining to implementing a SOA. In the case of frequency, the score was based on
either the frequency with which a particular CSF was discussed, the percentage of text devoted to
that CSF, or a combination of both. For example, Bieberstein et al (2008) devote an entire chapter
to governance and continually refer to governance in the other chapters; this merits a score of five
out of five. On the other hand, Roshen (2009) makes no reference whatsoever to governance
and thus merits a score of zero. For the content analysis, the mark out of five depends on how much
emphasis was given to the task in question being critical to the successful implementation of SOA.
For example, Lee et al (2010) actually state governance as a SOA-specific CSF; this therefore
merits a content score of five out of five in order to reflect the very strong opinion that governance is
indeed a CSF. In the absence of a statement of criticality, the overall score was based on the degree
of implied importance the authors of the article appeared to give the task. Content analysis was
therefore heavily weighted by our own understanding of what was being articulated in the literature.
The findings of the sub-review are shown in Table 1; for ease of reference, a summary of the
findings is shown in Table 2.
Table 1: Critical success factors for implementing a SOA
(F = frequency score, C = content score, each out of five per source. Sources, in column order: Bonnet et al (2009); Brown (2008); Erl (2007); Rosen (2008); Bieberstein et al (2008); Roshen (2009); Erickson and Siau (2008); Mansukhani (2005); Kavis (2007); Lawler et al (2007); Figlin (2008); Varadan et al (2008); Lee et al (2010); Antikainen and Pekkola (2009); Papazoglou and van den Heuvel (2007); Gulledge and Deller (2009); Lim and Wen (2003); Hochstein (2005).)

Implementation Methodology: F 5 2 4 5 5 4 1 4 0 1 5 2 1 0 1 3 2 1 (sub-total 46); C 5 1 5 4 5 4 3 5 0 1 5 2 1 0 2 4 2 3 (sub-total 51); Total 97; Rank 4
Business Process Modelling: F 5 5 5 3 3 4 3 4 3 4 4 2 3 1 3 5 4 4 (sub-total 65); C 5 4 4 3 4 4 3 4 4 4 4 3 3 2 1 5 2 5 (sub-total 64); Total 129; Rank 3
Organizational Change: F 2 3 2 4 3 0 4 3 5 5 5 4 5 5 4 3 3 5 (sub-total 65); C 4 4 1 3 5 0 5 4 5 5 5 3 5 5 4 3 5 5 (sub-total 71); Total 136; Rank 2
Governance: F 3 5 5 5 5 0 5 5 5 5 5 4 5 5 4 3 5 5 (sub-total 78); C 3 5 3 5 5 0 5 5 5 5 5 5 5 5 5 5 5 5 (sub-total 81); Total 159; Rank 1
Re-use / Leverage: F 2 3 3 3 3 1 2 2 1 0 1 3 1 2 3 4 3 2 (sub-total 39); C 2 2 2 4 3 2 2 3 1 0 1 4 1 3 3 2 1 2 (sub-total 38); Total 77; Rank 5
Table 2: Top 5 SOA-specific CSFs
CSF | Points | Rank
Governance | 159 | 1
Organizational Change | 136 | 2
Business Process Modelling/Management | 129 | 3
Implementation Methodology | 97 | 4
Re-use/Leverage | 77 | 5
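
As an illustration of the weighting scheme described in section 3, the sketch below sums the frequency (F) and content (C) sub-totals from Table 1 and ranks the candidate CSFs by total points; it reproduces the ranking in Table 2 and is intended only to make the scoring arithmetic explicit, not to represent the authors' actual tooling.

    # Frequency (F) and content (C) sub-totals per CSF, summed across the
    # 18 sources reviewed in Table 1.
    scores = {
        "Governance": (78, 81),
        "Organizational Change": (65, 71),
        "Business Process Modelling": (65, 64),
        "Implementation Methodology": (46, 51),
        "Re-use / Leverage": (39, 38),
    }

    # Total points = frequency sub-total + content sub-total; rank by total.
    totals = {csf: f + c for csf, (f, c) in scores.items()}
    ranking = sorted(totals.items(), key=lambda item: item[1], reverse=True)

    for rank, (csf, points) in enumerate(ranking, start=1):
        print(rank, csf, points)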
4. Company X survey
In order to evaluate the SOA-specific CSFs listed in Table 2, we conducted an empirical study with an
organization that is in the process of implementing a large-scale, multi-national SOA-based IS
project. For the purposes of this paper the organization is referred to as company X. Company X is
involved in military-related logistics, and it is implementing SOA to rationalise and manage its
heterogeneous legacy systems environment. Its chosen approach is first to establish its
information needs, and then match those needs to its current legacy systems. It will then decide
which current systems to keep and which to discard. The systems that remain will be wrapped and the
relevant functionality will then be exposed as services (a schematic sketch of this wrapping step
follows below). New services can then be developed over time to replace those provided by the legacy systems.
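
A schematic illustration of the wrapping step described above: an existing legacy routine with a system-specific interface is hidden behind a clean service contract, so consumers depend only on the exposed service. All names and data formats are hypothetical; the paper does not describe company X's actual wrapping technology.

    # Hypothetical legacy routine with an awkward, system-specific interface.
    def legacy_stock_lookup(raw_record: str) -> str:
        part, depot = raw_record.split("|")
        return f"{part}@{depot}:42"  # fixed stub quantity for illustration

    class StockService:
        """Service wrapper: exposes legacy functionality behind a clean contract."""
        def get_stock_level(self, part: str, depot: str) -> int:
            # Translate the service request into the legacy format and back.
            raw = legacy_stock_lookup(f"{part}|{depot}")
            return int(raw.rsplit(":", 1)[1])

    # A consumer uses only the service interface; the legacy system can later
    # be retired and replaced without changing this call.
    print(StockService().get_stock_level("bolt-m8", "depot-01"))  # -> 42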

We conducted a survey of the team in company X who are responsible for managing the
implementation of the SOA approach. A total of 96 questionnaires were sent to staff in company X.
This yielded a response of 55 completed surveys, or a 57% response rate. The results of this survey
are presented and discussed below.
5. Results and discussion
Unfortunately there is not enough space in this report to present the results of the entire survey. For
the purposes of this paper we will focus our discussion on the results of two questions that sought to
test the validity of our CSFs and to rank the CSFs in order of priority. Table 3 shows the results from
question 3 which sought to determine whether the SOA-specific CSFs identified from the literature
were relevant CSFs. Over 75% of respondents either agreed or strongly agreed that the suggested
tasks were indeed CSFs for SOA implementation, for all variables except Re-use & Leverage which
scored 62.5%.
Table 3: Results of question 3

How much do you agree that the following activities are Critical Success Factors for the
implementation of a SOA in a large enterprise such as Log NEC? (Coded scores, with the number of
responses in brackets.)

SOA Critical Success Factor | Strongly Agree | Agree | Neutral | Disagree | Strongly Disagree | Total
Implementation Methodology | 90 (18) | 100 (25) | 45 (15) | 0 | 0 | 235
Business Process Modelling | 120 (24) | 80 (20) | 27 (9) | 0 | 0 | 227
Governance | 155 (31) | 68 (17) | 27 (9) | 0 | 0 | 250
Organizational Change | 95 (19) | 96 (24) | 30 (10) | 0 | 0 | 221
Re-use / Leverage | 55 (11) | 76 (19) | 72 (24) | 0 | 0 | 203
Table 4 shows the results from question 4, which sought to rank the CSFs in order of importance; the
coded scores are shown alongside the actual number of votes for each CSF in brackets. The 55
respondents each had five votes, which gave a maximum of 275. In some cases, respondents gave a
particular priority to more than one CSF; therefore, each column has a different total.

The results of the survey compared with the results of our analysis of the literature are shown in
Table 5.

Table 4: Results of question 4

Please rank these proposed SOA Critical Success Factors in order of importance; 1 = most important,
5 = least important. You may award some, or all, equal priority. (Coded scores, with the number of
votes in brackets.)

SOA Critical Success Factor | 1 | 2 | 3 | 4 | 5 | Don't Know | Total
Implementation Methodology | 50 (10) | 24 (6) | 39 (13) | 14 (7) | 13 (13) | 6 | 140
Business Process Modelling | 45 (9) | 96 (24) | 30 (10) | 10 (5) | 0 (0) | 7 | 181
Governance | 120 (24) | 44 (11) | 27 (9) | 8 (4) | 1 (1) | 6 | 200
Organizational Change | 40 (8) | 8 (2) | 45 (15) | 34 (17) | 4 (4) | 9 | 131
Re-use / Leverage | 40 (8) | 4 (1) | 24 (8) | 22 (11) | 18 (18) | 9 | 108
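
To make the vote coding used in Tables 3 and 4 explicit, the sketch below converts raw vote counts into coded scores using the 5-to-1 weights implied by the printed cells (priority 1 earns five points per vote, priority 5 one point; "Don't Know" responses earn none). The Governance counts are taken from Table 4; the snippet is illustrative only.

    # Votes for the Governance CSF by priority (1 = most important ... 5 = least),
    # as reported in Table 4.
    votes = {1: 24, 2: 11, 3: 9, 4: 4, 5: 1}

    # Priority 1 is worth 5 points per vote, priority 2 is worth 4, ..., priority 5 is worth 1.
    coded = {priority: count * (6 - priority) for priority, count in votes.items()}

    print(coded)                # {1: 120, 2: 44, 3: 27, 4: 8, 5: 1}
    print(sum(coded.values()))  # -> 200, the Table 4 total for Governance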

Table 5: Company X compared with the results of the literature review
CSF | Company X Rank | Literature Rank
Governance | 1 | 1
Business Process Modelling | 2 | 3
Organizational Change | 3 | 2
Implementation Methodology | 4 | 4
Re-use/Leverage | 5 | 5
The results of the survey provide some credible empirical evidence to support the validity of the CSFs
we identified from the literature. The differences in the rankings of these CSFs are minor and
can be explained by the unique position of company X. We would expect all SOA implementations to
differ in terms of the maturity of the project, the scale of the project, and the experience of the project
implementation team. Therefore, we would expect different teams to place different importance on the
CSFs; furthermore, we would expect rankings of importance to change as the project itself matures.
We also acknowledge that our coding methodology for the literature review was inherently subjective,
and this can also explain the differences in the rankings.
6. Conclusions and recommendations
We set out to determine whether it was possible to identify SOA-related CSFs from the literature, and
to externally validate these CSFs using an empirical study of an implementation of a SOA based
system. In conclusion we can say that we have fulfilled these two objectives. Our systematic study of
the literature identified five CSFs that were specific to SOA implementation projects, as opposed to
general project management CSFs which are applicable to all projects. Our survey of implementation
professionals in Company X has provided some external validation of these CSFs. Our research adds
to the rich body of work on project management CSFs and we hope that our SOA-specific CSFs will
provide useful guidance for project managers responsible for future SOA implementation projects. We
recommend that additional work be undertaken to replicate and further validate our findings. The
systematic literature survey should be repeated by an independent team of researchers, and we would
welcome a repeat of our survey on other implementation projects.
References
Anthony, R.N., Dearden, J. and Vancil, R.F., (1972). Key Economic Variables, Management Control Systems,
Homewood, Irwin, IL, pp. 138-43.
Antikainen, J. and Pekkola, S. (2009). Factors Influencing the Alignment of SOA Development with Business
Objectives, 17th European Conference on Information Systems.
Bieberstein, N., Laird, R. G., Jones, K. And Mitra, T., (2008). Executing SOA: A Practical Guide for the Service-
Oriented Architect, IBM Press.
Bonnet P., Detavernier J.M. and Vauquier D., (2009). Sustainable IT Architecture, Wiley, London.
Brown, P., (2008). Implementing SOA: Total Architecture in Practice, Addison-Wesley. Upper-Saddle River, NJ.
Bucur, D. and Bardram, J. E. (2007), "Resource discovery in activity-based sensor networks", Mob.Netw.Appl.,
vol. 12, no. 2-3, pp. 129-142.
Chen, T. Y., (1999). Critical Success Factors for Various Strategies in the Banking Industry, International Journal
of Bank Marketing, Volume 17, Issue No 2, pp.83-92.
Daniel, D.R. (1961), "Management information crisis", Harvard Business Review, September/October, pp.111-21.
Erl, T., (2007). SOA: Principles of Service Design. Upper Saddle Creek, NJ, Prentice Hall.
Ericson, J and Siau, K., (2008). Critical Success Factors in SOA Implementation, Americas Conference on
Information Systems, Proceedings, Paper 107.
Figlin, O., (2008). Enterprise SOA Made Easy: Key Success Factors, Available at:
http://www.sap.com/community/webcast/2008_06_10_NetWeaver_CIS/2008_06_10_NetWeaver_18_Figlin.pdf
(Accessed 2 July 2011).
Finch, P. (2003). Applying the Slevin-Pinto project management profile to an information systems project. Project
Management Journal, 34(3), 32-9.
Fortune, J. and White, D., (2006). Framing of Project Critical Success Factors by a Systems Model, International
Journal of Project Management, Vol. 24, pp. 53-65.
Foster, I., Kesselman, C., Nick, J. and Tuecke, S. (2002). Grid Services for Distributed System Integration. Computer
35, 6 (June 2002), 37-46.
Gulledge, T. and Deller, G., (2009). Service-Oriented Concepts: Bridging Between Managers and Technologists.
Industrial Management & Data Systems, Volume 109, Issue No 1, pp. 5-15.
Hochstein, A., Tamm, G. and Brenner, W. (2005). Service-Oriented IT Management: Benefit, Cost and Success
Factors. Paper presented at the Thirteenth European Conference on Information Systems, Regensburg,
Germany.
Jenster, V., (1987). Using Critical Success Factors in Planning. Long Range Planning, Volume 20. Issue No 4,
pp.102-109.
Kavis, M., (2007). SOA Critical Success Factors. Available at: http://it.toolbox.com/blogs/madgreek/soa-critical-
success-factors-21189 (Accessed: 30 June 2011).
Kaye, D., (2003). Loosely Coupled: The Missing Pieces of Web Services. RDS Ptes, CA, 2003.
Lawler, J. P., Benedict, V., Howell-Barber, H. and Joseph, A., (2009). Critical Success Factors in the Planning of
a Service-Oriented Architecture (SOA) Strategy for Educators and Managers, Information Systems
Educational Journal, Volume 7, Issue No 94.
Lee, J. H., Shim, H. and Kim, K. K., (2010). Critical Success Factors in SOA Implementation: An Exploratory
Study, Information Systems Journal, Volume 27, Issue No 2, pp. 123-145.
Lim, B. and Wen, J. (2003). Web Services: An Analysis of the Technology, its Benefits, and Implementation
Difficulties. Journal of Information Systems Management, Spring, pp. 49-57.
Papazoglou, M. P. and Georgakopoulos, D. (2003), "Service-oriented computing", Communications of the ACM,
vol. 46, no. 10, pp. 25-28.
Mansukhani, M. (2005). Service Oriented Architecture - Whitepaper, Hewlett-Packard Dev. Comp., Tech. Rep.
Available at http://h20219.www2.hp.com/services/downloads/soa_wp_062005.pdf. (Accessed 23 June
2011).
Michael, S. and Lewis, B. 1994. Research practice: International handbooks of quantitative application in social
sciences, Sage, 1994
OASIS (2006). Reference Model for Service Oriented Architecture 1.0, OASIS Standard, 12 October 2006. C.
Matthew MacKenzie, Ken Laskey, Francis McCabe, Peter F. Brown, and Rebekah Metz (eds.),
http://docs.oasis-open.org/soa-rm/v1.0/soa-rm.pdf
Papazoglou, M. P. and Van Den Heuvel, W. (2006). Service oriented design and development methodology. Int.
J. Web Eng. Technol. 2, 4 (July 2006), 412-442.
Pinto, J.K., Prescott, J.E., (1988). Variations in Critical Success Factors over the Stages in the Project Life Cycle,
Journal of Management, Volume 14, Issue No 1, pp. 5-18.
Pinto, J.K., & Slevin, D.P., (1987). Critical Factors in Successful Project Implementation, IEEE, Transactions on
Engineering Management, Volume 34, pp.22-27.
Rockart, J.F., (1979). Chief Executives Define Their Own Data Needs, Harvard Business Review, Vol. 57, Issue
No. 2, pp.238-41.
Rosen, M., Lublinski, B., Smith, K. and Balcer, M., (2008). Applied SOA: Service Oriented Architecture and
Design Strategies, Wiley, IN, USA.
Roshen, W., (2009). SOA-BASED Enterprise Integration, McGraw-Hill, USA
Seaman, C.B. (1994), "Using the OPT Improvement Approach in the SQL/DS Development Environment". In
Proceedings of CASCON'94, (CD-ROM version), Toronto, Canada, October.
Tsai, W. T., (2005). Service-Oriented System Engineering: A New Paradigm. IEEE International Workshop on
Service-Oriented System Engineering (SOSE 05), pp.36, October 2005.
van Halteren, A.T. and Pawar, P. (2006) Mobile Service Platform: A Middleware for Nomadic Mobile Service
Provisioning. In: IEEE International Conference on Wireless and Mobile Computing, Networking and
Communications, 2006 (WiMob'2006)., 19-21 Jun 2006, Montreal, Canada. pp. 292-299. IEEE Computer
Society. ISBN 1-4244-0494-0
Varadan, R., Channabasavaiah, S., Simpson, K., Holley, A. and Allam, A., (2008). Increasing Business Flexibility
and SOA Adoption through Effective SOA Governance, IBM Systems Journal, Volume 47, Issue No. 3,
pp.473488.
Webster, J. and Watson, T. R., (2002). Analyzing the Past to Prepare for the Future: Writing a Literary Review,
MIS Quarterly, Volume 26, Issue No 2.
Yang, S. and Joy, M. S. (2010), "Service Advertisement and Discovery", in Griffiths, N. E. and Chao, K. (eds.)
Springer, pp. 21-46.
Treasure Hunting in the 21st Century: A Decade of Geocaching in Portugal
Teresa Santos, Ricardo Mendes, António Rodrigues and Sérgio Freire
e-GEO, Faculdade de Ciências Sociais e Humanas, FCSH, Universidade Nova de Lisboa, Lisboa, Portugal
teresasantos@fcsh.unl.pt
rnmendes@fcsh.unl.pt
amrodrigues@fcsh.unl.pt
sfreire@fcsh.unl.pt

Abstract: The present study looks at geocaching, a popular location-based mobile game, where the goal is to
use a Global Navigation Satellite System (GNSS), usually the Global Positioning System (GPS) to hide and seek
containers placed anywhere in the field. People who engage in this activity, the geocachers, constitute a
geographically distributed community that makes use of mobile and Web 2.0 technologies to coordinate and
document their activities. Consequently, this treasure-hunting game, besides being a ludic activity, associated
with a strong social networking element, also promotes new ways of exploring, interacting and communicating
experiences and perceptions of the geographical environment where the activity occurs. The majority of existing
literature analyzes geocaching from a social point of view, and little reference is made to the geographical context
of this activity. The aim of this study is to fill that gap and thus characterize the phenomenon in terms of its
temporal and spatial distribution. Observation instruments are proposed based on monitoring indexes built from
available data attributes. Such attributes reveal behaviors and patterns of geocachers (individuals) and
geocaches (objects). The methodology is based on spatial data analysis; this can play an important role in
exploring social phenomena that have a strong geographic component. Through the analysis of the freely
available dataset that is voluntarily maintained by people engaged in the geocaching activity, a new dimension is
explored: the spatial dimension. When, where and why this activity occurs was used as the framework for the
analysis in this paper. The final output is an overall picture of the geocaching activity in mainland Portugal in this
decade. In a later stage, environmental characteristics are used as possible explanations for observed patterns. It
is shown, using spatial model specifications, that a small number of regressors are able to highlight important
characteristics in the data. A final discussion underlines the potential of geocaching to encourage social
interaction, and promote cultural and natural heritage; in short, it has some paramount attributes of an
economically sound and sustainable sector.

Keywords: geocaches, geocaching, GPS, Web 2.0, spatial analyses
1. Introduction
The Web 2.0 has brought a huge change in human interactions in the first decade of the 21st century.
One of the recent social phenomena of Web 2.0 is geocaching. This 10-year-old activity brings
together a treasure-hunting game that: 1) looks like a true sport/open-air activity, 2) involves hi-tech
gadgets (from handheld GPS units to fancy smartphones or tablets), and 3) includes the Internet!
Altogether, geocaching is a true grownups' playground, where individuals, groups of friends or entire
families can interact. The goal of geocaching is to find a hidden container, named a geocache, in a
public place and then sign the logbook to record the visit. The cache's information is published on the
official geocaching website (www.geocaching.com), and it includes a GPS position, along with some
clues and, most often, information regarding the place where the cache is hidden. Anyone can register
for free, and the only requirements are a GNSS unit or a GNSS-enabled device. Usually, but not
always, the log is later registered on the official website and experiences are shared with other
geocachers.

Created in 2000, geocaching reached Portugal in February 2001. Ten years and 18,500 geocaches
later, almost 2,000,000 logs had been recorded, bringing together a community of over 17,000 registered
individuals in Portugal alone.

Geocaching can be seen as a touristic activity. Many caches are placed in heritage sites, and have a
detailed description and photos of the monument/site on their webpage. Furthermore, numerous logs
are posted by foreigners. The most visited cache in Portugal has been found 1577 times and is
located in Centro Cultural de Belém, followed by a cache located in Parque das Nações (both in
Lisbon), with 1492 findings. The two places are well-known touristic spots. In this way, geocaching is
a volunteer activity that can also be used as an attraction factor for travelers to visit certain
destinations, combining holiday time with the enjoyment of local history, culture and landscape.
Several touristic projects that include geocache trails have already been developed in the United States of
America and Canada, among other countries.

There are several types of caches (Traditional, Multi, Mystery, Event, Letterbox, Virtual, Webcam,
Wherigo(TM) and Earthcache), of different sizes (from micro- or nano-caches to large, virtual or
unknown). Normally each cache contains a logbook or logsheet, and geocachers are invited to
exchange personal objects or symbolic gifts that travel from cache to cache. Some of these gifts,
called travel bugs, can even be traceable. The event and mega-event types are a particular social
phenomenon. These caches require a large group of people (up to 500 geocachers or more) to be
present on a specific day for the event; afterwards, the cache is archived. Mystery or virtual caches
may involve puzzles or brainteasers. Other caches, when hidden relatively close together or along one
particular trek or trail, are named a power trail. Wherigo(TM) caches use a toolset for creating
and playing GPS-enabled adventures; these toolsets are already included in many GPS units.
Earthcaches are dedicated to earth sciences or special geological features. Most geocaches are
totally free for the owner to hide in any public place; others, like Earthcaches, should follow specific
rules in order to be officially published.

Although geocaching promotes exploration, contact with nature and outdoor activity, it can
also result in high pressure on more fragile environments. In fact, geocaching is a volunteer activity
that is not yet subject to any type of control, and its implications and effects on, for example, protected
areas recommend a deeper discussion about such outdoor activities. In Portugal, several such
situations were detected during this analysis: 8 caches (with over 500 logs, about one per week) have
been hidden in the total protection area of the Arrábida Natural Park, 50 km south of Lisbon. Other
caches hidden in the vicinity of nesting colonies of endangered species were also identified and
reported to local authorities. Newsome et al (2011) present a study on the environmental impacts of
adventure racing in protected areas in Australia. These impacts include soil erosion, loss of vegetation
and alteration of the feeding patterns of local fauna, among others. Also, human waste can play a negative
role in these habitats. Monz et al (2010) look at outdoor recreation as an agent of ecological change with
the potential to affect soil, vegetation, wildlife, and water quality. The authors conclude that current
research on environmental recreation needs to be extended to include, besides local impact
indicators, aspects of spatial and temporal analysis that would improve it and bring the study to the
landscape level. As a response to these criticisms, Geocaching.com attempts to introduce some ethics
and rules into the geocaching activity by profiling the participants as virtual defenders of the
environment, and providing guidelines on how to navigate in nature without harming the environment
(Gram-Hansen, 2009).

In the literature, geocaching has been studied as a social phenomenon (Gram-Hansen, 2009), which
can be seen as an informal learning tool (Clough, 2010) or as a deviant behavior (Hawley, 2010),
along with its practice and motivations (O'Hara, 2008). However, besides being a social activity,
geocaching is also a location-based experience. The study of the territory and its influence on this
outdoor activity therefore requires the application of spatial analysis methodologies.

In this study, the temporal and spatial dissemination of geocaches in Portugal is analyzed. Key
questions emerged: "Are caches distributed the same way as natural, anthropic and social
attributes?"; "As an outdoor activity, are there relatively more caches hidden in non-urban areas?";
"Which are the regions with more/fewer geocaches?"; "What is the visiting frequency and intensity per
region?"; "Which land uses relate to the locations where the most popular caches are placed?"; "Are the
caches' difficulty levels related to the landscape?". In sum, all these questions help to examine the
spatial characteristics that make geocaches attractive.

Spatial data analysis can play an important role in studying social phenomena that have a strong
geographic component, like geocaching. Through the analysis of the freely available dataset, web
forums and web pages that are voluntarily maintained by geocachers, a new dimension is explored:
the spatial dimension, allowing new insights regarding the relationships between people and the
surrounding urban and rural environment.
2. Study area and data sets
The area selected to study the geocaching activity is mainland Portugal (Figure 1). The country is
divided into several administrative levels. According to the Nomenclature of Territorial Units for
Statistics (NUTS), mainland Portugal comprises 28 NUTS III regions. The Portuguese landscape and
climate invite outdoor activities such as geocaching. Portugal has a very diverse landscape,
varying from mountains in the center to flat plains in the south, and from deep green valleys with
vineyards in the north to beaches on the coast. Population density is higher near the coastline,
especially in the metropolitan areas of Oporto and Lisbon. The landscape becomes more rural as we
move to the interior. Forest covers about 38% of the territory, with agricultural land and
farming being the second most significant land use. Portugal has a Mediterranean climate and is one
of the European countries with the highest levels of annual solar radiation, a factor which favors
outdoor activities.

Data for this study was collected from www.geopt.org (one of the Portuguese geocaching forums) in
February 2012. The geocaches subset includes 18,026 caches, hidden between February 2nd, 2001
and February 8th, 2012. In order to make the analyses more feasible, only caches located on the
mainland were considered. For the temporal analysis, caches with no visiting activity and misplaced
caches (with coordinates outside of the study area) were removed from the data set, leaving a total
number of 17,291 caches. For the geographical analysis, archived caches (caches disabled by
the owner) were also deleted from the database. The final database for geographic analysis consists
of 13,553 operational caches; this set allows evaluating geocaching for a specific time-stamp. The
attributes included in the data set are cache name, cache owner, location, difficulty, terrain, cache
type, status, size, hidden date, average size of logs, total number of logs, number of findings, no-
findings, notes, photos, traceable items and votes. A sketch of this initial filtering is given below.
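
A minimal sketch of the filtering just described, assuming the geopt.org extract is available as a CSV; the column names (total_logs, latitude, longitude, status) and the mainland bounding box are assumptions made for illustration, as the paper does not describe the authors' actual processing code.

    import pandas as pd

    # Hypothetical export of the geopt.org cache database (column names assumed).
    caches = pd.read_csv("geocaches_pt_2012.csv")

    # Temporal analysis set: drop caches never visited or placed outside
    # mainland Portugal (rough bounding box used for illustration).
    mainland = caches[
        (caches["total_logs"] > 0)
        & caches["latitude"].between(36.9, 42.2)
        & caches["longitude"].between(-9.6, -6.1)
    ]  # the paper reports 17,291 caches at this stage

    # Geographic analysis set: additionally drop caches archived by the owner.
    operational = mainland[mainland["status"] != "archived"]
    # the paper reports 13,553 operational caches
    print(len(mainland), len(operational))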

In order to analyze the relation between landscape and geocaching activity, two maps were used. The
Land Use / Land Cover Map of Continental Portugal (Cartografia de Ocupação do Solo de Portugal
Continental, IGP 2011) is available for the whole country for 2007, with a minimum mapping unit of 1
ha and 2 levels of thematic detail. The Natura 2000 network is composed of areas of community
importance for the conservation of habitats and species, in which human activities must be compatible
with the preservation of those natural values (EC, 2012). For the study of population distribution, the
census tracts for 2011 (preliminary results) were used (INE, 2011).

Figure 1: Study area and data set used for studying the geocaching phenomenon at NUTS III level
3. Methodology
Geocaching is a social networking location-based game. In the present study, a Geographic
Information System (GIS) was used to model and analyze the spatial distribution of this activity in
Portugal. In order to take into account the neighborhood effect of geographical data, Exploratory
Spatial Data Analysis (ESDA) and spatial statistics are used (Lloyd, 2011; Olaya, 2012). The former is
particularly important in the present context, since cross-spatial variation is studied in order to
evaluate the relation between the location of the caches and the social and natural context of the areas
where they are placed. Hence, geographical datasets are aggregated for particular regions, which
raises common analytical questions related to lattice data (Rogerson, 2001).

Geographical phenomena are generally point-based. This means that associated with most actions
there is a pair of geographical coordinates which pinpoints their location relative to a particular
model of the Earth. When such data is aggregated and summarized for specific areas, there is a loss
of information. Spatial stationarity must be assumed, since intra-regional variations are not known
(Lloyd, 2011). The resulting misinterpretations are generally related to the ecological fallacy
(Freedman, 2001). The other common issue which should be taken into account when dealing with
lattice datasets is that aggregates are highly dependent on the shape of each spatial unit; this is
what is known as the Modifiable Areal Unit Problem (MAUP). In most cases this may not be
controlled, but it should always be acknowledged (Rogerson, 2001).

Assuming spatial phenomena occur on a continuous, although irregular, surface, it is too strong an
assumption to analyze observations as independent (Anselin, 1988; Fingleton, 1999). Exploratory
Spatial Data Analysis (ESDA) has the merit of making space endogenous. Neighborhood relations
are quantified and serve as weights in the construction of spatially lagged variable
transformations. As an illustration, for a given variable $X$, its $i$th spatially lagged
observation $\tilde{x}_i$ is given by the weighted average of $X$ observed over $i$'s set of $k$
neighbors. Formally:

$$\tilde{x}_i = \sum_{j=1}^{k} w_{ij}\, x_j ,$$

where $w_{ij}$ represents the neighborhood relation between spatial units $i$ and $j$.
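As an illustration of this transformation (not part of the original analysis), a minimal
Python/NumPy sketch of a spatially lagged variable, using an invented binary contiguity matrix for
four hypothetical spatial units:

```python
import numpy as np

# Invented binary contiguity matrix for four hypothetical spatial units
# (1 marks a shared border); the diagonal is zero by convention.
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Row-standardize so each row sums to one: the lag becomes a weighted average.
W = W / W.sum(axis=1, keepdims=True)

x = np.array([10.0, 4.0, 6.0, 2.0])  # hypothetical cache densities

x_lag = W @ x  # spatially lagged variable: each entry averages its neighbours
print(x_lag)   # [5. 6. 6. 5.]
```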

Spatial lag transformations have various uses, which range from making spatial effects endogenous
in a regression, allowing estimated coefficients to be BLUE (Best Linear Unbiased Estimators)
(Anselin, 1988), to the smoothing of series, facilitating the identification of geographical
clusters. Their importance increases with the level of global or partial spatial autocorrelation
(Getis, 2007). Spatial autocorrelation may be generally understood as the existing co-variation
between observations along any given spatial surface; in general terms, it is the result of spatial
dependence and heterogeneity. In the present article, spatial autocorrelation is taken into account
in two stages: first, spatial lags of variables considered of significant importance are used to
visually analyze patterns at the municipality level. Second, when testing the functional
relationships which may explain the distribution of the geocaching phenomenon, spatial dependence
is made endogenous. In this latter case, spatial autoregressive and spatial error forms (SAR and
SEM) are estimated using maximum likelihood. In the SAR specification, the spatial lag of the
dependent variable is used as a regressor; in the SEM specification, spatial dependence is assumed
to be captured in the error term. Formally:

SAR: $y = \rho W y + X\beta + \varepsilon$,

SEM: $y = X\beta + u$, with $u = \lambda W u + \varepsilon$,

where $y$ is the dependent variable, $X$ a matrix of regressors, $\beta$ its coefficient vector,
$W$ a spatial weights matrix, $\rho$ and $\lambda$ the spatial autocorrelation coefficients, and
$\varepsilon$ white noise. A set of appropriate tests will be conducted in order to assess
robustness ($R^2$, log-likelihood, Schwarz and Akaike Information Criteria) and to test for the
existence of heteroscedasticity (Breusch-Pagan test). Finally, a Likelihood-Ratio statistic will be
used to infer spatial dependence in the series.
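A hedged sketch of how such SAR and SEM specifications can be estimated by maximum likelihood,
here using the open-source PySAL/spreg library on synthetic stand-in data (a regular lattice
replaces the municipality map; all values are invented):

```python
import numpy as np
import libpysal
from spreg import ML_Lag, ML_Error

# Synthetic stand-in data: a 5x5 lattice of 25 "municipalities" with one
# regressor; in the study, y would be a cache-density metric and X the
# peripherality and land-use variables.
w = libpysal.weights.lat2W(5, 5)  # rook contiguity on a regular lattice
w.transform = "r"                 # row standardization

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 1))
y = (2.0 + 1.5 * X[:, 0] + rng.normal(scale=0.5, size=25)).reshape(-1, 1)

sar = ML_Lag(y, X, w)    # SAR: spatial lag of y enters as a regressor
sem = ML_Error(y, X, w)  # SEM: spatial dependence captured in the error term

print(sar.rho, sar.logll)  # spatial autocorrelation coefficient, log-likelihood
print(sem.lam, sem.logll)
```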

In order to explain the distribution of caches over Mainland Portugal, the gross number will not be
used, since this is obviously related to two mass measures associated with municipalities: physical
area and resident population. Hence, two density metrics are used: the number of caches per
resident (C[pop]) and per unit of population density (C[dpop]). As regressors, two sets of
variables are used: one related to geographical position in relation to centripetal nodes, and
another related to land-use.

In terms of geographical position, two measures of peripherality are used; one quantifies position and
mass in relation to the coast (p[coast]) and the other in relation to the two main Portuguese urban
nodes, Lisbon and Oporto (p[lp]). The metrics used are taken from Rodrigues (2001). The reasoning
for the use of these two measures is the role of the Atlantic coast as a decisive pooling factor,
which helps to explain the existing coastal/interior divide (Ferrão, 2002); also, the Lisbon and
Oporto metropolitan areas concentrate most of the country's economic activity, which makes these
two nodes the most important pooling centers of the country.

The second set of regressors tries to explain the distribution of caches as being partially dictated by
land-use. The main hypothesis is that users may show preference towards urban areas (which would
be partially explained by their easier accessibility) or, alternatively, towards less densely
populated parts of the study area. With this in mind, the Land Use / Land Cover Map of Continental
Portugal was used. For each municipality, the proportion of urban area with respect to total area
was used as a regressor (COS[urb]). The same method was used to calculate the proportion of green
area (COS[green]). The exact land-use classes were chosen based on the preliminary exploratory
analysis presented below. Finally, the Natura 2000 network was used as a final measure of landscape
with natural interest. Again, this was weighted by total area for each municipality (NAT).
4. Results
4.1 When: Temporal analysis of geocaching activity in Portugal
Although geocaching in Portugal started in 2001, it took five years for the activity to take off.
After 2006 it truly exploded, reaching almost 18,000 caches spread all over the country (Figure 2).
The mean growth rate from 2001 to 2011 (2012 was not used in this analysis) is 213% per year. This
rate includes caches of all statuses: active, needs maintenance (caches that are momentarily not
available) and archived.

Figure 2: Yearly evolution of geocaches hidden in Portugal
4.2 Where: Spatial distribution of geocaches in Portugal
The spatial characterization of caches in the study area was performed by overlaying the cache
locations with administrative and thematic maps and summarizing their totals.

The spatial distribution of caches in Continental Portugal by NUTS III region shows a strong
presence along the coast. The Greater Lisbon area has the highest percentage of the total caches
(14%), followed by the Dão-Lafões area with 6.7% (Figure 3). The area with the fewest caches is
Serra da Estrela, the highest mountain range in continental Portugal, with nearly 1%.

The spatial distribution of geocaches shows some clear patterns. Figure 4 shows the distribution of
the number of caches weighted (1) by resident population, (2) by total area and (3) by population
density. The original point dataset was aggregated at the municipality level in order to analyze
the cross-variation, taking into account the three mass indicators just mentioned. The resulting
densities were spatially weighted using a row-standardized binary contiguity spatial weights
matrix. The result is a set of spatially lagged variables.

When caches are weighted by resident population, what is observed is a strong corridor running from
the coastal area just north of Lisbon, following the north-east direction (northern mountain system
Montejunto-Estrela). When weighted by area, concentration along the coast is clear, justified by the
increasing surface area of municipalities as we move inland.

Figure 3: Distribution of Geocaches (NUTS III and municipalities)

Figure 4: Density of geocaches
Regarding land cover, caches are preferentially placed in natural or semi-natural areas covered
with scrub and/or herbaceous vegetation (22.86%), followed by urban areas (19.50%) and forests
(18.85%) (Table 1). This confirms that geocaching is mostly an outdoor activity conducted outside
urban centers: although most people live in artificial areas, most of the caches are placed in
natural environments. Based on this analysis, two indicators were constructed and used as
regressors: class 1.1 (Urban fabric) on one hand, and classes 3.1 (Forests) and 3.2 (Scrub and/or
herbaceous vegetation associations) on the other, with their areas per municipality calculated to
represent the dichotomy between urban and natural landscapes (COS[urb] and COS[green],
respectively).
Table 1: Distribution of geocaches by land use / land cover class

Land Use / Land Cover Map of 2007 class                 Geocaches (number)   Geocaches (%)
1.1 Urban fabric                                        2643                 19.50
1.2 Industrial, commercial and transport units          710                  5.24
1.3 Mine, dump and construction sites                   132                  0.97
1.4 Green urban areas, sports and leisure facilities    228                  1.68
2.1 Arable land                                         1210                 8.93
2.2 Permanent crops                                     606                  4.47
2.3 Pastures                                            142                  1.05
2.4 Heterogeneous agricultural areas                    1157                 8.54
3.1 Forests                                             2555                 18.85
3.2 Scrub and/or herbaceous vegetation associations     3098                 22.86
3.3 Open spaces with little or no vegetation            407                  3.00
4.1 Inland wetlands                                     17                   0.13
4.2 Maritime wetlands                                   56                   0.41
5.1 Inland waters                                       291                  2.15
5.2 Marine waters                                       301                  2.22
4.3 Why: Environmental conditioning
As described above, besides exploring the distribution of the phenomenon, the goal of the study was
to explain the observed patterns. Tables 2 and 3 show the econometric results from the spatial lag
(SAR) and the spatial error (SEM) specifications. The variables used were first standardized,
subtracting the mean and dividing by two times the standard deviation, as proposed by Gelman
(2007). When the number of caches weighted by resident population is used as the dependent variable
(Model 1), peripherality in relation to Lisbon and Oporto (p[lp]) is significant with a negative
sign in the estimated coefficient. COS[urb] is also significant, with the opposite sign.
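A minimal NumPy sketch of the standardization step just described (illustrative values only):

```python
import numpy as np

def standardize_2sd(x):
    """Gelman-style scaling: centre on the mean and divide by two standard
    deviations, making coefficients of continuous inputs roughly comparable
    with those of binary predictors."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (2.0 * x.std())

print(standardize_2sd([1.0, 2.0, 3.0, 4.0]))  # centred on 0, spread halved
```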
Table 2: Estimation results (SAR specification)

This shows some tendency of caches to be distributed near urbanized areas. On the other hand,
natural heritage is also positively rated (positive significant coefficient associated with NAT).

When the total number of caches is weighted by population density (Model 2), the results are
similar, yet with some notable differences. First, robustness increases; this is shown by a drastic
increase in the R-squared statistic and smaller absolute values of the log-likelihood function, the
Akaike Information Criterion and the Schwarz Criterion. The Likelihood Ratio test also corroborates
these results. Second, results of the Breusch-Pagan test indicate that heteroscedasticity is no
longer an issue. In relation to the estimated coefficients, COS[green] and NAT are both significant
with a positive sign, peripherality is no longer relevant, whilst COS[urb] is only significant at
the 90% level. To remove any multicollinearity, one peripherality measure and COS[green] were
dropped and the model was estimated one last time.

The results are clear and quite striking: caches are more strongly located near the coast. Natural
heritage (NAT) is particularly important, with by far the highest coefficient. Yet, although caches
tend to be located within municipalities with attractive landscapes, they generally tend to be in
or near urban areas. Finally, Model 3 excludes the measure of peripherality with the least
significance (p[lp]), as the two peripherality measures are highly correlated. For the same reason,
COS[green] is also not included. The estimation results confirm the previous inferences. Although
the percentage of variance explained is smaller (an expected result given the smaller number of
regressors), the unexplained variance is captured by the autocorrelation coefficient. Results from
the SEM specification (Table 3) confirm the SAR estimation results.
Table 3: Estimation results (SEM specification)

5. Conclusions
This study analyzed the national scenario of the geocaching activity in Portugal in terms of its
temporal and spatial distribution. The results allowed inferring the evolution of geocaching in
mainland Portugal, as well as the places where the activity is most prevalent, and provided clues
to the cache characteristics that make some caches more popular than others.

Given the lack of literature, this research proposes a methodology and a set of indexes for
monitoring the geocaching phenomenon, allowing for comparison with other countries and other time
periods. Results for the Portuguese dataset show that geocachers prefer locations with significant
natural heritage sites although, on average, they tend not to travel far for their treasure-hunting
activities. Cross-variation with other datasets should provide important and highly innovative
insights into the traveling behavior of individuals during their adventures into the wild.

Future studies should include other factors that will help to fully understand the phenomenon.
Local and regional trends are surely affected by other aspects such as individual motivations,
expectations and perceptions; social networking; or physical aspects of the places where caches are
hidden (landscape, scenic views, cultural heritage, natural phenomena, and so on). Other variables,
such as the presence or absence of a structured supply of outdoor activities (hiking and
mountain-biking trails, climbing or speleology, bird watching and other soft nature activities),
may also explain what seems to be the huge success of geocaching in Portugal. Naturally, those
assumptions require testing and confirmation. Yet care should be taken when adding variables with
specific local trends, as these normally add non-stochastic (deterministic) trends to the
residuals.
References
Anselin, L. (1988) Spatial Econometrics: Methods and Models, Kluwer Academic Publishers.
Clough, C. (2010) "Geolearners: Location-Based Informal Learning with Mobile and Social
Technologies", IEEE Transactions on Learning Technologies, Vol. 3, No. 1, pp 33-44.
EC - European Commission (2012) Natura 2000 network, [online],
http://ec.europa.eu/environment/nature/natura2000/index_en.htm.
Ferrão, J. (2002) "Portugal, três geografias em recombinação: Espacialidades, mapas cognitivos e
identidades territoriais", Lusotopie, Vol. 2, pp 151-158.
Fingleton, B. (1999) "Spurious Spatial Regression: Some Monte Carlo Results with a Spatial Unit
Root and Spatial Cointegration", Journal of Regional Science, Vol. 39, No. 1, pp 1-19.
Freedman, D.A. (2001) "Ecological Inference and the Ecological Fallacy", in Neil J. Smelser and
Paul B. Baltes (eds), International Encyclopedia of the Social & Behavioral Sciences, Vol. 6,
pp 4027-4030.
Gelman, A. (2007) "Scaling regression inputs by dividing by two standard deviations", Statistics in
Medicine, Vol. 27, Wiley Online Library, pp 2865-2873.
Getis, A. (2007) "Reflections on spatial autocorrelation", Regional Science and Urban Economics,
Vol. 37, pp 491-496.
Gram-Hansen, L.B. (2009) "Geocaching in a persuasive perspective", Proceedings of the 4th
International Conference on Persuasive Technology, April 26-29, Claremont, California, USA.
Hawley, F.F. (2010) "Agon and Ecstasy: Transgression, Transformation, and Transcendence in
Competitive Geocaching", Deviant Behavior, Vol. 31, No. 3, pp 225-250.
IGP - Instituto Geográfico Português (2011) Cartografia de Ocupação do Solo de Portugal Continental
para 2007, [online], http://www.igeo.pt/produtos/CEGIG/Cos2007.htm.
INE - Instituto Nacional de Estatística, I.P. (2011) "Censos 2011 - Resultados Provisórios", INE,
Lisboa.
Lloyd, C.D. (2011) Local Models for Spatial Analysis, 2nd Edition, CRC Press, Boca Raton.
Monz, C.A., Cole, D.N., Leung, Y.F. and Marion, J.L. (2010) "Sustaining visitor use in protected
areas: Future opportunities in recreation ecology research based on the USA experience",
Environmental Management, Vol. 45, pp 551-562.
Newsome, D., Lacroix, C. and Pickering, C. (2011) "Adventure Racing Events in Australia: context,
assessment and implications for protected area management", Australian Geographer, Vol. 42, No. 4,
pp 403-418.
O'Hara, K. (2008) "Understanding Geocaching Practices and Motivations", CHI 2008 Proceedings - On
the Move, April 5-10, Florence, Italy.
Olaya, V. (2012) Sistemas de Información Geográfica, [online],
http://sextante.googlecode.com/files/Libro_SIG.pdf.
Rodrigues, A. (2001) "The State, Market and Accessibility Triangle: Insights Into the Manufacturing
Landscape of Northern Portugal", PhD Thesis, Faculty of Urban and Regional Studies, University of
Reading.
Rogerson, P. (2001) Statistical Methods for Geography, SAGE Publications.
Intelligent Decision Support Systems Development Based
on Modern Modeling Methods
Elena Serova
St. Petersburg State University of Economics and Finance, St. Petersburg,
Russia
serovah@gmail.com

Abstract: Agent based modeling (ABM) is a new modeling paradigm and one of the most advanced practical
developments in modeling. ABM promises to have far-reaching effects on the way that business practitioners and
academic researchers use information communication technologies to support decision making at different levels
of management. Modern design models and architectural structures are opening up new possibilities and new
application areas are coming to the foreground. Multi-agent systems as systems of distributed artificial
intelligence are now having a significant influence on information systems design, simulation and analysis. This
paper focuses on the various modeling methods and technologies that are employed in the development of
intelligent decision support systems. Its goal is to evaluate the role of agent-based modeling in
the design of management decision processes. The paper considers the main features of intellectual
agent modeling methodology and discusses the categorization of different modelling types. It does
so from a research base that draws on theoretical underpinnings as well as international and
domestic industry practices. The basic principles of agent-based modeling are first introduced, and
areas of application are then discussed from the perspective of real-world applications: flow
simulation, organizational simulation, market simulation and diffusion simulation. The
classification of modeling types is discussed, together with business application simulation
frameworks.

Keywords: modeling, management, information systems, decision support systems, intellectual agent, multi-
agent systems
1. Introduction
The use of modern modeling methods and technologies is now an essential component in developing the
management decision processes that enable companies to succeed in a rapidly changing environment.
It is noteworthy that simulation modeling is now considered an essential feature of decision making
in companies that actively employ modern information technologies.

Modern modeling tools should facilitate mutual understanding at different organizational levels
when making strategic management decisions, thus bridging the gaps between a strategic vision and
its implementation. This paper focuses on multi-agent systems (MAS), which, as a class, have
developed rapidly over the last decade. The advantage of a multi-agent approach relates to the
economic mechanisms of self-organization and evolution that become powerful efficiency drivers and
contribute to enterprise development and prosperity. Through MAS, new intellectual data analysis
can be created that is open, aimed at flexible and adaptive problem solving, and deeply integrated
in decision support systems. Modern business simulation modeling tools use special software,
programming languages and systems to develop models of business processes, relations between
people, and areas for optimization in the organizational structure as a whole.
2. Classification of modeling types
Modelling is widespread as a means to represent reality. Establishing a classification of all possible
types of modeling is difficult since the notion of a "model" is used broadly in science and technology,
art, and in everyday life. It is nonetheless possible to distinguish the following types of modeling:
Conceptual;
Physical;
Structured-functional;
Mathematical and
Simulation.
All these types of modeling can be employed to study complex systems simultaneously, or in certain
combinations. Traditionally, computer modelling or computer simulation falls within the domain of
simulation and is concerned with the analysis or synthesis of complex systems in order to support
problem and decision analysis activities. The focus of computer modeling can include the economic
activity of a company or bank, an industrial enterprise, a data-processing network, a technological
process, or any other real object or process.

Computer or simulation models of management information systems display all major factors and
correlations characterizing real situations, criteria and limitations. Models should be universal enough
to describe the phenomena in question, simple enough to permit research at reasonable cost, and
achieve the following objectives:
Reduce the number of functional roles and management levels, and specifically mid-level
workers;
Rationalize solutions to management problems by implementing mathematical methods of data
processing, using simulation and artificial intelligence systems;
Create a modern, dynamic organizational structure, improving enterprises flexibility and
manageability;
Reduce administrative costs;
Reduce time spent on planning activities and decision making;
Increase competitive advantage.
To clarify the role of computer simulation in modern management, the structure-function approach to
solving business problems should be noted. The essence of computer modeling in business is to
obtain quantitative and qualitative results from the existing model. Qualitative results allow discovery
of previously unknown features of a complex system including issues such as structure, development
trends, sustainability, integrity, and so on. Most quantitative results help forecast certain future values
of variables that characterize the system being modeled.

The essential difference between computer simulation and structure-function analysis is that the
former yields both qualitative and quantitative results (Serova, 2009).

Another well-known application of computer modelling is aimed at solving management problems
through mathematics and logic and, as a rule, employs Excel spreadsheets. Problems susceptible to
this approach include stock management as well as transport, industrial and marketing logistics
(Gorshkov et al., 2004). The same is possible with problems of linear and multiple regression
forecasting, resource utilization review, and so on.

The computer model used for managerial decision making must, as far as possible, encompass the main
factors and interrelations that characterize real situations, and their parameters. The model must
be sufficiently broad to include the specificities of the management objects, yet economical in its
operation. With this in mind, the following are recommended as appropriate areas for exploiting the
benefits of computer simulation:
Where there are incomplete or incorrect formulations of the managerial issues involved, and the
modelling process involves apprehending the nature of the object to be modelled. Simulation in this
case is used to study the phenomena;
Where analytical methods exist, but the mathematical procedures are so complex and labour-intensive
that simulation is the simplest path to decision making;
Where observation of the behaviour of managerial system components is required;
Where simulation is the only way to study a managerial system, because it is impossible to observe
the target phenomena in real conditions;
Where new situations are studied in complex systems that are relatively unknown to researchers. In
this case simulation is used for preliminary checks on a new strategy and decision rules before
undertaking experiments on real systems;
Where the model is used for prediction of bottlenecks in management systems and other obstacles
which may appear as a result of introducing new components.
It is obvious that the modeling methods listed above (conceptual, simulation, mathematical logic and
structure-function) are not mutually exclusive and that they can be applied to management systems
research simultaneously or in combination.

3. Structure-functional approach for decision support
The most intuitive and popular example of structure-function computer modeling in modern management
is business process modeling.

Modern management benefits from business-process improvement in that it obtains a comprehensive
view of the way a company conducts business; managers should thus know how to conduct process
simulation and analysis using the capabilities of modern software packages and platforms.

The market situation most modern companies operate in is quite unstable, obliging them to respond
to change quickly and accurately. Sooner or later, businesses must adapt and restructure, and
managers will rethink business processes in order to improve the enterprise's operations. Thus, a
manufacturer may wish to reconsider purchasing, ordering or delivery. Business process
reengineering is tied to alterations in the architecture of information systems. The key to success
with a reengineering project is close cooperation among all the groups interested in solving the
problem, primarily IT specialists and experts in the business area. This is achieved by building
structure-function computer models that reflect business processes and are understandable for all
participants. Such models should simultaneously help formalize the current state of affairs and
find room for improvement. There are several computer technologies aimed at automating such
structure models: the CASE (Computer Aided Software Engineering) tools, which involve various
utilities for analysis and modelling and represent just a small fraction of this larger class.
Organization and structure changes in a company involve serious risks, especially when they involve
the implementation of an Enterprise Resource Planning (ERP) system. The implications of such
changes should be carefully studied and analysed before beginning a project. ERPs such as SAP ERP
ECC, BAAN and ROSS iRenaissance, and related systems, use methods and tools that are time-tested,
minimize risks and resolve issues that arise from the reorganization of business processes,
including those linked to the implementation of modern IT systems. Today's approach to business
process design suggests continuous improvement and modification, analysis and prognosis, as well as
timely changes to the business model. The diagnosis should adequately reflect the current state of
affairs in order to lay a comprehensive foundation for business development strategy and business
automation. The following steps are recommended for business development or modification (Figure 1).

Figure 1: Main steps for business development (or modification)
There are several techniques used when modeling business processes, with the most popular being
Business Process Modeling, Work Flow Modeling and Data Flow Modeling (Ananiev, Serova, 2008).
First discussed in the 1970s by Douglas Ross, the Structured Analysis and Design Technique (SADT)
is a foundation for the IDEF0 business process modeling standard (FIPS 183, 1993). The AllFusion
Process Modeler 4.1 (aka BPwin 4.1) introduced by Computer Associates (CA) is a modelling tool
that is fully compliant with IDEF0 and allows analysing, documenting and planning changes in
complex business process scenarios (Maklakov, 2003).

Another actively used process description methodology is Work Flow Modeling, which applies the
IDEF3 standard for building process models as time sequences of jobs (functions, operations). The
ARIS environment provided by IDS Scheer AG, which creates methodological and work instructions with
eEPC (extended Event-driven Process Chain) models, is based on IDEF3.

DFD (Data Flow Diagramming) notations allow one to portray job sequences within a process and the
information flows circulating between different jobs and processes. The DFD methodology minimizes the
subjectivity of business process analysis and can be efficient when implementing a process approach
to organizational management.

The developing UML (Unified Modeling Language) methodology is also widely used. It embraces a
series of diagrams (e.g., the Activity Diagram) that can be used to describe business processes, even
though business modeling is not UML's primary objective.

Along with the techniques listed above, there are others offered by various software developers.
Corporations such as IBM and Oracle offer their own business process description and modeling
tools; for example, Oracle's Workflow technology, used to automate job flows, features tools for
process description and formalization. The most popular state-of-the-art business process
management standard is BPEL (Business Process Execution Language), which allows for the creation of
an integral platform for all applications. Public and private institutions throughout the world are
switching to BPEL. Certain pilot projects have been carried out in Russia which successfully solved
IT infrastructure optimization problems.
4. The simulation modeling business application framework
The major approaches (or methods) in simulation modeling are: System Dynamics (SD), Discrete
Event (DE) and Agent Based (AB). While SD and DE are traditional approaches, AB is relatively new.
Dynamic Systems (DS) modeling also exists but is, as a rule, used to model and design physical systems.

If one considers the levels of abstraction of these methods, Dynamic Systems or physical modeling
is situated at the low level. System Dynamics, dealing with aggregates, is located at the highest
level, and Discrete Event modeling is employed at an intermediate level of abstraction. Agent Based
modeling is used across all levels of abstraction. Agents may model objects of very diverse nature
and scale: at lower levels, for example, pedestrians, cars or robots can be modeled; customers at
the intermediate level; and competing companies at the highest level (Figure 2).
System Dynamics (SD)
Attributes: Aggregates, stock-and-flow diagrams, feedback loops
Abstraction level: High (minimum details, macro level)
Management level: Strategic
Areas of application: Population dynamics, ecosystems, etc.
Simulation modeling software: VenSim, PowerSim, iThink

Agent-Based Modeling (AB)
Attributes: Active objects, individual behavior rules, direct or indirect interaction, environmental model
Abstraction levels: High, middle, low
Management levels: Strategic, tactical, operational
Areas of application: Logistics, manufacturing, IT systems/telecommunication, business processes, services, asset management, project management, finance, marketplace and competition, HRM, etc.
Simulation modeling software: AnyLogic; academic tools: Swarm, RePast, NetLogo, ASCAPE

Discrete Event Modeling (DE)
Attributes: Entities (passive objects), flowcharts, resources
Abstraction levels: Middle (medium details, meso level), low
Management levels: Tactical, operational
Areas of application: Business processes, manufacturing, services, warehouse operations, etc.
Simulation modeling software: Arena, GPSS, ExtendSim, SimProcess, AutoMod, Promodel, Enterprise Dynamics

Dynamic Systems (DS)
Attributes: Physical state variables, block diagrams and/or algebraic-differential equations
Abstraction level: Low (maximum details, micro level)
Management level: Operational
Areas of application: Automotive control systems, traffic at the micro level, etc.
Simulation modeling software: MATLAB, LabView, VisSim

Figure 2: Simulation modeling business application framework (National Simulation Society (Russia) and author's own elaborations)
System Dynamics is "the study of information-feedback characteristics of industrial activity to
show how organizational structure, amplification (in policies), and time delays (in decisions and
actions) interact to influence the success of the enterprise" (Forrester, 1961). The range of SD
applications also includes urban, social and ecological types of systems. In SD, real-world
processes are represented in terms of stocks (e.g. of material, knowledge, people, money), flows
between these stocks, and information that determines the values of the flows. SD abstracts from
single events and entities and takes an aggregate view concentrating on policies. To approach a
problem in SD style, one has to describe the system behavior as a number of interacting feedback
loops, balancing or reinforcing. One well-known example of a classic SD model is the Bass Diffusion
Model, sketched below.

Discrete Event modelling may be considered as the definition of a global entity-processing
algorithm with stochastic elements. This modelling approach has its roots in the 1960s, when
Geoffrey Gordon conceived and evolved the idea for GPSS (General Purpose Simulation System) and
brought about its IBM implementations (Gordon, 1961). The terms Discrete Event modelling and
Discrete Event simulation are commonly used for the modelling method that represents the system as
a process, i.e. a sequence of operations being performed over entities such as customers, parts,
documents, etc. These processes typically include delays, usage of resources, and waiting in
queues. Each operation is modelled by its start event and end event, and no changes can take place
in the model between any two discrete events. The term "discrete" has been in general use for
decades to distinguish this modelling method from continuous-time methods such as SD. With the
emergence of Agent Based modelling, the term Discrete Event modelling in its traditional sense
created confusion, since in most agent-based models actions are also associated with discrete
events, but there may be no processes, entities, or resources.
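A minimal Discrete Event sketch, written here with the SimPy library (not named in the text):
entities (customers) queue for a resource (a single server), and the model only changes state at
discrete events; all timing parameters are invented.

```python
import random
import simpy

def customer(env, name, server):
    arrive = env.now
    with server.request() as req:  # join the queue for the resource
        yield req                  # wait until the server is free
        wait = env.now - arrive
        yield env.timeout(random.expovariate(1 / 4.0))  # service delay
        print(f"{name}: waited {wait:.1f}, done at {env.now:.1f}")

def source(env, server):
    for i in range(5):
        env.process(customer(env, f"customer-{i}", server))
        yield env.timeout(random.expovariate(1 / 3.0))  # inter-arrival gap

random.seed(1)
env = simpy.Environment()
server = simpy.Resource(env, capacity=1)
env.process(source(env, server))
env.run()  # the simulation clock jumps from event to event
```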

Compared to SD or DE models, in an AB model there is no single place where the global system
behaviour (dynamics) is defined. Instead, the modeller defines behaviour at the individual level,
and global behaviour emerges as a result of many individuals, each following its own behaviour
rules, living together in some environment and communicating with each other and with the
environment (Borshchev, Filippov, 2006).
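A toy agent-based sketch of this emergence, assuming a simple adoption rule and invented
parameters: each agent follows a local rule, and the S-shaped global uptake is nowhere written down
explicitly.

```python
import random

class Agent:
    def __init__(self):
        self.adopted = False

    def interact(self, other):
        # Individual behaviour rule: adopt with some probability on
        # meeting an adopter.
        if other.adopted and random.random() < 0.4:
            self.adopted = True

random.seed(7)
agents = [Agent() for _ in range(500)]
agents[0].adopted = True  # seed adopter

for step in range(30):
    for agent in agents:
        agent.interact(random.choice(agents))  # random pairwise contact
    if step % 5 == 0:
        print(step, sum(a.adopted for a in agents))  # emergent global uptake
```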
5. Agent-based modeling concept
Agent technologies offer various types of agents, models of their behavior and characteristics,
through a range of architectures and component libraries. The notion of an agent has developed from
the well-known concept of the object, which is an abstraction from a collection of real-world items
with the same qualities and behavioral rules.

Among the various classifications of agents, the most widely known is that of Kalchenko (2005),
which distinguishes intellectual, mobile and stationary agents.

Agent qualities are determined by their classification. Intellectual agents have the most
comprehensive set of qualities; their intellectual capacity allows them to build virtual worlds in
which they form action plans. The minimum set of basic characteristics for any agent includes
qualities such as (Gavrilova, Muromtsev, 2007):
Activity: the ability to organize and carry out actions;
Autonomy (semi-autonomy): relative independence from the environment and a certain free will, given
a good supply of behavioural resources;
Sociability: created by the necessity to carry out tasks in cooperation with other agents and
supported by communication protocols;
Purpose: innate sources of motivation or, more generally, special intentional characteristics.
This concept is close to one of the most popular definitions of agent by Wooldridge (2002).

In addition to these characteristics we can add:
Adaptability: the ability to learn and reason. Agents may possess partial knowledge or inference
mechanisms, as well as specialized knowledge in a subject matter;
Reactivity: functional perception of the environment and adaptation to changes therein. This
includes basic knowledge, beliefs, wishes, commitments and intentions.
The technologies that have been used to successfully develop agents and multi-agent systems
include (Kalchenko, 2005):
Knowledge-based systems;
Neural networks;
Clustering algorithms;
Fuzzy logic;
Decision trees;
Bayes theorem;
Genetic algorithms;
Natural language processing.
Multi- (or multiple-) agent systems (MAS), or agent-oriented programming, represent a step forward
from object-oriented programming (OOP) and integrate the latest advances in the areas of artificial
intelligence, parallel computing and telecommunications. Unlike common objects in OOP, an agent is
an autonomous object, which implies that its behavior is dictated by goals and that it has the
competence to achieve them. Agents cannot be called subprograms (or methods in OOP), because they
have their own states and continuously work to achieve their goals, much like coroutines that can
pass control to one another at any time. Thus, they can only be offered new tasks, which they may
accept or decline depending on whether the task meets their goals and interests. To ensure their
autonomy, agents can react to events, make and reconsider decisions, and interact with other agents.

As a rule, software implementation of a traditional system is centralized, has a hierarchical structure
and executes predetermined algorithms. The code clearly states what, when and how to complete an
action. A Multi-Agent System, by contrast, is a self-organizing network of agents (software
objects) that work continuously and simultaneously on establishing and reconsidering links. This
system is decentralized: every agent is autonomous and strives to achieve its goals. Changing an
agent's goal makes other agents adapt their behavior and change their links.
Every MAS consists of the following components (a minimal code skeleton follows the list):
A set of organizational units with a subset of agents and objects;
A set of tasks;
An environment - a space where agents and objects exist;
A set of relations between agents;
A set of agent actions (operations on objects).
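A hypothetical skeleton of these components; the names and structure are illustrative, not a
standard API:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str

@dataclass
class Environment:
    objects: list = field(default_factory=list)  # space where agents and objects exist

@dataclass
class Agent:
    name: str
    goals: list = field(default_factory=list)

    def act(self, environment, task):
        # An agent action: an operation on objects in the environment.
        environment.objects.append(f"{self.name} handled {task.description}")

@dataclass
class MultiAgentSystem:
    agents: list
    tasks: list
    environment: Environment
    relations: dict  # set of relations (links) between agents

    def run(self):
        for task, agent in zip(self.tasks, self.agents):
            agent.act(self.environment, task)

mas = MultiAgentSystem(
    agents=[Agent("a1"), Agent("a2")],
    tasks=[Task("order"), Task("delivery")],
    environment=Environment(),
    relations={"a1": ["a2"]},
)
mas.run()
print(mas.environment.objects)
```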
There are various approaches to designing Multi-Agent Systems, and three levels can be defined:
conceptual description, initial design and detailed design. At the first level one should describe
the organizational structure, goals, business processes and information support, all of which act
as a foundation for the next levels' ontology. At the next two levels these elements form the
organizational visualization: the virtual world where agents act, using the ontology to achieve
their goals and carry out the set of tasks.

Multi-Agent Systems distribute tasks among agents, each being considered a member of a group or
organization. The distribution of tasks suggests that each member of a group is assigned a role,
responsibilities and behavioral requirements.

Agent technologies normally use certain agent typologies and models, as well as MAS architectures,
and are based on agent libraries and development support tools for different types of Multi-Agent
Systems.

The world's best known and most widely used approaches to Multi-Agent System development are OMG
MASIF (Object Management Group), which is based on the concept of mobile agents; the specifications
by FIPA (Foundation for Intelligent Physical Agents), based on an agent's assumed intellectuality;
and standards by the Defence Advanced Research Projects Agency (DARPA), such as Control of Agent
Based Systems.

FIPA is an organization that produces software standards specifications for heterogeneous and
interacting agents and agent based systems in order to promote agent-based technology and the
interoperability with other technologies. FIPA members include such companies as Avaya, Boeing,
Cisco, Siemens, Toshiba, and various universities and public institutions. FIPA specifications aim to
ensure interaction between intellectual agents through standardized communications and content
languages. In addition to general communications, FIPA also works on ontology and negotiation
protocols to support interaction in certain applied areas (transportation, manufacturing, multimedia,
and network communications).

The OMG MASIF standard seeks to create conditions for the migration of mobile agents from one
multi-agent system to another through standardized CORBA IDL interfaces.

DARPA initiated the Knowledge Sharing Effort that divided agent programming languages into syntax,
semantics and pragmatics:
KIF: Knowledge Interchange Format (syntax);
Ontolingua: a language for defining sharable ontologies (semantics);
KQML (Knowledge Query and Manipulation Language): a high-level interaction language (pragmatics).
An important element for creating multi-agent systems is the Agent Communication Language (ACL),
which determines the types of messages that agents will exchange. Inter-agent communications are
developed through ACL, a language of content and ontology that determines a set of basic concepts
to use in cooperative messages. Ontology here is synonymous with the API (Application Programming
Interface) concept and determines a particular interface for intellectual agents.
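For illustration, a FIPA-ACL-style message represented here as a plain Python dict; the field names
follow the ACL structure described above, while the agent names and content are invented:

```python
message = {
    "performative": "request",           # type of communicative act
    "sender": "purchasing-agent",        # invented agent names
    "receiver": "supplier-agent",
    "content": "(deliver part-42 (qty 100))",
    "language": "KIF",                   # syntax of the content expression
    "ontology": "procurement",           # shared vocabulary of basic concepts
}
print(message["performative"], "->", message["receiver"])
```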
6. Multi-agent approach to decision making
As systems of distributed artificial intelligence, Multi-agent Systems have the following
advantages for the intellectual support of decision making:
They speed up task fulfilment through parallelism and reduce the volume of data transmitted by
passing high-level partial solutions to other agents;
They are flexible, since agents of various capacities are used to carry out a task dynamically and
cooperatively;
They are reliable given that functions that one agent is unable to carry out will be passed to other
agents.
The integration of Multi-agent Systems in a decision support system can offer the following benefits:
An information system specifically adapted to enterprise needs;
More flexibility and ability to adapt to the external environment, especially under conditions of
uncertainty;
The ability to search and obtain unorthodox solutions;
The confirmation of suppositions that previously lacked information;
Faster decision-making when modeling negotiations;
The ability to find and resolve potential conflicts of interests in both the external and internal
environments;
More reliable decisions owing to the ability to pass functions from one agent to another and
redistribute responsibilities, which is not always possible in real life;
Optimized access to information for all employees.
A significant advantage of the Multi-agent Systems approach relates to the economic mechanisms of
self-organization and evolution which become powerful efficiency drivers for the development and
success of an enterprise. The multi-agent approach allows the creation of a new intellectual data
analysis which can be open, flexible, and adaptive, and deeply integrated with other systems.

Experience-based accounts concerning MAS applications point to the following areas of application:
Distributed or network enterprise management;
Complex and multi-functional logistics;
Virtual organizations and Internet portals that sell products and services;
Academic management of distance-learning systems;
Companies with developed distribution and transportation networks (e.g., Procter & Gamble);
Distribution channels management;
Users preferences simulation modelling (e.g., Ford Motor Company).
Large companies realize a number of advantages with the Multi-Agent approach including: faster
problem solving, less data transmission (since high-level partial solutions are passed to other agents),
and faster agreements and order placements.

Distributed companies find advantages in improved supply, supervision and coordination of remote
divisions and structures. Companies with a wide and variable product range can react flexibly to
changing consumer preferences and forecast changes. With MAS technologies, service companies can
retain client interaction scenarios together with the corresponding problem solutions.
7. Conclusion
The increasing demand for optimisation of decision support systems development has caused leading
modelers to consider Agent Based modeling and combined approaches in order to obtain deeper
insights into complex and interdependent processes.

Multi-agent systems - as systems of distributed artificial intelligence - herald an era of networked
organizations that are supported by the interaction of intellectual robots. This facilitates the shift from
powerful centralized systems to fully decentralized ones, with hierarchical structure being replaced by
a networked organization. Rigid, bureaucratic, top-down management is displaced by negotiation and
planning with flexible arrangements. As a result, production volumes, profitability,
competitiveness and mobility are growing. A significant advantage of the Multi-Agent System
approach relates to the economic mechanisms of self-organization and evolution which become
powerful efficiency drivers for development and success of an enterprise. The Multi-Agent approach
allows the creation of new intellectual data analysis which can be open, flexible and adaptive, and
deeply integrated with other systems.

This does not mean, however, that Agent Based modeling is a replacement for System Dynamics or
Discrete Event modeling. There are many applications where SD or DE models can solve the problems
efficiently. If the problem's requirements fit well with the Discrete Event or System Dynamics
modelling paradigms, using these traditional approaches is more appropriate. In cases where the
system contains objects with timing, event ordering or other kinds of individual and autonomous
behaviour, applying Agent Based or mixed approaches is more efficient.
Acknowledgements
The author gratefully acknowledges the contribution of colleagues from XJ Technologies and the
opportunity to study and work with the AnyLogic multi-method simulation tool.
References
Ananiev, I. and Serova, E. (2008) "The Areas of IDEF0 Notation Effective Application for Tasks of
Business-Processes Description", Vestnik of St. Petersburg State University, Management Series, 2,
St. Petersburg.
Borshchev, A. and Filippov, A. (2004) "AnyLogic - Multi-Paradigm Simulation for Business,
Engineering and Research", Paper read at the 6th IIE Annual Simulation Solutions Conference,
Orlando, Florida, USA.
Borshchev, A. and Filippov, A. (2006) "From System Dynamics and Discrete Event to Practical Agent
Based Modeling", [online], XJ Technologies, www.anylogic.com.
FIPS 183 (1993) Integration definition for function modeling (IDEF0), USA.
Forrester, J. (1961) Industrial Dynamics, MIT Press, Cambridge, MA.
Gavrilova, T.A. and Muromtsev, D.I. (2007) Intellectual technologies in management: tools and
systems, GSOM SPbSU, St. Petersburg.
Gordon, G. (1961) "A General Purpose Systems Simulation Program", Proceedings of EJCC, Washington,
McMillan, NY, pp 87-104.
Gorshkov, A.F., Evteev, B.V. and Korshunov, V.A. (2004) Computer modeling of management, Exam,
Moscow.
Gorodetsky, V.I., Karsaev, O.V., Konyushy, O.V. and Samoylov, V.V. (2008) "MAS-based simulation
modeling for flight traffic management", MSTU CA Scientific Journal, series Navigation and Flight
Traffic Management.
Kalchenko, D. (2005) "Agents are coming to help", ComputerPress, [online], www.compress.ru.
Karpov, Yu. (2005) System simulation modeling. Introduction to modeling with AnyLogic, BHV, St.
Petersburg.
Katkalo, V.S. (2006) Strategic management theory evolution, St. Petersburg University Publishing
House, St. Petersburg.
Kazantsev, A.K., Serova, Ye.G., Serova, L.S. and Rudenko, Ye.A. (2007) Information technology
resources of Russia's economy, St. Petersburg University Publishing House, St. Petersburg.
Kouchtch, S.P. (2002) "Network approach in marketing: Russia's experience", SPbSU Vestnik, series
Management, Vol. 1, No. 8, pp 81-107.
Law, A.M. and Kelton, W.D. (2000) Simulation Modeling and Analysis, 3rd ed., McGraw-Hill.
Magedanz, T. "OMG and FIPA standardization for agent technology: competition or convergence?",
CORDIS, [online], cordis.europa.eu/infowin/acts/analysys/products/thematic/agents/ch2/ch2.htm.
Maklakov, S.V. (2003) Modeling business-processes with AllFusion Process Modeler, DIALOG-MIFI,
Moscow.
Moshella, D. (2003) Customer-driven IT: how users are shaping technology industry growth, Harvard
Business Press, Boston, MA.
Pidd, M. (2004) Computer Simulation in Management Science, 5th Edition, Wiley.
Pospelov, D.A. (1998) "Multi-agent systems - present and future", Information Technologies and
Computing Systems, No. 1, pp 14-21.
Serova, E.G. (2009) "The role of Multi-agent Approach in Building Information Infrastructure for a
Modern Company and Carrying Out Management Tasks", International book series Information Science
and Computing, Intelligent Information and Engineering Systems, Vol. 3.
Serova, E.G. (2011) "Enterprise Information Systems of New Generation", Electronic Journal of
Information Systems Evaluation, Academic Publishing International Ltd, Volume 15, Issue 1,
pp 116-126.
Wooldridge, M. (2002) Introduction to MultiAgent Systems, Wiley.
Integrating Sustainability Indicators in IT/IS Evaluation
Gilbert Silvius
HU University of Applied Sciences, Utrecht, The Netherlands
gilbert.silvius@hu.nl

Abstract: This paper explores the integration of indicators that reflect the concepts of sustainability into IT/IS
evaluation methods. It is based on the observations that sustainability is one of the most important challenges of
our time and that IT/IS can make a contribution to sustainable development. IT/IS evaluation methods should
reflect this contribution and include criteria for the assessment of sustainability aspects. Based
on an identification of IT/IS evaluation methods and an overview of frameworks for sustainability
indicators, an analysis is made of the
inclusion of the indicators and principles of sustainability assessment in IT/IS evaluation methods. The analysis
will conclude that integrating sustainability considerations in IT/IS evaluation requires far more than a set of
additional criteria to be considered. Integrating sustainability considerations in IT/IS evaluation suggests a far
more holistic and elaborate perspective on IT/IS evaluation than the infamous IT productivity
paradox that still dominates the discussion on the value of IT/IS today.

Keywords: sustainability, information technology, information systems
1. Introduction
Sustainability is recognized by the United Nations as one of the most important challenges of our time
(Glenn and Gordon, 1998). How can we develop prosperity without compromising the life of future
generations? The pressure on companies to broaden their reporting and accountability from economic
performance for shareholders to sustainability performance for all stakeholders has increased
substantially (Visser, 2002). Proactively or reactively, companies are looking for ways to integrate
ideas of sustainability in their marketing, corporate communications, annual reports and in their
actions (Hedstrom et al., 1998; Holliday, 2001).

The growing concern about sustainability and the preservation of our planet is increasingly being
recognized by the information technology (IT) and information systems (IS) disciplines. CIOs identify
Green IT as an important strategic technology (Thibodeau 2007), but the green aspects of IT go
beyond the technology. Given ITs functional ability to improve, change and reinvent business
processes, it can also be an important contributor to more sustainable business practices (Kazlauskas
and Hasan, 2009). However, this Greening by IT perspective is not reflected in IT/IS evaluation
methods, as these methods tend to focus predominantly on an economic perspective. Probably
fuelled by the much quoted IT productivity paradox (Brynjolfsson, 1993), researchers and
practitioners have been challenged to proof that IT/IS brings economic value to the organization. And
although many evaluation models have been developed that also include other variables than Return
on Investment (Renkema and Berghout, 1996), the debate on the contribution of IS seems to be
dominated by the economic perspective (Silvius, 2010).

This paper explores the integration of indicators that reflect the concepts of sustainability into IT/IS
evaluation methods. The paper will present a brief overview of IT/IS evaluation methods and an
exploration of frameworks for sustainability reporting and evaluation. The paper will then analyse how
these two concepts, IT/IS evaluation and sustainability, fit, and make a number of observations on the
similarities and differences of the concepts.
2. IT/IS evaluation
Through research and in practice, a substantial number of evaluation methods to assess the
contribution of IS/IT to business performance has been developed. After considering over 50
evaluation methods, Renkema and Berghout (1996) grouped these methods into four categories:
financial methods, multi-criteria methods, ratio methods and portfolio methods.
2.1 Financial methods
The financial methods consider the valuation of an IT/IS investment as an economic issue for which
it is irrelevant whether the investment is in IT or in any other resource. As long as the effects
of the investment are understood, calculating its value is merely a financial technicality
(Silvius, 2010). However, in reality, capturing value is not quite that straightforward. Financial
valuation methods all have assumptions and limitations. Table 1 provides an overview of these
valuation methods.
Table 1: Overview of financial valuation methods (based on Silvius, 2010)

Return on Investment
Qualities: Easy to calculate; easy to interpret (a simple percentage); in line with the financial administration
Limitations: Outcome sensitive to amortization method; ignores the time-value of money; ignores risk

Pay-back period
Qualities: Quite easy; intuitively copes with risk
Limitations: Ignores part of the revenues; simplistic, does not determine value

Internal Rate of Return
Qualities: Includes the time-value of money; easy to interpret (a simple percentage); based on cash flows
Limitations: Complex; not in line with the financial administration; ignores risk; multiple outcomes, or none, possible

Discounted Cash Flow / Net Present Value
Qualities: Includes the time-value of money; based on cash flows; copes with risk
Limitations: Complex; complex to interpret; not in line with the financial administration; not conclusive in case of projects with different durations

Economic Value Added
Qualities: Includes the opportunity value of money; in line with shareholder value
Limitations: Value calculation based upon one of the other methods; not in line with the financial administration

Real Options Valuation
Qualities: Includes optionality and managerial flexibility in investments
Limitations: Complex; complex to interpret; data often not available; not in line with the financial administration

Game theory
Qualities: Includes market developments; adds a strategic perspective
Limitations: Data often not available; not in line with the financial administration
The limitations of these financial methods to capture the more qualitative aspects of IT/IS value and
impact led to the development of other methods.
2.2 Multi-criteria methods
Multi-criteria methods are a reaction to the problems of capturing the full value of IT/IS investments in
just financial metrics. These methods aim to identify different relevant aspects of value and risk in
order to enable a thorough discussion and an informed decision (Frisk, 2007). The most influential
method using multiple criteria is Information Economics (Parker et al., 1988). This method is suited for
evaluating a single project as well as a portfolio of projects.

Information Economics identifies assessment criteria in two domains: business (IT/IS demand) and IT
(IT/IS supply). Criteria in the business domain are: Return on Investment, Strategic Match,
Competitive Advantage, Management Information, Competitive Response and Organisational Risk.
Criteria in the IT domain are: Strategic Information Systems Architecture, Definitional
Uncertainty, Technical Uncertainty and Infrastructure Risk. The importance or weight of the
different criteria may not be equal. Management therefore has to decide upon a weight factor for
each criterion.

Based upon the set of criteria and weight factors, each project or investment is given a score on
all of the criteria. It is crucially important that the scores are underpinned in an objective way
in order to create acceptance of the results of the evaluation process.

The results of the evaluation process can be presented in a graphically attractive way. The scores on the criteria Return on Investment, Strategic Match, Competitive Advantage, Management Information, Competitive Response and Strategic Information Systems Architecture are totalled into a score representing the value of the investment. The scores on the criteria Organisational Risk, Definitional Uncertainty, Technical Uncertainty and Infrastructure Risk add up to a total risk score. Combining the two scores in a two-dimensional graph provides management with a concise overview of the investment portfolio.
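A minimal sketch may illustrate the mechanics of this scoring logic; the weight values, the 0-5 scale and the code itself are illustrative assumptions rather than a prescribed implementation of Parker et al.'s method:

VALUE_CRITERIA = ["Return on Investment", "Strategic Match", "Competitive Advantage",
                  "Management Information", "Competitive Response",
                  "Strategic IS Architecture"]
RISK_CRITERIA = ["Organisational Risk", "Definitional Uncertainty",
                 "Technical Uncertainty", "Infrastructure Risk"]

# Management decides on a weight factor for each criterion (all 1.0 here,
# except strategy, which is weighted double -- an arbitrary example).
weights = {c: 1.0 for c in VALUE_CRITERIA + RISK_CRITERIA}
weights["Strategic Match"] = 2.0

def evaluate(scores):
    # 'scores' maps each criterion to the project's score on it (0-5 assumed).
    value = sum(weights[c] * scores[c] for c in VALUE_CRITERIA)
    risk = sum(weights[c] * scores[c] for c in RISK_CRITERIA)
    return value, risk  # one (value, risk) point in the two-dimensional graph

project = {c: 3 for c in VALUE_CRITERIA + RISK_CRITERIA}
print(evaluate(project))  # (21.0, 12.0)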
2.3 Ratio methods
Unlike the financial and multi-criteria methods, ratio methods are not aimed at evaluating a specific investment or project, but at finding the right level of total IT/IS costs in an organization. This level is expressed as a ratio, e.g. IT costs / total revenue or IT costs / employee. The outcome of these ratios should be considered relative to the same ratios at competitors, or for the same organization over time. Lower or higher scores on these ratios than at comparable organizations are not per se right or wrong, but should give reason for investigation and discussion.

The most prominent author on ratio methods is Paul Strassmann, who developed sophisticated ratios for specific industries. Based on his research he remains sceptical about the value of IT investments, stating that "for 55% of U.S. firms the computer budget exceeds their economic value-added" and that "the 'right' level of spending for computers reflects the bureaucratic characteristics of a firm, not revenue or profits" (Strassmann, 1997). A limitation to the applicability of the ratio methods, however, is the availability of the data required for the ratios.
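As a small illustration (all figures, including the peer benchmarks, are hypothetical), computing and benchmarking two such ratios amounts to little more than:

it_costs, revenue, employees = 4_200_000, 150_000_000, 900
peer_it_to_revenue = 0.025     # assumed industry benchmark
peer_it_per_employee = 4_000   # assumed industry benchmark

print(f"IT costs / revenue:  {it_costs / revenue:.3f} (peers: {peer_it_to_revenue:.3f})")
print(f"IT costs / employee: {it_costs / employees:,.0f} (peers: {peer_it_per_employee:,})")
# A higher ratio than the peers' is not wrong per se, but a trigger for investigation.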
2.4 Portfolio methods
In 1981 F. Warren McFarlan suggested analyzing and managing IT/IS investments and projects in terms of revenues and risks using portfolio theory, as was done in the financial world (Warren McFarlan, 1981). Portfolio theory here refers to modern portfolio theory as developed by Markowitz (Markowitz, 1952). Although appealing, this insight did not really take off until the Clinger-Cohen Act, which states that the management of IT in US government institutions must reflect a portfolio management approach, with decisions to terminate or make additional investments based on performance, much like an investment broker is measured and rewarded based on managing risk and achieving results.

With its appeal to portfolio theory, the Clinger-Cohen Act aimed to bring transparency to IT/IS costs and benefits. When applying portfolio theory to IT projects, however, issues may occur regarding the scalability of the investments, the tradability of the investments, the unique character of some investments, the exchangeability of benefits, the unfamiliarity of project risks, etc. Although the difference in characteristics between financial investments and IT investments does imply limitations to the applicability of portfolio theory, some useful insights can be derived (Van Rossum and Silvius, 2006).

An important insight of portfolio theory is that the value of an investment will be influenced by the other investments or assets in the portfolio. In other words, investment decisions are not taken in isolation. Whereas all other evaluation methods study the value of an investment as an autonomous value, portfolio methods study the value of investments in conjunction with other investments and assets, an insight that appeals to common sense when considering architectural aspects. Portfolio theory also points out the importance of having a structured process in place for the continuous evaluation of the total portfolio of IT/IS investments and projects.
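A minimal sketch of this interdependence, using the standard two-asset variance formula from Markowitz's portfolio theory (the budget shares, risks and correlations are hypothetical):

import math

w = [0.6, 0.4]        # budget shares of two IT investments
sigma = [0.20, 0.30]  # stand-alone risk (standard deviation of returns)

def portfolio_risk(correlation):
    # Two-asset portfolio variance: w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*s1*s2*rho
    variance = (w[0] ** 2 * sigma[0] ** 2 + w[1] ** 2 * sigma[1] ** 2
                + 2 * w[0] * w[1] * sigma[0] * sigma[1] * correlation)
    return math.sqrt(variance)

print(round(portfolio_risk(1.0), 3))  # 0.24  -- fully correlated: no diversification effect
print(round(portfolio_risk(0.2), 3))  # 0.186 -- loosely coupled investments lower joint risk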
3. IT/IS and sustainability
Societal concern about the balance between economic growth and social wellbeing has been around as a political and managerial challenge for over 150 years (Dyllick and Hockerts, 2002). Propelled by the World Commission on Environment and Development (1987) and the 1992 Rio Earth Summit, the opinion that none of the three goals of economic growth, social wellbeing and a wise use of natural resources can be reached without considering and affecting the other two became widely accepted (Keating, 1993). With this widespread acceptance, sustainable development became one of the most important challenges of our time.

Sustainability in the context of sustainable development is defined by the World Commission on
Environment and Development (1987) as "forms of progress that meet the needs of the present
without compromising the ability of future generations to meet their needs". This broad definition
emphasizes the aspect of future orientation as a basic element of sustainability. This care for the
future implies a wise use of natural resources and other aspects regarding the environmental
footprint. However, sustainability requires not just an environmental green perspective, but also a
social one. Elkington (1997) recognizes this in his triple bottom line or Triple-P (People, Planet, Profit) concept (Figure 1): sustainability is about the balance or harmony between economic sustainability, social sustainability and environmental sustainability.


Figure 1: The triple-P concept of sustainability
In the debate on sustainability and IT/IS, a commonly used term is Green IT. As was stated in the
introduction, however, a distinction can be made between Green IT and Greening by IT, more
commonly referred to as Green IS (Kazlauskas and Hasan, 2009; Watson et al., 2010). In this
distinction, Green IT refers to the energy efficient utilization of IT equipment and Green IS to the use
of IS to enable more sustainable business processes (Boudreau et al., 2008).

The environmental impact of IT/IS has been subject to discussion. The promise of a decreased environmental footprint because of paperless offices and tele-working has been opposed by claims of increased power consumption and hazardous waste resulting from the IT equipment needed to operate the systems. Because of this diversity of direct and indirect effects, the overall effect of IS/IT on environmental sustainability is not easy to determine. Plepys (2002) concludes that "the discussion on what the role is of IT and, particularly, Internet for sustainability is still going on and will hardly reach any definite conclusion, as the environmental impacts of the new technologies will depend on how they are used".

The concepts of sustainability mentioned above suggest that in the debate on the sustainability
aspects of IS/IT, not just the environmental effects should be considered, but also the social effects of
IS use. The United Nations (UN) concluded that IS has the ability to be a powerful enabler of social
sustainability and contributor to development goals, because of its unique characteristics to
dramatically improve communication and the exchange of information to strengthen and create new
economic and social networks (United Nations Development Program, 2001).

In some parts of the world, IT/IS is contributing to revolutionary changes in business and everyday
life. In other parts of the world, the lives of people have hardly been touched by these innovations. If
people in developing countries are unable to acquire the capabilities for using IS, they will be
increasingly disadvantaged or excluded from participating in the global information society. The social
and economic potential of these new technologies for development is enormous, but so too are the
risks of exclusion (Mansell, 1999). Economic research suggests a positive correlation between the
spread of IS and economic growth (Siegel, 2003). IS can contribute to income generation and
poverty reduction. It enables people and enterprises to capture economic opportunities by increasing
process efficiency, promoting participation in expanded economic networks, and creating
opportunities for employment.
4. Frameworks of sustainability indicators
Crucial for developing more sustainable business practices is the ability to evaluate the sustainability
aspects of different policies and projects, as well as to monitor progress. Or, as Jain (2005) argues:
"The ability to analyze different alternatives or to assess progress towards sustainability will then
depend on establishing measurable entities or metrics used for sustainability". The most frequently
used instruments in this context are frameworks or sets of sustainable development indicators (SDIs),
both as a way of measuring and evaluating (proposed) actions, and as a way of communicating this
information (Bell and Morse, 2003).

Many organizations have developed frameworks of indicators for this goal. In fact, the literature on these models is a veritable jungle of different approaches and numerous case studies (Olsson et al., 2004). The International Institute for Sustainable Development (IISD) maintains an online directory of SDI initiatives. This directory includes more than 600 initiatives at national and international levels by governments, non-governmental organizations (NGOs) and individuals. Despite this abundance, the use of SDIs as an evaluative tool is still very much in its infancy (MacGillivray and Zadek, 1995; Bell and Morse, 2003), resulting in more questions than answers. What should be measured and what could be excluded? What are the most effective indicators? How should they be organised? And how can the indicators be communicated?

The following section gives an overview of some of the most influential frameworks for SDIs.
4.1 Natural step framework
One of the first initiatives to bring scientific principles to the assessment of sustainability was by the Swedish scientist Karl-Henrik Robèrt, who coordinated a consensus process to define and operationalize sustainability. At the core of this process lies a consensus on what is called the Natural Step Framework, a holistic framework which helps organizations to
integrate sustainability principles into their business strategies. It provides a tool for developing a
shared vision, shared identity and shared goals among departments and along supply chains.
The foundation of the Natural Step Framework is the principle that a company should try to reduce its negative impacts on the biosphere while enabling humans to fulfil their needs. It stimulates companies to re-think production processes and product design and to find innovative alternatives for achieving their business goals. The framework provides a good basis for both awareness raising and strategy development.
4.2 IISD dashboard of sustainability
The IISD is a Canadian-based, public policy research institute, dedicated to advancing sustainable
development. The IISD developed a sustainability dashboard that illustrates the complex
relationships among economic, social and environmental issues (International Institute for
Sustainable Development, 2012). This Dashboard of Sustainability is intended for decision-makers
and others interested in sustainable development. It is based on the Millennium Development Goals
indicators for developing countries. These indicators help define Poverty Reduction Strategies and
monitor the achievement of the Millennium Development Goals.
4.3 WBCSD measuring impact framework
The World Business Council for Sustainable Development (WBCSD) is an organization of companies
that joined forces in order to create a sustainable future for business, society and the environment.
The WBCSD argues that "sustainable development is good for business and business is good for
sustainable development". This view is supported by some economists that state that, contrary to the
popular belief that sustainability requires a trade-off of economical and environmental/social benefits,
it is possible for the concepts of sustainable development and competitiveness to merge if enacted
wisely (Esty and Porter, 1998).

The WBCSD developed a framework, the Measuring Impact Framework, to assess the contribution of
business to the economic and broader development goals in the societies where business operates. It
is designed to help companies understand their contribution to society and use this understanding to
inform their operational and long-term investment decisions, and have better-informed conversations
with stakeholders.

The Measuring Impact Framework includes a four-step methodology to help companies in any industry, operating in any part of the world, to measure, assess and manage their impacts on society. In applying the methodology, an organization should adapt it to its specific company strategy and to the development context in which the business operates.
4.4 UN global compact framework
The United Nations (UN) Global Compact (2010) is a framework of ten universally accepted
principles, developed by the UN and a number of large corporations. It covers the areas of human
rights, labour, environment and anti-corruption. Participating companies agree to comply with these
principles. They can use the framework as a platform for disclosure. This initiative has been created
because the UN realized that businesses are primary drivers for globalization and can help ensure
long-term value creation that can bring benefit to economies and societies all over the globe. In the
absence of global regulations, this voluntary code of conduct has been developed, hoping to stimulate
companies to more sustainable business practices.
4.5 UNCSD indicators of sustainable development
Following the 1992 Rio Earth Summit, the UN Commission on Sustainable Development (UNCSD) started the development of its Indicators of Sustainable Development. This resulted in a set of 134 indicators of sustainable development. Country case studies and further discussion in the UNCSD led to the original framework being replaced by a set of themes and a more comprehensive set of core indicators.

The third, revised set of the UNCSD indicators was finalized in 2006 by a group of experts from
developing and developed countries and international organizations. This third edition of the indicator
set is based on the previous two (1996 and 2001) editions, which have been developed, improved
and extensively tested. It contains 96 indicators, including a subset of 50 core indicators. The
guidelines on indicators and their detailed methodology sheets are available as a reference for all
countries to develop national indicators of sustainable development.
4.6 ISO 26000 core subjects and issues
As a response to businesses' growing interest and the increasing number of sustainability-related institutions and frameworks, the International Organization for Standardization (ISO) launched ISO 26000, a comprehensive guideline on social responsibility designed for all types of organizations, to help companies introduce more sustainable practices. ISO 26000 summarizes seven social responsibility core subjects: Organizational governance, Human rights, Labour practices, The environment, Fair operating practices, Consumer issues, and Community involvement and development. These core subjects are further broken down into issues: specific themes or activities a company should work on in order to contribute to sustainable development.
4.7 GRI sustainability reporting guidelines
The Global Reporting Initiative (GRI) is a non-profit organization that pioneered the world's most widely used sustainability reporting framework, the Sustainability Reporting Guidelines (SRG).
Companies can use the SRG to indicate to shareholders and consumers their economic, social and
environmental performance. GRIs objective is to facilitate sustainability reporting for companies and
thereby stimulate them to operate more sustainably. The SRG framework consists of an extensive set
of indicators, from which companies can select a set that is relevant to their operations or industry.
4.8 Dow Jones sustainability indexes
The Dow Jones Sustainability Indexes (DJSI) are not a reporting tool, but a family of indexes
evaluating the sustainability performance of the largest 2,500 companies listed on the Dow Jones. They are the longest-running global sustainability benchmarks. The DJSI is based on an
analysis of corporate economic, environmental and social performance, assessing issues such as
corporate governance, risk management, branding, climate change mitigation, supply chain standards
and labor practices. It includes general as well as industry specific sustainability criteria.

From this overview of SDI frameworks it can be concluded that, although many organizations have offered meaningful lists of indicators, consensus on how to measure and assess sustainability has not yet emerged. A recurring structure in many frameworks is the Triple-P concept mentioned in section 3. However, some frameworks, for example ISO 26000, adopt a completely different structure and also different perspectives. Many specialists actually question whether a common list is even possible, given the wide variety of conditions and the differences in values in different contexts. In the so-called Bellagio principles, a set of overarching principles for the assessment of sustainability is formulated, thereby suggesting that a truly universal framework to measure sustainability may be elusive. The Bellagio principles are (International Institute for Sustainable Development, 1997):



Principle 1: Guiding Vision and Goals
Assessment of progress toward sustainable development should:
- Be guided by a clear vision of sustainable development and goals that define that vision.

Principle 2: Holistic Perspective
Assessment of progress toward sustainable development should:
- Include review of the whole system as well as its parts.
- Consider the well-being of social, ecological, and economic sub-systems, their state as well as the direction and rate of change of that state, of their component parts, and the interaction between parts.
- Consider both positive and negative consequences of human activity, in a way that reflects the costs and benefits for human and ecological systems, in monetary and non-monetary terms.

Principle 3: Essential Elements
Assessment of progress toward sustainable development should:
- Consider equity and disparity within the current population and between present and future generations, dealing with such concerns as resource use, over-consumption and poverty, human rights, and access to services, as appropriate.
- Consider the ecological conditions on which life depends.
- Consider economic development and other, non-market activities that contribute to human/social well-being.

Principle 4: Adequate Scope
Assessment of progress toward sustainable development should:
- Adopt a time horizon long enough to capture both human and ecosystem time scales, thus responding to the needs of future generations as well as those current to short-term decision-making.
- Define the space of study large enough to include not only local but also long-distance impacts on people and ecosystems.
- Build on historic and current conditions to anticipate future conditions.

Principle 5: Practical Focus
Assessment of progress toward sustainable development should be based on:
- An explicit set of categories or an organizing framework that links vision and goals to indicators and assessment criteria.
- A limited number of key issues for analysis.
- A limited number of indicators or indicator combinations to provide a clearer signal of progress.
- Standardizing measurement wherever possible to permit comparison.
- Comparing indicator values to targets, reference values, ranges, thresholds, or direction of trends, as appropriate.

Principle 6: Openness
Assessment of progress toward sustainable development should:
- Make the methods and data that are used accessible to all.
- Make explicit all judgments, assumptions, and uncertainties in data and interpretations.

Principle 7: Effective Communication
Assessment of progress toward sustainable development should:
- Be designed to address the needs of the audience and set of users.
- Draw from indicators and other tools that are stimulating and serve to engage decision-makers.
- Aim, from the outset, for simplicity in structure and use of clear and plain language.

Principle 8: Broad Participation
Assessment of progress toward sustainable development should:
- Obtain broad representation of key grass-roots, professional, technical and social groups, including youth, women, and indigenous people, to ensure recognition of diverse and changing values.
- Ensure the participation of decision-makers to secure a firm link to adopted policies and resulting action.

Principle 9: Ongoing Assessment
Assessment of progress toward sustainable development should:
- Develop a capacity for repeated measurement to determine trends.
- Be iterative, adaptive, and responsive to change and uncertainty because systems are complex and change frequently.
- Adjust goals, frameworks, and indicators as new insights are gained.
- Promote development of collective learning and feedback to decision-making.

Principle 10: Institutional Capacity
Continuity of assessing progress toward sustainable development should be assured by:
- Clearly assigning responsibility and providing ongoing support in the decision-making process.
- Providing institutional capacity for data collection, maintenance, and documentation.
- Supporting development of local assessment capacity.
These principles provide guidance in the analysis of the impact of integrating sustainability indicators
in IT/IS evaluation, as reported in the next section.
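Principle 5, for example, lends itself directly to operationalization; the following sketch of comparing indicator values to targets uses invented indicator names, values and targets purely for illustration:

indicators = [
    # (name, measured value, target, whether the target is a ceiling or a floor)
    ("energy use per workplace (kWh/yr)", 2400, 2000, "ceiling"),
    ("share of e-waste recycled (%)", 62, 75, "floor"),
]

for name, measured, target, kind in indicators:
    # A ceiling target should not be exceeded; a floor target should be reached.
    on_track = measured <= target if kind == "ceiling" else measured >= target
    print(f"{name}: {measured} vs target {target} -> {'on track' if on_track else 'off track'}")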
5. Analysis
When confronting the methods of IT/IS evaluation identified in section 2 with the overarching Bellagio
principles derived from frameworks of sustainability indicators, a number of observations can be
made.
5.1 Integrating sustainability indicators suggests a multi-criteria approach
The principles Holistic Perspective and Essential Elements prescribe, for sustainability evaluation, the use of more perspectives than just the economic one. Most frameworks of sustainability indicators adopt the Triple-P concept, and some frameworks take an even more holistic view. This suggests that, by definition, an IT/IS evaluation approach based solely on the economic perspective is inadequate for capturing the sustainability aspects of IT/IS.

Of the four groups of methods identified by Renkema and Berghout (1996), the multi-criteria group seems the most appropriate for including the multiple perspectives that the concepts of sustainability imply.
5.2 Inclusion of sustainability indicators makes sense
Multi-criteria methods for IT/IS evaluation, like Information Economics, typically include an indicator
for the contribution of IT/IS to the strategy of the organisation. IT/IS evaluation therefore links to
strategy (Silvius, 2010). And as more and more companies are integrating statements about
sustainability in their strategy (Hedstrom et al., 1998; Holliday, 2001), the inclusion of sustainability indicators in IT/IS evaluation makes sense. In the Bellagio principles this link is captured in the principle Guiding Vision and Goals, which prescribes that the assessment of sustainability aspects should be guided by a clear vision of sustainable development and goals that define that vision.

5.3 A universal model for evaluating IT/IS is elusive
The recognition in the Bellagio principles that a sensible and meaningful set of sustainability indicators is context specific, and that consensus should be sought on the level of principles rather than of specific indicators, suggests that a "one size fits all" approach to IT/IS evaluation may not be viable. This is also recognized by the principle Practical Focus, which states that standardization of measurement should be sought wherever possible, thereby implying that this is not always possible. For organizations this
would imply that working with a universal business case model, which most organizations do in order
to be able to compare IT/IS investments and projects, actually does not lead to optimal decision
making.
5.4 Including sustainability assessment expands scope
The logical unit of analysis in IT/IS evaluation is the organization that uses the technology or systems, or invests in them. This scope is based on economic reasoning and the concept of ownership: IT/IS should bring benefits to the economic unit that invests in, pays for, or owns the technology or systems being assessed. In sustainability assessments, however, the sphere of influence is not limited to economic units or ownership. This is covered in the Bellagio principles
Holistic Perspective and Adequate Scope. The principle Holistic Perspective mentions that
assessment of sustainability should include a review of the whole system as well as its parts. The
principle Adequate Scope prescribes that assessment of sustainability aspects should define the
space of study large enough to include not only local but also long distance impacts on people and
ecosystems.
5.5 Including sustainability assessment implies equality of time
The economic perspective, which is so dominant in all IT/IS evaluation methods, values short-term effects more than long-term effects. This is most visible in the discounting of future cash flows: in economic theory an immediate cash flow holds more value than a future cash flow, thereby emphasizing the value of short-term benefits. However, social impacts or environmental degradation resulting from business decisions may not occur until the long term. This aspect, too, is mentioned in the principle Adequate Scope, which states that assessing sustainability should adopt a time horizon long enough to capture both human and ecosystem time scales, thus responding to the needs of future generations as well as those current to short-term decision-making.
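A small, hypothetical illustration of this bias (the discount rate and the impact figure are assumptions, not taken from the paper):

impact = 1_000_000  # nominal size of a social or environmental effect
rate = 0.10         # a typical discount rate

for years in (1, 10, 30):
    present_value = impact / (1 + rate) ** years
    print(f"{years:>2} years out: {present_value:,.0f}")
# 1 year out:   909,091
# 10 years out: 385,543
# 30 years out:  57,309 -- under 6% of its nominal size in the evaluation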
5.6 Sustainability assessment implies continuous assessment and institutional
capacity
The Bellagio principles Ongoing Assessment and Institutional Capacity prescribe an institutionalized,
repetitive and iterative process to assess sustainability aspects. In IT/IS evaluation, this aspect is
covered in some of the IT/IS evaluation methods, most explicitly in the concept of portfolio
management. Portfolio management suggests a continuous process of monitoring, measuring,
evaluating and selecting investments or assets. In fact, project management methodologies like PRINCE2 also include a continuous re-assessment of the business case of the project. Assessing
sustainability, however, goes even further than this and suggests that the technology or system at
hand is also continuously assessed during its exploitation. This could be compared with the business
case management in the post-implementation phase of a project.
5.7 Sustainability assessment implies openness and broad participation
The Bellagio principles Openness, Effective Communication and Broad Participation prescribe how
stakeholders are informed and engaged in the assessment of sustainability aspects. These principles
touch upon the way assessments are performed. And although these aspects are not explicitly
covered in the IT/IS evaluation methods, the graphical representations that are included in the
Information Economics methodology do facilitate participation of and communication with key
stakeholders and decision makers. It is, however, debatable whether these formats and techniques
are an adequate operationalization of the principle Openness.
6. Conclusion
Sustainability is one of the most important challenges of our time. How can we develop prosperity,
without compromising our wellbeing or that of future generations? More and more companies
recognize this and take responsibility for their role in this challenge. IT/IS can make a contribution to
the sustainable development of organisations. It therefore makes sense to include an assessment of
sustainability aspects in the evaluation of IT/IS. And although some considerations of sustainability
can be found in the various methods of IT/IS evaluation, it has to be concluded that the integration of
sustainability indicators in IT/IS evaluation is just in its infancy.

As a contribution to the understanding of sustainability considerations in IT/IS evaluation, this paper
confronted the principles of sustainability assessment with the different methods of IT/IS evaluation.
This analysis did not result in a set of additional criteria to be considered, but in a set of observations
that form a foundation to reconsider IT/IS evaluation methods. These observations are:
- Integrating sustainability indicators suggests a multi-criteria approach;
- Inclusion of sustainability indicators makes sense;
- A universal model for evaluating IT/IS is elusive;
- Including sustainability assessment expands scope;
- Including sustainability assessment implies equality of time;
- Sustainability assessment implies continuous assessment and institutional capacity;
- Sustainability assessment implies openness and broad participation.
The implications of these observations may be far-reaching, as their impact suggests a far more holistic and elaborate perspective on IT/IS evaluation than the one behind the infamous IT productivity paradox (Brynjolfsson, 1993). The operationalization of this holistic and elaborate evaluation perspective, however, is still subject to further research.
References
Bell, S and Morse, S. (2003) Measuring Sustainability Learning from doing, Earthscan, London.
Boudreau, M., Chen, A. and Huber, M. (2008) Green IS: Building Sustainable Business Practices, in Watson, R.
(Ed.) Information Systems: a Global Text project.
Brynjolfsson, E. (1993) The productivity paradox of information technology, Communications of the ACM,
36(12), pp. 67-77.
Dyllick, T. and Hockerts, K. (2002) Beyond the business case for corporate sustainability, Business Strategy
and the Environment, 11, pp. 130-141.
Elkington, J. (1997) Cannibals with Forks: the Triple Bottom Line of 21st Century Business, Capstone Publishing
Ltd, Oxford.
Esty, D. C. and Porter, M. E. (1998) Industrial Ecology and Competitiveness: Strategic Implications for the Firm,
Journal of Industrial Ecology, 2(1), pp. 35-43.
Frisk, E. (2007) Categorization and overview of IT perspectives: A literature review, paper read at the European Conference on Information Management and Evaluation, Montpellier.
Glenn, J. C. and Gordon, T. J. (1998) State of the Future: Issues and Opportunities, The Millennium Project,
American Council for the United Nations University, Washington, DC.
Hedstrom, G., Poltorzycki, S. and Stroh, P. (1998) Sustainable development: the next generation, in Sustainable Development: How Real, How Soon, and Who's Doing What?, Prism, 4, pp. 5-19.
Holliday, C. (2001) Sustainable growth, the DuPont way, Harvard Business Review, September, pp. 129-134.
International Institute for Sustainable Development (1997) Principles in Practice, International Institute for
Sustainable Development.
International Institute for Sustainable Development (2012) Dashboard for Sustainability, http://www.iisd.org/cgsdi/dashboard.asp, accessed April 7th, 2012.
Jain, R. (2005) Sustainability: metrics, specific indicators and preference index, Clean Technologies and
Environmental Policy, May, pp. 71-72.
Kazlauskas, A. and Hasan, H. (2009) Web 2.0 Solutions to Wicked Climate Change Problems, Australasian Journal of Information Systems, 16(2).
Keating, M. (1993) The Earth Summit's Agenda for Change, Centre for our Common Future, Geneva.
MacGillivray, A. and Zadek, S. (1995) Accounting for change: indicators for sustainable development, New
Economic Foundation, London.
Mansell, R. (1999) Information and communication technologies for development: assessing the potential and
the risks, Telecommunications Policy, 23, pp. 35-50.
Markowitz, H.M. (1952) Portfolio selection, Journal of Finance, 7 (1).
Olsson, J.A., Hilding-Rydevik, T., Aalbu H. and Bradley, K. (2004) Indicators for Sustainable Development,
Discussion paper, European Regional Network on Sustainable Development.
Parker, M.M., Benson, R.J. and Trainor, H.E. (1988) Information Economics, Linking Business Performance to
Information Technology, London: Prentice-Hall.
Plepys, A. (2002) The grey side of ICT, Environmental Impact Assessment Review, 22(5), pp. 509-523.
Renkema T.J.W. and Berghout, E.W. (1996) Methodologies for information systems evaluation at the proposal
stage: a comparative review, Information and Software Technology, Elsevier.
Rossum, R.B. van, and Silvius, A.J.G. (2006) ICT Portfolio Management in theorie en praktijk; Over slagkracht
en spraakverwarring, J. van Bon (Ed.), ITSM Best Practices deel 3, Van Haren Publishing (in Dutch).
Siegel, D. (2003) ICT, the Internet, and Economic Performance: Empirical Evidence and Key Policy Issues.
UNCTAD and UNECE Conference. Geneva.
Silvius, A.J.G. (2010) A Conceptual Model for Aligning IT Valuation Methods, International Journal of
IT/Business Alignment and Governance, 1(3), IGI Global, Hershey PA.
Silvius, A.J.G., Schipper, R., Planko, J., Brink, J. van der and Khler, A. (2012) Sustainability in Project
Management, Gower Publishing, Farnham.
Strassmann, P.A. (1997) The Squandered Computer, Information Economic Press.
Thibodeau, P. (2007) Gartner's Top 10 Strategic Technologies for 2008, Computerworld, October 9.
United Nations Development Program (2001) Creating Value for All: Strategies for Doing Business with the Poor,
Growing Inclusive Markets initiative.
United Nations Global Compact (2010) UN Global Compact, Retrieved 22-10-2010, from globalcompact:
http://www.unglobalcompact.org/AboutTheGC/index.html.
Visser W.T (2002) Sustainability reporting in South Africa, Corporate Environmental Strategy, 9(1), pp.79-85.
Watson, R.T., Boudreau, M-C and Chen, A.J. (2010) Information Systems and Environmentally Sustainable
Development; Energy Informatics And New Directions for the IS Community, MIS Quarterly, 34(1).
Warren McFarlan, F. (1981) Portfolio Approach to Information Systems, Harvard Business Review.
World Commission on Environment and Development (1987) Our Common Future, Oxford University Press,
Great Britain.
The Art of Shooting the Moving Goal: Explorative Study of EA Pilot
Nestori Syynimaa
Informatics Research Centre, Henley Business School, University of Reading,
UK
nestori.syynimaa@gmail

Abstract: Enterprise Architecture (EA) has been recognised as an important tool in modern business
management for closing the gap between strategy and its execution. The current literature implies that for EA to
be successful, it should have clearly defined goals. However, the goals of different stakeholders are found to be
different, even contradictory. In our explorative research, we seek answers to the questions: What kind of goals are set for the EA implementation? How do the goals evolve over time? Are the goals different among
stakeholders? How do they affect the success of EA? We analysed an EA pilot conducted among eleven Finnish
Higher Education Institutions (HEIs) in 2011. The goals of the pilot were gathered from three different stages of
the pilot: before the pilot, during the pilot, and after the pilot, by means of a project plan, interviews during the
pilot and a questionnaire after the pilot. The data was analysed using qualitative and quantitative methods. Eight
distinct goals were recognised by the coding: Adopt EA Method, Build Information Systems, Business
Development, Improve Reporting, Process Improvement, Quality Assurance, Reduce Complexity, and
Understand the Big Picture. The success of the pilot was analysed statistically using the scale 1-5. Results
revealed that goals set before the pilot were very different from those mentioned during the pilot, or after the pilot.
Goals before the pilot were mostly related to expected benefits from the pilot, whereas the most important result
was to adopt the EA method. Results can be explained by possibly different roles of respondents, which in turn
were most likely caused by poor communication. Interestingly, goals mentioned by different stakeholders were
not limited to their traditional areas of responsibility. For example, in some cases Chief Information Officers' goals
were Quality Assurance and Process Improvement, whereas managers' goals were Build Information Systems
and Adopt EA Method. This could be a result of a good understanding of the meaning of EA, or stakeholders do
not regard EA as their concern at all. It is also interesting to notice that regardless of the different perceptions of
goals among stakeholders, all HEIs felt the pilot to be successful. Thus the research does not provide support to
confirm the link between clear goals and success.

Keywords: enterprise architecture, stakeholders, goals, success
1. Introduction
The structure of the paper is as follows. Firstly, the problem area is introduced, including the key
concepts used in the paper. Secondly, the methodology and data collection are described. Thirdly,
results of the analysis are presented and discussed, and finally conclusions are drawn.

Enterprise Architecture (EA) has a number of definitions in the current literature (see for example: CIO
Council 2001; TOGAF 2009; Zachman 1997). We shall adopt a definition of EA based on two common concepts shared by the existing EA definitions (Syynimaa 2010). Firstly, EA is a formal description of an organisation at a specific time. Usually there are descriptions of at least two different states of the organisation: current and future. Secondly, EA is a managed change between these
states. As a description, EA is usually described by using a four layer model (Pulkkinen 2006). These
layers are Business Architecture (BA), Information Architecture (IA), Systems Architecture (SA), and
Technology Architecture (TA).

Lately EA's usability and power in strategy execution have been recognised (Gregor et al. 2007; Ross et al. 2006). The four-layer model of EA uses a top-down approach (Pulkkinen 2006; TOGAF 2009): the output of a higher level is input for the level below it, i.e. BA → IA → SA → TA. The strategy of an organisation is described on the BA level, so the future state of the organisation's BA includes possible changes in the strategy. As such, EA can be used as a tool for closing the gap between the strategy and its execution.

The current literature implies that for EA implementation to be successful, it should have clearly
defined goals (Iyamu 2009; Martin et al. 2004; Miller 2003). However, the goals of different
stakeholders are found to be different, even contradictory (van der Raadt et al. 2008). According to
the guide to the Project Management Body Of Knowledge (PMBOK), project goals are "the
quantifiable criteria that must be met for the project to be considered successful" (Duncan 1996, p.
52). Moreover, unquantifiable goals carry very high risk.

The rationale of the research can be summarised as follows. EA has been found to be an important
tool in strategy execution. To implement EA successfully, one should have clear goal(s) set for the
implementation. These goals have been found to be different among stakeholders, even
contradictory. General project management literature suggests setting quantifiable goals. Thus in this
exploratory research, we seek answers to the questions: What kind of goals are set for the EA
implementation? How do the goals evolve over time? Are those goals different among
stakeholders? How do they affect the success of EA implementation?

Empirical data was gathered from an EA implementation pilot conducted among eleven Finnish
Higher Education Institutions (HEIs) during 2011. The goals of the pilot were to start EA work in the
Higher Education field and to build a basis for a continuous EA function in HEIs. The pilot was
organised in six sub-groups having one or more participants, each focusing on a certain topic
(Riihimaa et al. 2011). For instance, one of the groups focused on co-operation in teaching and
student movement. In addition, each individual HEI had its own internal focus areas. The structure of
the pilot can be seen in Figure 1.

Figure 1: Pilot structure
2. Methodology
EA implementation is considered to be a process in which the initial state of an organisation, before EA (t1), is changed to the state where EA is implemented (t2). The process is illustrated in Figure 2. In this paper, we are performing exploratory research on an EA pilot. Methodologically, the EA pilot is regarded as an instance of EA implementation, executed as a project. Goals are objectives, targets, etc., set for the pilot. Goals can be measurable (quantifiable) or qualitative (unquantifiable). The success of any implementation project is found to be difficult to measure quantifiably, as the concept of success is too subjective (Cale et al. 1987). We accept this subjective nature of the concept of success and define it as the perceived feeling of success of participating individuals.

Figure 2: EA implementation (Syynimaa 2012)
Data used in this paper consisted of a subset of data gathered from three different stages of the pilot
as part of a larger research. Goals before the pilot were gathered from the project plan of the pilot in a
textual form. Goals during the pilot were gathered from interviews, which were conducted as phone
interviews. Interviews were recorded and transcribed. Three different roles were interviewed in each
HEI: Chief Information Officers (CIOs), managers (president, rector, etc.), and Quality Assurance (QA)
staff. A semi-structured interview technique was used, where interviewees were given a certain theme
to answer. In the case of goals, the question/theme asked was (translated from Finnish): With regard
to the pilot, what are your or your institutions goals for the pilot? Goals after the pilot were gathered
from a questionnaire sent to the pilots project and steering group members two months after the pilot.
In the questionnaire the goals of the pilot were asked as an open ended question from four different
perspectives: personal goals, institutions goals, groups goals, and pilots goals. Also Most important
results and the success of the pilot were gathered from the very same questionnaire.
All textual data was coded using the open-coding technique of Grounded Theory (Glaser et al. 1967). As the purpose was simply to categorise similar goals under the same code, no axial or selective coding was used. During the coding, a new category was added whenever a goal did not fit any existing code. Thus, some codes may be overlapping, or even from different category layers. The perceived success of the pilot was measured using a Likert-scale (1-5) question.
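As a minimal, hypothetical sketch of the bookkeeping behind this open-coding step (the helper function and its name are illustrative, not part of the original study), using example statements from Table 1 below:

codes = {}  # code name -> goal statements assigned to it

def assign(statement, code):
    # The researcher reads a statement and picks an existing code, or
    # creates a new one when the statement fits none of the existing codes.
    codes.setdefault(code, []).append(statement)

assign("Familiarising ourselves with EA framework", "Adopt EA Method")
assign("To ease reporting", "Improve Reporting")
assign("Benchmarking to other HEIs", "Business Development")  # new code created on the fly

for code, statements in codes.items():
    print(code, "-", len(statements), "statement(s)")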
3. Results and discussion
The unit of analysis in this research is the sub-group. The sub-group level was used for two reasons. First, data is available from all stages only at the sub-group level. Second, using the sub-group level helps to hide the identities of the HEIs, as some goals could be connected to a certain HEI. One HEI formed a separate sub-group but was also a member of another sub-group with multiple members. This HEI has been analysed as part of the latter sub-group only; thus the total number of analysed sub-groups is five.

During the coding eight distinct goals were found: Adopt EA Method, Build Information Systems,
Business Development, Improve Reporting, Process Improvement, Quality Assurance, Reduce
Complexity, and Understand the Big Picture. Adopt EA Method means goals related to adopting,
learning, and introducing the EA method. In these cases, the goal of the EA pilot is to adopt the EA
method per se, without any greater goal. Build Information Systems refers to goals for building an
information system. Business Development refers to goals for business development, for instance by
a comparison to other HEIs, or sustaining competitiveness by merging some functions with another
HEI. Improve Reporting refers to goals for improvement of reporting in terms of automation, quality
and ease of access. Process Improvement refers to goals for improvement of HEI's processes,
whether business or information and communication technology (ICT) processes. Quality Assurance
refers to goals related to QA function and its activities. Reduce Complexity refers to goals for reducing
complexity of either processes or information systems. Understand the Big Picture refers to goals for
understanding the big picture of the HEI as a whole, including information systems. Examples of the
codes can be seen in Table 1.
Table 1: Codes and examples of goals (translated from Finnish)
Code Goal
Adopt EA Method "Familiarising ourselves with EA framework"
Build Information Systems "To build a shared data warehouse for all Higher Education Institutions"
Business Development "Benchmarking to other HEIs"
Improve Reporting "To ease reporting"
Process Improvement "To support process based development of information systems"
Quality Assurance "To prepare for QA audit"
Reduce Complexity "Reducing overlapping work"
Understand the Big Picture "To better understand consequences of our decisions"
Results of the analysis can be seen in Table 2, where each entry represents a sub-group. The fields Before, During, and After refer to goals set for the pilot at the respective stage. The Results field refers to the most important results of the pilot, and the Success field to the perceived success of the pilot on a scale of 1-5; its values are the medians of the respondents' answers within the particular sub-group. For goals mentioned during the pilot, the respondents' role(s) are also given: C = Chief Information Officer, M = Management (principal, rector, president), Q = Quality Assurance staff.
Table 2: Results

Sub-group 1
- Before: Adopt EA method; Quality assurance
- During: Adopt EA method (C); Build information systems (M, Q); Business development (M); Process improvement (M); Reduce complexity (M, Q)
- After: Adopt EA method; Business development; Process improvement
- Results: Adopt EA method
- Success: 4

Sub-group 2
- Before: Business development
- During: Adopt EA method (M); Improve reporting (M); Process improvement (C); Quality assurance (M); Understand the big picture (C)
- After: Adopt EA method; Business development; Process improvement; Understand the big picture
- Results: Adopt EA method; Understand the big picture
- Success: 4

Sub-group 3
- Before: Adopt EA method
- During: Business development (C, M); Reduce complexity (C, M); Improve reporting (Q); Quality assurance (Q); Build information systems (M); Process improvement (M); Understand the big picture (M)
- After: Adopt EA method; Process improvement; Improve reporting
- Results: Adopt EA method; Process improvement
- Success: 3

Sub-group 4
- Before: Process improvement; Understand the big picture
- During: Adopt EA method (M); Build information systems (C); Business development (M); Improve reporting (M); Process improvement (C, Q); Understand the big picture (C, M)
- After: Adopt EA method
- Results: Adopt EA method
- Success: 4

Sub-group 5
- Before: Build information systems; Quality assurance
- During: Quality assurance (C); Adopt EA method (C, M); Business development (M); Process improvement (C, Q); Understand the big picture (Q)
- After: Adopt EA method; Business development; Process improvement; Quality assurance; Understand the big picture
- Results: Adopt EA method; Business development
- Success: 3
Results summarised in Table 2 lead us to the following findings. Before the pilot, goals were mostly
related to the expected outcomes of the EA pilot. This was the case in four out of five sub-groups.
Adopting the EA method was mentioned in the goals of only two sub-groups. It should be noted that
the project plan, which was a source for before-the-pilot data, was composed mainly by CIOs.
Moreover, its purpose was to "sell" the project to HEIs' management.

During the pilot, many more goals were mentioned than before it. A number of respondents, mainly managers and QA staff, were not members of project groups. Thus their view of the pilot's goals was based solely on internal communication and publicly available material. The variance of the answers can be explained by this to some degree. However, it does not explain why the goals mentioned by CIOs differ from those before the pilot. It is also interesting to note that in some cases goals are not related to respondents' own duties. For example, in some cases CIOs' goals were Quality Assurance and Process Improvement, whereas managers' goals were Build Information Systems and Adopt EA Method. This could be a result of a good understanding of the meaning of EA, or that stakeholders do not regard EA as their concern at all.

Goals after the pilot were gathered from a questionnaire sent to the pilot's steering and project
groups. Thus all respondents should have been aware of the goals before the pilot. Still, most of the
goals mentioned were related to EA adoption. This was also the case when asking for the most important results of the pilot. All sub-groups mentioned the adoption of the EA method as one of the most
important results of the pilot. Two of the sub-groups mentioned only the EA method, while the rest of
the sub-groups also mentioned another goal. Goals after the pilot and most important results were
gathered on the same questionnaire, which explains their similarities as all of the results were also
mentioned as goals.

The most interesting finding is that not a single sub-group mentioned the same goal at all stages and as the most important result. Moreover, in only two cases was one of the before-the-pilot goals mentioned again later. This could be interpreted as a failure, but not a single sub-group perceived the pilot as being a failure. The findings of the research can be summarised as follows. Goals set for EA implementation evolve during the implementation project. There is also a notable variance of the goals among different stakeholders. The evolution of the goals and their variance among stakeholders do not seem to affect the perceived success of the implementation. It is fair to ask why the EA pilot was perceived as a success, when all the participants felt the adoption of the EA method was the most important result. Half of the participants also reported some business results, but not one of those was among the original goals mentioned before the pilot. Does this mean that Enterprise Architecture does not provide business results at all? Or are business outcomes felt to be such a natural result of the EA implementation that only the method adoption was seen as worth mentioning?
4. Conclusions
Previous research on EA implementation has shown that clear goals set for the implementation are
one of the key success factors (Iyamu 2009; Martin et al. 2004; Miller 2003). Also communication
during the implementation has been found to be a very important factor (Gregor et al. 2007; Iyamu
2009; Kaisler et al. 2005; Richardson et al. 1990; Shupe et al. 2006; van der Raadt et al. 2009). The
research findings show that regardless of the different perceptions of goals among stakeholders, all
sub-groups felt the pilot was successful. Thus the research does not provide support to confirm the
link between clear goals and success. What it clearly indicates though is that communication plays a
key role in the implementation, which can be seen in the variance of goals mentioned during the pilot.
The author acknowledges the limitations of the research, especially in generalising the findings. The exploratory nature of the research strongly limits the applicability of the findings to the context in which it was conducted. However, the power of exploratory research lies in its ability to raise more questions than it can answer. The research therefore has more scientific than practical implications. It introduces some observations that are likely to be present also in a wider context, and these can provide interesting areas for further research. For instance, the effect of clear (or unclear) goals on the success of EA implementation requires more systematic research.
References
Cale, E. G., and Curley, K. F. "Measuring Implementation Outcome: Beyond Success and Failure," Information &
Management (13:5) 1987, pp 245-253.
CIO Council "A Practical Guide to Federal Enterprise Architecture," Available at
http://www.cio.gov/documents/bpeaguide.pdf) 2001.
Duncan, W. R. A Guide to the Project Management Body of Knowledge PMI Publishing Division, Sylva, North
Carolina, USA, 1996, p. 182.
Glaser, B. G., and Strauss, A. L. The Discovery Of Grounded Theory, Strategies for Qualitative Research. Aldine
Publishers, Chicago, 1967.
Gregor, S., Hart, D., and Martin, N. "Enterprise architectures: enablers of business strategy and IS/IT alignment
in government," Information Technology & People (20:2) 2007, pp 96-120.
Iyamu, T. "Strategic Approach for the Implementation of Enterprise Architecture: A Case Study of Two
Organizations in South Africa," ICISO. 11th International Conference on Informatics and Semiotics in
Organisations, Beijing University of Technology, Beijing, China, 2009, pp. 375-381.
Kaisler, H., Armour, F., and Valivullah, M. "Enterprise Architecting: Critical Problems," HICSS-38. Proceedings of
the 38th Annual Hawaii International Conference on System Sciences, Waikoloa, Hawaii, USA, 2005.
Martin, N., Gregor, S., and Hart, D. "Using a common architecture in Australian e-Government: The Case of
Smart Service Queensland," ICEC'04. Proceedings of the 6th International Conference on Electronic
Commerce, ACM, Delft, The Netherlands, 2004, pp. 516-525.
Miller, P. C. "Enterprise architecture implementation in a state government," University of Phoenix, Phoenix,
Arizona, United States, 2003, p. 132.
Pulkkinen, M. "Systemic Management of Architectural Decisions in Enterprise Architecture Planning: Four Dimensions and Three Abstraction Levels," HICSS'06. Proceedings of the 39th Annual Hawaii International Conference on System Sciences, 2006.
Richardson, G. L., Jackson, B. M., and Dickson, G. W. "A Principles-Based Enterprise Architecture: Lessons
from Texaco and Star Enterprise," MIS Quarterly (14:4) 1990, pp 385-403.
Riihimaa, J., and Syynimaa, N. "Enterprise Architecture Framework Adoption By Finnish Applied Universities'
Network," in: EUNIS 2011, Dublin, Ireland, 2011.
Ross, J. W., Weill, P., and Robertson, D. C. Enterprise architecture as strategy: Creating a foundation for
business execution Harvard Business School Press, Boston, Massachusetts, USA, 2006.
Shupe, C., and Behling, R. "Developing and Implementing a Strategy for Technology Deployment," Information
Management Journal (40:4) 2006, pp 52-57.
Syynimaa, N. "Taxonomy of purpose of Enterprise Architecture " in: 12th International Conference on Informatics
and Semiotics in Organisations, ICISO 2010, Reading, UK, 2010.
Syynimaa, N. "Measuring Enteprise Architecture Success: Tentative Model for Measuring EA Implementation
Success " in: IRIS 2012, Sigtuna, Sweden, 2012.
TOGAF TOGAF Version 9 Van Haren Publishing, 2009.
van der Raadt, B., Schouten, S., and van Vliet, H. "Stakeholder perception of enterprise architecture," ECSA
2008, Second European Conference on Software Architecture, Springer, Paphos, Cyprus, 2008, pp. 19-34.
van der Raadt, B., and van Vliet, H. "Assessing the Efficiency of the Enterprise Architecture Function," in:
Advances in Enterprise Engineering II. First NAF Academy Working Conference on Practice-Driven
Research on Enterprise Transformation, PRET 2009, held at CAiSE 2009, Amsterdam, The Netherlands,
June 11, 2009. Proceedings, E. Proper, F. Harmsen and J. L.G.Dietz (eds.), Springer Berlin Heidelberg,
Amsterdam, The Netherlands, 2009, pp. 63-83.
Zachman, J. A. "Enterprise architecture: The issue of the century," Database Programming and Design (10:3)
1997, pp 44-53.


Information Interaction in Terms of eCommerce
Kamila Tislerova
Technical University of Liberec, Liberec, Czech Republic
Kamila.tislerova@tul.cz

Abstract: This paper deals with some aspects of the information data resulting from the relationship between an
Internet enterprise (e-shop) and the customer. Analyzing and evaluating information which is provided and
required enables one to derive important recommendations for creating and managing a user-friendly and thus
probably successful Internet business. There are many recommendations in Customer Relationship Management
emphasizing the importance of maintaining post-purchase contact with customers; some businesses automatically accompany the purchase with a questionnaire concerning customer satisfaction. Are they sure customers welcome such contact? Could some information interaction be annoying for the customers? There
are three research questions to be answered: firstly, how is the after-purchase contact perceived by customers? The second question relates to who the customers providing information to the business actually are; their description covers both their demographics and their purchase behaviour. Thirdly, how significant is the information provided by eCommerce businesses actually for the customers? The research undertaken involved more than 500 respondents. The questionnaire consists of both qualitative and quantitative items, so that a large variety of results can be derived from it. The respondents are exclusively people with experience of shopping via eCommerce enterprises. In the survey, they provide information
on their preferences, purchase behaviour, customs, perceived risk-taking and willingness to share information
with the enterprises. In addition to a common dependence search, a factor analysis was also carried out in order
to better understand and formulate the customer approach towards information interaction. Because this paper
also provides results pertaining to the identification of some attributes of user-friendly eCommerce enterprises
(examined from the customers point of view), its results should contribute to the general discussion concerning
the usage of information in business-customer relations. Academics and researchers should be placed at an
advantage by virtue of some of the conclusions arising out of a large survey among customers who are
accustomed to executing purchases via eCommerce businesses. The specific findings of this paper should be
helpful for marketing practitioners in their marketing and communication mix creation. It should help to establish a
system of customer care in terms of information interchange which should be highly appreciated by customers
as well as prove highly beneficial for enterprises.

Keywords: information interaction; eCommerce; customer relationship management; information sharing
1. Introduction
Information interaction between business and customer is a crucial matter for effective businesses (Kotler, 2004). By sharing relevant information, the business is able to facilitate the consumer's purchase process and thus to realize a so-called co-creation of value. Value is created when a consumer is offered useful information and gains understanding, reassurance and/or hedonic fulfilment in the process (Grant et al., 2007). Value creation relies on an analysis of online consumer behaviour to determine which information sources and formats are most likely to meet consumers' needs at a given point in time (Teo et al., 2004). The approach follows the view of Payne et al. (2008): that a customer becomes a co-creator of value through the development of customer-supplier relationships based on information interaction and dialogue. Contributions given by customers to businesses can be measured and quantified by several existing methods (Tislerova, 2011), so that customers' cooperation should be considered a profitable asset. Thus, communication and information interaction become a crucial issue for business development.
2. Information interaction in society
Information and communication technologies (ICT) have changed our lives dramatically. What seemed impossible has become a crucial part of today's reality. Proper use of ICT can enhance the competitiveness not only of whole countries and regions, but also of companies and individuals. Among others, there are two important frameworks depicting the role of ICT in society. The first one was defined by the OECD and is called the Information Society Statistics Conceptual Model (OECD, 2011). It tries to describe the complexity of the information society and includes all major entities and relations somehow related to ICT. Electronic trade is included in a group labeled ICT Demand (Users and Use). This framework is also related to the Information Society Impacts Measurement Model (OECD, 2008). It indicates areas of interest that are divided into two groups: those 'easier' to measure (economic, positive, short-term, direct, and narrow impacts) and those 'harder' to measure (social, negative, macro, long-term, and unintended impacts). The United Nations formulated an ICT Impact Relationships Model
that comprises relations belonging to the three main fields: economy, society and environment
(UNCTAD, 2011).
2.1 Information interaction in terms of eCommerce
Exploring information interaction in terms of eCommerce differs greatly from traditional information interaction. E-business usually implies a rethinking of business models, of the network, and of the system infrastructure. Therefore, only businesses with access to significant e-business competency in information treatment can expect to succeed in their efforts (Daniel and Wilson, 2003).
Research on eCommerce and business-customer relationship indicates that both fields have
significant impacts on economic growth and wealth creation (Acs et al., 2004).

ECommerce is growing at an incredible pace. The accessibility of the Internet makes electronic commerce a realistic possibility for economic growth and business development. As the amount of business transacted over the web increases, the value of goods, services, and information exchanged over the Internet seems to double or triple each year around the globe (Kathuria and Joshi, 2007). The Internet has truly transformed the way consumers shop in a multitude of categories (e.g. travel, books, videos) and the way most retailers do business with their suppliers as well as their customers. More and more successful retailers are leveraging the power of the Internet by improving the effectiveness of information interaction between businesses and their customers, e.g. providing online customization and publishing online flyers and promotions. On the other hand, consumers use the Internet to shop more effectively (Puccinelli et al., 2009).
2.2 Importance of customer feedback
It is vital for any business to have access to customer feedback in order to determine the overall experience of those who are using its particular goods or services. Customer feedback enables a company to learn what customers thought of their experience in dealing with the company (Moon, 2004). Equipped with this valuable information, companies can either change their policies to better suit the customer experience, or build upon the strengths that are outlined in the feedback they receive. The feedback can be collected by means of a customer survey, which might be expensive, or by the most widely accepted and most effective form of collecting feedback: regular e-feedback accompanying each purchase. However, this contact might also be annoying or negatively perceived by customers.

The information flow from company to customer has also often been excessive. Marketers try to deliver as much information as possible together with the product or service. Unfortunately, this is not the most efficient way of communicating with customers. Some information is genuinely valuable for customers and required by them, but a huge volume of the resulting information explosion cannot be absorbed. A hierarchical approach is therefore highly desirable: the type of customer, the characteristics of the goods or services bought and, in addition, many other aspects should be taken into account.

In terms of eCommerce there is an easy way to acquire the customer feedback that businesses
require without overloading the customers or bombarding them with frequent feedback
questionnaires. Customers, for their part, have to confront the information explosion produced by businesses; a proper balance should be established in order to achieve an effective mode of cooperation that benefits both sides.
2.3 ICT in transition economies
Sustained economic development requires a well-developed infrastructure and a substantial number
of high-value-added industries. Thus in developing economies, ICT is often regarded as an enabler
and catalyst for successfully shifting away from economic dependency on low-value-added industry
sectors, such as agriculture and raw materials extraction. ICT as a communication- and collaboration-enabling tool may be profitable in developed countries, but in developing countries it provides some additional advantages, aside from saving cost. As much of the population in rural areas of developing countries is unskilled in the
use of modern technology, sharing such technology provides not only access to the technology itself,
but also user support for the technology. The strategic objectives for ICT investments in developing
economies are also often different from objectives in developed countries. In emerging economies
ICT is used to support the development of new products and services for a rapidly growing customer
base. In contrast, in developed, mature economies where the economic growth is rather modest, ICT
is primarily used for improvement of existing products and services and to manage the existing
customers more efficiently. Overall, though the use of ICT in developed and in transition countries
differs substantially, ICT plays a critical role in business growth in developing economies (Roztocki and Weistroffer, 2009).
2.4 ECommerce in the Czech Republic
Sales figures for 2011 are based on estimates. There was a steady rise in the importance of electronic trade, with only a slight reduction in the sales growth rate in 2011. The approximate share of B2C eCommerce in retail trade in 2010 was around 4%. The share of individuals purchasing online is higher in the EU-27 than in the Czech Republic: 43% compared to only 30% (Eurostat, 2011). Shopping online is nevertheless becoming more and more popular, and more than 95% of Czech internet users buy Christmas gifts online. Overall, the total number of e-shops operating on the Czech market is between approximately 10,000 and 23,000, but only 500 to 1,000 of them operate as full-time e-shops, i.e. companies focusing only on eCommerce. New ones are being created every day, but an apparent pressure to consolidate is noticeable. Consumers tend to change their behaviour (Dedkova, 2010), and the retail industry in particular is developing and internationalizing very rapidly (Simova, 2010). Furthermore, every enterprise should search for more effective ways of doing business, so that eCommerce is becoming a natural part of the business activity of small and medium-sized enterprises (Rydvalova and Marsikova, 2011).
3. Research questions and methodology
In order to contribute to a more efficient method of information data exchange, three research questions were laid out: firstly, how the after-purchase contact is perceived by customers. The second question relates to who the customers providing information to the business actually are. Thirdly, how significant the information provided by eCommerce businesses actually is for the customers.

More than 500 Internet shoppers were examined in this survey, which was carried out in the Czech Republic. Respondents provided information on their preferences, purchase behaviour, habits, perceived risk-taking and willingness to share information with the enterprises. In total there were 58 questions, both qualitative and quantitative. The structure of the respondents was adjusted according to the estimated structure of the population shopping via the Internet; adjustments and corrections were made for age structure (younger), level of education (higher) and income structure (higher). The other indicators were kept in line with the regular population.

After two focus group sessions held to determine precisely the range of potential answers, the questionnaire was drawn up, placed on web pages and distributed in the form of a link to these web pages. Missing answers were also recorded in order to keep the trustworthiness of the data as high as possible.

The data was processed by statistical methods such as descriptive statistics, correlations, crosstabs
and factor analysis. Also tests of normality and other relevant tests were applied in accordance with
data processing principles (Zambochova, 2008).
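
As a brief illustration of this kind of processing, the following Python sketch applies a normality test, a crosstab and a chi-square test to survey data; the file name and column names are hypothetical, not those of the actual questionnaire.

import pandas as pd
from scipy import stats

# Hypothetical survey file and column names (illustration only)
responses = pd.read_csv("survey.csv")

# Test of normality for a metric variable, e.g. years of shopping online
w, p = stats.shapiro(responses["years_online_shopping"].dropna())
print(f"Shapiro-Wilk: W={w:.3f}, p={p:.4f}")

# Crosstab of gender against perceived annoyance of after-purchase contact
table = pd.crosstab(responses["gender"], responses["feedback_not_annoying"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square={chi2:.2f}, p={p:.4f}, dof={dof}")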
4. Findings concerning after-purchase contact perception
Participants in the survey expressed their perception of businesses' requests for feedback, usually made in the form of an e-questionnaire.

The question was formulated as a scale measuring the strength of agreement with the statement: "After-purchase contact is not annoying for me." The distribution of answers is shown in Table 1.

Regarding the gender distribution of perceived annoyance resulting from the request for feedback: in total, the situation seems well balanced (males 46.6% and females 53.4%). However, a substantial difference occurs in the degree of tolerance of this after-purchase contact. Males predominate among those who strongly agree that the contact is not annoying (69.4%), with females in the minority, whereas a neutral perception predominates among females (61.6%).


Figure 1: Distribution of respondents according to perceived annoyance resulting from after-purchase questioning
Table 1: Structure of respondents according to their perception of the request for feedback

                               Frequency   Percent   Cumulative Percent
 strongly agree                    78        15.5          15.5
 likely to agree                  146        29.0          44.5
 depends on the situation         172        34.2          78.7
 likely to disagree                71        14.1          92.8
 strongly disagree                 36         7.2         100.0
 Total (valid)                    503       100.0

No significant differences were found when examining the dependency of the perception of after-purchase contact on educational level. The current working position, on the other hand, does influence the perception: the most tolerant groups are students of higher schools and ordinary employees (28.3% and 29.0% respectively). The least responsive category is formed by self-employed respondents (5.7%).

Is the positive (or neutral) perception of receiving requests for feedback dependent on how long the respondent has been making purchases via the Internet? Answers vary from 1 to 11 years; the mean is 4.69 years (with a relatively large standard deviation of 2.468). In all, half of the respondents accepting feedback (49.5%) had been making Internet purchases for at most four years.

Another question asked: when you receive a request for feedback (an e-questionnaire), what is your reaction? There were five options for the answer: always provide a response, usually provide a response, it depends on the situation, usually do not provide a response, and never provide a response. The distribution of answers is very similar to the histogram of answers examining the perception of the request for feedback (after-purchase contact).
5. Findings relating to some characteristics of respondents providing
information to businesses
For the purpose of this survey, two main categories of information transfer were created: so-called alerts providers and discussion contributors. The first group was identified on the basis of the question: "When noticing a problem or a cause of some discomfort to the customer, do you inform (alert) the business?" The second question asked: "Do you contribute to discussions and share your knowledge and experience?" A three-point scale was used for both questions (often, occasionally, never).

Figure 2: Share (in %) of respondents reacting in the form of alerts and discussion contributions
Less than half (46.3%) of the respondents often or occasionally share their experience, provide advice and contribute to discussions. Out of all respondents observed, a majority (67.6%) provide an immediate alert, often or occasionally, if something goes wrong.

The previous findings give no clear evidence as to whether the alert providers and the discussion contributors are the same customers; a correlation analysis was therefore carried out. The correlation proved to be significant.
Table 2: Correlation between providing alerts and sharing knowledge

                                                 Discussion        Alerts
                                                 contributors      providers
 Sharing knowledge and experience,
 contributing to discussions
 (answers "often" and "occasionally")
   Pearson Correlation                               1               .156**
   Sig. (2-tailed)                                                   .000
   Sum of Squares and Cross-products              143.571           28.338
   Covariance                                       .286              .056
 When discovering an inappropriateness,
 notification to the company (alert)
 (answers "often" and "occasionally")
   Pearson Correlation                              .156**            1
   Sig. (2-tailed)                                  .000
   Sum of Squares and Cross-products               28.338          230.604
   Covariance                                       .056              .459
 **. Correlation is significant at the 0.01 level (2-tailed).
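
The reported coefficient can be reproduced with standard tools; the following Python sketch is illustrative only, with toy 0/1 indicators ("often" or "occasionally" coded as 1, "never" as 0) standing in for the real survey columns.

from scipy.stats import pearsonr

# Toy 0/1 indicators (illustration only); the survey itself found r = .156, p < .01
alerts = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
discussions = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]

r, p = pearsonr(alerts, discussions)
print(f"r = {r:.3f}, p = {p:.4f}")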
Customers willing to provide information to businesses can also be characterized as follows:
most of them (81%) intend to increase their purchases within a year
they prefer (76%) to make their purchases in specialized e-shops (the other option was universal e-shops)
45% of them admit to using the internet connection at their workplace for connecting to internet shops
respondents in the age range 20-25 are particularly active, exceeding the average number of contributors in their age group
6. Findings on the significance of information provided to customers by
business
There was a set of 16 questions relating to the type of information provided by business, including the form of presentation. The importance of each piece of information was measured on a scale. Using factor analysis (factor reduction), the following factors were determined:
User-friendly way of information presentation (28%)
Presence of additional and explanatory information (16%)
Information from an independent source such as other customers' references (14%)
Information sorted by users, not by technical features of products (13%)
These four factors together explain 71% of customers' requirements concerning the information provided by businesses.
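
A hedged sketch of this kind of factor reduction in Python follows; the data matrix is randomly generated here as a stand-in for the real 503 x 16 response matrix, so the loadings are purely illustrative.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Stand-in for the respondents-by-items matrix (503 respondents, 16 items)
rng = np.random.default_rng(0)
X = rng.normal(size=(503, 16))

fa = FactorAnalysis(n_components=4, random_state=0).fit(X)
loadings = fa.components_   # 4 x 16 loading matrix
# Items loading strongly on the same factor are grouped and named,
# as with the four factors reported above.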

On the other hand, the least demanded types of information were the Frequently Asked Questions section and information concerning company history (and other self-presentational data). The highest deviations occurred for the question of how important the display of previous purchases is for the customer.
7. Conclusions
Even though after-purchase contact has become very frequent and might seem annoying to customers, this survey demonstrates that in 45% of the cases observed, the reaction to this request for feedback is positive. About 35% are able to tolerate it and react according to the situation. Only about 20% of participants have a negative attitude (of this number, just 7% strongly reject this type of contact). The suggestion of this paper for companies is not to doubt whether to ask for feedback or not; the question is how to design the request for feedback so that it is as pleasant and comfortable for the customer as possible.

A relatively high percentage of actively responding customers was also found. It is a matter for Customer Relationship Management to reward these customers, to involve them in loyalty programmes and to provide advantages and incentives that promote this kind of activity. Customer potential in information interaction does exist to a significant degree.

Not only regular feedback, but also alerts and the sharing of knowledge and experience belong to important information interaction. A correlation between the groups providing alerts and contributing to discussions was shown to exist, and some of their characteristics were outlined.

Four main factors (groups) of information provided by businesses were derived, explaining more than 70% of customers' requirements. Successful businesses should present their information in a very accessible way and should accompany the main information with additional notices and advice that might be helpful for customers. Information demanding too much effort on the part of the customers (like the Frequently Asked Questions section) or exhibiting low value for the customers should be minimised if efficient information interaction is to be established.
References
Acs, Z. J., Arenius, P., Hay, M. and Minniti, M. (2004) Global Entrepreneurship Monitor: 2004 Executive Report, Babson College, MA and London Business School, UK.
Daniel, E.M. and Wilson, H.N. (2003) "The role of dynamic capabilities in e-business transformation", European Journal of Information Systems, Vol. 12, pp. 282-296.
Dědková, J. (2010) Analýza nákupního chování v česko-německé části Euroregionu Neisse-Nisa-Nysa [Analysis of purchasing behaviour in the Czech-German part of the Euroregion Neisse-Nisa-Nysa], 1st ed., Technická univerzita v Liberci, Liberec, ISBN 9788073725938.
Eurostat (2011) Internet purchases by individuals, [online], Retrieved April 15, 2012, from http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=isoc_ec_ibuy&lang=en
Grant, R., Clarke, R.J. and Kyriazis, E. (2007) "A review of factors affecting online consumer search behaviour from an information value perspective", Journal of Marketing Management, Vol. 23, No. 5/6, pp. 519-533.
Kathuria, R. and Joshi, M.P. (2007) "Environmental influences on corporate entrepreneurship: Executive perspectives on the internet", International Entrepreneurship and Management Journal, Vol. 3, No. 2, pp. 127-144.
Kotler, P. (2004) Marketing Management, 11th edition, Prentice Hall, New Jersey.
Moon, B.J. (2004) "Consumer adoption of the internet as an information search and product purchase channel: some research hypotheses", International Journal of Internet Marketing and Advertising, Vol. 1, No. 1, pp. 104-118.
OECD (2011) Guide to Measuring the Information Society, doi: 10.1787/9789264113541-en
OECD (2008) Measuring the Impacts of ICT Using Official Statistics, doi: 10.1787/230662252525
Payne, A.F., Storbacka, K. and Frow, P. (2008) "Managing the co-creation of value", Journal of the Academy of Marketing Science, Vol. 36, No. 4, pp. 83-96.
Puccinelli, N.M. et al. (2009) "Customer Experience Management in Retailing: Understanding the Buying Process", Journal of Retailing, Vol. 85, No. 1, pp. 15-30.
Roztocki, N. and Weistroffer, H.R. (2009) "Research Trends in Information and Communications Technology in Developing, Emerging and Transition Economies", Collegium of Economic Analysis, Vol. 20, pp. 113-127.
Rydvalová, P. and Maršíková, K. (2011) Small and Medium Enterprises in the Czech Republic, 1st ed., VÚTS, a.s., Liberec.
Šimová, J. (2010) "Internationalization in the Process of the Czech Retail Development", E+M Ekonomie a Management, Vol. 13, No. 2, pp. 78-91.
Teo, T.S.H., Wang, P. and Leong, H.C. (2004) "Understanding online shopping behavior using a transaction cost economics approach", International Journal of Internet Marketing and Advertising, Vol. 1, No. 1, pp. 62-84.
Tišlerová, K. (2011) "Customers' Profitability: Methodological Approaches", Liberec Economic Forum, pp. 497-506.
UNCTAD (2011) Measuring the Impacts of Information and Communication Technology for Development, [online], Retrieved April 2, 2012, from http://www.unctad.org/en/docs/dtlstict2011d1_en.pdf
Zambochová, M. (2008) "Data Mining Methods with Trees", E+M Ekonomie a Management, Vol. 9, No. 1, pp. 126-132.
Designing High Quality ICT for Altered Environmental
Conditions
Daryoush Daniel Vaziri, Dirk Schreiber and Andreas Gadatsch
Bonn-Rhine-Sieg University of Applied Science, Sankt Augustin, Germany
Daryoush.Vaziri@h-brs.de
Dirk.Schreiber@h-brs.de
Andreas.Gadatsch@h-brs.de

Abstract: This article concerns the design and development of Information and Communication Technology, in particular computer systems, with regard to the demographic transition that will influence user capabilities. It is questionable whether currently applied computer systems are able to meet the requirements of altered user groups with diversified capabilities. Such an enquiry is necessary given current forecasts, which suggest that the average age of employees in enterprises will increase significantly within the next 50-60 years, while the percentage of computer-aided business tasks operated by human individuals rises from year to year. This development will have specific consequences for enterprises regarding the design and application of computer systems. If computer systems are not adapted to altered user requirements, efficient and productive utilisation could be negatively affected. These consequences constitute the motivation to extend traditional design methodologies and thereby ensure the application of computer systems that are usable independently of user capabilities. In theory as well as in practice, several design and development concepts are described and applied. However, in most cases these concepts are considered as solitary, independent solutions. Generally, theories contrast usability and accessibility as two different concepts. While the first provides possibilities for specific user groups to accomplish tasks efficiently, effectively and satisfactorily, the latter provides solutions taking into consideration people with a wide range of capabilities, such as disabled people or people with an enduring health problem. Both concepts are quite extensive. Therefore developers tend to decide between them, which always leads to failures. This article seeks to provide a universal design and development approach for computer systems by combining these individually considered concepts into one common approach. This approach will not distinguish between user groups but will instead provide procedures and solutions for designing computer systems that consider all relevant user capabilities. The results of this article provide a theoretical approach for design and development cycles. Enterprises will be sensitised to the identification of relevant user requirements and the design of human-centred computer systems.

Keywords: universal design, usability, accessibility, information and communication technology, computer
system, demographic transition
1. Introduction
The effective and productive application of computer systems is highly dependent on the user's capabilities. It is therefore crucial to analyse users' behaviour when they interact with computer systems. Figure 1 defines the authors' understanding of computer systems in the context of this article.

Figure 1: Computer system (hardware, software and peripheral devices, shaped by user requirements and diverse user capabilities, aiming at efficiency and productivity)
However, in many cases the requirement analysis concentrates on specific stakeholders that are
currently employed or involved with the system and as such does not consider requirements of
potential users with divergent capabilities. Such a combination of system functionalities and new user
capabilities will eventually result in a mismatch that might reduce efficiency and productivity.
2. Background
Figure 2 schematically visualises the mismatch of user requirements and system functionalities.

Figure 2: Coherence of user capabilities and system lifecycle (the growing diversity of user capabilities over the computer system lifecycle produces a mismatch between user requirements and system functionalities)
The trigger for this development is the demographic transition of industrialised nations. In most
industrialised countries the population declines and grows old (Lutz et al, 2011). Figure 3 provides an
overview of the latest population data and estimations up to the year 2050 for the nations France,
Germany and United States of America (United Nations, 2010).

Figure 3: Demographic transition of industrialised countries, 2010 to 2050 (share of population, in per cent; data: United Nations, 2010)

                        Germany          USA           France
                      2010   2050    2010   2050    2010   2050
 % aged 15-24         11.2    9.9    14.0   12.5    12.4   11.5
 % aged 60 or over    26.0   37.5    18.4   26.6    23.0   30.5
 % aged 65 or over    20.4   30.9    13.1   21.2    16.8   24.9
While the percentage of people aged 15-24 declines or stagnates, the percentage of people aged 60-65 or over increases significantly. As fewer young people join the labour market, enterprises need to compensate by employing older personnel for the required human resources. In addition, the intergenerational contract demands that employees retire at a later date (Sanderson et al, 2010), as life expectancy rates increase (Leon, 2011). The authors identified three consequences which will affect the development of computer systems.

Consequence 1: The age distribution in enterprises will rise; therefore the capabilities of older user groups might differ significantly from those of younger user groups. Susceptibility to disorders or injuries related to computer work tends to be higher among older employees than among younger ones. A survey from the year 2001 examined upper extremity disorders in 485 people. The mean age of that group was 38.5 years; seventy per cent were computer users. Significant findings of that survey included postural misalignment with protracted shoulders (78%), head-forward position (71%), neurogenic thoracic outlet syndrome (70%) and many more (Pascarelli et al, 2001).
Consequence 2: Enterprises will face a lack of young qualified personnel that cannot be fully compensated for by older employees. To sustain their ability to compete on the global market, enterprises need a certain number of young employees who have just completed academic and/or vocational training and bring modern, unprejudiced and open minds. Enterprises thereby benefit from radical, innovative ideas, and therefore need to look out for additional sources of human capital. One source could be found in people with disabilities or people with an enduring health problem. Europe is inhabited by approx. 502 million people (Marcu, 2011). In 2010, approx. 67 per cent (336.4 million) of Europe's inhabitants were declared working-age population (European Commission Eurostat, 2011). About 45 million people of the declared working-age population either had a disability or an enduring health problem (European Commission Eurostat, 2003). Worldwide, the number of disabled people aged 15 and older is estimated at 720 million (World Health Organization, 2011). However, current computer systems do not meet the requirements of this user group.

Consequence 3: Susceptibility to mental disorders induced by computer-related stress factors increases. In their latest report, Wittchen et al. found that almost 165 million Europeans suffer from brain disorders like depression, anxiety, insomnia or dementia every year (Wittchen et al, 2010). This is an increase of about 100 million people compared to a similar major European study of brain disorders conducted in the year 2005 (Walker, 2011).

Current HCI research does not sufficiently deal with these developments. In the professional context, most human-computer interactions are still executed with keyboard or mouse devices. Graphical User Interfaces (GUIs) are becoming more complex and require the user to apply more cognitive resources. With regard to the demographic transition, the authors identified the need to provide a universal design approach for computer systems that takes environmental alterations into account.
3. Universal approach to the design of computer systems
The following paragraphs will introduce the reader to a universal design approach. Consideration of
recommendations and thoughts given in these paragraphs will improve the long-term usability of
computer systems.
3.1 Human-centred design of interactions
When it comes to defining interaction design, three major schools of thought can be distinguished
(Saffer, 2010):
A technology centred view
A behaviourist view
A social Interaction Design view
Human-centred design can be classified under the behaviourist view. The behaviour of people using products is the central focus of interest. Rather than being centred on the restrictions of end-user capabilities, the majority of professional system development is constrained to specific software packages or technologies and the capabilities associated with them (Poslad, 2009; Kolko, 2011). In most cases, this leads to company cultures that are strongly computing-centred (Kolko, 2011). However, research studies have shown that comprehension and consideration of human behaviour and human capabilities, for the purposes of system development, result in more usable and accessible products (Kolko, 2011; Wickens et al, 2000). A cyclic process of perceiving, thinking, recognising, acting and evaluating actions can be observed when users interact with computer systems (Monk, 1998). Figure 4 shows how the main elements of cognition interact with one another and within the wider context of cognitive processing (Persad et al, 2007).
Step 1: The user gets in contact with external stimuli, for example a GUI.
Step 2: The perception component analyses and processes the incoming sensory information.
Step 3: The working memory retrieves long-term memories and decides to act on the selected
stimuli. The attention resources are directed to focus on the most informative parts of the stimuli
and initiate actions and reasoning (Mieczakowski et al, 2010).
Step 4: For matching the selected stimuli with objects of similar physical properties and functional
attributes and for grouping them into categories in memory, working memory frequently has to
refer to long-term memory (Miller, 1956).
316

Daryoush Daniel Vaziri, Dirk Schreiber and Andreas Gadatsch
Step 5: If the user has experienced the stimuli before and is familiar with the GUI or computer
system, information about them will probably affect the speed and efficiency of cognitive
processing.
Step 6: The user executes an action based upon the previous cognitive processing.

Figure 4: Simplified model of cognition processing (components: environment and product; input via low-level senses; perception; working memory with executive function and attentional resources; similarity matching; long-term memory; output as action; numbered steps 1-6 as described above)
Studies show that ageing and certain impairments or disabilities can have significant effects on the elements of cognition depicted in figure 4 (Rabbitt, 1993; Freudenthal, 1999).
3.1.1 Design of graphical user interfaces
The successful design of GUIs follows principles of usability engineering. Usability is defined as:
The extent to which a product can be used by specified users to achieve specified goals
with effectiveness, efficiency, and satisfaction in a specified context of use (ISO, 1998).
To provide an overview of the huge extent of usability engineering, figure 5 illustrates the major usability categories (Bailey et al, 2003).

Figure 5: Usability engineering (major categories: Page Layout; Homepage; Hardware and Software; Writing Web Content; Scrolling and Paging; Navigation; Headings, Titles and Labels; Text Appearance; Screen-based Controls; Lists; Graphics, Images and Multimedia; Content Organisation; Search; Accessibility)
Usability guidelines often consider accessibility as a detached usability category that exists alongside
the extensive amount of remaining categories. Developers are partially overwhelmed by the mass of
usability principles, so that they tend to avoid the application of accessibility principles. This
perspective leaves the impression that accessibility only provides benefits for specific user minorities.
In fact, accessibility can be considered as a distinct engineering discipline that, upon application,
provides significant benefits for every system user. The definition for accessibility given by the ISO is
as follows.
The usability of a product, service, environment or facility by people with the widest
range of capabilities (ISO, 2008).
This definition implies that true usability can only be achieved by applying accessibility principles
within each usability category. The World Wide Web Consortium (W3C) already provides an extensive
guideline on Web content accessibility. The following paragraphs briefly explain each accessibility category.

Perceptibility: Perceptibility implies that content presented on a website or within an application is perceivable by every user, regardless of their capabilities (Pühretmair et al, 2005). To fulfil this principle, developers can integrate additional functionalities like scalability or the two-channel principle. The latter is used to provide multiple opportunities for the user to succeed in a specific task (Wegge et al, 2007). Furthermore, the content can be enriched by alternative tags, which furnish non-textual content with descriptions. Another crucial, often underestimated criterion is the colour contrast of content. Depending on the combination of colours, viewing content on the computer screen can be more than exhausting for users. Additionally, visually impaired users are not able to perceive specific colour combinations; therefore the conveyance of information should not rely solely on colour changes. There are approximately 200 million people afflicted with dyschromatopsia worldwide, and such people are not able to differentiate between red and green content. As an example, imagine high-level executives afflicted with dyschromatopsia reviewing operating numbers that are represented in the usual traffic-light system.
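
Colour contrast, at least, can be checked automatically. The following Python sketch implements the WCAG 2.0 relative-luminance and contrast-ratio formulas; the colour values are illustrative.

def channel(c):
    # Linearise one sRGB channel (0-255) per WCAG 2.0
    c /= 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(r, g, b):
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg, bg):
    l1, l2 = sorted((luminance(*fg), luminance(*bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# WCAG 2.0 level AA requires at least 4.5:1 for normal text;
# red on green scores only about 1.3:1 and fails.
print(round(contrast_ratio((255, 0, 0), (0, 128, 0)), 2))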

Understandability: This criterion intends to make text content readable and understandable, as well as to make application processes appear predictable and operable. Developers can, for example, programmatically determine the default language of the application or label unusual words and abbreviations. A screen reader, used by many disabled people to read the content presented on a website or within an application, can only help the user when text or text parts are labelled correctly. To render application processes predictable to the user, it is an advantage to highlight any focused components. Furthermore, navigational mechanisms that are repeated on multiple web pages or application screens should be consistent where possible, and components that share the same functionality should use a similar identifier, like a symbol or name (W3C, 2008).

Operability: To make applications operable for people with disabilities or restrictions, all functionalities should be triggerable through a keyboard. Some people afflicted by physical movement disabilities are not able to use a computer mouse, and the only way for them to navigate through the content of a website or application is to use the tabulator key of the keyboard. Focus order and focus visibility are important and have to be considered. Hence, developers should avoid keyboard traps, which would kill operability at a stroke. Generally, the user should be provided with enough time to use, read and process the content. Seizure disorders also have to be taken into account when developing a website or application, so rapidly flashing content should be avoided (W3C, 2008).

Technical robustness and technical openness: Website or application content must be robust enough that a variety of assisting technologies can interpret the content reliably. Assisting technologies help users with restricted capabilities to perceive, understand and operate the content. Screen readers and screen magnifiers are examples of assisting technologies; compatibility with current and future technologies has to be ensured. In addition, fulfilling this principle prohibits redundant data and multiple versions. Special features of robust content are listed below (W3C, 2008); a minimal automated check is sketched after the list:
Elements have complete start and end tags
Elements are nested according to their specifications
Elements do not contain duplicate attributes
Any IDs are unique (specific exceptions allowed)
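
As a hedged illustration of how two of these features (unique IDs and properly closed elements) could be checked, the following Python sketch uses only the standard library; it deliberately ignores void elements such as <br> for brevity.

from html.parser import HTMLParser

class RobustnessChecker(HTMLParser):
    """Flags duplicate IDs and mismatched end tags (void elements ignored)."""
    def __init__(self):
        super().__init__()
        self.ids, self.stack, self.problems = set(), [], []

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        for name, value in attrs:
            if name == "id":
                if value in self.ids:
                    self.problems.append(f"duplicate id: {value}")
                self.ids.add(value)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.problems.append(f"unexpected end tag: {tag}")

checker = RobustnessChecker()
checker.feed('<div id="a"><span id="a">text</span></div>')
print(checker.problems)  # ['duplicate id: a']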
The four-level structure elaborated by the W3C provides a robust categorisation to integrate usability
and accessibility principles. Therefore, the authors recommend the complete and equivalent
transformation of accessibility and usability principles into a corporate framework for the human-
centred design of GUIs, as depicted in figure 6. Within each category the developers will find
accepted criteria to ensure the accessibility and usability of GUIs.

Figure 6: Universal design approach to the development of GUIs. The figure frames the usability categories of figure 5 (Page Layout; Homepage; Hardware and Software; Writing Web Content; Scrolling and Paging; Navigation; Headings, Titles and Labels; Text Appearance; Screen-based Controls; Lists; Graphics, Images and Multimedia; Content Organisation; Search) within the accessibility principles Perceptibility, Understandability, Operability, and Technical Robustness and Openness, targeting effectiveness, efficiency and satisfaction.
3.1.2 Design of computer control systems
As mentioned in section 1, studies have revealed that current computer control systems are mainly responsible for several injuries induced by computer-aided work; future environmental changes will therefore call for innovative control systems that take alterations of human capabilities into account.

The last decade has already introduced an innovative computer control system. With the release of Apple's iPhone, the first touchscreen control system became famous and affordable for the broad majority, allowing visually impaired people in particular to benefit from this control system. In combination with the integrated screen reader VoiceOver, these minorities were able to experience a new standard of living. Automatic speech recognition (ASR) is another promising computer control system, though less widely disseminated. ASR systems provide new opportunities for the human-centred design of interactions, especially in the context of the environmental changes described in section 1. Theory defines specific requirements that must be fulfilled by an ASR system in order for it to work properly (Marine et al, 2011). The system needs to support a framework managing the interaction between human and machine, including the processing of inputs and outputs, that enables an individualized interaction that is most natural to the user and fits the user's skills and physical needs. Rule-based systems are able to realize this requirement, as they describe the behaviour of the user in a way that the system can understand and store. Furthermore, the user can edit and parameterize the described behaviour to fit it to his needs (Marine et al, 2011). As the intended system behaviour depends on the current system state and the context of the user, the system needs to permit saving, reading and changing of the current context (Marine et al, 2011). In practice, ASR systems are not widely disseminated among enterprises, presumably because the accuracy rate of these systems is not 100 per cent (Freitas, 2009). The average accuracy rate lies
between 90 and 98 per cent, depending on software and testing environment (Karpov et al, 2008; Blanc et al, 2009; Yuschik, 2010). This means that out of a hundred words spoken, 2 to 10 words would not be recognized correctly by the ASR system; depending on the context of use, this failure rate may be unacceptable. The following example, shown in figure 7, illustrates the authors' assumption of an ASR system applied within a SAP GUI.


Figure 7: SAP GUI
A major problem in using ASR technology in SAP will be the accurate execution of specific functions. If the user wants to select the tab "vendor" via speech command, the system will be confronted with two identically named objects. The failure probability would be 50 per cent in this case. The more identically or similarly named objects a GUI comprises, the higher the failure probability will be. To improve the voice recognition accuracy rate, the ASR system functionalities could be extended by human-eye capabilities. The human eye is able to precisely focus on a desired object on a GUI. Around a focused area, an acceptance radius could be defined. The user's speech command will be matched with the objects within this radius. This might significantly diminish the failure probability and allow accuracy rates between 99 and 100 per cent. Available technologies like eye tracking systems can be applied to identify the human eye and to determine the eye focus. Figure 8 illustrates the SAP GUI from figure 7 with the conceptual idea of combining ASR and eye tracking technology. In the demonstrated SAP GUI, the reader also finds objects and functions that are either abbreviations or icons. These objects require an alternative tag, as proposed in section 3.1.1, to be executable by speech commands. If the user focuses on such an object, the defined alternative tag should appear, so the user is able to execute an accurate speech command.
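
A minimal Python sketch of this acceptance-radius idea follows; the object labels, coordinates and radius value are illustrative assumptions, not taken from the SAP GUI.

from math import hypot

gui_objects = [                 # (label, x, y) screen coordinates
    ("vendor", 120, 300),       # first tab labelled "vendor"
    ("vendor", 480, 300),       # second, identically named object
    ("purchase order", 120, 60),
]

def resolve(command, gaze, radius=80):
    """Match a speech command only against objects near the gaze point."""
    gx, gy = gaze
    near = [o for o in gui_objects if hypot(o[1] - gx, o[2] - gy) <= radius]
    return [o for o in near if o[0] == command]

# The gaze point disambiguates the two "vendor" objects:
print(resolve("vendor", gaze=(130, 310)))  # only the left-hand tab matches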
3.2 Quantifying user experiences
The adaptation of legacy systems in accordance with design approaches as introduced in sections
3.1.1 and 3.1.2 will burden enterprises with additional expenditures and risks. Responsible actors
need to be convinced of benefits that will arise with the implementation of computer systems following
a human-centred design philosophy, as mentioned in this article. To convince economic operators, quantifiable and monetary key performance indicators are indispensable. Clarity must exist on how the benefits of human-centred computer systems will exceed implementation expenditures. A well-known methodology to quantify human-computer interaction is usability testing. When referring to usability testing, the authors align with the definition of Carol M. Barnum:

Figure 8: SAP navigation with ASR and eye tracking (showing the acceptance radius for a speech command around the gaze point)
Usability testing: The activity that focuses on observing users working with a product, performing
tasks that are real and meaningful to them (Barnum, 2011).

The proposed approach to planning the usability test and quantifying user experiences is composed
of six steps.

Figure 9: Usability testing approach (1. Select strategic objectives: goals, target groups; 2. Identify test users: user profiles; 3. Derive key metrics: key performance indicators; 4. Determine test instruments and tasks: operational plan; 5. Conduct usability testing; 6. Evaluate results: results, findings)
3.2.1 Select strategic objectives
Strategic objectives are necessary to define goals that should be achieved by the computer system.
They allow an evaluation of the results and findings gained during the usability test, and they are an indispensable artefact for identifying and quantifying the need for improvement. Possible strategic objectives are, for example, value creation of the system, long-term system stability and efficient support of business processes.

At this stage the determination of target groups is crucial as well. To receive valuable results, it is important to know the system's end users. End users could be paying customers or employees, for example. An incorrect determination of target groups will lead to an inferior selection of test users.
3.2.2 Identify test users
Based on the target groups determined in step one, test users need to be identified. In order to achieve meaningful results, the test users should ideally be the same stakeholders who were interviewed during the requirements analysis phase. If that is not possible, the characteristics of the selected test users should match the characteristics of these stakeholders. It is necessary to create a user profile for each test user, to compare user characteristics and to make potential test-user-group classifications. Important data to elicit are age, profession, experience with the system, disabilities and impairments, satisfaction with the current system, etc. The higher the diversity of the test users' capabilities, the more valuable the test results that can be expected. For detailed information on how to identify test users and structure user profiles, the authors refer to Salvendy (2012) and Rubin et al. (2008).
3.2.3 Derive key metrics
Key metrics or key performance indicators (KPIs) represent the most important figures for the quantification of user experiences. They make test results tangible and comparable, and they allow the deduction of recommendations for the adaptation of the computer system. In practice there is a variety of different metrics for the purpose of usability testing. The authors, however, suggest focusing on a few key metrics to reduce complexity. In order to achieve the goals defined in step 1, the computer system must meet the users' requirements; key metrics should therefore be based on end-user requirements. Possible key metrics could be, for example, task time, error rate, user satisfaction, clicks per task, understandability of content, user stress level, cognitive load, degree of attention, etc.
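
Two of these metrics can be computed directly from per-task test logs, as the following Python sketch shows; the log entries are illustrative.

from statistics import mean

test_log = [  # (user, task, seconds, errors) - illustrative data only
    ("u1", "create order", 74.2, 1),
    ("u2", "create order", 51.0, 0),
    ("u3", "create order", 96.5, 3),
]

task_time = mean(t for _, _, t, _ in test_log)
error_rate = sum(e for *_, e in test_log) / len(test_log)
print(f"mean task time: {task_time:.1f} s, errors per task: {error_rate:.2f}")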
3.2.4 Determine test instruments and tasks
To measure the KPIs, corresponding instruments have to be applied during the usability test. Some KPIs can be measured by traditional observation methodologies; however, KPIs referring to the cognitive processing of the user require special equipment and know-how. To measure cognitive processing, electroencephalography (EEG) can be applied. These instruments are able to record electrical activity along the scalp. As output, the EEG generates a pattern of waves which represent brain activity. The analysis of these waves allows the identification of, for example, states of stress, fatigue or concentration (Sharma et al, 2010). The installation of eye tracking technology will help to measure the degree of attention. Eye tracking is a widely disseminated technology for usability testing. Results can be visualised in heat diagrams, for example, showing the sections of the GUI the user paid most attention to. The definition of real and meaningful tasks is an important activity in this step. The test tasks should be part of the user's normal behaviour: a paying customer should test order processes or community functionalities, whereas an employee should be confronted with test tasks that refer to his profession and knowledge base. After the test instruments are identified and the test tasks are defined, the testing environment must be prepared and the operational plan, including the size of test-user groups, responsibilities for conducting the test, the time for each usability test, etc., has to be created.
3.2.5 Conduct usability testing
In preparation for the test, it is beneficial to introduce the test users to special technologies like EEG or eye tracking. The test tasks should be explained in detail; this avoids confusion among the users during the usability test, which would distort the results. The test users should be as undisturbed as possible, meaning that there is enough room between test systems. This ensures that test users are not influenced by each other.
3.2.6 Evaluate results
The findings and results acquired from the usability test need to be analysed and evaluated in order to derive appropriate actions. EEG and eye tracking results in particular require special know-how from the analysts. EEG results can provide important information about the cognitive load of the user while working with the computer system. Figure 10 shows how EEG results can be interpreted (Sharma et al, 2010; Mulert et al, 2010; Hammond, 2006).

Figure 10: Interpretation of EEG results

 Feature         Delta                  Theta                   Alpha 1-3                Beta 1-4
 Frequency       1.5-4 Hz               4-8 Hz                  9-13 Hz                  14-30 Hz
 Occurrence      Deeper stages of       Stage of relaxation     Brain is not actively    Brain is actively engaged
                 sleep without dreams   and meditation          engaged in mental        in mental processes
                                                                processes
 Interpretation  User is sleeping       User is in a state of   User is calm and lucid,  User is completely awake
                                        deep relaxation; not    but not thinking and     and concentrated on a
                                        concentrating on a      therefore not            specific task
                                        specific task           concentrating on a
                                                                specific task

 (In the figure, the region around alpha 3 and beta 1 is marked as good cognitive load, beta 3-4 as stress, and the range below alpha 3 as inattentiveness.)
Oscillations in the alpha 3 and beta 1 ranges would represent a state of positive cognitive load, while oscillations above beta 1 would be an indicator of negative cognitive load. Oscillations in the beta 3 and beta 4 ranges would indicate that the user is stressed, overworked or that the given task is too difficult for him. Oscillations below alpha 3 would indicate that the user is inattentive. Waves in the range of alpha 1 and alpha 2 could be an indicator that the user's attention is distracted by an element of the GUI. To collect more accurate data, it is possible to combine EEG and eye tracking instruments.
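
A hedged Python sketch of this band interpretation follows; the band boundaries are taken from figure 10 (small gaps between the published bands are closed at 9 Hz and 14 Hz), and the function name is an assumption.

def interpret_band(freq_hz):
    """Map a dominant EEG frequency to the interpretation of figure 10."""
    if freq_hz < 1.5:
        return "below delta (possible measurement artefact)"
    if freq_hz < 4:
        return "delta: user is sleeping"
    if freq_hz < 9:
        return "theta: deep relaxation, not concentrating"
    if freq_hz < 14:
        return "alpha: calm but not engaged (possible inattentiveness)"
    return "beta: actively engaged (high beta may indicate stress)"

print(interpret_band(11.0))  # alpha range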
4. Conclusion
This article discussed the alterations of environmental conditions. The authors identified
consequences that might influence the efficiency and productivity of current computer systems and
proposed a universal design approach, which takes divergent user capabilities into account. The
application of human-centred design perspectives was highlighted as a critical success factor. In
section 3.1.1 the authors introduced the reader to a universal design approach for GUIs. Afterwards,
section 3.1.2 dealt with computer control systems that would be appropriate for a universal design
approach. A combination of ASR and eye tracking technology was introduced to the reader. Finally, the article closed with an approach to quantifying the benefits of human-centred computer systems.
References
Bailey, R. W.; Barnum, C.; Bosley, J.; Chaparro, B.; Dumas, J.; Ivory, M. Y.; John, B.; Miller-Jacobs, H.; Koyani,
S. J. (2003) Research-Based Web Design & Usability Guidelines, published by U.S. Government, available:
http://www.usability.gov/guidelines/guidelines_book.pdf.
Barnum, C. M. (2011) Usability Testing Essentials, published by Elsevier Inc., p. 13.
Blanc, I., Vento, C. (2009) Performing with Microsoft Office 2007: Introductory, USA, p. 13.
European Commission Eurostat (2003) Employment of Disabled People in Europe in 2002, ISBN 1024-4352,
catalogue number: KS-NK-03-026-EN-N, cited in: Buhalis, D.; Eichhorn, V.; Michopoulou, E. and Miller, G. (October 2005) Accessibility Market and Stakeholder Analysis - One Stop Shop for Accessible Tourism in
Europe, University of Surrey, UK, available:
http://www.accessibletourism.org/resources/ossate_market_analysis_public_final.pdf, p. 33.
European Commission Eurostat (2011) Population structure and ageing, available:
http://epp.eurostat.ec.europa.eu/statistics_explained/index.php/Population_structure_and_ageing.
Freitas, J., Calado, A., Barros, M. J., Dias, M. S. (2009) Spoken language interface for mobile devices, published
in: Vetulani, Z., Uszkoreit, H. (2009) Human language technology-challenges of the information society,
Third language and technology conference, LTD 2007, Poland, p. 34.
Freudenthal, A. (1999) The Design of Home Appliances for Young and Old Consumers. Series Ageing and
Ergonomics, part 2, PhD Thesis. Delft University Press, The Netherlands ISBN 90-407-1755-9.
Hammond, D. C. (2006) What is Neurofeedback?, published in: International society for Neurofeedback and
Research, pp. 1-11.
ISO, International Organization for Standardization. (1998) ISO 9241-11, Ergonomic requirements for office work
with visual display terminals (VDTs) - Part 11: Guidance on usability.
ISO, International Organization for Standardization. (2008) ISO 9241-171, Ergonomics of human-system
interaction -- Part 171: Guidance on software accessibility.
Karpov, A., Carbini, S., Ronzhin, A., Viallet, J. E. (2008) Two similar different speech and gestures multimodal
interfaces, published in: Tzovaras, D. (2008) Multimodal user interfaces-from signals to interactions, Berlin
Heidelberg, p. 169.
Kolko, J. (2011) Thoughts on Interaction Design, published by Elsevier Inc., pp. 13-24.
Leon, D. A. March (2011) Trends in European life expectancy: A salutary view, published in: International journal
of epidemiology 2011, pp. 1-7.
Lutz, W. et al. (2011) Demographic challenges for sustainable Development, published in: International Institute for Applied Systems Analysis (IIASA), September 30 - October 1, 2011, available: http://www.iiasa.ac.at/Research/POP/Laxenburg%20Declaration%20on%20Population%20and%20Development.html.
Marcu, M. (2011) Population grows in twenty EU Member States - Population change in Europe in 2010: first
results, published in: Eurostat - statistics in focus, Vol. 38, available:
http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-SF-11-038/EN/KS-SF-11-038-EN.PDF, pp. 1-4.
Marine, A., Stocklöw, C., Braun, A., Limberger, C., Hofmann, C., Kuijper, A. (2011) Interactive Personalization of
ambient assisted living environments, published in: Smith, M. J., Salvendy, G. (2011) Human Interface, Part
I, HCII, LNCS 6771, Berlin / Heidelberg, p. 573.
Mieczakowski, A.; Langdon, P. M.; Clarkson, P. J. (2010) Investigating Designers' Cognitive Representations for
Inclusive Interaction Between Products and Users, published in: Langdon, P. M.; Clarkson, P. J.; Robinson,
P. (2010) Designing inclusive interactions, published by Springer London, pp. 133-144.
Miller, G. A. (1956) The magical number seven, plus or minus two: some limits on our capacity for processing
information, published in: Psychological review, 63, pp. 81-97.
Monk, A. (1998) Cyclic interaction: a unitary approach to intention, action and the environment, published in:
Cognition, 68, pp. 95-110.
Mulert, C.; Lemieux, L. (2010) EEG-fMRI: Physiology, Technique and Applications, published by Springer, Berlin / London.
Pascarelli, E. F.; Hsu, Y. P. (2001) Understanding Work-Related Upper Extremity Disorders: Clinical Findings in
485 Computer Users, Musicians, and Others, published in: Journal of Occupational Rehabilitation, Vol. 11,
No. 1, 2001, pp. 1-21.
Persad, U.; Langdon, P.; Clarkson, P. J. (2007) Characterising user capabilities to support inclusive design
evaluation.Published in: Universal access in the information society.Special Issue on Designing Accessible
Technology, 6, pp. 119-135.
Poslad, S. (2009) Ubiquitous Computing-Smart devices, environments and interactions, published by John Wiley
& Sons Ltd.
Phretmair, F., Miesenberger, K. (2005) Making sense of accessibility in IT Design - usable accessibility vs.
accessible usability, published in: IEEE Computer society, Proceedings of the 16
th
international Workshop
on Database and Expert Systems Applications (DEXA'05), 1529-4188/05, p. 2.
Rabbitt, P. (1993) Does it all go together when it goes? The nineteenth Bartlett memorial lecture. The Quarterly
Journal of Experimental Psychology, 46A, pp. 385-434.
Rubin, J. B.; Rubin, J.; Chisnell, D. (2008) Handbook of usability testing - How to plan, design, and conduct
effective tests, 2
nd
Edition, published by Wiley publishing Inc.
Saffer, D. (2010) Designing for Interaction: Creating Innovative Applications and Devices, pp. 4-15.
Salvendy, G. (2012) Human Factors and Ergonomics, 4
th
Edition, published by John Wiley & Sons Inc., pp.
1276-1283.
Sanderson, W.; Scherbov, S. (2010) Remeasuring Aging, published in: Science, September 2010, vol. 329, No.
5997, pp. 1278-1288.
Sharma, J. K.; Singh, D.; Deepak, K. K.; Agrawal, D. P. (2010) Neuromarketing-A peep into customers minds,
published by PHI learning private limited, p. 126.
United Nations, population Division of the Department of Economic and Social Affairs of the United Nations
Secretariat (2010) world population prospects, the 2010 revision, available:
http://esa.un.org/unpd/wpp/unpp/Panel_profiles.htm.
W3C. December 11, (2008) Web Content Accessibility Guidelines (WCAG) 2.0, available:
http://www.w3.org/TR/2008/REC-WCAG20-20081211/.
Walker, C. (2011) Europe plagued with mental health problems, published by: Mental Healthy, available:
http://www.mentalhealthy.co.uk/news/841-europe-plagued-with-mental-health-problems.html.
Wegge, K. P., Zimmermann, D. (2007) Accessibility, Usability, Safety, Ergonomics, Concepts, Models and
Differences, published in: Stephanidis, C., Universal Access in HCI, Part I, HCII 2007, LNCS 4554, Berlin /
Heidelberg: Springer, p. 297.
Wickens, C. D.; Hollands, J. G. (2000) Engineering psychology and human performance, 3
rd
edn.Prentice Hall,
Upper Saddle River, NJ, US.
Wittchen, H. U., et al. (2010) The size and burden of mental disorders and other disorders of the brain in Europe
2010, ECNP/EBC REPORT 2011, published in: European Neuropsychopharmacology, (2011) vol. 21, pp.
655679.
World Health Organization. (2011) World report on disability, ISBN 978 92 4 068521 5, available:
http://whqlibdoc.who.int/publications/2011/9789240685215_eng.pdf, p. 27.
Yuschik, M. (2010) Leveraging multimodality to improve call center productivity, published in: Neustein, A. 2010,
Advances in speech recognition-Mobile environments, call centers and clinics, New York, p. 143.

324
An Analysis of the Problems Linked to Economic
Evaluation of Management Support Information Systems in
Poland on the Example of ERP/CRM Class Applications -
Problem Analysis
Bartosz Wachnik
Warsaw University of Technology, Faculty of Production Engineering, Institute
of Production Systems Organisation, Warsaw
bartek@wachnik.eu

Abstract: Research shows that the expenditure of international companies on IS has been growing steadily. There is a common belief that IS development has a direct or indirect impact on the economic effectiveness of a company. ERP and CRM class systems constitute one of the most important groups of management support information systems in service and manufacturing companies. Much research aims at developing models that identify the correlation between expenditure on ERP and CRM systems and the economic benefits to a company. Nevertheless, the results of this research are inconclusive, and there is a lack of consensus between academics and practitioners. The subject matter addressed below is one of the priorities for the further development of business informatics. The author presents the results of research on the evaluation of the economic effectiveness of IS investments in ERP and CRM system implementations in Poland. The research will help to better understand the logic predominant amongst Polish entrepreneurs and managers in realising IS investments and in the economic evaluation of completed investments in ERP and CRM systems.

Keywords: economic evaluation, effectiveness, IS investments, MIS, ERP, CRM
1. Introduction
ERP class systems constitute a group of integrated computer systems supporting complex enterprise management. They originate from an evolution of management support systems towards ever more advanced solutions. Modern ERP systems are multi-module applications containing the wide range of functionalities required by the vast majority of commercial and/or manufacturing companies.

CRM systems are treated as information systems supporting customer relationship management and the effective functioning of sales departments. There is a subtle difference between the philosophies of CRM and ERP system performance: while both are expected to increase the profits (and the value) of an enterprise, they pursue this goal under different assumptions.

The choice of ERP and CRM class systems for research stems from the fact that they are the most
popular and predominant types of management support information systems in commercial and
manufacturing companies. They usually constitute the main IT bloodstream of a company and their
implementation is amongst the most complicated organisational projects for companies.

In the early 1990s, it was observed that the use of IT alone does not necessarily result in the economic success of a company. Studies showed that higher expenditure on IT ventures is not accompanied by visibly higher productivity of employees, enterprises, branches or even whole economic systems; an analysis of statistical data can often lead to the contrary conclusion, indicating a lack of correlation between the level of IT expenditure and measurable productivity indicators (Dudycz and Dyczkowski 2006). Therefore, an increasing number of both middle and top level managers ask themselves: how can economic effectiveness be measured in IS implementation projects, including ERP and CRM class systems? How can the business value of investment in modern IT be maximised?

The authors of research on the evaluation of economic effectiveness in IS investments lack a consensus on the adequacy of the methods used, their range and their qualities. Suggestions on the direction of further research depend, to a great extent, on the stand researchers take on the difference between IS investments and other types of investment (Cypryjański 2007).

From the perspective of the economic effectiveness evaluation of IS investments, the most important classification of costs and benefits is based on their direct link with the economic results of an organisation. Looking at the link between the use of information systems and an organisation's economic results, the subject literature divides them into direct and indirect (Niedźwiedziński 1989).

There is a group of scientists and practitioners who highlight that investing in IS differs from other investments because costs and profits are very difficult to identify and quantify, while non-material and indirect factors may be of great importance (Powell 1999). According to them, further research should chiefly focus on searching for new methods to better identify and quantify non-material, indirect costs and benefits. The opponents of this position (Weill and Broadbent 1998) believe that IS investments do not differ from other investments. They assume that the value of an IS investment depends entirely on the way the investment may render an organisation more efficient and effective, and argue that it is not necessary to create any IT-specific parameters (Remenyi, Money and Sherwood-Smith 2000). From the perspective of an investment's character, this group equates IS investments, e.g. ERP or CRM investments, with investments in new production lines, warehouses or means of communication in a company. As a result, they believe that the existing methods are applicable to the same extent to IT investments and to any other type of investment, and that research should focus on how the existing methods can be used and perfected. A third direction of research attempts to reconcile the arguments of both sides. It is directed at using the existing methods more efficiently by defining the specific character of IS investments and the rules for selecting evaluation methods adequate to it (Cypryjański 2007).

The author of this article aims to present the results of research on the economic evaluation of IS investment effectiveness in ERP and CRM system implementation. This choice of goal and research scope has the following consequences. First, it provides insight into the rarely researched subject of the economic evaluation of IS investments in ERP and CRM system implementation amongst Polish entrepreneurs and management personnel. Second, it empirically verifies the methods of economic evaluation in selected IT projects relevant to the functioning of an enterprise.
2. Research methods
The research was conducted between July and October 2011 and consisted of questionnaires administered by interviewers over the telephone. A sample of 250 enterprises was chosen for the research according to the following criteria:
- Commercial and/or manufacturing companies
- Between 50 and 500 employees
- Having their own IT department
- Income between 20 million zloty (the equivalent of 5 million EUR) and 500 million zloty (the equivalent of 125 million EUR)
- ERP and/or CRM implementation completed at least 3 years ago
There are 1,670,000 active companies in Poland. According to the EU definition, 18,000 of them are medium-sized companies, but only 11% of these (roughly 1,980 companies) use ERP and CRM systems; a sample of 250 enterprises therefore represents about 13% of the medium-sized companies using ERP/CRM systems. The sample included companies with both Polish and foreign capital, widely autonomous in their choice of ERP/CRM system and in the way the implementation was performed. The companies came from the following provinces: Mazovian, Greater Poland, Lower Silesian, Kuyavian-Pomeranian, Lublin, Świętokrzyskie and Silesian. The structure of the companies, according to the Polish Classification of Activity, is presented in Table 1.

The percentage incidence of the respective ERP/CRM systems in the examined group is as follows: PeopleSoft 1%, Siebel 1%, Gardens 1%, JDEdwards 3%, GreatPlains 3%, Impuls (Polish domestic solution) 4%, Epicor/Scala 4%, Dynamics CRM 4%, SAP BusinessOne 5%, Dynamics Nav 5%, Asseco Safo (Polish domestic solution) 5%, Exact 5%, Macrologic 6%, CDN XL (Polish domestic solution) 6%, IFS 7%, Dynamics AX 8%, Oracle Financial 9%, Simple 10%, SAP R/3 12%. The selected companies have achieved good or average results in their branches, so they are neither leaders nor marginal companies. The aim of the research was to reach people directly or indirectly engaged in the selection of an ERP/CRM information system as well as in its implementation. The respondents were company owners, directors, members of the board, financial directors or IT directors. Hence, understanding their views and cognitive maps is crucial for describing the dominant logics of operation in performing IT projects and in the economic evaluation of ERP/CRM investments. Interviews in successive companies confirmed that the chosen group of respondents was relevant: a great majority of respondents had profound knowledge of ERP/CRM system implementation issues.
Table 1: The structure of surveyed companies according to the Polish Classification of Activity
Department | Description | %
51 | Air transport | 2%
85 | Education | 2%
88 | Social care without accommodation | 2%
50 | Water transport | 3%
15 | Manufacture of leather and leather products | 4%
81 | Services to buildings and landscape activities | 4%
31 | Manufacture of furniture | 5%
95 | Repair and maintenance of computers and personal and household goods | 7%
74 | Other professional, scientific and technical activities | 8%
71 | Architectural and engineering activities; technical testing and analysis | 9%
28 | Manufacture of machinery and equipment nec | 12%
25 | Manufacture of fabricated metal products, except machinery and equipment | 19%
10 | Manufacture of food products | 23%
Source: Own study.
3. Research results
A detailed company informatisation strategy concerning an ERP/CRM system should include the reasons for undertaking this kind of IS venture, which result from the organisation's strategy and system of values and, consequently, from its long-term goals and its priorities in investment evaluation. Hence, strategic information initiatives may be undertaken for one or several reasons.


Figure 1: Possible reasons for undertaking an ERP/CRM system implementation decision (elements in figure: Survival; Innovation; Platform of changes)
Source: Own study on the basis of F. Bannister, D. Remenyi, Why IT Continues to Matter: Reflection on the Strategic Value of IT, The Electronic Journal of Information Systems Evaluation, vol. 8, iss. 3, 2005, http://www.ejise.com.

We can distinguish the following strategies for choosing and implementing ERP/CRM systems:
- Strategy linked to the company's survival on the market: treats an ERP system implementation as an instrument allowing the company to survive on the market. An example may be implementing an ERP system in a pharmaceutical company in the areas of production and logistics, where processes are subject to obligatory GMP/FDA quality standard requirements.
- Strategy linked to the need to achieve innovation saltatorily: treats an ERP system implementation as an instrument for achieving process innovations quickly and uniquely, resulting in, e.g., lower costs through stock management and production capacity planning. An example may be implementing an ERP system in an aggregate mine which, on the one hand, has a guaranteed market for its products for many years and, on the other hand, would like to use a saltatory innovation to achieve a measurable decrease in production costs, becoming a cost leader in its branch.
- Platform for changes strategy: treats an ERP system implementation as a platform for introducing permanent, stepwise changes over the lifecycle of the enterprise's system. This situation occurs in companies that operate very dynamically in a changing market. The ERP system thus becomes a platform for permanent organisational change, allowing the company to follow changing market expectations.
Table 2 presents the percentage structure of respondents' answers to the question regarding the choice of company informatisation strategy concerning the ERP/CRM system and, consequently, the decision about system implementation. The respondents could choose more than one answer.
Table 2: Reasons for making an ERP/CRM system implementation decision - percentage structure
Strategy | %
Strategy linked to company's survival on the market | 68
Strategy linked to the need to achieve innovation saltatorily | 9
Platform for changes strategy | 32
Other | 6
Source: Own study.
The research shows that, in planning an ERP/CRM system implementation, the respondents are mostly guided by the company's survival on the market strategy (68%); for 32% of the respondents, the reason for undertaking an ERP/CRM system implementation project is building a potential platform for organisational changes, and a further 9% treat this group of systems as a means of achieving saltatory innovation. Table 3 presents the percentage structure of respondents' answers to questions concerning the decision criteria in ERP/CRM class system investments. The respondents could choose more than one answer.
Table 3: Choice criteria in ERP/CRM class information system investment decisions - percentage structure
Choice criteria | %
Compliance of client's requirements with functional range of system | 38%
SLA cost | 28%
Purchase cost (implementation service, licence) | 19%
Supplier's references | 9%
Possibility of system development - scalability | 8%
Possibility of achieving a direct economic profit, i.e. profit coming directly from the system | 6%
Possibility of achieving an indirect economic profit, i.e. process automation, an informing and organising system function | 2%
Source: Own study
The research shows that entrepreneurs choosing an ERP/CRM system focus on whether the system guarantees the required functionality (38%), on the cost of purchase and implementation (19%) and on the cost of later usage (28%). Only 6% of respondents defined a direct economic profit as a criterion in ERP/CRM system selection, which indicates that for the majority of entrepreneurs implementing an ERP/CRM system is not synonymous with economic profit. Only 2% of respondents defined an indirect economic profit as an ERP/CRM system choice criterion. 9% of respondents cited the supplier's references as a choice criterion, reflecting the view that the quality of implementation depends mostly on the competence and experience of the consultants performing the project. There is an interesting correlation between the decision criteria for ERP/CRM class information system investments and the question of the economic evaluation of company investments in this class of IS solutions. In relation to time, the subject literature (Cronholm and Goldkuhl 2003) considers two perspectives of the economic evaluation of IS projects:
- The ex-ante evaluation, performed before launching a project, aimed at estimating the future influence of the resulting IS solution on the company's economic situation. The ex-ante evaluation should be part of a formal feasibility study for any IT system investment.
- The ex-post evaluation, performed after the completion of an IS project, aimed at evaluating the achieved results. The ex-post evaluation does not deal with the probabilities of particular effects or with alternative courses of the project, since all risk has already been resolved during its course.
Table 4 presents research results on the percentage use of methods for defining the economic aspect of ERP/CRM class information system implementation investments from the ex-ante and ex-post perspectives. Respondents could choose more than one answer. The research indicates that before launching ERP/CRM class information system implementation projects, 63% of the surveyed companies do not conduct any economic analysis of the planned investment. Within three years of completing an ERP/CRM class information system implementation, 72% of respondent companies had not conducted any economic analysis of the performed investment that could show possible economic profit or loss. The research shows that in both the ex-ante and the ex-post perspective, mostly quantitative and qualitative methods are used, i.e. ROI, Payback Period, IRR, EVA, multi-criteria methods, TCO and portfolio methods, and that these methods are applied to the same degree to IT and to other types of investment (an illustrative computation of the most common of these indicators is sketched after Table 4).
Table 4: Percentage structure of the methods used to define the economic aspect of ERP/CRM class information system implementation investments from the ex-ante and the ex-post perspective
No. | Method used | Ex-ante % | Ex-post %
1 | ROI | 10% | 9%
2 | Payback Period | 9% | 7%
3 | IRR | 8% | 4%
4 | EVA | 1% | 4%
5 | Multi-criteria methods (strategic methods) | 5% | 6%
6 | TCO | 3% | 3%
7 | Portfolio methods | 2% | 7%
8 | No measurement | 63% | 72%
9 | Other | 8% | 4%
Source: Own study.
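For readers unfamiliar with the quantitative indicators named above, the sketch below shows how ROI, Payback Period, IRR and EVA could be computed for a hypothetical ERP investment cash flow. It is a minimal illustration only; neither the figures nor the helper functions come from the study.

```python
# Illustrative computation of ROI, Payback Period, IRR and EVA for a
# hypothetical ERP investment. All figures are invented for illustration.

def roi(total_benefit, total_cost):
    """Return on investment, as a fraction of the cost."""
    return (total_benefit - total_cost) / total_cost

def payback_period(initial_outlay, annual_net_benefits):
    """Years until cumulative net benefits cover the initial outlay."""
    cumulative = 0.0
    for year, benefit in enumerate(annual_net_benefits, start=1):
        cumulative += benefit
        if cumulative >= initial_outlay:
            return year
    return None  # not recovered within the planning horizon

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the (negative) outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return, found by bisection on the NPV function."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def eva(nopat, capital_employed, wacc):
    """Economic Value Added: operating profit minus the charge for capital."""
    return nopat - wacc * capital_employed

# Hypothetical ERP project: 1.0m zloty outlay, five years of net benefits.
flows = [-1_000_000, 250_000, 300_000, 320_000, 330_000, 340_000]
print(f"ROI:     {100 * roi(sum(flows[1:]), -flows[0]):.1f}%")
print(f"Payback: {payback_period(-flows[0], flows[1:])} years")
print(f"IRR:     {100 * irr(flows):.1f}%")
print(f"EVA:     {eva(300_000, 1_000_000, 0.10):,.0f} zloty")
```

For these invented flows the sketch prints an ROI of 54%, a four-year payback, an IRR of roughly 15.5% and an EVA of 200,000 zloty; TCO, by contrast, is a cost aggregation over the system lifecycle rather than a single formula and is not sketched here.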

As far as similar analyses conducted in other countries are concerned, research by Paul and Tate (2002) among 288 financial directors in American companies shows that over 50% use the ROI method or a method defining the return on investment in information technology in general. Only 14% of respondents declared a complete lack of economic evaluation measurement for IS projects.
Table 5: Percentage distribution of IS projects' economic evaluation methods used by financial directors
Method of IT economic evaluation used by financial directors | %
ROI | 64
Payback Period | 63
IRR | 42
EVA | 35
Other | 5
Lack of measurement | 14
Source: L.G. Paul, P. Tate, CFO Mind Shift: Technology Creates Value, CFO Publishing Corporation, Boston, MA 2002

Other research, conducted by IDG Research & Getronics in 2002 amongst 456 IT department directors in the USA and six European countries (the UK, the Netherlands, Italy, Spain, France and Germany), shows that 18% use the ROI method for the economic evaluation of IS implementations and 29% use the payback period method. Only 5% of respondents declared a complete lack of measurement in the economic evaluation of IS projects.
Table 6: Percentage distribution of IS projects' economic evaluation methods used by IT directors
Method of economic evaluation of information systems implementation used by IT department directors | %
Project completed within planned period and budget | 50
Decrease of costs | 49
Increase of productivity | 47
Increase of profit/income | 36
Attaining flexibility and scalability | 31
TCO | 30
Payback Period | 29
Reduction of personnel resources | 22
ROI | 18
Other | 2
No measurement | 5
Source: IDG Research & Getronics, The CIO Agenda: Taking Care of Business, CxO Media 2002.

We need to underline that in both cases the research conducted amongst respondents from the USA and Western Europe concerns total investment in information technology, not only ERP/CRM class systems, and, additionally, does not define the perspective of the analysis, i.e. ex-ante or ex-post. Hence, it cannot be compared methodically with the author's research; nevertheless, all the research considered allows us to better understand the views and cognitive maps of decision-makers, which is crucial for describing the predominant action logics in IS project realisation and in the economic evaluation of investments in ERP/CRM systems, in Poland as well as in other countries.

Table 7 presents the percentage structure of respondents' answers to questions concerning organisational problems related to the lack of economic evaluation in ERP/CRM IS project realisation in the ex-ante and ex-post perspectives.
Table 7: Organisational problems in the economic evaluation of ERP/CRM IS project investments in the ex-ante and the ex-post perspective
No. | Problem | Ex-ante % | Ex-post %
1 | Lack of knowledge about evaluation | 31% | 9%
2 | Lack of interest | 47% | 55%
3 | Lack of data or information | 15% | 8%
4 | Lack of time | 4% | 16%
5 | Other | 11% | 14%
Source: Own study.

The respondents could choose more than one answer. 31% of respondents declared a lack of knowledge about economic evaluation in ERP/CRM IS project implementation from the ex-ante perspective, while 9% declared a corresponding lack of knowledge from the ex-post perspective. It is worth noting that 47% of respondents declared a lack of interest in the economic evaluation of an ERP/CRM system implementation from the ex-ante perspective, and 55% declared a lack of interest from the ex-post perspective. Table 8 presents the percentage distribution of answers to questions about the types of difficulties identified in the economic evaluation of ERP/CRM system investment projects. The respondents could choose more than one answer. 62% of respondents indicated difficulties related to quantifying and identifying the benefits of an ERP/CRM information system investment, and 46% indicated difficulties related to quantifying and identifying the expenditure resulting from such an investment.
Table 8: Types of difficulties identified in the economic evaluation of ERP/CRM system investment projects
Type of difficulty | %
Difficulties related to quantifying and identifying benefits from an ERP/CRM information system investment | 62%
Difficulties related to quantifying and identifying expenditure resulting from an ERP/CRM information system investment | 46%
No problems encountered | 6%
Other | 2%
Source: Own study.
4. Conclusion
An ERP/CRM system implementation project consists mostly of adapting a system to the client's functional requirements through appropriate parameterisation and programming work (customisation). Whether an ERP/CRM system can bring organisational improvement to a company depends both on the system's functional possibilities and on the invention, initiative and, most importantly, knowledge and experience of the consultants who adapt it to the company's needs.

Summing up the results of the research concerning choice strategy and ERP/CRM implementation, we can see that for the respondents the main reason for choosing systems of this class is guaranteeing the company's survival on the market, while treating these systems as a platform for change, and thus as a means of achieving saltatory innovation after a completed implementation, takes only second and third position respectively.

Observation of the research results concerning the criteria of decision-making in ERP/CRM class information system investments leads us to the conclusion that, when choosing systems of this class, the majority of company managers do not link the investment to direct or indirect economic profits for the company; they focus mostly on whether the system guarantees the required functionality and on the cost of implementation and usage.

The main expectation entrepreneurs have of this class of systems is the servicing of basic requirements, i.e. the correct servicing of sales processes, purchases, stock management, correct accounting, the possibility of planning and manufacturing execution, and important marketing areas. In other words, ERP/CRM systems have to allow the performance of the basic and necessary activities that make the current functioning of the company possible.

Hence, we get a picture of company representatives' sceptical and cautious approach to the implementation of ERP/CRM class information systems as a source of immediate competitive edge and, consequently, of direct and indirect economic profits. ERP and CRM class systems are long past the early phase of development, when they could be treated as a source of competitive edge; they are now standardised solutions, functionally and technologically mature.

Analysing an appropriate group of ERP and CRM systems, i.e. those dedicated to a selected group of clients, we can notice great similarity in the variety of functionalities and business solutions on offer. ERP/CRM systems usually operate on the basis of the same technological components, e.g. databases. The product standard, or the product group standard, has already been devised and is widely used. At the same time, over the last 10 years the cost of investment in ERP/CRM systems has noticeably decreased: the costs of ERP/CRM system licences have fallen by at least 20-30%, and hourly wages for consultants in Poland have stabilised at the level of 60-70 EUR.
ERP and CRM systems have become an everyday good, just like access to mass media or means of transport. Entrepreneurs have stopped treating information systems of this class as a source of temporary competitive edge and, as a result, as a source of direct and indirect economic benefits, just as email access or an eye-catching website is not treated as a source of competitive edge. However, the establishment and application of methods and tools allowing companies to perform economic evaluations of ERP and CRM class information systems, in Poland and in other countries, falls behind the functional and technological development of this class of applications.

Summing up the research results on the economic evaluation of ERP/CRM class information system implementation investments, respondents declare both from the ex-ante (63%) and the ex-post (72%) perspective that they have not performed an economic analysis of investments in information systems of this class. Research on the economic evaluation of IT investments conducted in 2002 in the USA showed that only 14% of respondents had not carried out an economic evaluation of the investment; other research performed in 2002 amongst US and selected Western European companies indicated that only 5% had not. Between 2001 and 2006, P. Lech conducted a case analysis aimed at defining whether Polish companies deciding to perform an IT project, i.e. a company management support information system implementation, use a structured approach containing an element of economic evaluation. Interviews were carried out in 29 companies of different sizes planning to implement an integrated information management system. According to Lech (2007), none of the organisations taking part in the research performed an economic evaluation of their planned investment other than comparing the costs of individual offers.

Between 2004 and 2012, the author conducted a similar case analysis in order to define whether Polish SMEs deciding to implement company management support information systems use an approach including elements of economic evaluation. The analysis was carried out in a group of 35 companies. According to the author, only one company performed an economic analysis of the planned project other than comparing the costs of individual offers. It is worth highlighting that Polish entrepreneurs and top and middle level managers are not highly motivated or interested in performing an economic evaluation of ERP/CRM class information system implementation projects before launching them (ex-ante). The main reasons are:
- A lack of accessible, practical knowledge about methods and techniques allowing companies to clearly and relatively easily carry out an economic evaluation of an investment in IS projects consisting of ERP/CRM class system implementations from the ex-ante perspective. Scientific centres, software producers and independent consulting companies do not provide sufficient services for the economic evaluation of investments in IS projects. The research indicates that in their evaluations company representatives rely mostly on quantitative and qualitative methods.
- A lack of awareness amongst company representatives of the need to carry out an economic evaluation of an investment in IS projects consisting of ERP/CRM class system implementations from the ex-ante perspective. We need to underline that the economic evaluation of a planned ERP/CRM system investment should constitute an element of the project feasibility study, alongside the collected and defined functional and technological system requirements. Such feasibility studies should be performed before the selection of an information system and implementation partner. At present, companies do not conduct project feasibility studies properly, which may result in choosing an inappropriate ERP/CRM class information system or the wrong implementation partner and may have a negative impact on project realisation.
A very important research result is the fact that 72% of company representatives do not carry out an economic evaluation of their investment after completing an ERP/CRM information system project. According to the author, the same reasons apply in the ex-post perspective as in the ex-ante perspective. It is important to note that 55% of respondents indicated a lack of interest in performing an ex-post economic evaluation of the project, compared to 47% in the ex-ante perspective. Performing an ex-post economic evaluation may reveal mistakes in the selection of the ERP/CRM system or implementation partner and, additionally, allow organisational mistakes in the implementation process to be identified; hence the lower number of respondents interested in conducting such an evaluation.

The research shows that the respondents indicated two main, roughly equally weighted difficulties in performing an economic evaluation:
- 62% of respondents indicated difficulties related to quantifying and identifying the benefits resulting from investing in an ERP/CRM information system. In many cases it was impossible to establish clearly which benefits of an ERP system implementation result from the implementation itself and which from the company's organisational and technological activity. Consider the example of the design and use of a product price-list structure, including complex price discounts, implemented in a distribution company. After implementing the price-list functionality in the ERP system, the company noticed an increase of a few percent in the return on sales. It is very difficult to determine definitively the source of the quantified benefit: whether it was the process innovation itself (designing the price-list) or its implementation in the sales management module of the ERP system. Very frequently, after implementation, an ERP system becomes a source of reliable information generated more quickly and easily than before, e.g. sending text messages or creating attractive graphs. Nevertheless, if the information recipients are not sufficiently prepared to use it, the company will not achieve an economic effect. In this case too, it is difficult to clearly identify the benefits resulting from the system implementation.
- 46% of respondents declared problems linked to quantifying and identifying the costs resulting from an ERP/CRM information system investment. Investing in an ERP system entails both expenditure on equipment purchase and software licences at the beginning of a project and the costs of system maintenance and development for 4-6 years. Investment expenditure comprises both financial outlays and internal expenditure on tasks performed by internal specialists. In many cases, it is not possible to determine definitively which expenditure over the years, or which part of it, is connected with the ERP/CRM system implementation project.
Comparative research shows that the identified difficulties in quantifying benefits and costs have also been indicated by other researchers (Ballantine, Galliers and Stray 1999). It is worth highlighting that more respondents indicated identifying and quantifying benefits as a difficulty in carrying out an ERP/CRM system investment evaluation. To sum up, the research results lead to the following conclusions. Amongst the entrepreneurs and top management of a large group of Polish companies, ERP/CRM systems are not treated as avant-garde IT solutions that can give a temporary competitive edge in the mid-term and, consequently, a direct or indirect economic benefit. Rather, information systems of this class constitute necessary IT solutions, indispensable for conducting business activity, like email access, electricity or even water. In most cases, entrepreneurs and management do not carry out an economic analysis when performing ERP/CRM system investments, in either the ex-ante or the ex-post perspective. According to the author, there are two main reasons for this phenomenon. First, there is a lack of verified methods for this kind of economic analysis that could be adapted to a specific type of ERP/CRM system. Second, entrepreneurs and company managers lack the determination to solve the problem of the economic evaluation of management support information systems.
References
Ballantine, J.A., Galliers, R.D. and Stray, S.J. (1999) "Information Systems/Technology Evaluation Practices: Evidence from UK Organisations", in Willcocks, L. and Lester, S. (eds.), Beyond the IT Productivity Paradox: Assessment Issues, John Wiley & Sons, Chichester.
Cronholm, S. and Goldkuhl, G. (2003) "Strategies for Information System Evaluation - Six Generic Types", Electronic Journal of Information Systems Evaluation, http://www.ejise.com.
Cypryjański, J. (2007) The Basic Methods of IS Investment Economic Effectiveness Evaluation in Enterprises, University of Szczecin, Szczecin.
Dudycz, H. and Dyczkowski, M. (2006) Effectiveness of IS Investments. The Basic Evaluation Methods and Usage Examples, Publishing House of Wrocław University, Wrocław.
Lech, P. (2007) Methodics of MIS Investment Economic Evaluation, Publishing House of Gdańsk University, Gdańsk.
Niedźwiedziński, M. (1989) Evaluation of an Enterprise's IS Projects, Acta Universitatis Lodziensis, Publishing House of Łódź University, Łódź.
Paul, L.G. and Tate, P. (2002) CFO Mind Shift: Technology Creates Value, CFO Publishing Corporation, Boston, MA.
Powell, P.L. (1999) "Evaluation of information technology investments: Business as usual?", in Willcocks, L.P. and Lester, S. (eds.), Beyond the IT Productivity Paradox, John Wiley & Sons, Chichester, p. 151.
Remenyi, D., Money, A. and Sherwood-Smith, M. (2000) Effective Measurement and Management of IT Costs & Benefits, Butterworth-Heinemann, Woburn, MA.
Weill, P. and Broadbent, M. (1998) Leveraging the New Infrastructure: How Market Leaders Capitalize on Information Technology, Harvard Business School Press, Boston, MA.
Towards an Understanding of Enterprise Architecture Analysis Activities
Haining Wan¹,² and Sven Carlsson²
¹Key Lab of Information Systems Engineering, School of Information Systems and Management, National University of Defense Technology, Changsha, P.R. China
²Department of Informatics, Lund University, Lund, Sweden
Haining.Wan@ics.lu.se
Sven.Carlsson@ics.lu.se

Abstract: The connotation of the term Enterprise Architecture (EA) analysis varies from context to context. Aiming, in part, at promoting understanding and reducing possible misconceptions of EA analysis, and in order to characterise, classify and distinguish its connotations, six interrelated types of activities are identified: (I) system thinking; (II) modeling; (III) measuring; (IV) satisfying; (V) comparing with requirements; and (VI) comparing alternatives. The paper starts with EA lifecycle management and then addresses the main tasks in the different stages of the EA lifecycle process. After that, the meaning of each type of activity and their interrelationships are discussed. The use of the six types of activities is illustrated through several scenarios of EA analysis. The main contribution of the paper is twofold: first, it articulates that there is a broader variety of understandings of the term EA analysis than one might imagine; second, it makes it possible and feasible for researchers to explain and customise, i.e. for authors to clarify and for audiences to grasp, the meaning of EA analysis using certain combinations of the six activities.

Keywords: enterprise architecture analysis, enterprise architecture, analysis and design, evaluation and
assessment, validation and verification, enterprise analysis
1. Introduction
Enterprise Architecture (EA) is of great importance for aligning enterprise IT assets with enterprise business and strategy. EA is a state-of-the-art alternative for achieving enterprise management goals such as improving business performance, decreasing resource use, controlling risk and complexity, and coping with uncertain environments. Through the planning of the transition from the baseline architecture to the target one and, finally, the gearing of IT assets to the needs of the enterprise business, the concept of EA is receiving more and more attention both from academia and from industry.

However, having an enterprise architect does not guarantee a successful EA practice (Sessions 2007), and neither, necessarily, does having an EA framework. In facilitating success, EA analysis is important and indispensable for two reasons:
- It supports organizations in gaining a better understanding of the EA (both real and designed) itself. For instance, in EA initiatives, artifacts, e.g. models, are built with layers and views (or viewpoints) and enterprise data is collected; analysis is then intended to find the gaps, weaknesses, pitfalls and shortfalls that exist in the enterprise.
- It supports organizations in their EA decision-making. EA transition is often very expensive and highly risky; furthermore, given its complexity, it is vital to perform in-depth analysis before making decisions.
This paper is intended to react to the following reality: (1) concerning the term EA analysis, there is no formal definition which is widely accepted or in common use; (2) the connotations of EA analysis vary widely in the literature, and the contents and boundaries of the term are left vague and ambiguous; (3) little of the literature using the keyword EA analysis makes the connotation of EA analysis sufficiently explicit; (4) few authors consider it important to clarify the meaning of EA analysis; they simply take it for granted. Of course, it is beyond the scope of this paper to develop a one-size-fits-all definition of EA analysis. Instead, this work shows the possibility and feasibility of developing a better understanding of EA analysis.

The remainder of the paper is structured as follows. Section 2 discusses EA and EA analysis. Section 3 proposes a framework of six core types of activities and the relationships between them. Section 4 presents nine typical scenarios illustrating how the framework and its six types of activities can be used. The Conclusions section finalises the paper.
2. EA and EA analysis
Enterprise software-intensive systems, e.g. Enterprise Systems (ES) and Enterprise Resource Planning (ERP) systems, are becoming increasingly complex and expensive; at the same time there is a need for flexibility and agility, since organizations' environments are becoming more turbulent and high-velocity. In such a context, it is vital for an organization to keep its investment at a reasonable level, i.e. to achieve a good enough alignment between IT (including IT investment) and the organization's strategy, business model and business processes.

EA has been emerging as an interdisciplinary subject since Zachman's framework for information systems architecture was presented (1987). It should be noted that not only IT-related information is included in EA research and practice; business-related issues, such as processes, organization, strategy and capacity, should also be included in EA initiatives (Mayo & Tiemann 2005; Meyers 2011).

Enterprises normally employ EA frameworks in initiatives to accomplish this alignment. Many EA frameworks are available: the Zachman Framework (Zachman 2001, 2004, 2011), TOGAF (OpenGroup 2003, 2010), DoDAF (DoD Architecture Framework Working Group 2003, 2009), FEAF (The Chief Information Officers Council 1999), TEAF (US Department of the Treasury Chief Information Officer Council 2000), as well as frameworks belonging to particular enterprises. Based on different experiences and knowledge, many organizations and researchers have proposed EA frameworks of their own, with their own definitions of EA.

In practice, in pursuit of success, continuous EA management, i.e. lifecycle management, is the pragmatic choice. There are many similar expressions of the architecture lifecycle process, e.g. the TOGAF ADM (OpenGroup 2010), PDCA (Plan-Do-Check-Act) (DoD Architecture Framework Working Group 2009; Moen & Norman 2006), and the architecture process (create EA - apply and use EA - maintain EA results).

The initial objective of an EA analysis is to understand the enterprise's strategy and to develop solutions for the gap between the baseline and the desired target. EA is often viewed as a tool providing evidence for enterprise-wide decision-making; similarly, EA analysis can be regarded as a way of providing evidence for architecture process management and of supporting the main tasks depicted in Table 1. For conciseness, the PDCA process is used directly in the table. It should be noted that the tasks in the steps of the PDCA process help to shape Sections 3 and 4.
Table 1: PDCA process with the main tasks
Phase | Main tasks
Plan | make clear the intent, the requirements, the vision, the strategy and the target
Do | collect information and data, build up models, quantify the EA
Check | check the process of creation, make sure both the procedure and the result are right, check the effectiveness and efficiency of the architecture artifacts
Act | apply the artifacts to the real enterprise, implement the architecture artifacts, enable the transition and migration of the enterprise architecture, maintain and govern the architecture in order to keep it in good condition, launch a new cycle if needed
3. The core types of activities in EA analysis
Based on a literature review and narrative, this part of the paper discusses the core types of activities in EA analysis. A further source is, of course, the authors' personal understanding of EA analysis.
3.1 System thinking
System thinking is defined as "an epistemology which, when applied to human activity is based upon the four basic ideas: emergence, hierarchy, communication, and control as characteristics of systems. When applied to natural or designed systems the crucial characteristic is the emergent properties of the whole" (Checkland 1981, p. 381), and as "the art and science of making reliable inferences about behavior by developing an increasingly deep understanding of underlying structure" (Richmond 1987). However, system thinking here represents the problem-solving social human activity embodied in that epistemology, art and science. System thinking is included as an important element of systems engineering and systems analysis in the methodological cycle towards problem solving (Checkland 1981, pp. 147, 54).

In the PDCA process, the enterprise itself is viewed as a dynamic and complex whole system. Different parts of the system within the enterprise interact with each other as a structured functional unit. The concerns and interests of the system thinker, and the chosen granularity, determine the ingredients of the collection of system parts when there is a problem to be solved.

System thinking for EA analysis has two characteristics. First, holistic consideration of both business-related and IT-related enterprise issues: system thinking in the problem solving of EA analysis takes a holistic look at the enterprise. With system thinking, analysts justify what the architecture within an organization is (as-is) and what it ought to be (to-be). Second, separation of concerns: analysts should dig in depth with views (viewpoints) to focus on particular parts of the enterprise and derive detailed models according to the concerns. With the aid of system thinking, analysts divide the enterprise into parts and classify layers with special views or viewpoints. In such a (thought) process, views are the result of what can be seen, and viewpoints are the points from which the views are obtained (Maier, Emery & Hilliard 2004).

System thinking is applicable in all stages of the EA process, no matter whether its form of existence is visible or invisible, tangible or intangible, conscious or unconscious, direct or indirect, implicit or explicit, or formal or informal. System thinking is of great significance in dealing with the complex and unstructured problems that exist within the domain of EA.
3.2 Modeling
EA modeling is intended to build up models and to collect information and data for describing or prescribing EA. This type of EA activity aims to provide stakeholders with common artifacts (models) for communication.

With EA modeling from the real world to the logical world, an integrated whole enterprise can be divided into parts according to the concerns, framework and viewpoints of all types of stakeholders. Finally, there are EA models with more detail in the logical (artificial) world, either descriptive or prescriptive. Collections of EA artifacts can also be called layers, slices or views, details and models, as shown in Figure 1.

Figure 1: Analysis and synthesis of EA with modeling
Actually, EA modeling is a sort of work consisting of:
- Design. As design work, modelers usually rely heavily on imagination, experience and knowledge. It is a work of innovation.
- Decision-making. To some extent, EA modeling is an attempt to reduce uncertainty. From a top-down point of view, modelers endeavor to make the details in the models clear. Modelers have to make choices, adopting some designs and abandoning others.
- Analogy and metaphor (Khoury & Simoff 2004; McFadden 2001). In the mapping from the real world to the logical world, one side is the enterprise and its environment; the other side is the artifacts (models or data, descriptions or prescriptions). With the aid of analogy and metaphor, analysts rebuild the real world in the logical world in terms of components and the relationships between components, using symbols and a modeling language.
- Information reduction and information encapsulation. As the communication medium among stakeholders and the blueprint of the transition plan, less complicated EA models are better models for stakeholders (Sessions 2007), especially executives. Every model focuses on only limited aspects. Modelers have to exclude irrelevant information from every specific model so as to obtain more concise ones.
3.3 Measuring
EA is complicated, and EA initiatives are often risky, expensive and subject to a high degree of uncertainty. Enterprise transformations and EA migrations connote change, and change brings about new assignments of duties and rights. This probably means discomfort and even resistance, and executives need to overcome resistance from both internal and external stakeholders. As a result, enterprise transformation and EA migration are often associated with painful experiences. For this reason, analysts try to maximise the success rate. Measuring is a type of underpinning and underlying activity that facilitates the success of EA practice. More basically, EA cannot readily be managed and controlled in practice if it is not measured.

Artifacts are the tools and mediums which analysts use to understand the performance of the real enterprise. The artifacts are normally compared with the requirements of the enterprise before final EA actions, e.g. decision-making or implementation.

Both qualitative and quantitative methods can be used in measurement. Concerning measurement for analyzing EA, the aspects can be divided into two parts, as shown in Table 2; the table is based on the authors' experience and understanding.
Table 2: Two parts of EA measurement
Aspect | EA itself (both artifacts and real world) | Mapping of EA (from real world to models)
Object | Functional and non-functional performance according to spatial-temporal attributes | The rightness of both the artifacts and the process of building them
Quality of what | Quality of the EA itself (reflecting the goodness of the design and the real enterprise) | Quality of the models (reflecting the goodness of mapping and visualising)
Focus | Components and their relationships in the same EA | Both the models and the real enterprise system
Example of activity | Measuring for evaluation and assessment, e.g. measuring performance based on model simulation and execution | Measuring for validation and verification, e.g. consistency checking
3.4 Satisfying
The idea of obtaining a good enough EA by designing only once does not seem practical. The classical pattern of succeeding with a single design does not work in the domain of EA design (Webnova Inc. 2009). On the contrary, analysts have to improve their design step by step, again and again, until there is a satisfactory design. Satisfactory means that stakeholder requirements are met, or that a compromise between stakeholders reaches a consensus.

Zia et al. (2011) proposed viewing EA analysis as an MCDM (multiple criteria decision making) problem: that is, selecting the correct architecture from a set of designs, and viewing this selection problem as an MCDM problem. Two difficulties exist: one is to transform the unstructured problem of EA analysis into a structured MCDM problem by conceptualising it with concrete objective functions and constraint conditions; the other is to transform the MCDM problem into an SCDM (single criteria decision making) problem by converting between objective functions and constraint conditions. A minimal sketch of this scalarisation follows.
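As an illustration of the MCDM-to-SCDM transformation just described, the sketch below collapses three hypothetical criteria into a single weighted-sum objective. All names, weights and scores are invented for illustration; this is not the formulation used by Zia et al.

```python
# A minimal sketch of turning a multi-criteria (MCDM) architecture choice
# into a single-criterion (SCDM) one via weighted-sum scalarisation.
# Criteria, weights and scores are invented for illustration.

from typing import Dict

# Stakeholder-agreed weights for each criterion (sum to 1.0).
WEIGHTS: Dict[str, float] = {"cost": 0.4, "agility": 0.35, "risk": 0.25}

# Normalised scores (0..1, higher is better) for two candidate architectures.
candidates = {
    "EA design A": {"cost": 0.6, "agility": 0.8, "risk": 0.5},
    "EA design B": {"cost": 0.9, "agility": 0.4, "risk": 0.7},
}

def scalarise(scores: Dict[str, float]) -> float:
    """Collapse multiple criteria into a single objective value."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

for name, scores in candidates.items():
    print(f"{name}: {scalarise(scores):.3f}")
```

The design choice here is the simplest possible scalarisation; real EA selections would also have to handle constraint conditions, which the open nature of EA problems makes difficult, as the next paragraph discusses.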

Note that EA analysis problems are often open ones, with open objective functions and constraints, rather than closed ones. Those who want to perform optimisation must first transform real open problems into closed ones. However, one of the most important characteristics of such a problem transformation is not whether the artifacts are right or wrong, but whether or not the result works well enough.

In practice, it is often impossible for analysts to find the optimal solution(s) of the EA optimisation problem; sometimes a sub-optimal solution, or even a merely satisfactory one, may be the final choice.
For a better result of the EA optimisation problem, three sub-types of activities can be performed (a minimal sketch illustrating the first two follows this list):
- Sensitivity analysis. This studies how uncertainty in the inputs of EA analysis influences the outputs. There are two approaches: one changes the values of system attributes, and the other changes the set of system attributes in the attribute tree.
- Trade-off analysis. While sensitivity analysis is done by changing system attributes, trade-off analysis can be done by altering the enterprise model. EA decisions are full of trade-offs: between the long term and the short term (the transition plan), between different organisational deployments (the department deployment plan), and between different contents of an organisational deployment (the system suite).
- Solution formulation and revision. Associated with satisfying, solutions for enterprise misalignment and change management are often formulated. Meanwhile, because the collection of system attributes is multidimensional and the stakeholders are multiple, solutions often need revision before agreement and consensus can be reached.
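Continuing the invented weighted-sum example above, the sketch below illustrates the first two sub-types of activity: the weight of one system attribute is varied and the ranking of the alternatives is observed. With these invented scores, a rank reversal occurs near a cost weight of 0.25, which is exactly the kind of effect sensitivity and trade-off analysis are meant to expose.

```python
# A minimal sensitivity sketch (hypothetical data): how the ranking of two
# candidate EA designs reacts when the weight of one criterion is varied.
# Reuses the weighted-sum idea from the previous sketch.

candidates = {
    "EA design A": {"cost": 0.6, "agility": 0.8, "risk": 0.5},
    "EA design B": {"cost": 0.9, "agility": 0.4, "risk": 0.7},
}

def scalarise(scores, w_cost):
    # The remaining weight is split evenly between the other two criteria.
    w_rest = (1.0 - w_cost) / 2
    return w_cost * scores["cost"] + w_rest * (scores["agility"] + scores["risk"])

for w_cost in (0.2, 0.4, 0.6, 0.8):
    ranked = sorted(candidates, key=lambda n: scalarise(candidates[n], w_cost),
                    reverse=True)
    print(f"w_cost={w_cost:.1f}: best = {ranked[0]}")
```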
3.5 Comparing with requirements
To build a model without purposes and requirements does not make sense. It is vital to identify the requirements, since different stakeholders hold different concerns and, in different contexts, an EA project is intended to deal with different kinds and aspects of issues.

The requirements consist of two parts:
- The requirements of the enterprise architecture. This kind of requirement concerns the functional and non-functional setting of the enterprise architecture (the problem of what and how in EA analysis; syntax and semantics). It comes from the chain: EA vision - strategy - goal - requirement. This requirement shapes the direction of the EA, and it can be embodied in the enterprise vision, the strategy, and the gap between the reality (as-is) and the ideal EA status (to-be).
- The requirements for why EA analysis is done. These concern the context in which EA analysis is needed (the problem of why; pragmatics). Sometimes this requirement has the greater influence on the choice of method and approach for EA analysis.
3.6 Comparing alternatives
As mentioned above, there is usually no optimal solution, only a satisfactory one. The performance of some solutions in particular dimensions is better than that of others. Comparing alternatives means observing the performance differences of the alternatives across all kinds of dimensions. Based on these differences, advice for making a choice begins to take shape.

In EA initiatives, choosing the right option from among the alternatives is important. Analysts can obtain a whole appraisal of the EA through modeling, measuring and satisfying. According to this appraisal, analysts can compare the alternatives and sort them in order, and sometimes they apply sensitivity analysis again so as to observe possible changes in the appraisal. This is very common when selecting from several EA solution candidates.
3.7 Relationships between types of activities
The interrelationships between the six types of activities are depicted in Figure 2. The six activities
can be divided into three categories or levels: the fundamental level, including system thinking and
comparing with requirements; the main level, including modeling, measuring, and satisfying; and the
decision-oriented level, including comparing alternatives. Fundamental implies basic, elementary
and primary; main means principal, mainstream and common; and, as the name suggests, the top,
decision-oriented level is ultimately and directly useful for decision making.

The directions of the arrows indicate the supporting directions. There are two kinds of supporting
relationships: those within the same level and those crossing levels. Another important point about
the interrelationships is that two threads can be identified: artifacts and architecture evolution. All six
types of activities are closely related to EA artifacts (e.g. models); the crafting and use of artifacts
trace the information flows (inputs and outputs) between activities along the supporting directions.
EA has an artificial life, and architecture evolution implies a temporal sequence for EA programs and
for the activities within them. This is also reflected in the supporting relationships in the figure,
specifically in the temporal sequence in which the activities are conducted to form continuous EA
management.
Figure 2: Interrelationships of six activities
Razavi et al. (2011) introduced an AHP (Analytic Hierarchy Process) approach to enterprise
architecture analysis. This approach can serve as a concrete example of the interrelationships.
First, system thinking is needed at each step of the whole process to reduce mistakes and possible
contradictions and to improve the analysis results. Second, the enterprise is modeled as a hierarchy
according to the system attributes. Third, based on the results of modeling, measurement is done to
calculate the performances layer by layer. Fourth, based on the results of modeling and measuring,
sensitivity and trade-off analyses are conducted by modifying both the set of system attributes (and
sub-attributes) and their values. Fifth, the analysts need to bear the requirements in mind throughout
the EA analysis process. Sixth, based on the results of modeling, measuring and satisfying, the
performances of the different alternatives can be compared, and finally a decision is made.
According to the six activities of EA analysis, potential connotations of the term EA analysis are
illustrated in Table 3, each connotation differing from the other two. With the requirements in hand,
and by asking whether and to what extent these types of activities are involved, the connotations can
be portrayed.
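As a hedged illustration of the AHP step referred to above, the sketch below derives priority weights for three hypothetical EA quality attributes from a pairwise comparison matrix, using the common row-geometric-mean approximation of the principal eigenvector; the comparison values are invented for the example and do not come from Razavi et al. (2011).

```python
# AHP priority derivation: weights for three (hypothetical) EA quality
# attributes from a pairwise comparison matrix on Saaty's 1-9 scale,
# using the row-geometric-mean approximation of the principal eigenvector.
import math

# Rows/columns: [maintainability, performance, security]; entry [i][j]
# states how strongly attribute i is preferred over attribute j.
comparisons = [
    [1.0, 3.0, 0.5],
    [1 / 3, 1.0, 0.25],
    [2.0, 4.0, 1.0],
]

def ahp_weights(matrix):
    """Normalized geometric means of the rows of a pairwise comparison matrix."""
    means = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(means)
    return [m / total for m in means]

print([round(w, 3) for w in ahp_weights(comparisons)])
# -> roughly [0.320, 0.122, 0.558]: security dominates in this toy matrix
```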
Table 3: Possible connotations of EA analysis

Connotation 1: Documentation, communication, presentation, descriptions of the enterprise and the
enterprise architecture, etc. (literature examples: Buckl (2011); de Boer et al. (2005)).
Remarks: There are several meanings, e.g. using structured analysis and object-oriented analysis
methods to gain a better understanding of the enterprise. It may focus on the procedures to build
artifacts, and oftentimes modeling languages, modeling tools and databases are introduced.
Through analysis, artifacts are created and used to express the design thoughts.
Corresponding combination of activities: system thinking, modelling, comparing with requirements.

Connotation 2: Measurement, evaluation, assessment, validation and verification, etc. (literature
examples: Johnson et al. (2007); Lagerström et al. (2010)).
Remarks: The quality and quantity of the EA become the focus, and usually criteria trees, variables,
system properties, metrics and KPIs (Key Performance Indicators) are introduced in order to
measure, evaluate, assess, validate and verify the EA.
Corresponding combination of activities: system thinking, modelling, measuring, comparing with
requirements, and sometimes satisfying.

Connotation 3: Solutions formulation and decision support, etc. (literature examples: Razavi et al.
(2011); Zia et al. (2011)).
Remarks: The main ends are to formulate or introduce enterprise solutions and then analyse them,
including revision, optimization and comparison. Typically it also emphasizes assisting executives in
making decisions about the solutions. The ultimate solution can either be one of the options with
revision, or a combination of particular options with trade-offs.
Corresponding combination of activities: all six activities included.
4. Illustrations and some typical scenarios
In order to illustrate how the six types of activities can be applied in EA practice, this section first
introduces several classic scenarios of EA analysis that can be distilled from the EA lifecycle process.
From the EA lifecycle point of view, many scenarios are encountered frequently in EA practice.
Given the main work items in Table 1, the scenarios can be sorted as below:
S1: To make EA descriptions.
S2: To form EA prescriptions or solution alternatives.
S3: To make assessments while designing or implementing enterprise architecture.
S4: To make tests and measurements on how well the design performs.
S5: To make an investigation in order to improve the EA.
S6: To check whether the model is built right (i.e. verification).
S7: To check whether the model is the right one (i.e. validation).
S8: To make an investigation in order to choose one from many alternatives.
S9: To make a conclusion and evaluation of the EA.
With the main work items included in Table 1, it is important to recognize the distinctive capabilities
and applicability of the six types of activities in performing these tasks. We can reflect on how each
type of activity can be applied in each stage of the EA process (Table 4). Based on the scenarios
listed and on Table 4, we can construct a mapping (Table 5) showing how the six types of activities
can be applied. The mapping was made qualitatively, based on:
Epistemology and personal experience. In several EA projects, EA analysis had to be done with
specific approaches and concrete requirements.
Informal discussions with practitioners in the EA domain. It is interesting and helpful to
communicate with practitioners; some are employees of industrial companies, others
independent consultants.
The applicability of the scenarios in the EA process. The differences between the tasks in the
stages of the EA process enable practitioners to identify the differing applicability of the scenarios.
The applicability of the six types of activities in the scenarios and the EA process. The six types of
activities have different applicability and problem-solving capabilities in the scenarios and in the
stage tasks of the EA process.
Table 4: Compulsory or optional in the EA process

                              Prepare     Create      Check       Action
System thinking               Compulsory  Compulsory  Compulsory  Compulsory
Modeling                      N/A         Compulsory  Optional    Optional
Measuring                     N/A         Optional    Compulsory  Optional
Satisfying                    N/A         Optional    Optional    Optional
Comparing with requirements   Optional    Optional    Optional    Optional
Comparing alternatives        N/A         N/A         Optional    Optional
Table 5: Distribution of the types of activities across scenarios

                              S1   S2   S3   S4   S5   S6   S7   S8   S9
System thinking               I    I    I    I    I    I    I    I    I
Modeling                      I    I    I    III  III  II   II   IV   II
Measuring                     III  II   I    I    II   II   II   II   I
Satisfying                    IV   III  III  III  I    III  III  III  III
Comparing with requirements   IV   II   III  II   III  IV   I    IV   III
Comparing alternatives        V    V    III  III  III  V    IV   V    III
For the scenarios (S1-S9), the classification scheme is:
I: should be included, undoubtedly
II: normally included; including it is the more helpful and reasonable choice
III: sometimes included and sometimes not, depending on the actual context
IV: normally not included; omitting it is the more helpful and reasonable choice
V: should not be included, undoubtedly
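To show how the mapping in Table 5 could be consulted in practice, the following sketch encodes the table as data and looks up the activities a scenario should (class I) or normally would (class II) include. The encoding simply reproduces Table 5; the helper function is an illustration, not part of the original paper.

```python
# Table 5 encoded as data, with a helper returning the activities whose
# inclusion class for a given scenario falls within the requested classes.
SCENARIOS = [f"S{i}" for i in range(1, 10)]

APPLICABILITY = {
    "system thinking":             ["I"] * 9,
    "modeling":                    ["I", "I", "I", "III", "III", "II", "II", "IV", "II"],
    "measuring":                   ["III", "II", "I", "I", "II", "II", "II", "II", "I"],
    "satisfying":                  ["IV", "III", "III", "III", "I", "III", "III", "III", "III"],
    "comparing with requirements": ["IV", "II", "III", "II", "III", "IV", "I", "IV", "III"],
    "comparing alternatives":      ["V", "V", "III", "III", "III", "V", "IV", "V", "III"],
}

def recommended_activities(scenario, classes=("I", "II")):
    """Activities classed I or II (by default) for the given scenario."""
    column = SCENARIOS.index(scenario)
    return [name for name, row in APPLICABILITY.items() if row[column] in classes]

print(recommended_activities("S4"))
# -> ['system thinking', 'measuring', 'comparing with requirements']
```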
5. Conclusions
This paper presented six types of activities to characterize EA analysis. A particular EA analysis
approach embraces only a subset (one or several) of the six types of activities rather than all six.
More often, EA analysis approaches are confined to particular requirements and even to particular
organizations. As a result, in practice there is no heavyweight, engineered EA analysis approach that
can be applied across different situations, contexts and backgrounds. In each stage of the EA
process, EA analysis has a different focus, and the challenge of portraying these different EA
analyses calls for an in-depth understanding of the EA process. To avoid confusion around EA
analysis, we propose that a finite set of activity types can be abstracted from the EA process. These
activities are intuitive, nearly orthogonal, and reflect the nature of systems analysis. Debating
whether, and to what extent, these types of activities are involved can help to define and specify the
features of a concrete EA analysis. This paper tries to facilitate further research by rethinking the
term EA analysis. It is hoped that the paper will advance the practice of EA practitioners, especially
mappings from EA analysis tasks to analysis approaches and software tools. The illustrative part of
the paper remains somewhat limited; further research will involve empirical study with data collection
around the EA lifecycle process, e.g. identifying success factors in the EA process, observing EA
analysis as a systems engineering process, and illustrating how each of the six types of activities
can support EA analysis.
References
Buckl, S. (2011) "A Meta-language for Enterprise Architecture Analysis", in Enterprise, Business-Process and
Information Systems Modeling: 12th International Conference, BPMDS 2011, and 16th International
Conference, EMMSAD 2011, held at CAiSE 2011, London, UK, June 20-21, 2011. Proceedings, vol. 81, pp
511-25.
Checkland, P. (1981) Systems thinking, systems practice, J. Wiley.
de Boer, F.S., Bonsangue, M.M., Jacob, J., Stam, A. & van der Torre, L. (2005) "Enterprise Architecture Analysis
with XML", in Proceedings of the 38th Annual Hawaii International Conference on System Sciences, 2005.
HICSS '05.
DoD Architecture Framework Working Group. (2003) DoD Architecture Framework Version 1.0.
DoD Architecture Framework Working Group. (2009) DoD Architecture Framework Version 2.0.
Johnson, P., Lagerström, R., Närman, P. & Simonsson, M. (2007) "Enterprise architecture analysis with extended
influence diagrams", Information Systems Frontiers, vol. 9, no. 2, pp 163-80.
Khoury, G.R. & Simoff, S.J. (2004) "Enterprise architecture modelling using elastic metaphors", paper presented
to Proceedings of the first Asian-Pacific conference on Conceptual modelling - Volume 31, Dunedin, New
Zealand.
Lagerström, R., Johnson, P. & Höök, D. (2010) "Architecture analysis of enterprise systems modifiability - Models,
analysis, and validation", Journal of Systems and Software, vol. 83, no. 8, pp 1387-403.
Maier, M.W., Emery, D. & Hilliard, R. (2004) "ANSI/IEEE 1471 and systems engineering", Systems Engineering,
vol. 7, no. 3, pp 257-70.
Mayo, D. & Tiemann, M. (2005) "EA: It's not just for IT anymore", Journal of Enterprise Architecture, vol. 1, no. 1,
pp 36-44.
McFadden, T.G. (2001) "Understanding the Internet: Model, metaphor, and analogy", Library Trends, vol. 50, no.
1, pp 87.
Meyers, M. (2011) "Rethinking our Enterprise Architecture Principles", Journal of Enterprise Architecture, vol. 7,
no. 3, pp 41-8.
Moen, R. & Norman, C. (2006) "Evolution of the PDCA Cycle. Society", pp 1-11.
OpenGroup. (2003) TOGAF 8.1.
OpenGroup. (2010) TOGAF 9.0.
Razavi, M., Aliee, F.S. & Badie, K. (2011) "An AHP-based approach toward enterprise architecture analysis
based on enterprise architecture quality attributes", Knowledge and Information Systems, vol. 28, no. 2, pp
449-72.
Richmond, B. (1987) "System Dynamics/Systems Thinking: Let's Just Get On With It", paper presented to 1994
International Systems Dynamics Conference, Sterling, Scotland.
Sessions, R. (2007) "A Comparison of the Top Four Enterprise-Architecture Methodologies", viewed December 29,
2011, <http://msdn.microsoft.com/en-us/library/bb466232.aspx>.
The Chief Information Officers Council. (1999) Federal Enterprise Architecture Framework Version 1.1.
US Department of the Treasury Chief Information Officer Council. (2000) Treasury Enterprise Architecture
Framework (Version 1).
Webnova Inc. (2009) Executable Architecture: Design Must Be Incremental, viewed April 15 2012,
http://www.webnova.ca/Home/ViewPost/5505477172482062627.
Zachman, J.A. (1987) "A framework for information systems architecture.", IBM Systems Journal, vol. 26, no. 3.
Zachman, J.A. (2001) The Zachman Framework.
Zachman, J.A. (2004) The Zachman Framework 2.
Zachman, J.A. (2011) The Zachman Framework for Enterprise Architecture Version 3.
Zia, M.J., Azam, F. & Allauddin, M. (2011) "A Survey of Enterprise Architecture Analysis Using Multi Criteria
Decision Making Models (MCDM)", in R Chen (ed.), Intelligent Computing and Information Science, Pt Ii,
Springer-Verlag Berlin, Berlin, vol. 135, pp 631-7.

Moving Towards a Sensor-Based Patient Monitoring
System: Evaluating its Impact on Data and Information
Quality
Atieh Zarabzadeh, John O'Donoghue, Frederic Adam, Mervyn O'Connell,
Siobhán O'Connor, Simon Woodworth, Joe Gallagher and Tom O'Kane
Health Information System Research Centre, University College Cork
zarabzadeh@ucc.ie
zarabzaa@tcd.ie

Abstract: For future healthcare systems to take advantage of sensor-based patient monitoring devices, careful
consideration must be given to the impact they can have on existing workflow processes. In association with
this, data and information quality dimensions need to be incorporated to help ensure a successful outcome.
This paper explores the utilisation of a paper-based patient assessment scorecard and the transition to an
electronic version, with a view to the future adoption of sensor-based devices. To evaluate the transition from
paper-based to sensor-based solutions, the Modified Early Warning Scorecard (MEWS) is the primary exemplar
within this paper. MEWS has a defined set of protocols and guidelines that assist healthcare providers in
classifying a patient's health status and detecting patient deterioration. Paper-based MEWS are already
deployed within Medical Assessment Units (MAU); thus, the MAU is an ideal test bed for the evaluation of
sensor-based solutions. The Socio-Technical Information Systems Design (STISD) science research framework
is the methodology employed to address a practical problem (i.e. frequent capturing of patient vital signs)
raised by the healthcare providers in relation to MEWS. In accordance with the STISD framework, a review of
the extant theories, knowledge and data yields an Event-driven Process Chain (EPC) diagram for the
paper-based MEWS. Based on these findings, an electronic version of the paper-based MEWS (eMEWS) is
proposed and tested. Further refinements are required to explore the full capabilities of sensor-based solutions
and the role they can play within MEWS. A conceptual model is presented which examines the relationship
between the content and system quality measures and their associated independent variables. Alpha and
gamma tests are conducted to evaluate the eMEWS against the desired outcomes and provide the foundation
for the development of the sensor-based eMEWS solution. Results show that the eMEWS prototype addresses
key data quality dimensions, with sensor-based eMEWS indicating potential enhancements to the timeliness
and frequency of data capture.

Keywords: sensor-based electronic modified early warning scorecard, data and information quality, socio-
technical information systems design methodology
1. Introduction
Electronic healthcare systems have been found to improve the patient care delivery process by reducing
medication errors (Bates 2000; Kaushal et al. 2001) and, as a consequence, improving the quality of
care (Chaudhry et al. 2006; Koppar & Sridhar 2009). However, little research has been conducted on
improving the process of assessing patient health status in the Medical Assessment Unit (MAU), where
patients with various health statuses are classified using Modified Early Warning Scorecard (MEWS)
protocols. This paper therefore focuses on improving patient outcomes in a MAU ward by exploring the
impact of data and information quality dimensions. To serve this purpose, data quality factors that
influence patient outcomes are identified and discussed. The Socio-Technical Information Systems
Design (STISD) science methodology (Carlsson et al. 2011) is employed to provide a design theory
concerning improvements to paper-based MEWS. An electronic Modified Early Warning Scorecard
(eMEWS) is proposed, which is then extended to a sensor-based MEWS to meet the desired outcomes.
Abnormalities in vital sign readings have been found to indicate the risk of adverse clinical events up to
several hours before the unexpected event (Kause et al. 2004; Hillman et al. 2002; Jacques et al. 2006;
Gustafson et al. 1999). These adverse events may lead to avoidable and unexpected death of the
patient or poor patient outcomes (McGloin et al. 1999). Studies have revealed that improved detection
of such abnormalities, followed by timely action, can substantially reduce the risk of rapid deterioration
in a patient's condition whilst in hospital (Buist et al. 2004; Sax & Charlson 1987). Paper-based MEWS
are used to classify patients with regard to the likelihood of deterioration or an adverse event (Subbe et
al. 2001; Gardner-Thorpe et al. 2006) and score them accordingly. MEWS is a set of guidelines and
protocols for interpreting vital sign readings: each vital sign parameter is categorised into bands, each
band is associated with a score, and the sum of the scores for a set of vital sign readings is the patient's
MEWS score at that point in time (O'Kane et al. 2010). High MEWS scores indicate a likelihood of
patient deterioration (G. B. Smith et al. 2006). Thus MEWS serves as a paper-based Clinical Decision
Support System (CDSS), presenting knowledge and information to healthcare providers in accordance
with the MEWS protocols and guidelines in a timely manner (Berner 2009). Section 2 of this paper
describes the Socio-Technical Information Systems Design (STISD) methodology employed in the
presented research. The paper explores three approaches to MEWS: paper-based MEWS, eMEWS,
and sensor-based eMEWS. Section 3 describes the identification of the problem situation and the
desired outcomes in relation to paper-based MEWS; healthcare providers are interviewed and visits to
a MAU ward are conducted to establish the grounds for proposing a design theory, namely the
proposed eMEWS. The extant knowledge and data are presented in Section 4. A design theory is then
proposed and described comprehensively in Section 5. The design theory is tested by alpha and
gamma testing methods, with results presented in Section 6; alpha testing is carried out by the original
developers of the system, while gamma testing is conducted by the users of the system. A refinement
to the proposed design incorporating the sensor-based eMEWS is then described in Section 7.
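To make the band-and-sum scoring logic described above concrete, here is a minimal sketch of a MEWS-style calculation. The band boundaries are illustrative placeholders only; they do not reproduce the protocol of any particular hospital or of the cited MEWS literature.

```python
# MEWS-style scoring as described above: each vital sign falls into a
# band carrying a score, and the per-parameter scores are summed.
# Band boundaries are ILLUSTRATIVE ONLY, not a clinical protocol.

# Per parameter: (lower bound inclusive, upper bound exclusive, score).
BANDS = {
    "respiratory_rate": [(0, 9, 2), (9, 15, 0), (15, 21, 1), (21, 30, 2), (30, 999, 3)],
    "heart_rate": [(0, 40, 2), (40, 51, 1), (51, 101, 0), (101, 111, 1),
                   (111, 130, 2), (130, 999, 3)],
    "temperature": [(0.0, 35.0, 2), (35.0, 38.5, 0), (38.5, 99.0, 2)],
}

def mews_score(vitals):
    """Sum the band scores for one set of vital-sign readings."""
    total = 0
    for parameter, value in vitals.items():
        for low, high, score in BANDS[parameter]:
            if low <= value < high:
                total += score
                break
    return total

reading = {"respiratory_rate": 22, "heart_rate": 105, "temperature": 37.1}
print(mews_score(reading))  # 2 (resp) + 1 (heart) + 0 (temp) = 3
```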
2. Methodology
The Socio-Technical Information Systems Design (STISD) science research framework developed by
Carlsson et al. (2011) is based on Information Systems (IS) design science (Hevner et al. 2004;
Peffers et al. 2006) and management design science (Van Aken 2005). Table 1 presents the four
main activities of the STISD framework and how this paper relates to each of them, noting the
numbered sequence of each activity. The problem targeted by this paper is experienced by
healthcare providers in their practice. As demonstrated in Table 1, the STISD framework is well
suited to the objectives of this research: while requiring the design theory to be established on solid
grounds, it allows multiple iterative refinements to a design theory and supports a variety of testing
methods.
Table 1: The proposed research mapped to the STISD activities (Carlsson et al. 2011)

Identify problem situations and desired outcomes: 1) The problem identified in this research has
practical implications for practitioners in the area. The desired outcomes identified match the problem.
Review extant theories, knowledge and data: 2) To gather existing research and knowledge,
healthcare providers are interviewed and site visits are conducted. The focus of these activities is to
achieve the desired outcomes.
Propose/refine design theory and knowledge: 3) A design theory is proposed and described
comprehensively to contextualise the problem. 5) The proposed design theory is refined and
described to the practitioners in the area.
Test design theory and knowledge: 4) The proposed design theory is evaluated using alpha and
gamma approaches. The results of the evaluation inform the refined design theory.
The process followed in the presented research is adopted from the STISD framework and is
presented in Figure 1. The problem situations and desired outcomes are identified; existing theories,
knowledge and data are reviewed to establish the grounds for the design; on this basis a design
theory, i.e. the eMEWS prototype, is proposed and tested; the refined design theory and knowledge,
i.e. the sensor-based eMEWS, is then developed and will be subjected to tests in future iterations.
Further refinements may also be deemed necessary.
[Figure 1 shows the applied STISD cycle: Identify Problem Situations and Desired Outcomes
(Section 3) -> Review Extant Theories, Knowledge and Data (Section 4) -> Propose Design Theory
and Knowledge (Section 5) -> Test Design Theory and Knowledge (Section 6) -> Refine Design
Theory and Knowledge (Section 7) -> Test Design Theory and Knowledge (Future Work)]

Figure 1: STISD research framework applied to MEWS, adapted from Carlsson et al. (2011)
3. Identify problem situation and desired outcomes
The aim of this research is to facilitate improvement in patient outcomes in the MAU ward.
Improvements in patient outcomes require improvements in the healthcare providers' decision-making
process. At a high level, this process involves measuring a patient's vital signs, calculating their
MEWS score, interpreting both these scores and other signs not amenable to capture in a scorecard,
making a decision, and taking appropriate action(s). Building on O'Connor et al. (2011), two measures
are identified that underpin the proposed model, as presented in Figure 2:
Content quality: involves the information and data quality, exclusive of the technology used.
System quality: determined by the technologies used.

[Figure 2 depicts the conceptual model: improved patient outcomes depend on system quality
independent variables (scope and scale of measurements; frequency of reporting measurements;
non-vital sign information) and content quality independent variables (accessibility, completeness,
concise representation, consistent representation, ease of manipulation, free-of-error, objectivity,
relevancy, timeliness, understandability).]

Figure 2: Independent variables contributing to improved patient outcomes
The content quality measure that impacts patient outcomes includes ten data and information
quality dimensions relevant to MEWS, as shown in Table 2 (Pipino et al. 2002).

The system quality measure that impacts patient outcomes includes three independent variables:
Scope and scale of measurements: the extent to which vital sign data, and the scales used for
taking measurements, reflect the patient's health status.
Frequency of reporting measurements: infrequent data reporting and MEWS score calculation
may lead to a lack of sufficient data; an appropriate frequency of data capture may assist in
preventative, timely actions in the MAU ward.
Non-vital sign information: the subjective data collected by healthcare providers at the point of
care delivery, through various methods including observation and conversation with the patient,
may contribute to the decision-making process as a valid determinant. Crucially, these signs are
not amenable to coding in a scorecard.
In this paper, the focus is on the concise and consistent representation, free-of-error, timeliness and
understandability dimensions of the content measure. Therefore, the desired outcome of this
research is a method that meets these data and information quality parameters. The authors propose
that the model underpinning this research requires a computer-based design theory that eliminates
the manual process of taking measurements and calculating the MEWS score.
Table 2: Data and information quality parameters (Pipino et al. 2002) and their relevance to MEWS

Accessibility: Availability and accessibility of patient data at any given time is a critical requirement
in MEWS.
Completeness: The vital sign dataset needs to be complete in order to obtain the real MEWS score.
In addition to vital signs, other subjective data gathered via observation or conversation should be
considered.
Concise representation: Concise and brief representation of data is very important to reduce the
time spent reading data and increase the time spent on care delivery.
Consistent representation: With the large quantity of data being presented in the MEWS,
consistency in the representation of data is critical.
Ease of manipulation: Vital sign data gathered should be easily applied to various activities as part
of the decision making.
Free-of-error: Decisions made are extremely sensitive; thus vital sign data must be correct and
reliable.
Objectivity: The explicit (i.e. vital sign) and subjective data must be impartial.
Relevancy: Vital signs being considered for MEWS are already proven relevant to the decision
making process. The subjective data needs to be incorporated into the decision making process.
Timeliness: Real-time vital sign data is of the utmost importance in the context of MEWS, where
patients are under assessment and their status of health may change rapidly.
Understandability: Comprehension of vital sign data is critical to MEWS.
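As a sketch of how two of these dimensions could be checked automatically in an electronic system, the code below tests a hypothetical vital-sign record for completeness (all required fields present) and timeliness (reading no older than a staleness threshold). The field names and the 15-minute threshold are assumptions, not values from the paper.

```python
# Automated checks for two Table 2 dimensions on one vital-sign record:
# completeness (required fields present) and timeliness (not stale).
# Field names and the 15-minute threshold are hypothetical.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"respiratory_rate", "heart_rate", "temperature", "taken_at"}
MAX_AGE = timedelta(minutes=15)

def check_record(record):
    """Map each checked quality dimension to a pass/fail flag."""
    complete = REQUIRED_FIELDS <= record.keys()
    timely = complete and (
        datetime.now(timezone.utc) - record["taken_at"] <= MAX_AGE)
    return {"completeness": complete, "timeliness": timely}

record = {
    "respiratory_rate": 18,
    "heart_rate": 84,
    "temperature": 36.9,
    "taken_at": datetime.now(timezone.utc) - timedelta(minutes=5),
}
print(check_record(record))  # {'completeness': True, 'timeliness': True}
```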
To test the proposed system against the desired outcomes, the following testing strategies are
applied:
Concise and consistent representation is achieved with an electronic system that uses the same
format to display data.
Free-of-error is tested by an alpha method, where the developers of the design assess whether
the proposed design allows sufficient automation in capturing vital sign data, calculating the
MEWS score and selecting an appropriate action protocol.
Timeliness is obtained as a product of full automation (i.e. sensors) of the data capture process,
with little or no human intervention.
Understandability of the design theory is tested by a gamma method, where healthcare providers
assess their comprehension of the data presented in the concise and consistent fashion.
Previous attempts to adopt new systems in the healthcare sector have faced adoption challenges
(Lluch 2011; Hikmet et al. 2008). Thus, the adoption of the design theory needs the thorough
assessment achieved by gamma testing.
The presented paper evaluates the design theory by:

Alpha testing:
The design theory developers assess the conciseness and consistency of the representation of
data.
The original developers ensure the system eliminates or reduces the sources of error.
Testing the timeliness parameter may entail testing full automation, which can be carried out by
the original developers.
Gamma testing:
Understandability can be tested through a survey of healthcare providers.
Given the importance of the adoption of a new system or technology in the healthcare setting, a
test is carried out in which healthcare providers comment on the adoption challenges they foresee.
4. Review extant theories, knowledge and data
The design theory has to be grounded in previous theories, knowledge or data. In the context of the
presented paper, the extant theory and knowledge relate to the existing process diagram for the
paper-based MEWS. The Event-driven Process Chain (EPC) diagram for the paper-based MEWS,
presented in Figure 3, has the following steps: 1) the patient arrives at the ward, 2) vital sign
measurements are taken, 3) the MEWS score is calculated, 4) a decision is made, and 5) an
associated action is taken. Figure 3 is constructed from observations during visits to the MAU ward
and interviews with healthcare providers in this ward.
Figure 3: MAU EPC diagram for MEWS
An overview of the EPC diagram: the patient medical assessment process (the focus of this paper),
including MEWS¹, commences when a patient arrives at the MAU and terminates when the patient
leaves the ward. A patient may be admitted from the hospital reception, Accident and Emergency
(A&E), a General Practitioner (GP) or an ambulance. Prior to MEWS score calculation, a series of
background questions are asked of the patient. Vital signs may also be captured. Vital signs are
manually measured and transcribed on the patient chart. The MEWS score is then manually
calculated and an action protocol is identified. Based on the proposed action, a decision is made.
Nurses may transcribe additional observations and data they gather by talking to the patient. These
notes may be consulted
at the time of decision making or may be referred to other healthcare providers. An action is taken if
needed; otherwise the need for further vital sign measurement is assessed. The process is completed
once the healthcare provider decides that no more vital sign measurements are needed. The patient
may then be discharged from, or admitted to, the hospital. The MEWS protocols and guidelines play a
crucial role in classifying a patient's health status. There are two sets of procedures concerning
MEWS in the patient assessment process, as highlighted in Figure 3: 1) recording vital signs, and 2)
evaluating the frequency of measurements.

¹ The process that the patient undergoes in the MAU ward is one stage of a multi-stage care delivery
process. Other stages of the care delivery process are not the subject of this paper.
5. Propose design theory and knowledge
To achieve the desired outcomes, an electronic version of the paper-based MEWS (eMEWS) is
designed and developed. The proposed design aims to address the independent variables described
in the previous section. Thus, an eMEWS prototype is developed based on the design theory and the
paper-based MEWS. The eMEWS prototype essentially follows the paper-based MEWS forms to
maintain conciseness and consistency; however, additional features such as colour-coded trends are
integrated (Zarabzadeh et al. 2012) to enhance the user experience. The eMEWS prototype is
presented to the healthcare providers who work in the MAU, and their comments on how it may
impact their daily workload are explored.

The workflow presented in Figure 3 was revised to take on board the features of eMEWS. The circled
activities labelled 'Recording vital signs' are automated to reduce transcription and calculation errors,
as shown in Figure 4. All other activities remain the same. With the proposed design theory, the vital
signs are taken by the healthcare provider and entered on the tablet PC solution. The MEWS score is
auto-calculated and presented to the user in a consistent fashion.

Figure 4: eMEWS refinement to the workflow (updated 'Recording vital signs' from Figure 3)
6. Test design theory and knowledge
According to the STISD framework, the proposed design is subject to testing. The testing
methodologies used are alpha and gamma, as described below.
6.1 Alpha testing
Testing the conciseness and consistency of the data representation: the developers of the
eMEWS prototype propose that the system is consistent in recording vital signs and in
representing the data collected. The user interfaces are concise and clear in representing data.
This has been validated through a number of internal static evaluations.
Reducing the sources of error has been achieved to a certain extent. In the eMEWS prototype,
where the users type in the measurements, the system calculates the MEWS score and identifies
and presents the appropriate action protocol. The design is still prone to typing errors, which can
be addressed in the refined design. This finding concurs with Mohammed et al. (2009): the
accuracy of calculating scores improved when the scorecard was run on hand-held computers.
The developers of the design theory believe that the timeliness parameter is not fully addressed in
the proposed theory. However, eMEWS increases the speed of data transcription, which concurs
with Prytherch et al. (2006).
Thus, while concise and consistent data representation is facilitated by the proposed eMEWS design
theory, further refinement is needed to eliminate the sources of error and to allow a real-time,
up-to-date data capture strategy.
6.2 Gamma testing
A survey was conducted and completed by 71 participants, of whom 11 were medical doctors, 56
were nurses, and the remaining four did not specify their expertise. All participants are based in the
same hospital. Participants' nursing experience and their prior knowledge of MEWS were recorded.
Approximately 65% of the respondents had over 10 years of nursing experience. Of the 66 who
responded to the question on their prior experience with paper-based MEWS, only 24 had more than
24 months of MEWS experience. Thus, the majority of the participants are senior nurses with prior
exposure to paper-based MEWS.

Understandability of the data: 62 out of 67 respondents found the data in the eMEWS easy or very
easy to understand; five out of 67 were neutral, as presented in Table 3 (Zarabzadeh et al. 2012).
Table 3: Participants' responses to understandability of data represented by eMEWS

Options                                    Responses   % of responses
Very easy to comprehend                    18          27%
Easy to comprehend                         44          66%
Neither easy nor difficult to comprehend   5           7%
Difficult to comprehend                    0           0%
Very difficult to comprehend               0           0%
To evaluate the adoption challenges of the design theory, after a demonstration of the application,
survey participants were asked if they foresaw any issues with the eMEWS fitting in with existing
ward practices. A number of respondents did not foresee any challenges in the adoption of eMEWS;
for instance, one commented that '...it should fit in with existing ward practices'. Others made the
following comments:
Double documentation: the doctors will need paper records and print-outs, which will impose time
constraints.
Lack of resources: providing a computer monitor for each bed can be challenging both financially
and space-wise. Technical support is also needed.
Staff training: different staff may initially find working with the new system difficult, so prior training
is required.
Building trust in technology, which may break down: regular backup of the database may assist
in establishing confidence in the technology among the staff. One comment envisaged that
maintenance of the machines could be a problem.
Security issues: these may include technical security concerns and doctors' access to all patients'
data on the ward.
In addition to the issues the participants raised, comments were made on the benefits of eMEWS.
For example, one comment confirmed that automated MEWS score calculation 'should be beneficial
for nursing staff not to have to calculate [the MEWS score]'.
7. Refine design theory and knowledge
A refinement to the proposed design theory is anticipated to address the inefficiencies revealed by
the alpha testing in the previous section, concerning 1) sources of error, and 2) real-time, up-to-date
vital sign data capture.

The refinement can be made to the eMEWS prototype by adopting wireless patient monitoring
sensors. A sensor-based eMEWS may improve the speed of the process by facilitating constant
monitoring of the vital signs, thereby improving the proposed design theory to address the timeliness
of data collection and presentation. Furthermore, this system eliminates manual vital sign recording.

With the proposed update to the 'Recording vital signs' component of the workflow (the refinement to
Figure 3 leading to Figure 4) and the introduction of the sensor-based eMEWS, all four steps are
reduced, as shown in Figure 5. This sensor-based eMEWS system automatically captures all patient
vital signs and stores the recorded data in a database. In turn, the sensor-based eMEWS
automatically calculates and displays the MEWS score with the associated action protocol to the
user. By providing a continuous and customized set of measurements, the sensor-based eMEWS
can remove a number of the data capture steps in the previous design theory. Furthermore, the
number of decision points taken by a healthcare provider when deciding on the frequency of data
capture is reduced. In relation to the paper-based MEWS workflow diagram (Figure 3), the
sensor-based eMEWS eliminates the 'Evaluating the frequency of measurements' component.

Figure 5: Sensor-based eMEWS improvement to the workflow (updated 'Recording vital signs' from
Figure 3)
The refined design (i.e. the sensor-based eMEWS) is subject to further testing in relation to the
independent variables presented in Figure 2. However, at the theory level it appears promising in
terms of delivering the desired objectives by increasing the frequency of measurements to continuous
or customized intervals. At this point, the refinement and testing process is ongoing.
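A minimal sketch of the capture loop such a sensor-based eMEWS implies is given below: readings arrive as a stream, each is scored and persisted without manual transcription, and an alert is raised when the score crosses a threshold. The stream interface, the escalation threshold of 4 and the alert text are assumptions, and score_fn stands for any scoring routine such as the earlier MEWS sketch.

```python
# Sketch of the sensor-based eMEWS loop: readings stream in, each is
# scored and stored automatically, and high scores trigger the action
# protocol. Threshold, message and the reading stream are hypothetical.
ESCALATION_THRESHOLD = 4  # illustrative trigger value

def monitor(readings, score_fn, store):
    """Score and persist each sensor reading; escalate high scores."""
    for vitals in readings:
        score = score_fn(vitals)
        store.append({"vitals": vitals, "score": score})
        if score >= ESCALATION_THRESHOLD:
            print(f"ALERT: MEWS={score}, surface action protocol to clinician")

# Example with a stub scorer and two fake readings.
stream = [{"heart_rate": 88}, {"heart_rate": 135}]
log = []
monitor(stream, score_fn=lambda v: 0 if v["heart_rate"] < 111 else 5, store=log)
print(len(log))  # 2 readings persisted; the second one escalated
```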
8. Future work
Using the STISD framework, a sensor-based eMEWS is designed to allow for concise and consistent
data representation and error-free, timely and comprehensible vital sign data capture. However,
further refinements to the design theory are deemed necessary. The refinements can be carried out
in multiple areas, including facilitating an appropriate frequency of reporting vital sign measurements,
clarifying the scope and scale of measurements and, more importantly from a decision support
viewpoint, supporting the capture and interpretation of non-vital sign information.
Acknowledgments
This publication has emanated from research conducted with the financial support of Science
Foundation Ireland under Grant Number SFI 11/RFP.1/CMS/3338. The authors wish to express their
gratitude to all the staff at St. Luke's General Hospital, Kilkenny, for their kind support in facilitating
the workshops and for their ongoing participation in the survey and interviews.
References

Bates, D.W., 2000. Using information technology to reduce rates of medication errors in hospitals. BMJ, 320.
Berner, E.S., 2009. Clinical Decision Support Systems : State of the Art, AHRQ Publication No. 09-0069-EF.
Rockville, Maryland: Agency for Healthcare Research and Quality.
Buist, M. et al., 2004. Association between clinically abnormal observations and subsequent in-hospital mortality:
a prospective study. Resuscitation, 62(2), pp.137-141. Available at:
http://www.sciencedirect.com/science/article/pii/S0300957204001236.
Carlsson, S. et al., 2011. Socio-technical IS design science research: developing design theory for IS integration
management. Information Systems and E-Business Management, 9(1), pp.109-131. Available at:
http://dx.doi.org/10.1007/s10257-010-0140-6.
Chaudhry, B. et al., 2006. Systematic Review: Impact of Health Information Technology on Quality, Efficiency,
and Costs of Medical Care. Annals of Internal Medicine, 144(10), pp.742-752. Available at:
http://www.annals.org/content/144/10/742.abstract.
Gardner-Thorpe, J. et al., 2006. The value of Modified Early Warning Score (MEWS) in surgical in-patients: a
prospective observational study. Annals of The Royal College of Surgeons of England, 88(6), pp.571-575.
Available at: http://www.ingentaconnect.com/content/rcse/arcs/2006/00000088/00000006/art00013.
Gustafson, D.H. et al., 1999. Impact of a patient-centered, computer-based health information/support system.
American journal of preventive medicine, 16(1), pp.1-9. Available at:
http://linkinghub.elsevier.com/retrieve/pii/S0749379798001081?showall=true.
Hevner, A.R. et al., 2004. Design science in information systems research. MIS Q., 28(1), pp.75-105. Available
at: http://dl.acm.org/citation.cfm?id=2017212.2017217.
Hikmet, N. et al., 2008. The role of organizational factors in the adoption of healthcare information technology in
Florida hospitals. Health Care Management Science, 11(1), pp.1-9. Available at:
http://dx.doi.org/10.1007/s10729-007-9036-5.
Hillman, K. et al., 2002. Duration of life-threatening antecedents prior to intensive care admission. Intensive Care
Medicine, 28(11), pp.1629-1634. Available at: http://dx.doi.org/10.1007/s00134-002-1496-y .
Jacques, T. et al., 2006. Signs of critical conditions and emergency responses (SOCCER): A model for predicting
adverse events in the inpatient setting. Resuscitation, 69(2), pp.175-183. Available at:
http://www.sciencedirect.com/science/article/pii/S0300957205003813.
Kause, J. et al., 2004. A comparison of Antecedents to Cardiac Arrests, Deaths and EMergency Intensive care
Admissions in Australia and New Zealand, and the United Kingdom - the ACADEMIA study. Resuscitation,
62(3), pp.275-282. Available at: http://www.sciencedirect.com/science/article/pii/S0300957204002473.
Kaushal, R., Barker, K.N. & Bates, D.W., 2001. How Can Information Technology Improve Patient Safety and
Reduce Medication Errors in Children's Health Care? Archives of Pediatrics & Adolescent Medicine, 155(9),
pp.1002-1007. Available at: http://archpedi.ama-assn.org/cgi/content/abstract/155/9/1002.
Koppar, A.R. & Sridhar, V., 2009. A Workflow Solution for Electronic Health Records to Improve Healthcare
Delivery Efficiency in Rural India. In eHealth, Telemedicine, and Social Medicine, 2009. eTELEMED '09.
International Conference on. pp. 227-232.
Lluch, M., 2011. Healthcare professionals' organisational barriers to health information technologies - A literature
review. International Journal of Medical Informatics, 80(12), pp.849-62. Available at:
http://www.sciencedirect.com/science/article/pii/S1386505611001961.
McGloin, H., Adam, S.K. & Singer, M., 1999. Unexpected deaths and referrals to intensive care of patients on
general wards. Are some cases potentially avoidable? Journal of the Royal College of Physicians of
London, 33(3), pp.255-259. Available at: http://cat.inist.fr/?aModele=afficheN&cpsidt=1551867 [Accessed
October 31, 2011].
Mohammed, M., Hayton, R. & Clements, G., 2009. Improving accuracy and efficiency of early warning scores in
acute care. British Journal of Nursing, 18(1), pp.18-24. Available at:
http://elib.tcd.ie/login?url=http://search.proquest.com/docview/764373854?accountid=14404.
O'Connor, Y., O'Donoghue, J. & O'Reilly, P., 2011. Understanding Mobile Technology Post-Adoption Behaviour:
Impact upon Knowledge Creation and Individual Performance. In Mobile Business (ICMB), 2011 Tenth
International Conference on. pp. 275-282.
O'Kane, T. et al., 2010. MEWS to e-MEWS: From a Paper-Based to an Electronic Clinical Decision Support
System. In 4th European Conference on Information Management and Evaluation. Lisbon, Portugal.
Peffers, K. et al., 2006. The Design Science Research Process: A Model for Producing and Presenting
Information Systems Research. In DESRIST. Claremont, CA, pp. 83-106.
Pipino, L.L., Lee, Y.W. & Wang, R.Y., 2002. Data quality assessment. Commun. ACM, 45(4), pp.211-218.
Available at: http://doi.acm.org/10.1145/505248.506010.
Prytherch, D.R. et al., 2006. Calculating early warning scores - A classroom comparison of pen and paper and
hand-held computer methods. Resuscitation, 70(2), pp.173-178. Available at:
http://www.sciencedirect.com/science/article/pii/S0300957205005526.
Sax, F.L. & Charlson, M.E., 1987. Medical patients at high risk for catastrophic deterioration. Critical care
medicine, 15(5), pp.510-515. Available at: http://ukpmc.ac.uk/abstract/MED/3568715.
Smith, G.B. et al., 2006. Hospital-wide physiological surveillance - A new approach to the early identification and
management of the sick patient. Resuscitation, 71(1), pp.19-28. Available at:
http://www.sciencedirect.com/science/article/pii/S0300957206001286.
Subbe, C.P. et al., 2001. Validation of a modified Early Warning Score in medical admissions. QJM, 94(10),
pp.521-526. Available at: http://qjmed.oxfordjournals.org/content/94/10/521.abstract .
Van Aken, J.E., 2005. Management Research as a Design Science: Articulating the Research Products of Mode
2 Knowledge Production in Management. British Journal of Management, 16(1), pp.19-36. Available at:
http://dx.doi.org/10.1111/j.1467-8551.2005.00437.x.
Zarabzadeh, A. et al., 2012. Features of Electronic Early Warning Systems which Impact Clinical Decision
Making. In The 25th IEEE International Symposium on Computer-Based Medical Systems (CBMS 2012).
Rome, Italy.

Using the REA Approach to Modeling of IT Process
Evaluation
Ryszard Zygala
Wroclaw University of Economics, Wroclaw, Poland
ryszard.zygala@ue.wroc.pl

Abstract: For many businesses, Information Technology (IT) solutions play a strategic role in gaining and
maintaining competitive advantage. For several decades, business information system infrastructures have
been continuously developed and have hence become increasingly complex. Well-organized IT processes have
become more crucial than ever before. IT executives (CIOs) have to make decisions based on high quality
information concerning how IT processes are planned, managed and improved. In order to successfully
evaluate and manage IT processes, CIOs are supported by dedicated software and hardware solutions, best
practices and standards. IT process evaluation is much more effective when the source data comes from
solutions tailored to the given information needs. There are many opportunities to develop an information
system architecture for IT process evaluation using various software tools. Today, we face the important
question of whether or not it is possible to create software strictly dedicated to the IT management domain
which would also be an integral part of the ERP architecture and more suitable for SMEs. An attempt to answer
this question is the main goal of this paper. We propose to use the Resource-Event-Agent (REA) approach to
model IT process evaluation. This is an important assumption because REA lets us see the IT management
domain both as a set of mutually connected business activities and as a part of the interests of accounting
records. Therefore, REA modeling makes it possible to describe the IT realm in a way that satisfies the
information needs of both accountants and non-accountants.

Keywords: IT process evaluation, REA modeling, ITSM system
1. Introduction
Depending on business size and type, it is estimated that IT-based expenditure may exceed
50 percent of total company capital investment (Posthumus, von Solms 2005). The current
economic recession is conducive to the improvement of business processes and workforce
effectiveness and to cutting IT costs (Gruman 2010). Along with this level of expansion, IT managers
face great challenges in meeting the business expectation of providing high quality information that
generates adequate value and ensures the effectiveness of all business processes. This state of
expectations has led to a spate of discussion concerning the return on IT investment and how to
control IT activities. Generally, CIOs already perceive IT management as an important technological
problem (Gruman 2010); IT process evaluation and control were usually performed manually in past
decades, but today IT management tasks are often automated (Dubie 2008). Usually, each of the
main business functions (i.e. production, sales, accounting etc.) is implemented as a part of an
integrated software package. Today, information technology management is undoubtedly very
important for numerous businesses, but the software supporting this area is functionally incomplete,
heterogeneous and often isolated from the main enterprise system architecture. Some leading ERP
and business intelligence providers do offer software tools for IT management and evaluation
support, but these have some disadvantages, i.e.:
There are multi-module software packages which have to be integrated using an additional
middleware layer (e.g. Oracle),
Leading ERP solutions are often oversized for small and medium enterprises (SMEs),
There is business intelligence (BI) software specialized in IT process evaluation and management
(e.g. from SAS), but generally BI is not useful without a suitable operational database,
Existing solutions are assembled from various components of different origin (e.g. Oracle).
Today, we face the important question of whether or not it is possible to create software strictly
dedicated to the IT management domain which would also be an integral part of the ERP architecture
and more suitable for SMEs. An attempt to answer this question is the main goal of this paper. The
main scope of the current discussion about IT process management is oriented towards ITIL and
Cobit - the widely known best practices (de facto standards). These standards are important not only
for IT management practice, but also for designing software to support and automate business
processes in this domain. Currently, a rich offering of IT Service Management (ITSM) software can
be observed. However, the existing ITSM models seem not to be suitable for enterprises with limited
IT resources, both material and human. Small and medium enterprises have insufficient capital for IT
investment and, moreover, often have insufficient knowledge to assess the advantages and potential
threats related to the software on offer. We propose to use the Resource-Event-Agent (REA)
approach to model IT process evaluation. This is an important assumption because REA lets us see
the IT management domain both as a set of mutually connected business activities and as a part of
the interests of accounting records. Therefore, REA modeling makes it possible to describe the IT
realm in a way that satisfies the information needs of both accountants and non-accountants.
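As an illustration of how the REA primitives could be rendered for the IT domain, the sketch below models a resource, the agents involved, and a dual pair of economic events around an incident resolution. All names and fields are hypothetical assumptions for this example and do not reproduce the paper's model.

```python
# The three REA primitives (Resource, Event, Agent) sketched for an IT
# service exchange. The duality link pairs the decrement event (labour
# consumed) with the increment event (incident resolved). All names and
# fields are illustrative, not the paper's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    name: str                     # e.g. "technician labour"

@dataclass
class Agent:
    name: str
    role: str                     # e.g. "IT department", "internal customer"

@dataclass
class EconomicEvent:
    kind: str                     # "decrement" (give) or "increment" (take)
    resource: Resource
    provider: Agent
    receiver: Agent
    quantity: float
    dual: Optional["EconomicEvent"] = None   # REA duality link

it_dept = Agent("service desk", "IT department")
sales = Agent("sales unit", "internal customer")

consume = EconomicEvent("decrement", Resource("technician labour"), it_dept, sales, 2.5)
deliver = EconomicEvent("increment", Resource("resolved incident"), it_dept, sales, 1.0)
consume.dual, deliver.dual = deliver, consume  # one exchange, two linked events
```

Linking the two events as duals is what lets the same record serve both IT managers (activities and services) and accountants (give/take exchanges), which is the property the paper attributes to REA.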
2. The scope of IT process evaluation
In the last two decades, and especially since the beginning of the millennium, the payoff of IT
investments has come to play an increasingly greater part in the debate on information and IT
management. However, information technology investment has no direct value in itself; rather, it has
potential for derived value (Remenyi et al 2000). The essence of what computers produce takes the
form of strings of bits which, as processing outputs, finally meet the various information needs of
direct and indirect information system users. These outputs are intangible by nature and are used in
the form of output data, screen content, database views, multimedia files, internet portals,
communication channels, sales channels, virtualized business processes and so on. Each of these
outputs has a value, which is often different for different people, so they will further be called
information goods, using the terminology of information economics. According to Shapiro and Varian,
information goods are a type of product that essentially consists of information, where information
refers to anything that can be digitized, that is, represented as a stream of bits (Shapiro, Varian 1998,
p. 3). Kaplan and Norton argue that intangible assets are worth far more to many companies than
their tangible assets because they are hard for competitors to imitate (Kaplan and Norton 2004), but
the intangible outputs of IT investments often affect financial performance only indirectly.

The main subject of interest in information management evaluation is the life cycle of computerized
information systems. By conducting a literature review and observing business practice, it is possible
to draw some interesting conclusions (see Banister and Remenyi 2000; GAO 1998; Lech 2007;
Remenyi et al 2000; Willcocks and Graeser 2001; Zygala 2009):
Information technology investment lies outside the mainstream of general business performance
management. IT investment economics is first of all a subject of interest of IT departments, while
the latter area is managed by financial departments. To build an effective IT process evaluation
system, it is essential to integrate these two areas.
To identify benefits from IT processes, it is enough to use general business performance
measures. To express these benefits in financial terms, it is often necessary to build an effect
chain that decomposes quantity and quality metrics into financial ones.
Managers, especially in small and medium enterprises, do not understand the foreign language
of the complicated formulas used in IT project evaluation. The evaluation of information systems
demands more user-friendly methods.
The calculation of IT process costs is limited to the IT department's borders, overlooking the fact
that IT costs are incurred not only within the confines of the IT function.
Traditional business cost systems do not facilitate calculating the total cost of ownership (TCO) of
information systems; the activity-based costing (ABC) model is more valuable for this purpose.
Tax considerations (in Poland) force businesses to classify some IT costs into incorrect periods
(i.e. to minimize business profit).
Businesses avoid allocating indirect IT costs and concentrate on allocating direct IT costs.
Major IT projects do not have dedicated cost accounting dimension entries.
Financial measures, product quality, customer relations, customer satisfaction and faster delivery
times dominate in IT accounting.
Apart from IT investment evaluation, organizations pay increasingly more attention to the efficiency
and effectiveness of their IT processes. There are major advantages to viewing an organization from
a business process perspective. First, it becomes possible to answer the question of how the work is
done. Second, important characteristics are defined for every business process, i.e. its beginning
and end, the order of activities in time and space, and its inputs and outputs. Third, the business
process approach implies a customer point of view. Finally, a business process is a structure which
generates business value for customers (see Davenport 1993). The above-mentioned characteristics
can be identified for every IT business process.
Today, it is commonly accepted that IT business units work for internal and external customers,
identifying and meeting their needs. This has resulted in a change in the IT management approach,
from technology-oriented to service-oriented (Keel et al 2007), and IT service management (ITSM)
has therefore gained popularity in various organizations. In this paper we recognize ITSM as a crucial
part of IT process management; this distinction is particularly important for developing an IT process
evaluation model. According to the ITIL Glossary, a service is defined as "a means of delivering value
to customers by facilitating outcomes customers want to achieve without the ownership of specific
costs and risks" (ITIL Glossary), and according to ISACA, a process is "a collection of activities that
takes one or more kinds of input and creates an output that is of value to the organization" (Cobit 5).
These definitions represent two widely known best practices (de facto standards) for the IT
management domain, i.e. the Information Technology Infrastructure Library (ITIL) and the Control
Objectives for Information and related Technology (Cobit). From the economic point of view, services
are intangible products (Waters 1996). Thus, IT services can be recognized as intangible outputs of
IT processes, and Cobit and ITIL are therefore more complementary than competing standards. IT
process evaluation is one of the most important objectives in both frameworks. The ITIL framework is
more detailed and operational than COBIT and is a widely accepted approach to IT service
management worldwide, and therefore it is often the subject of computer implementation. COBIT
appears to be the best framework for IT process control and evaluation at all levels of IT
management. COBIT version 4.1 supports IT Governance, and its IT process evaluation features are
in line with the concept of Robert Kaplan and David Norton's balanced scorecard. COBIT covers 34
IT processes, grouped into four IT management domains: Plan and Organize, Acquisition and
Implementation, Delivery and Support, and Monitoring and Evaluation. Each of these processes is
measured from three perspectives:

1. Organizational maturity. Each of the COBIT processes has six defined maturity levels, which allow
benchmarking and identification of organizational weaknesses.

2. IT process effectiveness. Each of the COBIT processes has defined outcome measures - lag
indicators - which are used to measure how well the goals are met.

3. IT process performance. Each of the COBIT processes has defined performance measures - lead
indicators - which are used to assess process performance.

The evaluation of IT costs, benefits and risks is included in both frameworks described above, and
thus they can serve as a sufficient source of data for the evaluation of IT processes in SMEs.
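To make the three measurement perspectives concrete, a minimal Python sketch; the process identifier and all indicator values are illustrative assumptions, not COBIT content:

    from dataclasses import dataclass, field

    @dataclass
    class CobitProcessMeasurement:
        process_id: str                  # e.g. "DS8" (hypothetical example)
        maturity_level: int              # one of the six maturity levels (0-5)
        lag_indicators: dict = field(default_factory=dict)   # outcome measures
        lead_indicators: dict = field(default_factory=dict)  # performance measures

    # Example measurement of a single process (all values invented):
    m = CobitProcessMeasurement(
        process_id="DS8",
        maturity_level=3,
        lag_indicators={"user satisfaction (1-5)": 4.1},
        lead_indicators={"first-call resolution rate": 0.72},
    )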
Typical functionality in an ITSM system encompasses all major areas of ITIL best practices in an IT
Service Management environment, such as availability management, change management, security
management, incident management and others. There is general consensus in the literature that
information systems are based on algorithms; it is therefore easier to develop IT solutions in well-
structured and standardized business domains (e.g. accounting, payroll). The standardization of the IT
management area seems mature enough to design information systems supporting this
business function. Johnson et al (2007) propose developing IT Service Management solutions
based on publicly available and implementable standards that cover three areas:
IT processes - ITIL, ISO 20000,
data/metadata - CMDB (configuration management database), Service Modeling Language
(SML), and Solution Deployment Descriptor (SDD),
and management protocols - standards related to Web Services Distributed Management
(WSDM).
Generally, this proposition is in line with contemporary technology and practice in ITSM solutions;
however, it does not make allowances for important standards and good practices for IT process
evaluation, i.e. Cobit, Capability Maturity Models, IT Scorecard, TCO, PMI / PMBOK, ISO standards
for software quality, etc. Additionally, accounting standards are essential in order to effectively
integrate an ITSM architecture with an accounting information system. Today's small and medium
businesses appreciate the advantages of implementing best practices in service management. As
was mentioned above, the leading ITSM software providers mainly focus on building ITSM
systems for large businesses based on the ITIL standard. These solutions are not suited to SMEs in terms of
purchase and implementation cost, functionality, and maintenance requirements. Desirable
features for an IT process evaluation system may include (a sketch follows the list):
The ability to set up and configure an IT process repository where each process would be
integrated within a general enterprise system architecture, and first of all with the accounting
function;
the ability to define and exploit a set of key performance indicators to enable continuous process
improvement;
the capability to support both operational and investment tasks;
the capability to support open standards for IT service and process management, especially
ITIL, Cobit and project management.
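As a rough illustration of the first two features, a repository entry tying an IT process to an accounting dimension and a KPI set might look as follows; the process name, dimension code and KPI names are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class ProcessRepositoryEntry:
        process_name: str                # IT process registered in the repository
        cost_dimension: str              # accounting dimension entry for cost allocation
        kpis: list = field(default_factory=list)   # KPIs for continuous improvement

    entry = ProcessRepositoryEntry(
        process_name="Incident Management",
        cost_dimension="ITD-INC",        # hypothetical accounting dimension code
        kpis=["mean time to restore", "cost per incident"],
    )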
3. The REA model
The Resource-Event-Agent (REA) model was originally designed by McCarthy (McCarthy 1982) as an
alternative to traditionally designed accounting information systems. The main goal of his work was
to propose a new model of accounting, where both accountants and non-accountants are interested
in maintaining information about the same set of phenomena (McCarthy 1982). It is important to
emphasize that McCarthy's model arose under the strong influence of Sorter's events accounting
theory (see McCarthy 1979), and he further perceived the REA Accounting Information System (AIS)
as a part of events accounting (Dunn and McCarthy 1997). The REA model identifies accounting
transactions from the point of view of such primitives as resource, event, agent, stock-flow,
participation and duality. The generalized REA framework is shown in Figure 1.


[Figure omitted: the diagram distinguishes types of entities (Economic Event, Economic Resource, Economic Agent, Economic Unit) and types of associations (duality, stock-flow, control, responsibility).]
Figure 1: REA framework. Adaptation based on (McCarthy 1982)
Primitives used in this model were defined based on accounting and economic theory. Thus,
economic resources are the things that can be used (in economic events) by an economic agent.
Economic events are defined as activities that change the state of the enterprise's (economic)
resources. Agents are individuals or organizational units that manage or participate in economic events.
Stock-flows are relationships (associations) between economic events and economic resources that
increment or decrement the stock of resources. Duality associations connect each increment event
with its related decrement events (McCarthy 1982). In formalizing his model, McCarthy used the
entity-relationship (ER) method of database modeling in order to design a semantic schema of
enterprise business processes. The choice of this method was significant for him because a semantic
model of an REA enterprise is not technology-dependent and may be used both as a tool of
communication during requirements analysis and as an element of database system engineering
(Geerts 2008). In the initial stage of its development, the REA model was limited to modeling and
analysis of the accounting realm. The types of entities and associations used in the early stage of the
REA model's development did not go beyond the borders of widely accepted accounting theory.
Further research has significantly extended this model, and more and more researchers
have dedicated their endeavors to changing its objectives, scope and capability to formalize a subject of
analysis and design. In the 1990s, the REA model was an area of applied research: e.g. Grabski and
Marsh (1994) presented a method of implementing activity-based costing (ABC) using an REA-
based system, and Denna, Jasperson, Fong, and Middleman (1994) demonstrated the modeling of
conversion processes within an REA system. Next, Walker and Denna (1997) gave several examples
of the acceptance of event-driven technology by various business and public organizations. In their
extensive textbook, Denna, Cherrington and Hollander (1995) proposed the REAL approach to designing
AIS, with an added location dimension. Dunn and McCarthy (1997) postulated developing REA research
both as a part of design science, related to computer science, and as a part of natural science.
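To make the REA primitives defined above concrete, a minimal Python sketch of the entity and association types; this is an illustration of the pattern, not McCarthy's formal model:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EconomicResource:
        name: str                        # a thing that has economic value

    @dataclass
    class EconomicAgent:
        name: str                        # person or unit participating in events

    @dataclass
    class EconomicEvent:
        name: str
        resource: EconomicResource       # linked through a stock-flow association
        flow: str                        # "inflow" increments, "outflow" decrements the stock
        provider: EconomicAgent
        receiver: EconomicAgent
        dual: Optional["EconomicEvent"] = None   # duality association to the paired event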

As mentioned earlier, the REA model was originally developed as an alternative way of describing
enterprise activities from the accounting perspective. Geerts and McCarthy have extended the scope
of the REA model from the accounting domain to the enterprise domain ontology. They proposed
some new REA primitives, viewed both from business process granularity (i.e. entrepreneur script,
process, and task) and as type images for all these phenomena as well as relationships between
these type images (Geerts and McCarthy 2000). They follow Gruber's definition of an ontology as an
"explicit specification of a conceptualization" (Gruber 1993). These researchers have further analyzed the
REA ontology based on the conceptual terminology of John Sowa (Geerts and McCarthy 2002). David et
al (2002) examined the current state of REA design science research and, among other things,
suggested including intangible assets as resources in REA modeling practice.

Geerts and McCarthy (2006) used abstractions to specify policy-level extensions to REA enterprise
systems, where the term policy means a description of economic phenomena that could, should, or
must occur. The REA model was developed using entity-relationship diagramming techniques;
nevertheless, it may be practiced with other modeling tools, especially the Unified Modeling Language (UML).

We propose to use the Resource-Event-Agent (REA) approach to model IT process evaluation.
There are several key advantages to this approach. Firstly, the REA approach is especially useful for
modeling business processes according to accounting principles. REA modeling at its semantic
level rejects traditionally used accounting artifacts, such as debits, credits, and accounts; these
artifacts can be delivered to users as database views. Secondly, the REA concept was used as a
theoretical background for the ISO standard ISO/IEC 15944 concerning accounting and economic
ontology. Thirdly, the REA framework has evolved into a business ontology, and may therefore include
and integrate both accounting and non-accounting business processes.
4. Modeling IT process evaluation based on REA approach
In this section of the paper we propose some patterns that allow modeling IT process evaluation using
REA principles. There are many commercial producers of software offering ITIL support, but their
software tools for IT process evaluation have some disadvantages, i.e.:
They are multi-module software packages which have to be integrated using an additional
middleware layer;
there is business intelligence (BI) software specialized in IT process evaluation and
management, but BI generally isn't useful without a suitable operational database;
existing solutions are assembled from various components of different origin;
leading ERP solutions are often oversized for small and medium enterprises (SMEs).
Given the above, the fundamental question seems to be how to successfully build an optimal
information system for SMEs, covering the most important aspects of the IT process evaluation area.
In order to answer this question, it is important to take into consideration that IT adoption and
management in such enterprises differ from those in larger firms. First, SMEs usually
employ multi-skilled workers who lack strong IT/IS knowledge and technical skills. It
follows that software for IT managers in SMEs should be adjusted both to their scope of responsibility
and to their time budget. Second, the information system users often have to be more independent in
their own daily routines. Third, SMEs usually have limited financial resources for IT investments, hence
software supporting IT management tasks should be characterized by simplicity, "all in one"
functionality and low costs of purchase and maintenance.

As mentioned earlier, the contemporary service approach has become dominant in IT
management. In the ongoing discussion about REA-based areas of business solutions,
research focuses on the processes of the physical value chain, where inputs and outputs are
tangible and customers are external. IT processes are a special type of operational business activity
whose inputs and outputs are often intangible and whose customers are internal users of different
information systems. The main REA research focuses on the crucial economic events concerning
major business processes: sales, procurement, and manufacturing. In more complex organizations an
information system architecture includes data about many internal business activities. Therefore, for
our purposes, the extension proposed by David (1997), which defines three types of events, may be
especially useful:

1) Economic events - "events that increase or decrease the quantity of a firms resources";

2) Business events - "any business activity that management wants to plan, monitor, and evaluate";

3) Information events - "procedures that are performed in organizations solely to capture, manipulate,
or communicate information".

When comparing this proposition to the basic REA pattern, it is important to emphasize that business
events comprise "a subset of the tasks" in the Geerts and McCarthy model and "they do not
participate in duality relationships", and hence they are support activities in the business value chain
(David et al 2002). All three types of events are important in the IT process evaluation domain and
should be modeled, especially as the event type can serve as a feature that
delimits the accounting and non-accounting realms. The second extension to the core REA
terminology concerns the definition of economic resource. In his research McCarthy used the term
resource in the accounting context; we therefore propose to use the definition of
Romney and Steinbart (2006), who define resources as things that have economic value to the
organization, a definition that embraces both tangible and intangible resources. In order to
present the usefulness of the REA approach for modeling IT process evaluation, consider for example
IT cost allocation.

Several researchers (among others Grabski and Marsh (1994) and David et al (2002)) have discussed the
relationship between the ABC costing model and REA. It is noticeable that there are considerable
difficulties in embedding the ABC model within an organization-wide information system architecture.
However, we believe that these difficulties can be overcome by changing accounting information
system architectures from account-centric to process-centric. For this purpose, the
REA approach seems to be effective in practice.

Having regard to the nature of the work executed by IT specialists, the time-driven activity-based
costing (TDABC) model appears more adequate than the traditional model. What is more,
according to Kaplan and Anderson (2004), TDABC "is simpler, less costly, and faster to implement, and
allows cost driver rates to be based on the practical capacity of the resources supplied". They argue
that only two parameters have to be estimated in this model: "(1) the unit cost of supplying capacity
and (2) the time required to perform a transaction or an activity".
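A worked example of these two parameters, with purely hypothetical figures:

    # TDABC parameters: (1) unit cost of supplying capacity, (2) time per activity.
    monthly_itd_cost = 48_000.0        # EUR, total monthly cost of ITD (assumed)
    practical_capacity_min = 32_000    # minutes of productive work per month (assumed)

    cost_per_minute = monthly_itd_cost / practical_capacity_min   # parameter (1): 1.50 EUR/min

    minutes_per_password_reset = 8     # parameter (2), estimated per activity (assumed)
    cost_per_reset = cost_per_minute * minutes_per_password_reset
    print(f"{cost_per_reset:.2f} EUR per password reset")          # 12.00 EUR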

Designing an IT process evaluation system requires integrating traditional accounting data, non-
traditional accounting data (from TDABC) and IT management data. It can be called a
multidimensional approach, where an individual IT process can be evaluated in terms of
various indicators of performance, quality, and efficiency.

We assume that the TDABC procedure begins by estimating the cost of IT department capacity
(Figures 2 and 3). Consider a centralized IT department (ITD) as a cost center, meaning that it is
not directly responsible for business profit. To simplify further steps, we assume that all IT costs
are incurred and accounted for in ITD. The IT manager has the authority to incur different direct and
indirect costs related to the services provided by ITD for internal users. Based on Remenyi et al
(2000), the IT cost portfolio encompasses direct costs of hardware, software, services, overheads,
training, and maintenance, as well as indirect human and organizational costs. Having discussed the
nature of a service, we have concluded that a service is an intangible product; therefore, in the
REA logic, an IT service is an important type of IT resource. It can be useful to perceive the IT
domain as an economic system in which different inputs (Resources) are converted (Events) into
valuable outputs (Resources). The conversion events are performed by internal IT specialists
(Agents) to satisfy the different information needs of information system users (Agents) (see Figure 2).


Figure 2: Exchange process of economic events in ITD
The term economic event in the REA ontology means either an increment or a decrement in the value of
economic resources; ITD can effect these through either an exchange or a conversion process. The exchange
process usually encompasses relations (interactions) between ITD and its external partners (e.g.
software providers). Every increment economic event is related to a decrement economic event
(exchange duality). Inflow relationships connect increment economic events with economic resources.
Every economic event is related to an economic agent through a "provide" or "receive" relationship. The
developed set of REA principles enables designing the business logic according to accounting
standards. The accounting logic is often not sufficient for internal purposes, and it is therefore
important to enlarge the scope of the REA ontology with business and information events.
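Reusing the illustrative dataclasses sketched in Section 3, the exchange between ITD and an external software provider could look as follows; the agent, resource and event names are invented for illustration:

    # Exchange process: ITD receives a service and gives up cash in return.
    itd = EconomicAgent("IT department")
    vendor = EconomicAgent("Software provider")
    cash = EconomicResource("Cash")
    support = EconomicResource("Support service")

    # Increment event: inflow of the purchased service (vendor provides, ITD receives).
    acquisition = EconomicEvent("Service acquisition", support, "inflow", vendor, itd)
    # Decrement event: outflow of cash (ITD provides, vendor receives).
    payment = EconomicEvent("Cash disbursement", cash, "outflow", itd, vendor)

    # Exchange duality: every increment event is related to a decrement event.
    acquisition.dual = payment
    payment.dual = acquisition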

[Figure omitted: a UML-style diagram showing information events (Allocate Cost of Fixed Assets, Allocate Cost of Current Assets, Allocate Cost of Services, Allocate Cost of Labor) with inflow relationships from economic resources (Hardware, Service, Labor) and information resources (Depreciation File, Purchasing File, Payroll File), provided by information agents (Fixed Assets System, Purchasing System, Payroll System) and received by the IT Management System, which records period and cost values in the ITD Costs File used by the Cost Management System, a set of tools to allocate and manage costs.]
Figure 3: The REA design for information events
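The cost-allocation flow abstracted in Figure 3 could be sketched as follows; the system and file names mirror the figure, while the procedure itself and the cost value are invented:

    # An information event captures and communicates cost data; it does not
    # change the stock of economic resources. The participating systems are
    # information agents, and the files are information resources.
    payroll_file = {"resource": "Payroll File", "cost": 12_500.0}   # information resource

    def allocate_cost_of_labor(source_file, period):
        # Information event: labour cost data flows from the Payroll System
        # (providing agent) to the ITD Costs File of the IT Management System
        # (receiving agent).
        return {
            "event": "Allocate Cost of Labor",
            "period": period,
            "provider": "Payroll System",
            "receiver": "IT Management System",
            "cost": source_file["cost"],
        }

    itd_costs_entry = allocate_cost_of_labor(payroll_file, "2012-06")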
In this paper, we accept the extension of the REA ontology proposed by David (1997) concerning the
information event notion. The information events in Figure 3 represent the cost allocation
procedures that "solely to capture, manipulate, or communicate information" (see above). The
information events are commonly identified in business practice, but it may be controversial to include
them in the REA ontology on the same level of an events hierarchy as economic and business
events. It seems that information events are rather a subset of economic and business events. As is
shown in the picture 3, we propose to perceive an information system entity as an information agent
in REA terminology because of their participation in information event. Generally, the REA ontology
does not descend to a data flow level but in the network economy a data flow level more often have
become de facto a business process level. To complete the modeling process in given domain, it is
necessary to descend to the lowest level, where all types of entities are digital (see Picture 4).
When modeling the functionality of IT process evaluation, it seems crucial to reflect the usage scale
of information entities, such as information events, information resources and information agents. For
example, whereas key performance indicators (KPIs) are usually implemented as properties of a
given entity, it is more effective to model KPIs as information resources. As a consequence of this
change, the whole designed system gains new capabilities in data collection, processing and
analysis.
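One way to read this recommendation, in a minimal sketch (the KPI name and values are invented): as a property, the KPI is a single overwritable attribute, whereas as an information resource each measurement becomes a record that information events can create and analyze over time.

    # KPI as a property of a process entity: no history, no associations.
    process = {"name": "Incident Management", "kpi_mttr_hours": 4.2}

    # KPI as an information resource: an identity and measurement history of its own.
    kpi_resource = {
        "kpi": "mean time to restore (hours)",
        "process": "Incident Management",
        "measurements": [("2012-05", 4.9), ("2012-06", 4.2)],
    }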


Figure 4: The REA design on data flow level
All figures in this section are presented as demonstrations rather than implementation diagrams;
the entities shown are therefore incomplete and/or simplified. In practice, the REA approach can be
used with different modeling languages, especially UML and ERD. Recently, it is noticeable in the REA
literature that UML has gained more followers.
5. Conclusion and future research directions
Managing today's IT infrastructure and processes has become increasingly complex. IT managers
face the problem of meeting growing requirements for a positive impact of IT solutions on the
business effectiveness of both operational and investment activities. To meet these requirements,
high-quality information about IT processes and services must be ensured. For that reason, it is
important to develop information system solutions which can be a sufficient source of data for IT
process evaluation.

The REA ontology encompasses different characteristics that are important from a practical point of
view. First of all, it makes it possible to perceive the business realm as a set of related events which can be
described and evaluated from different perspectives. In design practice the term event can be used
interchangeably as an equivalent of activity, process or task, despite their distinction in the REA
ontology literature. The essence of the REA approach to information system design is the modeling
of the business domain by a well-defined set of entity and association types that have gained
acceptance in economic and accounting theory. There are various information needs concerning
each individual economic event, and financial accounting is one of the many perspectives that
matter for IT management practice.

As emphasized earlier, modeling IT process management solutions should take into account publicly
available and implementable standards that concern: IT processes, IT data and tools, management
protocols, and IT process evaluation (e.g. ITIL, Cobit, Capability Maturity Models, IT Scorecard, TCO).

We recommend that future research develop models of IT management systems in which process
evaluation is fully integrated. It is also hoped that this paper will trigger a debate about designing IT
process evaluation, and it is our belief that the REA approach may be helpful.
References
Bannister, F. and Remenyi, D. (2000) "Acts of faith: instinct, value and IT investment decisions", Journal of
Information Technology, 2000/15, pp. 231-241.
Davenport, T. (1993) Process Innovation: Reengineering Work through Information Technology, Harvard
Business School Press, Boston 1993.
David, J.S. (1997) "Three "Events" That Define an REA Approach to Systems Analysis, Design, and
Implementation". In Proceedings of the Annual Meeting of the American Accounting Association, Dallas, TX.
David, J.S., Gerard, G.J., and McCarthy, W.E. (2002) "Design science: An REA perspective on the future of AIS",
in Researching Accounting as an Information Systems Discipline, edited by V. Arnold, and S.G. Sutton, 35-
64. Sarasota, FL: American Accounting Association Information Systems Section.
Dubie, D. (2008) "ITIL, automation provide recipe for datacenter management", InfoWorld, April 01, 2008,
[online], https://www.infoworld.com/d/mobilize/itil-automation-provide-recipe-datacenter-management-888.
Dunn, C.L. and McCarthy, W.E. (1997) "The REA accounting model: Intellectual heritage and prospects for
progress", Journal of Information Systems (Fall): 251-266.
GAO (1998) "Measuring Performance and Demonstrating Results of Information Technology Investments".
Available: http://www.gao.gov/special.pubs/ai98089.pdf.
Geerts, G. (2008) "Introduction to the REA 25th Anniversary Special Section", Journal of Information Systems,
Sarasota, Fall 2008, Vol. 22, Iss. 2, pg. 215, 3 pgs.
Geerts, G. and McCarthy W.E. (2000) "The Ontological Foundation of REA Enterprise Information Systems",
[online], http://www.msu.edu/~mccarth4/Alabama.doc.
Geerts, G. and McCarthy, W.E. (2002) "An ontological analysis of the economic primitives of the extended REA
enterprise information architecture", International Journal of Accounting Information Systems 3 (1): 1-16.
Geerts, G. and McCarthy, W.E. (2006) "Policy-Level Specifications in REA Enterprise Information Systems",
Journal of Information Systems (Fall): 37-63.
Gruber, T. R. (1993) "A translation approach to portable ontology specifications", Knowledge Acquisition 5: 199-
220.
Gruman, G. (2010) "CIO priorities shift to lightweight services, virtualization", InfoWorld, January 19, 2010,
[online], http://www.infoworld.com/d/adventures-in-it/2010-cio-priorities-shift-lightweight-services-
virtualization-453.
ITGI (IT Governance Institute), "COBIT 4.1 digital edition", Rolling Meadows, IL 60008 USA, [online],
www.itgi.org.
ITIL Glossary, [online], http://www.itil-officialsite.com/InternationalActivities/ITILGlossaries_2.aspx.
Johnson, M.W., Hately, A., Miller, B.A. and Orr, R. (2007) "Evolving standards for IT service management", IBM
Systems Journal, Vol. 46, No 3, pg 583.
Kaplan, R.S., and Anderson, S.R., (2004) "Time-Driven Activity Based Costing", Harvard Business Review, 82
(11): pp.131-138.
Keel, A.J., Orr, M.A., Hernandez, R.R., Patrocinio, E.A. and Bouchard, J. (2007) "From a technology-oriented to
a service-oriented approach to IT management", IBM Systems Journal, Vol. 46, No 3.
McCarthy, W.E. (1979) "An Entity-Relationship View of Accounting Models", The Accounting Review (October):
667-686.
McCarthy, W.E. (1982) "The REA accounting model: A generalized framework for accounting systems in a
shared data environment", The Accounting Review (July): 554-578.
McKinsey Global Institute (2002) "How IT enables productivity Growth", [online],
http://www.mckinsey.com/knowledge/mgi/IT/.
Lech, P. (2007) "Proposal of a Compact IT Value Assessment Method", The Electronic Journal of Information
Systems Evaluation, Vol. 10.
Posthumus, S. and von Solms, R. (2005) "IT oversight: an important function of corporate governance",
Computer Fraud & Security, Vol. 2005, Issue 6, June 2005, pp. 11-17.
Remenyi, D., Money, A. and Sherwood-Smith M. (2000) The effective measurement and management of IT costs
and benefits, Butterworth-Heinemann, Oxford.
Romney, M.B. and Steinbart, P.J. (2006) Accounting Information Systems, 10th edition. New York, NY: Prentice
Hall.
Shapiro, C. and Varian, H. (1998) Information Rules, Harvard Business School Press, Harvard.
Walker, K. and Denna, E. (1997) "Arrivederci, Pacioli? A new accounting system is emerging", Management
Accounting, Montvale, Jul 1997, Vol. 79, Iss. 1, pg. 22, 8 pgs.
Waters, D. (1996) Operation Management: Producing Goods and Services. (Polish Edition: PWN, Warszawa
2001).
Willcocks, L., Graeser, V. (2001) Delivering IT and e-Business Value, Butterworth Heinemann, Oxford.
Zygala, R. (2009) "Towards An Information Management Evaluation System: Defining Key Assumptions" [in]
Information Management, edited by B. Kubiak, A. Korowicki, pp. 375-385, Gdansk University Press,
Gdansk.

Non Academic Papers
A Process Model to Guarantee Information Quality in
Elective Surgery Information Systems
Rita Cristóvão and Pedro Gomes
ACSS - Administração Central do Sistema de Saúde, UCGIC - Unidade Central
de Gestão de Inscritos para Cirurgia, Lisboa, Portugal
rcristovao@acss.min-saude.pt
pagomes@acss.min-saude.pt

Abstract: This paper describes the system created by the Central Unit of National Waiting List for Surgery
Management (UCGIC) to guarantee the quality of information extracted from National Health System (NHS)
hospitals about elective surgery, covering the process of extraction, validation and qualification of data detail and
indicators, to be carried out automatically by the information system that supports the waiting list for surgery
management control SIGLIC (Information System to Waiting List for Surgery Management). The need to build
an appropriate process model has been growing since 2007. In 2010 the central database of SIGLIC had to
receive data from nearly 164 hospital units, including public and private providers with conventions in the NHS for
elective surgery, with a volume of nearly 1000GB and an annual increment of approximately 500GB. The data
received concerned 881 different input variables, and the volume of information transactions was nearly 5
million per year. Several problems concerning data quality in the central database started to arise once the data
extraction and integration in SIGLIC involved the interface with several different hospital information systems and
its volume started to increase fast. The solution found by UCGIC for addressing the quality and integrity
of data received from hospital units was to build, in 2011, a process model with an automatic redundant system
in which different sources permanently check data quality and all stakeholders involved interact through the same
information sources. The model includes monthly data extractions submitted to a qualification process at the level
of detail and indicators, against the defined standards and homologous variations, in order to provide accurate
intelligence about national elective surgery. The validation process includes a management system of incidents,
communications and escalation of problems, which reports the errors/incidents occurred, the communication to
the stakeholder involved for correction and the escalation of the problem resolution if needed. The process
management of data extraction and qualification is carried out through SIGLIC's own screens/forms and reports. It
has a dashboard and a procedure to ensure process control. Scorecards with data aggregated by week
were also built. Studies have been conducted to assess the impacts and outcomes of this new approach.
By this new model, UCGIC was able to assure that SIGLIC information is reliable and its performance indicators
are correct and reflect the actual care provided to patients, the hospital performance according to care provided,
the accurate evaluation of demand and supply in elective surgery and the necessary funding for the NHS. Also
this automatic process is considerably less time and resource consuming, saving nearly 5 days on a process
which previously took 10 days and, by generating automatic error reports, making their resolution with the hospital units more
efficient and effective.

Keywords: data quality, qualification process, data validation, data extraction, business intelligence
1. Introduction
This paper presents a model for the information quality of Portuguese National Health System (NHS)
elective surgery data stored in the central database of the Management System of the Waiting List for
Surgery (SIGIC), headquartered in the Central Administration of the NHS - ACSS. The need to build an
efficient model came from the difficulties in assuring data quality within acceptable times, due to the
fact that most of the tasks were manual and there was a lack of resources to produce indicators
and reports quickly and efficiently.
the past model and its implications in the monitoring and report about NHS elective surgery. In
Section 3, we explain the details of the new model for data quality assurance and its methodology. In
Section 4, we measure the results achieved with the application of the new model. Finally, in Section
5, we discuss a conclusion about the advantages and challenges of the new information quality model
developed by the central unit of SIGIC (UCGIC), its implications as an innovative system to certify
data quality in SIGIC and possible responses to best practices in implementing a data quality system.
2. The problem
SIGIC was created by the Portuguese Ministry of Health to manage and promote care access in
elective surgery within the NHS. Since its implementation in 2004, SIGIC has had to collect information
on elective surgery provided by all NHS hospitals and private hospitals with NHS conventions or
protocols into the central database in SIGLIC (SIGLIC is the information system that supports SIGIC
management and monitoring). From 2007, when SIGIC was already mature, the number of private
hospitals that joined NHS conventions for elective surgery grew very fast, and with it the need to build
multiple interfaces to collect data from all hospitals with the minimum quality standards. In 2010, the
central database of SIGLIC had to receive data from nearly 164 hospital units, including public and
private providers, with a volume of nearly 1000GB and an annual increment of approximately 500GB.
The data received concerned 881 different input variables, and the volume of information
transactions was nearly 5 million per year. Several problems concerning data quality in the central
database started to arise once the data extraction and integration in SIGLIC involved the interface
with several different local systems in hospitals and its volume started to increase fast. There were
two main problems. One was the fact that most hospital information systems weren't certified against
SIGIC requirements, which means that they weren't fully adapted to the SIGIC mandatory standards.
The other problem was the lack of an efficient and effective process of information quality control. The
process in place was too time-consuming and had too many manual tasks performed by a small team
in UCGIC. That represented a risk not only to the quality of information accessed by SIGLIC users,
but also to the quality of intelligence about national elective surgery and the waiting list for
surgery in the NHS officially released by the Ministry of Health, whose information source is SIGLIC.
The following table shows the data volume and transactions of SIGLIC in 2010.
Table 1: Figures of SIGLIC operational database

                                                2010
  Nr of hospital units in SIGIC                 116
  Nr of application users                       6.611
  Nr of application entrees                     822.429
  Nr of working hours in the application        520.896
  Nr of data transactions                       4.853.236
  Nr of input variables                         881
  Total Data Warehouse (DW) volume              950 GB
  Annual growth                                 450 GB
To solve the problems related to information quality, in 2011 UCGIC started to build a new model
to guarantee the integrity and quality of data received from hospital units, by implementing an
automatic system with redundant items, in which different sources permanently check data quality
and all stakeholders involved in the process interact using the same information sources.
3. Method/ methodology
UCGIC created an innovative model for the validation and qualification of massive data extracted from
various information systems, based on existing international standards but adapted to SIGIC's
special needs. The ultimate goal is to implement a system of data quality and data governance in
NHS elective surgery that will allow the overall management of the availability, usability, integrity, and
security of the data employed in SIGIC.

The model covers all processes of information quality certification, from the hospital source to the
central database, including the process of extraction, validation and qualification of data detail and
indicators, to be carried out automatically by the information system that supports the waiting list for
surgery management control - SIGLIC. It also includes automated processes of detection and
reporting of errors, correction of errors in the data source, continuous monitoring of the process, and the
teams assigned to each task of the overall process, responsible for the data's accuracy, accessibility,
consistency, and completeness.

The methodology used by UCGIC was first to build the model of the information quality system to be
implemented in SIGLIC, by defining the standards for information quality certification in elective
surgery. We designed the management processes and guidelines for the extraction, validation and
qualification of data uploaded from the local operational systems in hospitals, all compiled in a
manual, including the role of each team in each task of the process. This process was to be
monitored by SIGLIC. The second step was to implement the model in the field by developing the
technical and functional specifications in the information system, assigning teams with members from
SIGIC, SIGLIC and the hospitals, and involving all the stakeholders in the process.

3.1 The process model
The information quality model can be divided into the following parts:

The model starts with the integration of hospital data from its operational systems into the central
database of SIGLIC via an interface, where it is submitted to standards validation through RIS, the IT
network supported by the Ministry of Health that connects all public health institutions and care
providers.

After the data is extracted, it undergoes a process of qualification with 4 levels and in each level the
information is saved in a different data store. At level 1, the data produced and exchanged in the
interface with hospitals is gathered and kept in the first data store after passing through the first basic
qualification standards.

At level 2, the information is processed in order to be integrated in the operational database of
SIGLIC, after being approved by the second qualification stage. This procedure has 2 steps: first, it
checks the compliance of data and rejects inconsistent records; second, it qualifies the integrated
data as valid, doubtful, invalid in HIS or invalid in SIGLIC.

At level 3, data is processed from the operational repository for the monthly extractions, by collecting
and filing data for analysis and report. At this level, the system identifies invalid data details;
calculates indicators excluding the invalid data details; points out indicators with deviations from
standards; and generates warnings. Data is stored in a warehouse database with the data details and
business indicators that will provide the business intelligence needed to perform the governance of
the system.

At level 4, the information stored in the level 3 repositories is processed twice a year, with the first-
semester and annual data extractions. There are two steps at this level. First, after
processing the first data extraction (corresponding to level 3), the UCGIC team prepares a manual
analysis of the data, which is reported to hospitals. Hospitals must then evaluate the information and
correct it in their local information system if there are any errors. The second step is to make a
second data extraction for the same period of analysis three weeks after the first extraction. The
hospitals must examine the new indicators and attach comments, while the analyst team of UCGIC
evaluates the updated business intelligence together with the hospitals' observations and produces
the final reports for official publication.
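A condensed sketch of the record classification performed at level 2, as described above; the category names follow the text, while the flags and rules shown are illustrative assumptions:

    def qualify_record(record):
        # Step 1: compliance check - inconsistent data is rejected outright.
        if not record.get("complies_with_standards"):
            return "rejected"
        # Step 2: qualification of the integrated data.
        if record.get("invalid_in_his"):
            return "invalid in HIS"       # to be corrected at the hospital source
        if record.get("invalid_in_siglic"):
            return "invalid in SIGLIC"    # SIGLIC maintenance team responsibility
        if record.get("outside_expected_range"):
            return "doubtful"             # returned to the hospital for analysis
        return "valid"

    print(qualify_record({"complies_with_standards": True,
                          "outside_expected_range": True}))   # -> doubtful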

The following picture (Figure 1) represents the process of data integration, where the information is
uploaded from the hospitals' operational systems to the SIGLIC operational database.

After the data is integrated in the SIGLIC operational database, it is validated in the 2 initial levels of the
qualification process and stored after being certified according to SIGIC standards. In this way the
model detects and limits, from the outset, the integration of inconsistent and nonstandard data records
in SIGLIC.

The validation process includes a monitoring and management system of incidents and problems.
This system allows detecting, classifying and assigning the incident/ problem to the responsible team
in a semiautomatic way and managing its solution within the time defined for each type of incident/
problem. The monitoring of incidents/ problems is a totally automatic process in SIGLIC. There is also
a system of communications between teams including escalation of problems, which reports the
errors/incidents occurred, the communication to the team of the stakeholder responsible for its
correction and the escalation of the problem resolution if required.


Figure 1: Data integration process
SIGLIC has a standalone version that can be used by hospitals as a local information system to
manage patient care. As this application is certified according to SIGIC standards, it fully prevents the
introduction of data errors into the system, and all data is integrated and qualified directly.

The extraction of data from the operational database to the central database in the data warehouse
(DW) of SIGLIC occurs monthly and it corresponds to the accumulated data since the beginning of
the year. The monthly extraction process is represented in the next figure:

Figure 2: Monthly data extraction process
The data qualification is made both at the level of detail and at the level of the calculated indicators,
from which data details classified as invalid are excluded.

In the data detail qualification process, records are classified as valid according to standard
values for the purpose of being included or not in the calculation of indicators. The data detail is
classified as "valid", "doubtful" or "invalid" by the central system and then returned to the hospital, where
the SIGIC maintenance team in the hospital is responsible for resolving the last qualifying round.
Procedures were established for qualifying the data detail for each table and field to be
extracted. For each table there is a description of each field with its validations and qualifications. More
tables and fields can always be added, with their validation determined according to needs that may
arise in the future.

The detail elements identified as invalid in the hospital information system (HIS) are sent back to the
hospitals for analysis and eventual correction. If, following this analysis, the hospital confirms the
accuracy of some of the data reported as invalid, these elements are reclassified as "valid enforced",
being excluded from future automatic validations and reincorporated in the calculations of new
indicators related to new extractions. The detail elements identified as invalid in SIGLIC are the
responsibility of the SIGLIC maintenance team, which should solve and correct them.

In the indicators qualification process, the indicators calculated from the valid data detail are
checked against homologous variations and against deviations from the standard values by hospital service
unit. For the purpose of calculating indicators, the detail elements qualified as invalid are removed
from the equation. The indicators are then calculated at the level of the hospital service unit. More
validations may be added if necessary. At this level, indicators can be classified as "valid", "doubtful",
"highly doubtful" or even "not qualified", when the number of elements is too small. The
indicators classified as doubtful or highly doubtful are sent back to hospitals for enquiry. If, following this
analysis, the hospital confirms the accuracy of some of the indicators reported as suspicious, these
are reclassified as "valid enforced", being excluded from future automatic validations and thus
presented as confirmed in new publications. This qualification process does not change the monthly
extraction indicators and reports, but it allows the correction of semi-annual and annual indicators and
reports, as well as of future extractions.
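The indicator qualification could be sketched as follows; the category names follow the text, but the deviation thresholds and the minimum number of elements are invented assumptions, not SIGLIC configuration:

    def qualify_indicator(value, standard, n_elements,
                          min_n=30, doubtful_dev=0.15, high_dev=0.30):
        # Too few elements: the indicator cannot be judged reliably.
        if n_elements < min_n:
            return "not qualified"
        deviation = abs(value - standard) / standard
        if deviation > high_dev:
            return "highly doubtful"      # sent back to the hospital for enquiry
        if deviation > doubtful_dev:
            return "doubtful"             # sent back to the hospital for enquiry
        return "valid"

    print(qualify_indicator(value=58, standard=72, n_elements=120))   # -> doubtful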

Some important and more demanding reports require at least two different data extractions, such as
the first semester and annual extractions. The hospital units are able to correct the data after the first
extraction and the ultimate analysis is conducted by UCGIC. The following pictures show the
extraction process of first semester and annual data.

Figure 3: First semester/annual data extraction process


The first extraction is made from the accumulated data of the monthly extractions, which are submitted to
the first and second levels of qualification. After reporting any errors or doubtful data to hospitals,
UCGIC monitors the institutions' feedback and any data correction. After a period of time defined for
hospitals to perform corrections and confirm the accuracy of their data, UCGIC runs a second
extraction for indicator calculation purposes, as shown in the figure below.

Figure 4: Second semester/annual data extraction process
At this stage the data is submitted to level 4 of the qualification process. Hospital units can still attach
notes and explanations to indicators considered abnormal by SIGLIC and by the UCGIC analyst
team. The analyst team will consider the hospitals' observations, comment on the results and report
them officially to the Ministry of Health. The next figure shows the diagram of data extraction and
qualification used to provide information for the business intelligence (BI) system of UCGIC.

Figure 5: Process of data extraction and qualification for indicators/business intelligence
The indicators database contains tables with indicators calculated daily, but also weekly and
monthly. The indicators already calculated refer to the waiting list for surgery, waiting times, new
entries, clinical events, surgery production and productivity, cancelled episodes, and process conformity,
among others. The indicators are calculated using monthly samples but with values grouped by day
or week. These tables are updated monthly with the extractions accumulated since the beginning of
the year. However, after being updated with the semi-annual data, the update of the information
relates to the second semester until the annual extraction is available. At the end of the year, all daily,
weekly or monthly indicators are updated with the annual extraction data.

The following figure represents the chronology of the extraction and qualification process to build
reports for official publication.

Figure 6: Flow and timeline of the process of extraction and validation of official indicators of first
semester and annual
The production of indicators should only be started if the validations have been performed
successfully. The indicators calculated will provide the hospital performance report, the operational
dashboard of SIGLIC and the BI system QlikView.
3.2 The implementation
The management of the quality information system is carried out through SIGLIC's own screens/forms
and reports, developed by the SIGLIC and UCGIC team. It includes a dashboard to monitor the whole
extraction and qualification process. Only authorized users from the teams involved in the process can
access these forms in SIGLIC, restricted to the hospital unit information according to the user's
profile. The screens to manage the whole process, which includes the data extraction and qualification
process and the production of business intelligence, are accessed through the following panel (Figure 7)
in SIGLIC. From this panel, the teams involved in the extraction and qualification of data can perform
the following procedures/tasks:
Data extraction:
Extractions planning: it allows the planning of data extractions. The user can indicate the date on
which extraction should take place, the period covered, the universe of the data to be extracted
and other settings, such as data qualification (detail or calculated indicators) and the production of
indicators for the operational database of SIGLIC, among others;
Queries: it allows querying the extractions already performed and those to be made in the future,
with access to the planning detail of each extraction.
Data qualification:
Qualification of data detail: it allows access to the classification given to each detail element, in
accordance with the rules set in the qualification process;
Qualification of indicators: it allows access to the qualification given to each of the calculated
indicators, in accordance with the requirements settled;
Data detail and dashboard of indicators:
Data detail query: it allows access to the detail of the data extracted;


Figure 7: Panel to access the screens of the information quality system in SIGLIC application
Indicators query: dashboard of indicators available at SIGLIC;
Monitoring the process of indicators production: it presents, by extraction, the current status of all
activities performed, as well as the list of network communications and the list of incidents/
problems generated in the process;
Incident/problem management: it allows managing the list of incidents/problems in the
framework of the qualification process. Incidents/problems can be recorded automatically by the
system or manually by users through the communications network in SIGLIC. Only authorized users
can register incidents/problems. If incidents are not resolved within the time defined by rule for
each type of situation, they are carried over to problems. The life cycle of
incidents/problems is also defined by rule, as are the state diagram and the objects on which
incidents/problems can fall;
Escalation of problems (see the sketch after this list): it presents the different levels of escalation
registered so far for unsolved problems. The escalation of unsolved problems is an automatic process that runs daily
and is carried out according to the standard waiting time to solve the incident, with two
goals:
to transfer incidents to problems; and
to escalate problems to the various levels in an organization.
Reporting of the process workflow: Reporting about the workflow of data extraction and
qualification process, incident/ problem management and monitoring is available in SIGLIC. This
reporting serves as an instrument to support process control, whether it is taking place within the
defined times and according to the rules. This reporting includes the following information:
Reporting of the extraction process: performance indicators related to the extraction process.
Reporting of the qualification process: number of detail elements classified as invalid or doubtful;
Reporting of the incident/ problem management process: number of incidents/ problems unsolved
and the number of incidents/ problems dependent on a particular person;
Reporting of the monitoring process: information on the monitoring procedures, process state,
unsolved problems, etc.
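A minimal sketch of the daily escalation rule described in the "Escalation of problems" item above; the time limit and the number of escalation levels are assumptions, not SIGLIC configuration:

    from datetime import date, timedelta

    def escalate(item, today, limit_days=3, max_level=3):
        # Daily rule: an unsolved incident past its time limit becomes a problem
        # (goal 1); an unsolved problem climbs one organizational level (goal 2).
        overdue = (today - item["opened"]) > timedelta(days=limit_days)
        if overdue and not item["solved"]:
            if item["type"] == "incident":
                item["type"] = "problem"
            elif item["level"] < max_level:
                item["level"] += 1
        return item

    item = {"type": "incident", "level": 0, "solved": False, "opened": date(2012, 6, 1)}
    escalate(item, today=date(2012, 6, 6))   # the incident becomes a problem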
4. Results
The model for information quality in elective surgery has been under implementation since the beginning of
2011 and will be finished by the end of 2012. Most of the manual procedures that exist today are
going to become automated procedures performed by SIGLIC. The automated tasks for data extraction and
qualification at levels 1 and 2 are already implemented, and we can see important impacts and
outcomes of this new approach in guaranteeing reliable information in SIGLIC at lower human
resource costs. With this new model, UCGIC is already assuring that the hospital performance indicators of
the existing dashboard in the SIGLIC operational database are correct and reflect, in real time, the actual care
provided to patients and the hospital performance according to the care provided, and it allows the
accurate evaluation of demand and supply in elective surgery. Additionally, the dashboard is
uploaded with information from the hospital operational databases in real time, so the indicators are
not only reliable but also current. This automatic process is also considerably less time and resource
consuming. Studies conducted by UCGIC about the implemented solutions concluded that nearly 5
days are saved on a process which previously took 10 days. In fact, by allowing automatic reports of errors
to the teams involved in the process, incident and problem management is a much more efficient
process, which makes resolution with the hospital units easier and quicker.

Another result of the partial implementation of the information quality model is the ability to provide
customized reports on demand to several entities. With the development of the new features and
screens in SIGLIC, in 2011 UCGIC was able to satisfy an increasing number of indicators and reports
requests and conduct more regular reports to monitor hospitals performance. The volume of work in
providing indicators on demand increased 64% compared with 2010. This is an area that will continue
to grow and where the UCGIC is investing greatly in the qualification of human resources and
information technology.

The next table shows the evolution of performance indicators of SIGLIC related to the quality
information system from 2010 to 2011.
Table 2: Performance indicators of SIGLIC related to the information quality system

                                                           2010       2011       Annual variation
  Nr of incidents/problems reported to the helpdesk        4.888      18.321     275%
  Nr of working hours in the application
  (Nr of entrees times access time)                        520.896    789.222    52%
  Nr of users                                              6.611      7.495      13%
  Nr of data transactions                                  4.853.236  5.857.978  21%
  Nr of collaborators in UCGIC                             11         11         0%
  Nr of collaborators in hospitals                         32.365     38.202     18%
  Nr of hospital units in SIGIC                            116        113        -3%
  DW total volume                                          950GB      1441GB     52%
  Annual growth of DW total volume                         450GB      490GB      -
  Nr of clinical processes managed by SIGLIC
  (entries in waiting list for surgery)                    565.971    600.331    6%
  Nr of input variables (SIGLIC maintenance)               881        881        0%
  Nr of reports conducted by UCGIC, planned and
  on-demand (indicators/reports)                           45         74         64%
5. Conclusions
The model to guarantee information quality developed in SIGLIC by UCGIC provides an innovative
process for the validation and qualification of massive data extracted from various local information
systems. It includes automated processes for the detection and reporting of errors, the correction of errors in
source data, and the continuous monitoring of the process, mostly automated according to best practices
in information quality. With the information quality system managed automatically by SIGLIC, it
is much less time and resource consuming, allowing UCGIC to provide reliable information to all
entities through customized reports on demand, the dashboard updated by the SIGLIC operational
database and the UCGIC BI system powered by the official data for publication. In 2011 UCGIC
increased the provision of indicators and reports on demand by more than 64% compared with 2010. Information
quality is essential to produce reports with historical, current and predictive analysis of elective
surgery production, which will lead to the creation of knowledge and intelligence to support better
decision-making by all stakeholders involved, including the government with its policies.

The challenge is to continue to improve the information quality system according to the real needs of
all stakeholders involved, from the source of the information in hospitals to the end-user of the
indicators and reports. For that, UCGIC has to constantly adapt its structure and resources to satisfy
its clients information needs. Another challenge is to create an efficient system to certify information
quality in SIGIC, especially in local hospital systems, and improve continuously the defined standards
according to new realities. The capacity of adaptation is the key factor for the success of this
innovative information quality system.
PhD Research Papers
Information Risks and Their Connection With Accounting
Marie Černá
Faculty of Economics - University of West Bohemia, Pilsen, Czech Republic
macerna@kfu.zcu.cz

Abstract: Risks can occur at any stage of the production process or the provision of mediation services.
Information is then an important basis for all decision-making processes that take place in companies. It
represents one of the scarcest and most important assets. Risks can be found at every step, and the ability to
recognize, describe and analyze them is (especially for managers and executives) very important. In order to be
able to manage risks effectively, we use the help of risk analysis. It is divided into several parts, of which the first
one, risk identification, represents a very difficult matter. Proper identification of risk is an essential component for
the correct implementation of other parts of the risk analysis and the next steps in the process of risk
management, and finally also for finding an appropriate method of risk management in conditions of a particular
company. In practice, it is possible to meet many types of risks. They are usually not isolated. We can meet them
as a complex of several types of risks. Among the risks that are currently often discussed (in connection with
increasing importance of information and development of information technologies), are included information
risks. This type of risk has been studied in relation to many areas of human activity. Accounting is a scientific
discipline, whose faultless functioning in practice is significant for ensuring smooth running of the company.
Better utilization and adaptation of existing information coming from managerial and financial accounting to the
needs of risk management can be seen as a topic of future potential. Therefore, I believe that the analysis of
including information risk management among other business activities being conducted within this area, is a
topic which would be appropriate to continue to deal with in detail. This study describes the relationship between
accounting and information management. The analysis is prepared more theoretically using deduction and
literature review. The findings of proposed qualitative research may be used as a starting point for more
extensive research.

Keywords: information, risks, information risks, risk analysis, risk management, accounting
1. Introduction
Information, risks, risk management and other terms connected with business process management are often discussed topics. One reason for this is the fact that risks affect all daily activities conducted in connection with ensuring business processes. Risks are usually not isolated; in most cases we meet them as a complex of many different types of risks, which have to be managed carefully. Among the risks that are nowadays often mentioned is information risk. The management of information risks is the subject of study of the relatively recently (ten to twenty years ago) established scientific discipline called information management, which is currently seen as a necessary part of company risk management.

Information risks are often described only as the risks connected with the security of information systems, but they represent a wider issue with a number of different perspectives. Concepts and knowledge known from other scientific disciplines can be met here, for example from human resource management, project management, risk management, software engineering, team work management, change management, etc. Information risks, in the same way as other types of risks, can be met in all areas of human activity. We use risk management to manage information risks effectively. The first part of risk management is risk analysis. It is divided into several parts, of which the first one, risk identification, represents a very difficult matter. Proper identification of risk is an essential component for the correct implementation of the other parts of the risk analysis and of the next steps in the risk management process, and finally also for finding an appropriate method of risk management in the conditions of a particular company. The existence of information systems has enabled us to deal with risk management in a more effective way, because they help us to process a large amount of data in a relatively short time and they provide a huge amount of information necessary for managing risks. Despite this, or precisely because of it, we still meet new problems and topics connected with risk management. To solve them, it seems necessary to cooperate with experts from other fields, not only IT specialists but also others, for example accountants.
2. Background
2.1 Information
Information can be explained from two different points of view. The first emphasizes the fact that information is in principle seen as an objectively existing, constant and independent value (entity). Such a concept treats information in the same way as sources like money or material: the question of how to evaluate this source is secondary, while the primary problem is its availability. If we use this first type of explanation, we basically accept the idea that generally applicable and solvable universal information systems are possible, and information management can then be limited to work with data sources. Information is most often seen this way within the hard (engineering) approaches. The second point of view emphasizes that the classical sources are only the holders of information, data. Information is seen as the purposeful selection of data, dependent on the disposition of the user, his subjective needs and his ability to interpret the information. This second point of view does not expect a direct transfer of information using information systems; rather, it speaks of the provision of data that supports managerial work (Vodáček-Rosický 1997). The meaning of information is important for this approach, and this is typical for social systems. In practice, organizations usually use both starting points, but in varying degrees.
Table 1: The influence of the concept of information on management approaches (Vodáček-Rosický 1997). In each row, the first entry reflects the mechanistic approach of organization, which views information as an objective source; the second reflects the organistic approach, which emphasizes information as the interpretation of data.
Focused on: production, quantity and efficiency | adaptability, quality and usefulness
Organizational complexity: low | high
Centralization: high | low
Formalization: high | low
Number of levels: higher and clear | lower, often unclear
Adaptivity: low and short-term | high and long-term
Output: high and short-term | low, with a growth perspective
Work: significantly differentiated and specialized; visibility of the relationship between the work tasks and the aims of the organization is not desirable | work tasks are interconnected; a clear interconnection of the major tasks and the business strategy is emphasized
Work tasks: strictly defined (rights, methods and technological procedures are specified in detail for each staff or functional position); fixed, changeable only by a decision of the highest level of control | general functions are defined; employees use an individual approach to fulfilling their obligations in accordance with given standards, take personal responsibility and encourage redefinition of given tasks, relationships and needs
Regulatory responsibility: strictly hierarchical structure; working behavior is derived from the contract between the organization and an employee | decentralized, with a network structure of relations; working behavior takes into account the interests of the organization and of the individual that accepts participation
Information: information important for management is collected by the organization and passed to the chief executive officer | important information is shared, and teamwork that increases the base of knowledge for decision making is often applied
Communication: mostly vertical | vertical and horizontal, depending on information needs and reachable data sources; it takes into account the competencies and knowledge of individuals
Content of communication: twofold, namely commands, decisions and the superiors' instructions, and reports and supporting information for decision making (made by subordinates) | sharing of information, transfer of knowledge and advice, including the sharing of ideas, values and experiences
Information is nowadays viewed by companies as a specific type of asset. It represents a valuable asset that has to be treated carefully, because it is used as a basis for each decision-making process that takes place inside the company and helps the company achieve its given goals. What is interesting about information is the fact that it does not lose its value with the passage of time; on the contrary, its value may even increase. To be able to indicate information as useful for final users, we have to be sure that it meets the qualitative characteristics of relevance, reliability, comparability and understandability.

As bearers of information we can identify numeric data, text, sound, image, … Information represents the base for knowledge creation. Knowledge is sometimes described as information that is represented and transferred as data or signals. A further step is the perception of information not only as signals, but also as symbols. This point of view indicates an understanding of the information system as a cultural phenomenon rather than a technical achievement.

The meanings of the terms information and knowledge overlap in this article; such a concept is often found in current research and in articles written about information management. We focus on the fact that information can be understood as a valuable asset, but also as a source of risk. A problematic situation occurs when we lack information in decision making, but the same risks may arise when we have too much information.
2.2 Risk
Risk is a term that can already be met in history: its use in connection with shipping has been known since the seventeenth century, and it comes from the Italian "risico". We first meet it in mathematics and later in economics, where risk is seen mainly as a possible loss. Nowadays it is understood mainly as the possibility that a loss occurs, or as the possibility of failure during conducted activities. There is no generally accepted definition of the term risk, which is why we can find a lot of definitions that try to specify it. Here are some of them:
"Risk is the probability of any result different from the expected one." (Smejkal-Rais 2006)
"Risk is an uncertain event or condition, whose incidence has a positive or a negative impact on project objectives." (Svozilová 2007)
Risk represents the change of profit caused by the deviation of a monitored parameter from the planned status. It is usually expressed in terms of impact, expressed in monetary units, and the percentage probability of occurrence in a given time period. An uncertain outcome and the possibility of achieving at least one undesirable result are connected with risk. Risk is usually understood as a negative impact on given activities; on the other hand, it is also possible to see risk as a positive thing, as an incentive or motivation to find a better alternative solution to a problem. Risk therefore has two aspects, positive and negative. As was written in the introduction, we can identify many types of risk and many possibilities for classifying them.
Table 2: Possible classification of risks
Internal and external
Subjective and objective
Frequently occurring and rare
Financial and non-financial
Static and speculative
Stable and unstable
Financial risks, security risks, environmental protection risks, risks of information systems, project risks, …

Besides the methods of classification mentioned above, we can find many others that differ from author to author.

For example, information risks (risks of information systems) can be further divided into:
Risks connected with the possibility that the quality of information will be threatened.
Risks arising from the loss of information.
Risks of failure during the work with information.
Each risk has its source and its cause, and is connected with specific symptoms and effects. Sources and causes of information risks can be unclearly defined project objectives, project objectives defined by a person other than the final user, different or unrealistic expectations of managers and product users, or ambiguous requirements for initial information. The final effects are then deviations from the set project objectives, time delays, a lower rate of utilization by the user of the final information system's possibilities, problems with exploiting everything such an information system offers, or a return to the previous information system. To be able to work with risks and to manage them, we use risk management tools.
2.3 Risk management
Competition between companies is constantly growing and evolving, and it is becoming more complex. Nowadays we can meet theories that work with the term "supercompetition", which changes recently established relationships.
Table 3: Relations before and after the supercompetition (Jirásek 2008)
Relations before the supercompetition | Relations after the supercompetition
Demand economy | Economy of supply
Market equilibrium | Market imbalance
Sustainable advantage | Unsustainable advantage
Market research | New market creation
Profit | Value added, cash flow
Standard production | Order production
Quid pro quo | The winner takes all
People | Personalities
Tasks | Authorization and motivation
Employment | Employability
Information | Knowledge
Hierarchical organizational structure | Informal organizational structure
Designing | Modeling
Companies meet risks very often in such an environment. It is important for them to be able to set priorities in risk management with respect to the impact of risks and the probability of their occurrence, and to focus on those risks that can be seen as key in terms of business management. Risk management represents an important part of the strategic management of each company. It is a complex process, subject to continuous development and improvement, that allows companies to focus on the risks connected with all the activities they carry out. Important questions that should be discussed in connection with risk management are:
What do we want to eliminate?
What do we want to control?
The risk management process consists of several steps: risk analysis (identification of assets, determining the value of assets, identification of threats and weaknesses, determination of the relevance of threats and weaknesses), risk assessment and risk management. All these components of risk management are important, and their processing requires accuracy and continuous monitoring of the conducted operations. Incorrect implementation of any part can lead not only to poorly defined and assessed risks, but also to failure of future business plans.
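To make these steps concrete, the following short Python fragment sketches a risk register covering asset valuation, threat identification and the determination of relevance, here expressed as expected annual loss. It is purely illustrative and not part of the paper's method; all asset names, probabilities and values are hypothetical.

# Illustrative sketch of the risk analysis steps named above: identify and
# value assets, identify threats/weaknesses, and determine their relevance.
# All data are hypothetical.

assets = {
    "accounting information system": 800_000,  # asset value in CZK
    "customer database": 500_000,
}

# Per asset: (threat description, annual probability, impact as share of value)
threats = {
    "accounting information system": [("unclear user requirements", 0.30, 0.40)],
    "customer database": [("loss of information", 0.05, 1.00)],
}

def annual_exposure(asset: str) -> float:
    """Relevance of all threats to one asset, as expected annual loss."""
    value = assets[asset]
    return sum(p * share * value for _, p, share in threats[asset])

for name in assets:
    print(f"{name}: expected annual loss = {annual_exposure(name):,.0f} CZK")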

It is the same in the case of information risks. If we are not able to carry out the first part, the identification of risks, correctly, we can for example unconsciously endanger the implementation of a business information system, or the activities that will be ensured by the final user of such a system. Consider an employee of a business department who needs to find out the state of sales of a given product immediately, print these data clearly, and hand them over in a usable form to a manager for assessment. If the requirements are not clearly specified at the beginning (before the implementation of the information system), or if they are specified by a person other than the final user who is expected to prepare and print the final reports, without taking that user's comments into account, we will most probably meet the situation that the information system does not meet the user's expectations. For example, it will be necessary to transfer reports from one program to another, or to prepare them manually, calculate some data, transfer them back into the information system, and print the final reports without the possibility of ensuring these activities within an information system module. Such a situation may lead to dissatisfaction of the final user of the system or the manager, and to a perception of the information system as a tool that does not facilitate the work, but rather the opposite. That is why it is so important to involve in the process of information risk management not only the managers, but also the other employees who will probably use the information system in the future.
2.4 Information management
The term information management has no clear definition. There are many reasons for this, but three are most widely mentioned:
ambiguous understanding of the term management,
ambiguous understanding of the term information,
the effect of the change in understanding of the term information management during the period since its establishment (Vodáček-Rosický 1997).
Information management passed through three development stages. The term was used for the first time by R. S. Taylor in 1966, during a conference focused on questions of the system approach and the processing of engineering information and education. At that time, information management was seen as a discipline primarily focused on solving technical problems. The second phase, the period between the 1970s and 1980s, was focused on the efficiency of labor in the economy. Here we find a transdisciplinary approach to information management, an interconnection of informatics and management views on the given issue. The main managerial question, namely which possibilities information technologies offer to support the objectives of the organization, still remained in the background; the emphasis was on ensuring efficient information processes. The beginning of the nineties represents the third phase of information management development. The emphasis is now on the effective achievement of the objectives of the organization using computing and information technologies. In this phase the need to ensure that all managerial activities are carried out both effectively (effectiveness: doing the right things) and efficiently (doing the things right) is also perceived. Currently we see information management as a discipline where applications of information systems and information technologies do not represent the manager's goals; they are used by managers as an effective tool that helps them improve their activities by meeting their individual information needs. This can be seen as a connection between modern management, informatics, system approaches and the views of other disciplines such as economics, psychology, etc. Information management has a wide range of uses (industrial sectors, education, the health care system, …).

Currently, within risk management, we talk about knowledge management or requirements management rather than about information management. This probably happens in connection with the transition from the learning enterprise to the knowledge-based enterprise. For such management it is necessary to have a functional information system that enables us to detect, monitor and measure possible risks.
Table 4: Differences between the learning enterprise and the knowledge-based enterprise (Jirásek 2008)
Learning enterprise: works with information; collects, accumulates and uses it.
Knowledge-based enterprise: works with knowledge; collects, accumulates and forms new knowledge.
3. Information management and accounting
As was written above, information management is a transdisciplinary scientific discipline. Its close relationship to many disciplines is well known, but its relationship to some other disciplines is not mentioned very often. Accounting belongs to that second group. Often without our being aware of it, accounting influences many other areas of human activity. It provides important information about the current market situation of supplied products or services, information about management within the company, and much more. In a situation where it is increasingly difficult to succeed in a competitive market environment, companies are looking for opportunities that will help them at least to maintain their market position.

Michael Porter, who studied strategy and competitiveness, described two alternatives for achieving a better competitive position for a company: diversity and cost reduction. Whereas the achievement of diversity seems to be more difficult, companies tried to use the second way, cost reduction, in the past. Currently, the possibilities provided by cost reduction are not the same as they were years ago, which is why other possibilities that would allow companies not only to hold but also to improve their future market position are being considered. Even the possibility of changing generally accepted social values is discussed in connection with this.

In the past, accounting was established to match given state standards. Care was taken to ensure that the accounting was kept correctly and that no complaint from the supervisor, the state, was met; this was basically today's financial accounting. The need to use accounting in another way, as a source of economic advice or as help to the managers of companies, arose later, the reason being the interpenetration of accounting activities and business. Nowadays the existence of an accounting information system prepared to be used in the way mentioned above is standard (a commonly known thing), especially in the case of large enterprises. The information that accounting has to provide to managers is more or less standardized; what can differ is the way of its acquisition and transfer to the manager. Final reports are, in terms of managerial accounting, delivered in various forms, and to ensure the required format of the final report, the cooperation of accountants (economists) and IT specialists is necessary. At first sight this seems to be a situation that brings no possibility of meeting any kind of risk, but that is just an illusion. Risks can occur in any phase of the risk management process, most typically during its implementation. Underestimation of any part of the risk management process may lead to the disclosure of unpleasant facts, as was written in the previous part of this article. To avoid such an unpleasant situation, we can use the help of information management tools.

Information management seems to be a clearly described discipline that is well known to managers, but the reality is often different: if you ask managers of small and medium-sized enterprises what information management is, the answer is in many cases wrong. In my thesis I will try to describe management, risk and information. I will also analyze and evaluate the current situation of the integration of information risks into the complex risk management of the selected subjects, and I will try to verify possible relations between information risk management and other disciplines (accounting) and their impact on improving business activities. This should be done with respect not only to tools, but also to soft factors, for example the skills and knowledge of the final users of an information system, their communication skills, their current physical and mental state, etc.

My research is based on collecting, analyzing and interpreting information connected with the possible cooperation of IT specialists and experts from other fields (accountants). To reach the aim of my thesis (description of information risks and identification of the problems connected with them in the activities of selected business entities), I use methods such as literature review, description, analysis, synthesis, induction, deduction, qualitative research and interviews (semi-structured, 45 minutes). The final output should be prepared using a qualitative research method (expert interviews, small number of respondents). The questions used should start with:
Who?
Why?
How?
The first thing I had to do was to prepare the questions for the expert interviews and to decide on the type and number of respondents. Because I expect semi-structured expert interviews, I decided to contact accountants and IT specialists from 5-10 organizations (small and medium-sized enterprises) in West Bohemia. The questions were divided into four groups:
general information (name of the company, status, head office, …),
information and the information system (organizational structure of the company, type of information system used, problems arising during the use of the information system, …),
accounting data and the accounting information system (number of accountants, type of accounting information system, user rights, users of information, compatibility of the accounting information system with the information system of the organization, …),
risks and risk management (use of information management, risk analysis: assets and risk factors, …).
I expect that this research will help me to write the final report about information system user requirements and activities as accurately as possible. Expert interviews conducted with both accountants and IT specialists should help me to detect and classify the main problems connected with their cooperation. I have already done some of those interviews and can say that there are many similarities between the studied companies. One of them is the fact that they usually use the tools of information management only instinctively, which can cause problems (no written statement and hence the omission of an important asset, underestimation of some kind of risk, analysis done only in the middle of a project or activity, …).

The final output of the suggested qualitative research should also bring new ideas for further (quantitative) research. The aim of my thesis is not to do statistical research, but to obtain a credible and realistic view of the studied issue; that is why I prefer the qualitative form of research. The subjectivity of respondents' answers may be seen as a problem, but there is also a positive effect, because the aim is to focus on problems connected with the practical use of information and information systems, problems that are unlikely to be mentioned in the available literature.
4. Conclusions and outlook
The existence of computers and the internet implies the possibility of more comfortable work with data, information and knowledge, ensuring that huge amounts of data can be processed in a relatively short period. Many years ago, similar functions were provided by different technologies and other media. Historical and current information systems do not differ in the way they function, but in the technology used. The computer is now a routine part of every business, and an existing, functioning information system is a necessary prerequisite for the quality setting of business processes and their control. This enables managers to orient themselves quickly in a constantly developing competitive environment and to reach the information required for further decision making. Smooth cooperation between the individual departments of a company can be ensured only by well-adjusted data (information) transfer. It is important to pay attention to the fact that, during the use of information systems, the meaning of information may be reduced to simple data and the work with them; yet the meaning of information is exactly what makes it necessary for all managerial activities. To maintain this role of information, we need knowledge and the cooperation of people from different areas, not only IT staff and managers. Some disciplines are significantly connected with information management; others contribute knowledge to it only marginally. Some disciplines are seen as active partners of information management, others are not. If we study this issue in more detail, we will find that even the disciplines that are not usually connected with information management have something to offer. Maybe we are just used to seeing all the data provided by them as given, unchanging facts, and that is why we do not pay much attention to further analysis of this topic.
References
Alexander, F. and Stevens, R. (2002) Writing Better Requirements, U.K.: Pearson.
Blaha, Z. S. (2004) Řízení rizika a finanční inženýrství, Praha: Management Press.
Blažek, V. (1997) Informační systém pro řízení podniku (Problémy implementace), Moderní řízení, pp 53-57.
Boehm, V. (1998) Informační zdroje, Českomoravský profit, Vol. 9, pp 5.
Brož, I. (1997) Management na rozcestí, Moderní řízení, pp 4-5.
Cejpek, J. (2005) Informace, komunikace a myšlení, Praha: Karolinum.
Čechová, A. (1997) Manažeři budou v informační společnosti měnit své myšlení, [online], FIRST Innovation Park, www.park.cz/manazeri_budou_v_informacni_spolecnosti_menit_sve_mysleni/ (accessed on 07/04/97).
CzechInvest. (2012) Definice malého a středního podnikatele, [online], CzechInvest, www.czechinvest.org/definice-msp (accessed on 16/05/2012).
Čermák, M. (2007) Řízení informačních rizik v praxi, Praha: Draft.
Doctorandus. (2012) Vědecké metody ve společenských vědách, [online], Doctorandus, www.doctorandus.info/info/e_kapitoly/vedecke_metody.doc (accessed on 15/05/12).
Doucek, P. (2004) Řízení projektů informačních systémů, Praha: Mak.
Doucek, P. (2010) Informační management, Praha: Professional Publishing.
Garlick, A. (2007) Estimating Risk: A Management Approach, Hampshire: Gower.
Hull, E., Jackson, K. and Dick, J. (2011) Requirements Engineering, London: Springer.
Imler, K. (2008) Strategické systémy kvality, Pardubice: Lvay.
Jersáková, J. (2012) Literární rešerše, [online], JCU, http://kbe.prf.jcu.cz/files/studenti/Literarni_reserse.pdf (accessed on 15/05/12).
Jirásek, J. (2008) Management budoucnosti, Praha: Professional Publishing.
Krulíš, J. (2011) Jak vítězit nad riziky. Aktivní management rizik - nástroj řízení úspěšných firem, Praha: Linde.
Laplante, P. A. (2009) Requirements Engineering for Software and Systems (Applied Software Engineering Series), Auerbach Publications.
Leffingwell, D. and Widrig, D. (2003) Managing Software Requirements: A Use Case Approach, 2nd edition, U.K.: Wesley.
Merna, T. and Faisal, F. A. (2007) Risk Management (Řízení rizika ve firmě), Brno: Computer Press.
Mládková, L. (2004) Management znalostí v praxi, Praha: Professional Publishing.
Oracle. (2011) Internal Auditing's Role in Risk Management, [online], Oracle, www.oracle.com/us/solutions/corporate-governance/ia-role-in-rm-345774.pdf (accessed on 15/05/12).
Ortiz, L. V. (2004) Risk Management, [online], Adbi, www.adbi.org/files/2004.04.12.cpp.risk.management.accounting.pdf (accessed on 14/05/12).
Petříková, R. (2010) Moderní management znalostí, Praha: Professional Publishing.
Prokůpková, D. (2007) Řízení rizik v orgánech veřejné správy, [online], Bilance, www.bilance.cz (accessed on 12/02/12).
Smejkal, V. and Rais, K. (2006) Řízení rizik ve firmách a jiných organizacích, Praha: Grada.
Sokolowsky, P. (2002) Informační požadavky moderního podniku 1, Praha: Karolinum.
Sokolowsky, P. (2002) Organizace a management podnikového zpracování informací 2, Praha: Karolinum.
Stoneburner, G., Goguen, A. and Feringa, A. (2002) Risk Management Guide for Information Technology Systems, Gaithersburg: NIST Special Publication.
Stýblo, J. (2008) Management současný a budoucí, Praha: Triton.
Svatá, V. (2011) Audit informačního systému, Praha: Professional Publishing.
Toth, P. (1993) Informační systémy státní správy a územní samosprávy, Praha: VŠE.
Vodáček, L. and Rosický, A. (1997) Informační management, Praha: Management Press.
Wiegers, K. E. (2008) Požadavky na software. Od zadání k architektuře aplikace, Brno: Computer Press.
A Methodology for Competitive Intelligence Metrics
Rhiannon Gainor and France Bouthillier
School of Information Studies, McGill University, Montreal, Canada
rhiannon.gainor@mail.mcgill.ca
france.bouthillier@mcgill.ca

Abstract: The literature on competitive intelligence (CI) reveals that a significant challenge exists in measuring the outcomes and impact of intelligence. There are process measures in use, but little attention has been given to measures of outcomes and impact. This imbalance has been attributed to methodological and conceptual problems of measurement. This paper proposes a case study methodology by which a model may be developed for measuring the relationship between CI products and organizational outcomes. It examines a given decision and the intelligence products that informed it, and links those products, through the decision, to the outcomes and their impact upon the organization. The methodology combines subjective assessments, made by decision makers and other employees, of past decisions and their outcomes with organizational data obtained through document analysis, to compare expert opinion to objective data. This triangulation of data seeks to link outcomes, via three indicators (financial outputs, innovation, and client relationships), to an organization's strategic plan to assess the impact of CI.

Keywords: information management; decision making; intangibles measurement; competitive intelligence;
impact measures
1. Introduction
Competitive intelligence (CI) is both a process and a product. It is the process by which an
organization, or an individual, takes information and analyzes it, to understand the competitive
environment. It is also the products that result from such analysis, such as reports and company
profiles, which are then used to inform decision making. CI monitors and attempts to anticipate the
competitive environment for competitive advantage, and is popularly believed to provide organizations
with a competitive edge. There is very little research proving a causal relationship where CI produces
organizational success, however, chiefly because CI suffers as a field of research and practice from
problematic measurement. As will be shown, although the literature states that there is a need for
measurement, there is little measurement in practice, due to inconsistent conceptualizations and
methodological complications.

It is necessary to preface this paper with a comment on an inconsistent conceptualization of the CI field. The literature review in this paper draws upon research in business intelligence (BI) and competitive intelligence activities. Bouthillier and Shearer (2003), in their review of BI and CI, note that although in some instances the terms have been used interchangeably, the scope of BI is typically larger, including internal and external information activities for the organization, while CI is narrower in scope, focusing solely on the competitive external environment. As described in the conceptual review done by Buchda (2007), the terms BI and CI share common processes (sourcing data for analysis) for similar purposes (to inform decision making and support management). For this paper, CI will be considered a synonym for BI, recognizing those commonalities.

The purpose of this paper is to propose a case study methodology by which a model may be
developed for measuring the relationship between CI products and organizational outcomes. This
methodology has been developed in response to discussions in the CI literature, as part of a doctoral
research project. The proposed methodology attempts to address some of the oft-cited challenges to
developing useful and accurate measures of CI outcomes and impact for organizations.

The paper contains the following elements: a brief review of proposed approaches and methods of measurement in the literature, found during searches of the databases LISA, LISTA, and Library and Information Studies Full Text from September 2010 to October 2011 using terms such as "measure* AND competitive intelligence"; a discussion of the conceptual issues involved, including presentation of the conceptual framework for the proposed research methodology; a description of the proposed methodology, relating it to issues identified in the literature; identification of the value and originality of the proposed research method; and some concluding remarks.
2. A review of models of measurement in the literature
The results of CI practitioner surveys have indicated that organizations tend to do little, if any, formal measurement of CI processes, products, or outcomes (Herring, 1996; Marin and Poulter, 2004; Prescott and Bharwaj, 1995). It is unsurprising, then, that little has been written about CI performance assessment. Blenkhorn and Fleisher (2007) point out in the literature review prefacing their study of CI practitioners that most of what has been written about CI assessment consists of practitioners' anecdotal experiences rather than scholarly (meaning rigorous, valid, and reliable) studies. The measurement methods and tools reviewed here are prescriptive, proposed by scholars, and have not been subject to testing, although calls have been made for more empirical testing of CI measurement methods (e.g., Pirttimäki, Lönnqvist, and Karjaluoto, 2006).

Reviewing measures that have been proposed in the CI literature, Buchda (2007) proposed a
classification and analysis framework for them, grouping them into three types:
Measures of Effectiveness (MOE)
Return on Investment (ROI)
Balanced Scorecard-Related (BSC-Related)
In this section each of these types will be summarized and critiqued for its application to CI contexts.
2.1 Measures of effectiveness (MOE)
The MOE approach was advocated by Herring (1996) at the conclusion of multi-stage exploratory
research he conducted in the 1990s for the Society of Competitive Intelligence Professionals (SCIP),
to investigate possible methods of performance measurement for CI. MOEs, simply put, identify
outcomes that indicate CI did its job, with the intent of proving functionality and value to justify
investment. Herring suggests four MOEs: time savings, cost savings, cost avoidance, and revenue
enhancement.

In another example, a set of potential MOEs is provided in a study by The Futures Group (in Davison,
2001; Herring, 1996; McGonagle and Vella, 2002). The study consisted of interviews with US
companies to identify their MOEs. The most common measures identified were:
Actions taken
Market share changes
Financial goals met
Leads generated
New products developed
Herring (1996) supported the use of MOEs because he concluded that measurement of CI effectiveness (effectiveness being how well CI is achieving its goals, see Fleisher and Blenkhorn, 2001) required a qualitative and quantitative evaluation of the entire CI cycle in cooperation with management, in order to ensure that CI activities were aligned with strategic objectives, thus ensuring value. He argues that an MOE approach allows the user to address management expectations, and establishes a framework for the most valuable assessment of CI: that of the executives using and overseeing the CI processes and products. The problem with using MOEs, however, is that implementing them can be difficult, and, as Buchda (2007) pointed out, MOEs are typically selected on the basis of desirability rather than on research proving what benefits can be expected to result from CI.
2.2 Return on Investment (ROI)
ROI methods attempt to prove profit by subtracting costs from the revenue generated by a given
activity. Davison (2000) brought together literature on CI and literature on advertising effectiveness
measures to conceptualize a Competitive Intelligence Measurement Model (CIMM). This model was
developed to provide a more tangible method than MOEs for determining CI effectiveness. The
formula below gives the return on investment for competitive intelligence (ROCII), with the intent to
produce an answer quantifiable in dollars:

ROCII = (CI outputs - CI inputs) / CI inputs
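A hedged numeric illustration may help; the figures below are invented and not taken from Davison. If a CI unit costs $200,000 per year (inputs) and $260,000 of revenue can be attributed to CI-informed decisions (outputs), then:

ROCII = (260,000 - 200,000) / 200,000 = 0.30

that is, a 30% return on the CI investment. As the weaknesses discussed below make clear, the difficulty lies in defensibly attributing the output figure to CI.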
Kilmetz and Bridge (1999) provide a practitioner's report of how to use ROI methods, recounting a
business case, to illustrate the need for users of ROI methods to engage in modelling potential
scenario outcomes of current decisions. These potential scenarios are developed in order to forecast
likely and hoped-for returns, a task that provides data in the form of most-likely outputs, to be used in
formulas such as the one above.

Davison (2000) acknowledges three significant weaknesses of the ROI method: uncertainty regarding the accuracy of the forecasts used in the model, the inability of the formula to account for intangible results and qualities of CI, and the model's lack of any consideration of organizational strategy. Davison suggests possible solutions for two of these weaknesses: first, to gauge the potential accuracy of current predictions by evaluating the accuracy of previous predictions; second, to use Likert scales to evaluate decision makers' satisfaction regarding the intangible qualities of CI products and processes, such as quality, relevance, accuracy, etc. For the third weakness, however, Davison provides no solution or proposed additional measure, simply stating that an ROI model, not having long-term data, cannot measure strategic outputs and outcomes.
2.3 Balanced scorecard (BSC)
The balanced scorecard, originally developed by Kaplan and Norton (1992), is a measurement tool
that allows users to examine organizational functions from multiple perspectives, in relation to one
another, with the intent of monitoring and improving performance. Lönnqvist and Pirttimäki (2006), in
their literature review to identify and assess measurement approaches for determining value and
managing processes for CI in an organization, suggest that a BSC approach is the most beneficial.
Pirttimäki, Lönnqvist, and Karjaluoto (2006) undertook a case study, applying a BSC to the CI unit of a
company, and argued that this approach has to be tailored to the needs of a specific context and
organization, including tracking of usage rates for CI products, satisfaction surveys, and win/loss
ratios for specific decisions.

While the BSC allows the user to address some of the complexity of CI processes, its weakness is
that it does not show clear evidence of causal relationships between CI inputs and outcomes
(Buchda, 2007; Nørreklit, 2000).

All of the approaches named in this section are useful in providing the user with some quantification of CI performance. However, as described above, the weakness of these measurement approaches is
that they do not provide evidence of causal relationships, which is also a problem for adapting
intangibles measures in the fields of knowledge management (KM) and intellectual capital (IC) to CI,
for example Alpha IC and Skandia Navigator. Although these measures are useful for identifying and
representing intangible assets, they do not provide evidence of dynamic causal relationships. These
relationships are key to measuring the value of CI through its outcomes and impacts, as will be
argued below.
3. Measurement of CI in practice
Prescott and Bharwaj (1995) did a large-scale survey of members of the Society of Competitive
Intelligence Professionals (SCIP) to understand the components of CI programs. Respondents
indicated that although they believed that CI benefits could be seen in decision making, sharing
information, and identifying new opportunities, they were uncertain as to how CI impacted strategic
areas in their organizations, namely market position, revenues, customer service, and increased
capabilities. The authors suggested that metrics needed to be developed to enable CI units to better
assess their role and impact within organizations.

Nearly a decade later, Marin and Poulter (2004) did another survey of SCIP members, with some
interviews of survey participants. The purpose of their study was to better understand CI practices and
practitioners. One result of the study was that "few organizations have any mechanisms in place to measure the value of competitive intelligence, though some organizations made an attempt to track usage of electronic CI resources" (2004: 172). The authors attributed, at least in part, the non-existence of measures to the problem of quantifying CI value and effectiveness.

Studies such as these have repeatedly found that organizations using CI are not measuring CI
processes or outcomes, although there are documented exceptions (e.g., Pirttimäki, Lönnqvist, and Karjaluoto 2006). Herring (1996) conducted a small field survey and found that, as did Marin and
Poulter (2004), organizations using CI do almost no evaluation or measurement of CI. None of the
executives he surveyed were using any formal evaluation, although they might do unexpected and
informal evaluation when reviewing budgets and/or trying to control costs. Blenkhorn and Fleisher
(2007) confirmed there are few formal measures in use, or often no measures at all.

Authors reporting on measurement repeatedly call for measures to be developed, recognizing that
they are necessary to conceptually develop the field, and strengthen research. Practitioners of CI,
when surveyed, have stated that they are aware of the need for measurement and consider its
development a priority for their field of practice (Hannula and Pirttimaki, 2003; Qingjiu and Prescott,
2000).
4. Why is measuring CI so difficult?
One problem with CI measurement is related to the difficulty of conceptualizing CI value in order to
distinguish between effectiveness, results or benefits, outputs, outcomes, and impact. Another
problem is isolating CI processes and products. Methodological issues include identifying intangibles
related to CI, and relating results to actionable CI. Some of these conceptual and methodological
issues are briefly discussed here in the context of studies done on CI measurement.

Wright and Calof (2006) examined three empirical studies carried out in Canada, the UK and Europe,
to draw comparisons of their respective approaches and findings. In reviewing their study's findings, the authors commented that "there is little consistency in terms of measurement and output value. In addressing the critical area of intelligence, there has to be some agreement within the field on how to operationalise the intelligence construct. While the definition is generally accepted, measurement is not" (2006: 462).
Calof and Wright (2008) found in their bibliometric assessment of CI literature that part of the problem
with conceptualization of measurement lay in studies attempting to measure the entire CI model, both
processes and products, rather than closely examining and testing elements of the model and
attempting to link CI to the performance measures of the organization. As a result, they recommend
that measurement studies, rather than attempting to focus on the entirety of CI functions, roles, and
products, should instead examine smaller components of CI to improve the validity of the research.

As part of this effort to examine CI more rigorously, it is first necessary to conceptualize more clearly what the constructs of the field are. What, for example, is the purpose of CI? What are the benefits of CI?

As discussed earlier in the section on measurement methods and approaches in the literature, CI
benefits can be identified as indicators of effectiveness, such as time savings. Hannula and Pirttimaki
(2003) found that for Finnish companies, their most-expected benefits of CI were better information for
decision making; ability to anticipate threats and opportunities; growing knowledge; and savings of
time and money. Marin and Poulter (2004), who studied CI practitioners, discovered that CI is often intended to help decision makers make decisions. Jaworski and Wee (1992) found that CI was designed to help increase the quality of strategic planning by improving knowledge of the market. Qingjiu and Prescott (2000), who studied Chinese CI practitioners, found that respondents believed that CI should result in improvements in decision making and customer service.

Of all the lists of reasons given by researchers, providing help to decision making, often described as strategic decision making, is the most commonly cited reason for implementing CI (see also Herring, 1996; Bose, 2008). It is our contention that most indicators of this improved decision making, as described in the literature reviewed in the previous paragraph, can be loosely grouped under three categories: financial outputs, improved client relationships, and innovation in products and services. Since supporting, and presumably improving, decision making is the most commonly cited reason for CI to be developed and used in organizations, this presents a challenge for developing measures. Unlike process measures, which can, for example, calculate employee-hours against numbers of CI products developed, measures attempting to connect CI practices and products to decision making cannot be as straightforward. Decision making is not a linear process, and it can be complex; any tool that measures decision making and its effects will necessarily involve qualitative methods. Further complications are related to questions of accuracy in self-reporting on what is essentially an internal
and subjective activity, and in allowing time to lapse for intangibles to appear (Kujansivu and Lönnqvist, 2009). Although no measurement model has yet been developed that addresses CI's role in decision making, there has been a call by a practitioner to examine how CI factors into decision making (Sawka, in Blenkhorn and Fleisher, 2007; Lönnqvist and Pirttimäki 2006; Marin and Poulter, 2004), which has been echoed by scholars' comments.
5. Conceptualizing a revised model of measurement
CI outputs and outcomes are not clearly distinguished in the literature. In the articles by both Buchda (2007) and Davison (2000), outputs are described as both the tangible and immediately visible results, such as use of CI products, and the intangible long-term results, such as the fulfilment of an objective. In their case study, Pirttimäki, Lönnqvist, and Karjaluoto (2006) more clearly distinguish outputs from other results of CI, describing outputs as assignments completed and user satisfaction, which in turn produce intangible effects such as improved decision making, which may then lead to financial consequences for an organization. In the model proposed by this paper, outputs and outcomes are distinguished from one another, borrowing a definition of outcomes from Boyce, Meadow, and Kraft's text on measurement in the information sciences: "Outcomes are the results of a system's operations. Desirable outcomes are really the broad goals or objectives for which the system was created... Outcomes are generally not tangible" (1994: 242). For this model, outcomes build on outputs, outputs being the tangible and immediately observable results of CI use in the decision-making process.

The literature frequently states, when describing the problems of CI measurement, that industries and organizations are too diverse to permit the creation of one standard measure (e.g., Kilmetz and Bridge, 1999; Lönnqvist and Pirttimäki, 2006; Rothberg and Erickson, 2005). The research methodology proposed by this paper suggests that if we conceptualize the primary value of CI as being improved strategic decision making, and select indicators of decision-making effectiveness that address both the tangible immediate outputs of a decision and the longer-term intangible outcomes of the decision, a basic, more generic model of measurement could be developed that would be applicable across multiple organizations and industries.

Given the issues raised in the literature, we suggest that a measure is needed that places an emphasis on effectiveness, meaning how well CI is meeting its goal of making positive contributions to decision making. Because this proposed measurement model would focus only on effectiveness, rather than on CI as a process or product, it would follow the recommendations made by Calof and Wright (2008) to build measurement models by examining smaller components rather than attempting to measure all of CI. Instead, it would examine only the outcomes of a decision influenced by CI, to determine whether CI truly improved the decision-making process, a benefit all authors agree should be the result of CI use. By focusing on strategic decision making, and tying the outcomes of decisions to the larger strategic plans of the organization, it would also attempt to measure the organizational impact of CI (Kujansivu and Lönnqvist, 2009; Poll and Payne, 2006). In our conceptual model, CI is viewed as one of several inputs factoring into a decision. Building on a three-stage model of decision making developed by Nicholas (2004) during his four-year study of decision making in 92 firms, the figure below has been developed; note that Nicholas (2004) based his model heavily on Herbert Simon's concept of organizational decision making. In the model, a problem is identified. Various inputs, including CI, go into identifying a range of options to be acted upon, and the choice perceived at the time to be the most optimal is chosen. From the decision, immediate and tangible outputs appear. As time goes on, intangible outcomes based in the outputs manifest and, when related to an organization's strategic plan, demonstrate the impact of the decision. The figure below provides a visual:


Figure 1: CI as an input into the decision-making process
In the proposed model, a good measure becomes one that allows time for impact to manifest,
accounts for the role of CI in decision making, and relates CI to organizational strategy. Outcome
indicators of value or benefit could then be traced through tangible outputs in multiple dimensions of
the organization. Some examples are given in the following table:
Table 1: Outcomes and outputs of a decision
Sample outputs | Outcome
Cost savings, time savings, revenue generated | Improved financial outputs
Increased CI requests by internal clients, external client retention, increased sales | Improved client relationships
New products, new services | Increased innovation
Such a conceptual model of outputs and outcomes would then allow the scholar to obtain the elements needed for the good, multi-dimensional CI measurement suggested by Blenkhorn and Fleisher (2007), for which tangible/intangible, qualitative/quantitative, and subjective/objective data can be collected.
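As a purely illustrative aside, and not part of the authors' proposed methodology, the chain from inputs through outputs and outcomes to strategic impact could be recorded in a small data structure such as the following Python sketch; all field names and values are hypothetical.

# Hypothetical record of one decision in the proposed measurement chain:
# CI is one input among several; outputs are tangible and immediate,
# outcomes intangible and longer-term, and impact is the link to the
# strategic plan.
from dataclasses import dataclass, field

@dataclass
class Decision:
    problem: str
    inputs: list                                   # e.g. CI products, financial data, intuition
    outputs: list = field(default_factory=list)    # tangible, immediate
    outcomes: list = field(default_factory=list)   # intangible, longer-term
    goals_met: list = field(default_factory=list)  # impact: strategic plan elements

    def ci_informed(self):
        """True if any recorded input is a CI product."""
        return any("CI" in item for item in self.inputs)

decision = Decision(
    problem="enter market X?",
    inputs=["CI competitor profile", "internal sales data"],
    outputs=["revenue generated", "cost savings"],
    outcomes=["improved client relationships"],
    goals_met=["grow market share in region X"],
)
print(decision.ci_informed(), decision.goals_met)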
6. Proposed methodology
Wright and Calof (2006) and Marin and Poulter (2004) recommended case study research and direct
observation of CI practices to provide empirical data that can inform the development of a CI
measurement model. The research methodology proposed here is a case study of an organization, either a government agency or a public company, with a CI unit that employs between 2 and 10 dedicated full-time CI employees, to ensure that a significant amount of resources is dedicated to CI and yet, for practical reasons, that the unit is not too large. The CI unit will have been in operation for at least 5 years, to assure maturity in its operations, and will have had a strategic plan detailing goals for the organization in place for more than 3 years.

The choice of a public company or government agency means that much of the information sought,
such as market share and profitability, is within the public domain, and the organization will hopefully
be more open to sharing related information than a strictly private firm might be.

This case study design owes much to Dalkir and McIntyre (2011) and their suggested result-based
management accountability framework (RMAF) approach to intelligence evaluation research,
specifically how they aligned their measure with assets of strategic importance to the organization,
and used indicators specific to the initiative to collect quantitative, qualitative, and anecdotal data. The
emphasis of this research is slightly different, however, in that a goal of the research is to prove the
value of one specific factor (CI) in the decisions made.

The research methodology calls for the use of two data collection methods: interviews and document
analysis. Interviews will be used with decision makers to identify and analyze the role of CI in several
significant strategic decisions made in the past 3-5 years that were informed by CI products. This
timeframe was chosen to balance the need for enough time for the results of a decision to appear against the risk of going so far back that those involved with the decision would be impossible to interview. This retrospective decision analysis will hopefully circumvent one of the obstacles to measuring CI
outcomes and impact, namely time. Interviews will also be held with CI unit employees, for the
researcher to understand what performance measures they may already use, and what organizational
processes and usage exist around CI in the organization. Both groups, decision makers and CI unit
employees, will be asked for their subjective evaluations of the strategic decisions identified, and what
outcomes and impacts they attribute to those decisions.

Data will also be collected through document analysis, triangulating the subjective opinions of the study subjects with company documents such as industry analysis reports, meeting notes, press releases, and the organization's strategic plan. Review of these documents will provide more objective data, possibly clarifying some of the difficulties in the retrospective analysis of decision making.
Once this data has been collected, the researcher will do some preliminary data analysis and
develop:
Influence diagrams representing influences at play for each decision made (Diffenbach, 1982);
Brief sketches of the organizations competitive environment at the time of each decision and at
the time of the study ("snapshots" as described by Dumay, 2009);
Visual models of organizational processes around CI unit use and products;
Visual models of organizational supports for decision-making used for each of the decisions; and
A summary of the subjective evaluations given by each participant of the outcomes and impacts
of the decision(s) with which s/he was involved.
These documents will then be taken back to the study participants. Each participant will be shown
the documents relating to his or her interviews and asked to check the accuracy of the researcher's
understanding of their subjective evaluations (Yin, 1994). Data will then be analysed to discover,
first, what weight CI had in each strategic decision and, second, what outcomes related to
innovation, relationships, and financial outputs followed. The interviews and the outcomes will then
be related to the organization's strategic plan (both past and current) to identify impact, answering
the question: did CI help the organization make a decision that successfully accomplished some
element of the strategic plan?
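As a purely illustrative aid, the coded findings for each decision could be held in a uniform record so that CI weight, outcomes, and strategic-plan linkage can be compared across cases. A minimal sketch in Python follows; the field names, the 0-1 weight scale, and the example values are our hypothetical assumptions for illustration, not part of the proposed instrument:

```python
# Sketch of a coding record for one analysed strategic decision.
# Field names, the 0-1 CI weight scale, and all values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class DecisionRecord:
    decision: str
    ci_weight: float  # coded weight of CI among all influences on the decision, 0-1
    outcomes: dict = field(default_factory=dict)  # innovation, relationships, financial
    strategic_plan_elements: list = field(default_factory=list)  # plan goals advanced

    def shows_impact(self) -> bool:
        """Impact here = a CI-informed decision that advanced the strategic plan."""
        return self.ci_weight > 0 and bool(self.strategic_plan_elements)


record = DecisionRecord(
    decision="Enter adjacent regional market",
    ci_weight=0.4,
    outcomes={"financial": "revenue growth", "relationships": "two new partners"},
    strategic_plan_elements=["geographic expansion goal"],
)
print(record.shows_impact())  # True
```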
7. Value/originality of approach
The value of this proposed conceptual framework and research methodology is threefold. First, it goes
to the heart of CI's role in advising and influencing a decision, which other measurement methods and
recommendations do not currently examine as an activity subject to a variety of influences. Second,
by developing a measurement model of outcomes and impact, with baseline or commonly agreed-upon
indicators of value and benefit (financial outputs, client relationships, and innovation), it takes a step
toward a model that is applicable to more than one organization. Third, it provides an opportunity to
discover whether outcomes and impacts of CI can actually be measured.
8. Conclusion
This paper presents a research methodology that could potentially address some of the conceptual
and methodological issues which have historically challenged the development of CI measures. To
that end, this paper has reviewed some of the measurement approaches and tools presented in CI
literature, summarizing some of the most pertinent discussion around problems of CI measurement
and conceptions of value and benefits. This review has hopefully made evident some of the
justification and rationale for the proposed research methodology.
References
Blenkhorn, D.L. and Fleisher, C.S. (2007) Performance Assessment in Competitive Intelligence: An Exploration,
Synthesis, and Research Agenda, Journal of Competitive Intelligence and Management, Vol. 4, No. 2, pp
4-22.
Bose, R. (2008) Competitive Intelligence Process and Tools for Intelligence Analysis, Industrial Management
and Data Systems, Vol. 108, No. 4, pp 510-528.
Boyce, B. R., Meadow, C. T., & Kraft, D. H. (1994) Measurement in Information Science. San Diego, CA:
Academic Press.
Buchda, S. (2007) Rulers for Business Intelligence and Competitive Intelligence: An Overview and Evaluation of
Measurement Approaches, Journal of Competitive Intelligence and Management, Vol. 4, No. 2, pp 22-54.
Calof, J. L., and Wright, S. (2008) Competitive Intelligence: A Practitioner, Academic, and Inter-disciplinary
Perspective, European Journal of Marketing, Vol. 42, No. 7/8, pp 717-730.
Dalkir, K., and McIntyre, S. (2011) Measuring Intangible Assets: Assessing the Impact of Knowledge
Management in the S&T Fight against Terrorism, in B. Vallejo-Alonso, A. Rodríguez-Castellanos, and G.
Arregui-Ayastuy, eds. Identifying, Measuring, and Valuing Knowledge-Based Intangible Assets: New
Perspectives. Hershey, PA: Business Science Reference, pp 156-176.
Davison, L. (2000) Measuring Competitive Intelligence Effectiveness: Insights from the Advertising Industry,
Competitive Intelligence Review, Vol. 12, No. 4, pp 25-38.
Diffenbach, J. (1982). Influence Diagrams for Complex Strategic Issues, Strategic Management Journal, Vol. 3,
No. 2, pp 133-146.
Dumay, J. C. (2009) Intellectual Capital Measurement: A Critical Approach, Journal of Intellectual Capital, Vol.
10, No. 2, pp 190-210.

Fleisher, C. S. and Blenkhorn, D. L. (2001) Effective Approaches to Assessing Competitive Intelligence
Performance. In C.S. Fleisher and D.L. Blenkhorn, ed. Managing Frontiers in Competitive Intelligence.
Westport, CT: Quorum Books, pp 110-122.
Hannula, M., and Pirttimäki, V. (2003) Business Intelligence Empirical Study on the Top 50 Finnish Companies,
Journal of American Academy of Business, Vol. 2, No. 2, pp 593-599.
Herring, J. (1996) Measuring the Effectiveness of Competitive Intelligence: Assessing and Communicating CI's
Value to Your Organization. Society of Competitive Intelligence Professionals: Alexandria, VA.
Jaworski, B. and Wee, L.C. (1992) Competitive Intelligence and Bottom-Line Performance, Competitive
Intelligence Review, Vol. 3, No. 3-4, pp 23-27.
Kaplan, R. S., and Norton, D. P. (1992, January-February) The Balanced Scorecard: Measures that Drive
Performance, Harvard Business Review, Vol. 70, No. 1, pp 71-79.
Kilmetz, S.D., and Bridge, A.S. (1999) Gauging the Returns on Investments in Competitive Intelligence,
Competitive Intelligence Review, Vol. 10, No. 1, pp 4-11.
Kujansivu, P., and Lönnqvist, A. (2009) Measuring the Impacts of an IC Development Service: The Case of the
Pietari Business Campus, Electronic Journal of Knowledge Management, Vol. 7, No. 4, pp 469-480.
Lönnqvist, A. and Pirttimäki, V. (2006) The Measurement of Business Intelligence, Information Systems
Management, Vol. 23, No. 1, pp 32-40.
Marin, J., and Poulter, A. (2004) Dissemination of Competitive Intelligence, Journal of Information Science, Vol.
30, No. 2, pp 165-180.
McGonagle, J., and Vella, C. (2002) Bottom Line Competitive Intelligence. Westport, CT: Quorum Books.
Nicholas, R. (2004) Knowledge Management Impacts on Decision Making Process, Journal of Knowledge
Management, Vol. 8, No. 1, pp 20-31.
Nørreklit, H. (2000) The Balance on the Balanced Scorecard: A Critical Analysis of Some of Its Assumptions,
Management Accounting Research, Vol. 11, pp 65-88.
Pirttimäki, V., Lönnqvist, A., and Karjaluoto, A. (2006) Measurement of Business Intelligence in a Finnish
Telecommunications Company, The Electronic Journal of Knowledge Management, Vol. 4, No. 1, pp 83-90.
Poll, R, and Payne, P. (2006) Impact Measures for Libraries and Information Services, Library Hi Tech, Vol. 24,
No. 4, pp 547-562.
Prescott, J. E., and Bhardwaj, G. (1995) Competitive Intelligence Practices: A Survey, Competitive Intelligence
Review, Vol. 6, No. 2, pp 4-14.
Qingjiu, T., and Prescott, J. E. (2000) China: Competitive Intelligence Practices in an Emerging Market
Environment, Competitive Intelligence Review, Vol. 11, No. 4, pp 65-78.
Rothberg, H. and Erickson, G. S. (2005) From Knowledge to Intelligence: Creating Competitive Advantage in the
Next Economy. Amsterdam: Elsevier Butterworth-Heinemann.
Wright, S., and Calof, J. L. (2006) The Quest for Competitive, Business and Marketing Intelligence: A Country
Comparison of Current Practices, European Journal of Marketing, Vol. 40, No. 5-6, pp 453-465.
Yin, R. K. (1994) Case Study Research: Designs and Methods (2nd ed.). Thousand Oaks, CA: Sage
Publications.
The use of Virtual Public Space and eCommunities to Kick-Start eParticipation: Timisoara, Romania
Monica Izvercianu and Ana-Maria Branea
Politehnica University of Timisoara, Timisoara, Romania
monica.izvercianu@mpt.upt.ro
anabranea@yahoo.com

Abstract: Romania is passing through a period of administrative reorganization as it aligns itself with the
regulations of the European Union. Top-down decentralization and reorganization into regions and euro-regions
must be coupled with a bottom-up restructuring based on communities in order to bring the decision-making
process as near as possible to the real problems. Even though for the European Union, as stated in the Bristol
Accord, 2005, citizens' participation and sustainable communities are issues of major importance, in Romania
these regulations are adopted but not appropriated. Citizens' inquiries are thus carried out so as to minimise the
probability of appeals, while neighbourhood consulting councils are political stepping stones with no power in the
administrative decision-making process, unknown to the public and uninterested in consulting it. Citizens'
participation and the spirit of community are closely related to public space, but in Romania public space is dying,
increasingly being used only for transfer and transformed into parking space. Because during the 50 years under
communist leadership an intense state policy was carried out to abolish communities and discourage any
participation unguided by the state, all forms of gathering being prohibited for fear of uprising, public space was,
and still is, viewed as belonging to the state and not the people. The younger generation, untainted by the
communist-induced disregard for public space, abandons it instead of taking ownership, as it lives, an average of
4 hours a day, in a virtual, global community hardly rooted in its physical location. In order to attain a community-
based restructuring of the administration, in this context, it is necessary to double the physical public space with a
virtual one and create a framework for public involvement. The use as an incentive of a percentage of the city's
funds for citizen-promoted projects to improve the quality of life in their communities, accessed through a
competition on feasibility and public support, is the only method of attracting interest and trust in the process.

Keywords: community, eParticipation, eGovernment, citizens' empowerment, virtual public space
1. Introduction
The process of reorganization, decentralization and alignment to the European Union's regulations in new
member states has been extensively studied in recent years.

In Romania's case, new administrative levels, Regions and Euro-regions, have been introduced.
Despite their purpose of increasing efficiency, their lack of governing status or power per se has
resulted in increased bureaucracy. At the same time, the European Union's emphasis on citizens'
participation has created tension and confusion.

Governments worldwide have increasingly become closed bureaucratic institutions having only
sporadic contact with their constituencies (Millard, 2009). They have, as suggested by Ferro and
Molinari (2010), developed a view of public services provision centred on administrative fulfilment.
Consequently, a decrease in public interest in political issues, the democratic deficit (Steffek, et al.,
2008), was registered in direct relation to the opaqueness of the decision-making process. Public
participation is regarded as a means not only of mending local problems through sustainable
communities but also of attempting to solve the widespread democratic deficit problem.

Even though participation might be vital to democracy, depending on each region's historical
background it can be a sensitive or even controversial subject. In liberal settings, self-interested political
actors strive for private goals in a market-like arena (Wiklund, 2005). Romania, however, still bears
the marks of communitarian traditions in which, under the pretext of supposed common interests, an
abuse of participation occurred. Mass, homogeneous, coercive and mandatory activities, supporting
only the approved opinions, left the population sceptical and reluctant to re-engage. The shift towards
individual, heterogeneous participation, in pursuit of both personal well-being and the greater common
good, is difficult and requires further research.

The aim of this study is to present a framework for public participation based on partnerships and
coalitions, in a context of increased blurring of the boundaries between and within the public, private
and non-governmental/non-profit sectors (Smith & Dalakiouridou, 2009), for an open, transparent and
collaborative environment for government-citizens-stakeholders interactions, defined as Connected or
Networked Governance by the UN in 2008. The particular character of Romania's history with
participation is taken into consideration when investigating citizens' interest and willingness to
participate.
2. The European Union's emphasis on participation and Romania's response
The importance of citizens' participation is internationally recognized, both as direct individual
participation and as NGO-mediated. The 2001 European Governance: A White Paper (Commission of
the European Communities, 2001) emphasises the need for a stronger interaction of regional and local
governments and civil society as a responsibility of Member States. The same year, Citizens as
Partners: OECD Guide to Information, Consultation and Public Participation in Policy-Making
(Gramberger, 2001) defined three levels of citizen and authority cooperation: Informing (unidirectional
communication initiated by government or citizens), Consultation (acceptance by authorities of
citizens' feedback after prior informing) and Active participation (involvement of citizens in drafting
public policy while the final decision remains with the authorities). The Code of Good Practice for Civil
Participation in the Decision-Making Process (Council of Europe, 2009), an analysis and identification
structure for the steps and actors involved in civil participation, defines, however, four levels. While the
first two are identical to those of the OECD guide, the latter two consist of Dialogue and Partnership,
the highest form of participation (shared responsibility through co-decision-making bodies).

Based on Arnstein's ladder of citizens' participation (Arnstein, 2000) and a critical analysis of the
above-mentioned classifications, a participation framework was developed, illustrating the proposed
levels of involvement and the effect each can have on citizens (Figure 1).

Through the eGovernment Action Plan 2011-2015 the European Commission promotes a new
generation of government services, open, flexible and collaborative, aimed at engaging and
empowering European citizens and companies. The four priorities (Empower citizens and businesses,
Reinforce mobility in the Single Market, Enable efficiency and effectiveness and Create the necessary
key enablers and pre-conditions to make things happen) are the basis for new international,
interoperable systems and key elements to form an inclusive and sustainable knowledge economy as
proposed in the Europe 2020 Strategy. It is also stipulated that by 2015 50% of citizens and 80% of
companies should use eGovernment.

Figure 1: Levels of participation and effect on participants
Unfortunately, Romania is prepared neither for participation nor for eGovernment. Law 52/2003,
concerning decisional transparency in public administration, also known as the sunshine law, requires
central and local government administrations to consult citizens and civil society organizations, but
under active participation the minimum conditions stated are open meetings and archived records of
debates, with nothing on actual citizen involvement. A 2011 government requirement to publish local
budgets on municipalities' websites uncovered that over 50% did not have a website, while
approximately 80% of those which did published only information specifically required by law.




Figure 2: European eGovernment usage (source: Eurostat)
2.1 Participation and eParticipation - opportunities, risks and challenges
While collaboration between government agencies was described by Bardach in 1998 as an unnatural
act between two non-consenting adults, the new level of openness, transparency and interaction
needed brings the complexity of collaboration with both agencies and citizens to a whole new level.
This is conditioned not on unanimous participation but on the existence of the opportunity to participate
(Witteveen, 2000), which can be facilitated by supplementing traditional methods with ICT-based ones,
removing the confinement of time and space. However, from the administration's point of view, it
seems that cooperation between citizens and the government in interactive policymaking is valuable
as long as politicians can continue to do their work and make the final decisions (Michels, 2006).
Resistance is encountered from administrations and government agencies due to the loss of control,
the labour intensity, and the difficulty of balancing access and security. Thus arises the need for a shift
in civil servants' approach from a command-and-control attitude to one based on connection and
collaboration, both internally and externally (Friedman, 2007).

Defined as direct citizen involvement in, or influence over, governmental processes (Bucy &
Gregson, 2001), political participation through the use of ICT, eParticipation, enables citizens to take
part in decision-making processes and develop social and political responsibility (Maier-Rabler &
Huber, 2010), merging the top-down (system-oriented) and the bottom-up (actor-oriented)
perspectives. Also referred to as online public engagement, eParticipation can serve to encourage
two-way communication between government and citizens, educate citizens about the rationale and
complexity of policy-making, legitimize government decisions and provide opportunities for mutual
learning (Coleman and Gøtze, 2001).

According to a comparative analysis of eParticipation initiatives (Peart & Ramos-Diaz, 2008), the
majority are characterized by a closer alignment with the government's interests than with those of the
citizens, a lack of guidance and learning support for novice users and, more importantly, poor opinion
aggregation and visualization capabilities, namely traceability of one's contributions, others'
responses on the same issue and the policy makers' feedback and responses in an easily navigable,
multi-threaded, cross-referenced database. This occurs because, generally, the online service
infrastructure is more of an electronic mirror of the physical one, with no innovation of internal
working processes (automation), and is based on available, on-budget technologies rather than on
the actual needs and expectations of the citizens. Coupled with low promotion of the infrastructure,
limited to making it available, and resistance on the part of administrative bodies, out of mistrust, lack
of the necessary skills and the intensity of the work needed, these characteristics result in high rates
of perceived inefficiency and low citizen take-up (Ferro & Molinari, 2009; Verdegem & Verleye, 2009).

The issues of participation and the knowledge gap between administration, experts and citizens,
irrespective of education level, are similar, for eParticipation, to those concerning levels of digital
literacy, the ability to access, navigate, critique and create content through ICT (Mansell, et al.,
2009), which is not equally distributed within society. The gap between Digital Natives and Digital
Immigrants (Prensky, 2001), based on differences in age, class, gender and education level, can
however be bridged through participation in affinity spaces based on users' skills and interests
(Jenkins, et al., 2006), citizen-related participation then being just a translation of skills. The European
Commission, through its Digital Agenda (2010), identifies the fostering of digital literacy among
citizens as a priority and a key competence in a knowledge-based society. Another risk is the low
benefit-high cost scenario (Curtin, 2007), namely the actual need of each government for high levels
of participation versus the population's demand and interest. Still others relate to what Tsoukas
defines as the tyranny of light: obscuring citizens' needs behind quantifiable indicators, targets and
benchmarks (Tsoukas, 1997).

The benefits of participation translate into increased service efficiency and effectiveness through a
better understanding of the public's needs and desires; improved quality and legitimacy of the
decision-making process through greater transparency, awareness, acceptance of and commitment
to policies; active citizenship, by generating social capital and mobilizing voluntary labour (Smith &
Dalakiouridou, 2009); and education of both citizens and politicians on their rights and responsibilities.
To these, eParticipation, through the tools of social computing, usually referred to as Web 2.0 (Punie,
2009), adds a reduction in the costs of coordinating discussions and collaborations between the
stakeholders and enhanced deliberativeness and information processing capabilities, exceeding all
expectations when used as a tool for coordinating collaborative actions (Shirky, 2008). By facilitating
the co-creation of public services, eParticipation enables the citizen's transformation from a consumer
into a prosumer (Tapscott, 1995).
2.2 Citizens' participation and sustainable communities
The Bristol Accord, 2005, lists citizens' participation and sustainable communities as issues of major
importance. These communities are defined as places where people want to live and work, now and
in the future, as they meet the diverse needs of existing and future residents, are sensitive to their
environment and contribute to a high quality of life, are safe and inclusive, well planned, built and run,
and offer equality of opportunity and good services for all (The Office of the Deputy Prime Minister,
2005). Among the eight established characteristics of a sustainable community, its method of
governance (well run) ranks high, defined as having effective and inclusive participation,
representation and leadership, in this order. Governance systems, which must be representative and
accountable, are seen to facilitate equally leadership (strategic and visionary) and participation
(inclusive, active, effective) of both individuals and organizations.

The local level has become an excellent laboratory for democratic innovations (Alonso, 2009), as it is
here that the state is most clearly seen as a negotiating state (March and Olsen, 1995) and that
implementation is considerably easier, with lower risk in case of failure. According to Alonso,
governance at the local level requires the substitution of hierarchy, as an instrument of coordination,
with a variety of networks comprised of individual and collective actors with different degrees of
institutionalization (i.e. governance as an alternative to hierarchies) (Alonso, 2009), characterized by
continuous consensus and group decision-making. The traditional hierarchies (top-down governance)
are substituted by Habermasian communicative rationality, which is grounded in negotiation with and
among responsible citizens. Traditionally, local administrations make decisions in the name of the
public interest but often rely exclusively on expert knowledge, and the structures of hierarchical
coordination and administrative rationality prove to have little in common with the true or perceived
necessities of the citizens, resulting in dissatisfaction and distrust (Alonso, 2009).
2.2.1 Participation and communities in Timisoara
Some forms of participation, be it in person or through some use of ICT, are made available to the
citizens of Timisoara but are not facilitated and are in some cases even hampered. Access to data is
one of the main complaints of would-be participants. Even the public enquiries that are mandatory by
law prior to approving any urban development plan have no guidelines for their organization: no
minimum number of participants, no feedback, no metrics of a project's exposure to the public, and
no questions raised if no objections were made. These thus create the premises for enquiries held for
show, with public announcements of their due date in little-read newspapers and posters placed in
areas with no interest in the project, so as to minimise the probability of appeals.

Unfortunately, in Romania regulations are often adopted but not appropriated. In 2003,
neighbourhood consulting councils were formed with the purpose of increasing the level of
communication between citizens and administration and the level of citizens' involvement in the
decision-making process and in neighbourhood event organizing, finding clear solutions to
neighbourhood problems and encouraging social cohesion. With no power in the decision-making
process, these councils do not inspire citizens' confidence and, as has happened in many cases
worldwide, were taken over by political parties (Michels, 2006), becoming political stepping stones,
unknown to the public and uninterested in consulting it. The main problem lay in their basic structure,
oriented more towards a false representation (one becomes part of a council after filing a CV at the
City Hall and being approved) than a true citizens' participation. About half of these councils ceased
their activity after a couple of years, and most of the remaining ones make an average of three public
announcements per year.
2.2.2 Virtual public space
As citizens' participation and the spirit of community are closely related to public space, one can easily
assess the situation by its state: in Romania public space is dying, increasingly being used only for
transfer and transformed into parking space. More and more people move from home to car to work
and back again, hardly perceiving anything outside this system; it is therefore no surprise that parking
spaces gradually take over any available space (previously green areas, playgrounds, pedestrian
pathways) and are a key criterion in the search for a new residence, even above apartment layout
quality. As public space exists only conditioned by the existence of private space, the past 50 years
under communist rule and the following 20 of transition have left their mark on Romanians'
perception of it.

If we consider the definition of communism as a system of social organization based on the holding of
all property in common, with actual ownership ascribed to the community as a whole, in reality it had a
completely different effect on the perception of property, as seen over the years following the
Revolution. Having property forcefully taken away and then denied for many years left the Romanian
population with a hunger for it that materialized in the highest percentage of privately owned
homes in Europe (96%), followed by the Lithuanians and Slovaks (89%) and the Hungarians (87%), all
sharing a similar former type of leadership.

Public space was, and still is, viewed as belonging to the state and not to the people. Despite the
initially satisfactory facilities with which public space in residential units was provisioned (playgrounds,
squares, green areas), all were abandoned as attention focused on one's own property, further
widening the gap in the perception of public versus private space.

Because during the 50 years of communist leadership an intense state policy was carried out to abolish
communities and discourage any participation unguided by the state, the result was a general state of
distrust, a remnant of the fear of informants, and no community spirit. Forced participation and
simulated enthusiasm for meetings and processions left a bitter taste and a general attitude of not
voicing one's opinions and discontents, half-heartedness, disbelief in better times and, ironically, an
expectation that the state would fix everyone's problems.

The younger generation, untainted by the communist-induced disregard for public space, abandons it
instead of taking ownership, as it lives, an average of 4 hours a day, in a virtual world hardly rooted in
its physical location. Increasingly, the time spent online surpasses that spent with the family, affecting
all aspects of life, social contacts and interactions. Regardless, the new generation is willing and
accustomed to being part of a community, participating and voicing its opinions, and this could mean
the salvation of citizens' participation in Romania if the virtual habits can be successfully transferred
first to a local network and then to the real world.

In order to attain a community-based restructuring of the administration, in this context, it is necessary
to create a framework for public involvement by doubling the physical public space with a virtual one,
taking advantage of Romanians', and especially the younger generation's, interest in online
communities and collaboration.

Figure 3: Types of internet users (a) and evolution of social network users (b) in Romania

Even though, according to Eurostat data, almost 40% of Romanians do not have access to the internet
through a computer (the unplugged), the allure of social networking sites decreases that number
daily through the use of mobile devices. In little over 2 years, Facebook alone reached 4,719,000
users, over 21% of the entire Romanian population and 64.67% of its internet users. As the
predominant users are aged between 18 and 34, this provides an opportunity, since this bracket
comprises the most active and pro-involvement groups. Classifying the types of internet users
(creators/activists 13%, critics 19%, joiners 24% and spectators 44%) makes the structure's similarity
to physical participatory actions, and the potential it holds, easy to notice. Affinity spaces, forming
virtual communities, have been recognized as opportunities for active citizenship (Department of
Constitutional Affairs, 2007), should the risk of disengaging people from shared responsibilities,
obligations and duties toward fellow citizens and the state be overcome and their shared interests
addressed in a way that stimulates them.

To facilitate participation and transform neighbourhoods into communities, and thereafter into
sustainable ones, it is necessary to translate online communities to the local realm. The key to the
online counterparts lies in their structure, based on common interests (a passion for photography or
sports) or on similarity in characteristics (young mothers), which are more defining than physical
location. Thus neighbourhood communities would in fact be a network of interlinked sub-communities,
each belonging to a greater community from which it can derive support for common interests.

In order to measure Timisoara's citizens' knowledge of the participation means available and their
willingness to use them, a questionnaire survey was carried out. Even though 96% of the
respondents were interested in the problems facing their neighbourhood, only approximately 50%
knew of the participation and eParticipation tools available to them but could say little else, and only
4% actually used them. 94% expressed a willingness to participate; of these, most of those aged
between 18 and 45 favoured ICT means, while respondents over 45 generally preferred traditional
methods, namely physical presence at the debate.

Based on a cross analysis of the participants' interest in their neighbourhood's problems and their
willingness to participate in the decision-making process, it can be noted that the majority of the
somewhat interested, and over half of the not interested, would participate through ICT tools should
they be encouraged. Divided by age group, the most willing to participate are, fortunately, the most
abundant internet users, those aged 18-34. However, when asked about their knowledge of the
means available to them and about which they would turn to in case of a problem, the preferred
choice was the neighbourhood councils (NC), proving the feasibility of the decentralization concept.
Unfortunately, the vast majority of those turning to NCs were those who either knew of their existence
but little else or had never heard of them previously, a proof of the councils' malfunction and the
consequent citizens' mistrust.

Figure 4: Preference for traditional or ICT enabled participation based on interest in neighbourhood
problems (a) and age group (b)
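The cross-tabulation behind a breakdown like Figure 4 can be reproduced in a few lines of Python; the records below are invented stand-ins for the survey data, included only to illustrate the computation:

```python
# Sketch of the interest-vs-channel cross analysis; all records are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "interest": ["very interested", "somewhat interested", "not interested",
                 "somewhat interested", "very interested", "not interested"],
    "channel": ["ICT", "ICT", "ICT", "traditional", "ICT", "traditional"],
    "age_group": ["18-34", "18-34", "35-44", "45+", "18-34", "45+"],
})

# Share of each interest level preferring ICT versus traditional participation,
# mirroring Figure 4(a).
by_interest = pd.crosstab(responses["interest"], responses["channel"],
                          normalize="index")

# The same breakdown by age group, mirroring Figure 4(b).
by_age = pd.crosstab(responses["age_group"], responses["channel"],
                     normalize="index")

print(by_interest.round(2))
print(by_age.round(2))
```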
The general conclusion of the questionnaire survey was a high interest and willingness to participate,
mostly through the use of ICT, but low initiative and an even lower knowledge of rights and options. In
order to achieve effective civic participation, a three-stage process is needed, each stage deriving
from the completion of the previous one: information (citizens' education on the local government's
policies, strategies, projects and the means to participate); control (accountability of local government
resulting from increased transparency of actions and from public interest and involvement); and
participation (participative and deliberative democracy through citizens' empowerment).

Figure 5: Preference for partners in solving neighbourhood problems based on knowledge of
neighbourhood council activity
The key, however, is attracting interest and earning the citizens' trust in the process, and this can be
achieved by using as an incentive a percentage of the city's funds for citizen-promoted projects to
improve the quality of life in their communities, accessed through a competition on feasibility and
public support.

Based on a comparison of three well-known participatory budgeting initiatives, which according to
Sintomer are a paradigm for participation (Sintomer, et al., 2008), especially at the local level,
allowing citizens to have a say in the way the city budget is spent, we devised the framework of an
eParticipation process based on Romania's needs and characteristics. In the case of Brazil's Porto
Alegre, the citizens are able to state their preferences for the city's future projects, forming thematic
investment categories, and to vote on their individual region's priorities, creating the basis of a
budgeting matrix. This in turn divides the available funds of each thematic category among the 17 city
regions based on their population, dysfunctions and selected priorities. In Salford, UK, the final
decision over the city's budget is taken by the City Council using a resource matrix based on the
needs expressed by the citizens, by post or online, and on the needs of the areas. In the third case,
Getafe, Spain, citizens are able both to create proposals and to vote for their favourite. The top five
are afterwards examined by the local authorities' technical staff to determine their technical, economic
and legal feasibility, and through a public debate they are approved for implementation (Alfaro, et al.,
2010).
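The Porto Alegre mechanics amount to a weighted allocation matrix. The sketch below illustrates the idea only; the regions, weights and figures are invented for the example and do not reproduce the actual Porto Alegre rules:

```python
# Illustrative sketch of a Porto Alegre-style budgeting matrix.
# All regions, weights and figures are hypothetical.

REGIONS = {
    # region: (population, dysfunction_index on a 0-1 scale, priority_votes)
    "Region A": (120_000, 0.8, 340),
    "Region B": (45_000, 0.3, 90),
    "Region C": (80_000, 0.6, 210),
}

CATEGORY_FUNDS = {"Transport": 2_000_000, "Housing": 1_500_000}

# Assumed criterion weights (population, dysfunctions, priorities); they sum to 1.
W_POP, W_DYS, W_PRI = 0.4, 0.35, 0.25


def region_scores(regions):
    """Composite score per region from the three normalised criteria."""
    total_pop = sum(pop for pop, _, _ in regions.values())
    total_votes = sum(votes for _, _, votes in regions.values())
    return {name: (W_POP * pop / total_pop
                   + W_DYS * dys
                   + W_PRI * votes / total_votes)
            for name, (pop, dys, votes) in regions.items()}


def budgeting_matrix(category_funds, regions):
    """Split each thematic category's funds across regions by score share."""
    scores = region_scores(regions)
    total = sum(scores.values())
    return {category: {region: round(funds * score / total, 2)
                       for region, score in scores.items()}
            for category, funds in category_funds.items()}


for category, allocation in budgeting_matrix(CATEGORY_FUNDS, REGIONS).items():
    print(category, allocation)
```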

In Timisoara's case, the citizens are not prepared for the responsibilities conferred by the Brazilian
model, while the other two are limited: the Salford model uses citizens as a database of perceived
needs and problems, while the Spanish one limits their power to a restricted number of projects.

A compilation of these initiatives forms the base of our process, with the particularity of limiting the
level of participation based on a project's scale, to ensure that the correct know-how and expert
knowledge are applied and to limit potential backfires. Therefore, for priority-one city projects, such as
large infrastructure projects, the citizens' participation is limited to drafting the city's hierarchy of
priorities and taking part in public debates, while the final decision, arbitration, is left in the hands of
the local authorities. However, for smaller-scale, community-level problems, the solution can be
mainly citizen-generated, validated by the local authorities and voted on in a city-wide competition for
funds, backed by a negotiation-stage security fail-safe, as sketched below.
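That scale-based routing can be expressed as a simple decision rule. In the sketch below the budget threshold and the labels are our hypothetical assumptions, since the framework itself fixes no numeric boundary:

```python
# Sketch of the proposed scale-based routing of participation levels.
# The threshold and labels are illustrative assumptions only.

CITY_SCALE_THRESHOLD = 500_000  # hypothetical budget (EUR) above which a project
                                # counts as a priority-one city project


def participation_level(estimated_budget: float) -> dict:
    """Return who generates the solution and who takes the final decision."""
    if estimated_budget >= CITY_SCALE_THRESHOLD:
        return {
            "scale": "city (priority one)",
            "citizens": ["draft the city's hierarchy of priorities",
                         "take part in public debates"],
            "final_decision": "local authorities (arbitration)",
        }
    return {
        "scale": "community",
        "citizens": ["generate the solution",
                     "vote in a city-wide competition for funds"],
        "final_decision": "citizens, after validation and a negotiation stage",
    }


print(participation_level(2_000_000))  # large infrastructure project
print(participation_level(80_000))     # neighbourhood-level problem
```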



Figure 6: Proposed two-stage framework for participation
3. Conclusion
Through participation, and more feasibly eParticipation, government accountability and project
subsidiarity can be accomplished, together with a balance between economic competitiveness, social
cohesion and environmental quality. In order to overcome Romania's participation deficiency, it is
necessary to adapt the strategy for citizen empowerment, as required by the European Union, to the
younger generation, its problems and, most importantly, the communication means it already uses.
Scepticism and reluctance to engage, a result of Romania's experience with participation and of the
opaqueness of the current decision-making process, can only be dissipated through immediately
visible results of citizens' proposals, making local, small-scale interventions the most efficient.
Acknowledgements
This work was partially supported by the strategic grant POSDRU 107/1.5/S/77265, inside
POSDRU Romania 2007-2013, co-financed by the European Social Fund: Investing in People.
References
Alfaro, C., Gomez, J., Lavin, J. M. & Molero, J. J., 2010. A configurable architecture for e-participatory budgeting
support. eJournal of eDemocracy, 2(1), pp. 39-45.
Alonso, Á. I. (2009) E-Participation and Local Governance: A Case Study, Theoretical and Empirical Researches in
Urban Management, No. 3(12), August 2009, CCASP TERUM, pp 49-62.
Arnstein, S. R., (2000). A Ladder of Citizen Participation. In: R. T. Gates & F. Stout, eds. The City Reader: 2nd
Edition. New York: Routledge, pp. 240-252.
Bristol Accord, (2005) Conclusions of Ministerial Informal on Sustainable Communities in Europe, UK
PRESIDENCY, The Office of the Deputy Prime Minister
Bucy, E. & Gregson, K., (2001). Media Participation: A Legitimizing Mechanism of Mass Democracy. New Media
& Society, 3(3), pp. 357-380.
Commission of the European Communities, (2001). European Governance: A White Paper, Brussels.
Council of Europe, (2009). Code of Good Practice for Civil Participation in the Decision-making Process,
Conference of INGOs of the Council of Europe.
Coleman, S. and Gøtze, J., (2001). Bowling Together: Online Public Engagement in Policy Deliberation, Hansard
Society, London, UK; http://bowlingtogether.net/bowlingtogether.pdf
Curtin, D., (2007). Transparency, audiences and the evolving role of the EU Council of Ministers. In: J. Fossum &
P. Schlesinger, ed. The European Union and the public sphere: a communicative space in the making?.
London: Routledge, pp. 246-258.
Department of Constitutional Affairs, (2007). The Future of Citizenship, United Kingdom
Eder, K., (2007). The public sphere and European democracy. Mechanisms of democratisation in the
transnational situation. in: J. Fossum & P. Schlesinger, ed. The European Union and the public sphere: a
communicative space in the making?. London: Routledge, pp. 44-64
Ferro, E. & Molinari, F., (2009). Making Sense of Gov 2.0 Strategies: No Citizens, No Party. Vienna.
Ferro, E. & Molinari, F., (2010). Framing Web 2.0 in the Process of Public Sector Innovation: Going Down the
Participation Ladder. European Journal of ePractice, www.epracticejournal.eu, March, Volume 9.
Friedman, T., (2007). The world is flat 3.0: A brief history of the twenty-first century. New York: Picador.
Gramberger, M., (2001). Citizens as Partners OECD Handbook on Information, Consultation and Public
Participation in Policy-making, Paris: Organisation for economic co-operation and development.
Jenkins, H.; Clinton, K.; Purushotma, R.; Robison, A.; Weigel, M., (2006). Confronting the Challenges of
Participatory Culture : Media Education for the 21st Century.
Liebert, U., (2007). Transnationalising the public sphere? The European Parliament, promises and anticipations.
In: J. Fossum & P. Schlesinger, ed. The European Union and the public sphere: a communicative space in
the making?. London: Routledge, pp. 259-278.
March, J. and Olsen, J. (1995), Democratic Governance, Free Press, New York.
Mansell, R., Avgerou, C., Quah, D. & Silverstone, R., (2009). The Oxford Handbook of Information and
Communication Technologies. Oxford: Oxford University Press.
Michels, Ank M. B., (2006) Citizen participation and democracy in the Netherlands, Democratization, Vol. 13,
No. 2, April 2006, pp. 323-339, ISSN 1351-0347, Taylor & Francis.
Millard, J., (2009). Everyday Government. Malmo, Sweden.
Peart, M. & Ramos-Diaz, J., (2008). Taking Stock: Local e-democracy in Europe and the USA. International
Journal of Electronic Governance, 1(4), pp. 400-433.
Prensky, M., (2001). Digital Natives, Digital Immigrants. On the Horizon, 9(5).
Punie, Y.; Misuraca, G.; Osimo, D, (2009). Public Services 2.0: The Impact of Social Computing on Public
Services, s.l.: JRC Scientific and Technical Reports. European Commission, Joint Research Centre,
Institute for Prospective Technological Studies.
Shirky, C., (2008). Here comes everybody: The power of organizing without organizations. Penguin Press Group.
Sintomer, Y., Herzberg, C. & Röcke, A., 2008. Participatory budgeting in Europe: potentials and challenges.
International Journal of Urban and Regional Research, Volume 32, pp. 164-178.
Smith, S. & Dalakiouridou, E., (2009). Contextualising Public (e)Participation in the Governance of the European
Union. European Journal of ePractice, Volume 7. March.
Steffek, J., Kissling, C. & Nanz, P., (2008). Civil Society Participation in European and Global Governance: A
Cure for the Democratic Deficit?. Basingstoke: Palgrave Macmillan.
Tapscott, D., (1995). The Digital Economy: Promise and Peril in the Age of Networked Intelligence. New York:
McGraw-Hill.
Tsoukas, H., (1997). The tyranny of light. The temptations and the paradoxes of the information society. Futures,
29(9), pp. 827-843.
Verdegem, P. & Verleye, G., (2009). User-centred E-Government in practice: A comprehensive model for
measuring user satisfaction. Government Information Quarterly, Volume 26, pp. 487-497.
Wiklund, H., (2005). A Habermasian analysis of the deliberative-democratic potential of ICT-enabled services in
Swedish municipalities. New Media & Society, 7(2), pp. 247-270.

Strategic Management and Information Evaluation
Challenges Facing Entrepreneurs of SMEs in ICT
Maroun Jneid¹ and Antoine Tannous²
¹Doctoral School Cognition-Language-Interaction, Université Paris 8, Saint-Denis, France, and Faculty of Engineering, Antonine University, Beirut, Lebanon
²Marketing Department, Faculty of Business, Lebanese University, Tripoli, Lebanon
mjneid@etud.univ-paris8.fr
antoine_tannous@yahoo.fr

Abstract: The achievements of young entrepreneurs have an impact on a country's development; governments
are therefore creating awareness about entrepreneurship to encourage young people to choose
entrepreneurship as a career path, and the number of entrepreneurs is growing continuously. Yet a start-up
company exists in a world of uncertainties and needs two to four years to break even, then another two years to
become stable, so the survival of entrepreneurs in the early stage relies on efficient strategic management,
competitive advantages and well-organized internal knowledge management. The entrepreneurial process
model for small and medium business, in its related stages, is based on activities; the efficiency of a stage
activity depends on the accuracy of the information evaluation and differs from one economy to another, and a
pertinent evaluation of the external environment influences the efficiency of strategic management in a start-up
company. This paper presents the challenges in information evaluation and strategic management facing
entrepreneurship from the early stage, the important role of the Information System (IS) in the survival of the
entrepreneur's business, and the economic context elements to be considered when designing a Competitive
Intelligence system solution. This study is based on a qualitative approach and will be of use to entrepreneurs of
SMEs that need to stand on a competitive edge from their early stage but lack awareness and have limited
capabilities to invest in the integration of Competitive Intelligence and information evaluation process systems.

Keywords: competitive intelligence system, efficiency-driven economy, entrepreneurial process, information
evaluation, strategic management
1. Introduction
In this globalization era, a start-up company exists in a world of uncertainties, and the management of
the survival phase is becoming more complex, relying more on efficient strategic management and
competitive advantage.

We will present the information evaluation and strategic management challenges facing
entrepreneurship from the early stage, the role of the Information System (IS) and the competitive
advantage to be gained, in addition to the economic context elements to be considered when
designing a Competitive Intelligence system solution.

This article is divided into three sections: the first section presents the entrepreneurial process and
the context of its challenges; the second section presents the challenges and opportunities in an
efficiency-driven economy; and the third section presents the results of qualitative research
conducted with 24 early-stage entrepreneurs.
2. The entrepreneurial process of SMEs
Based on Moore's model, Bygrave (2004) presents the entrepreneurial process as a set of stages and
events that follow one another. These stages are: the idea or conception of the business, the event
that triggers the operations, implementation, and growth. In his model of the entrepreneurial process,
Bygrave (2004) presented a framework that highlights the critical factors driving the development of
the business at each phase; according to Bygrave (2004, p5), entrepreneurial traits are shaped by
personal attributes and environmental factors such as opportunities, competition, resources,
customers, etc.

The figure below presents the entrepreneurial process with the related attributes and factors.


Source: Bygrave, W.D., (2009). The Entrepreneurial Process. In The Portable MBA in
Entrepreneurship. Hoboken, NJ, USA: John Wiley & Sons.
Figure 1: Model of the entrepreneurial process
3. Strategic management and information evaluation challenges facing the
entrepreneurs
At each of the above stages in the entrepreneurial process, there are serious information evaluation
challenges facing the entrepreneurs. Said Hussein and Maryse Salles (2003) have grouped the
information requirements into thirteen topics:
1. Market opportunities
2. Anticipation of the market behaviour
3. Continuous improvement of skills on the acquired markets
4. Identification of new qualified human resources
5. Partnership possibility with other companies
6. Partnership possibility
7. Foreign market opportunities
8. Improvement of skills regarding the external market
9. Adaptation constraints on the exportation markets regarding competitors
10. Identification of competitors
11. Competitors' functional strengths and weaknesses
12. Adaptation capacity regarding environmental change
13. Latest information and communication technologies
This information will be the source supporting any strategic decision and a Competitive Intelligence
(CI) system tool. According to Grabova (2010), a CI system is the organizational process for
systematically collecting, processing, analysing and distributing information to decision makers about
an organization's external environment; and according to Hohhof (1994), a CI system may track
competitors, markets, technological developments and sources.

According to Hohhof (1994), information is analysed in a specific environment of problems,
transformed into intelligence, and delivered to decision makers. In order to be considered
intelligence, information must be relevant to the decision at hand and must support an action or
decision; unfocused or nice-to-know information is not appropriate in CI activities.
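Grabova's four steps (collect, process, analyse, distribute) and Hohhof's relevance test suggest a simple pipeline shape. A minimal sketch follows; the sources, keyword filter and recipients are invented for illustration and are not taken from either author:

```python
# Minimal sketch of a collect -> process -> analyse -> distribute CI cycle.
# Sources, filtering rules and recipients are hypothetical examples.

def collect(sources):
    """Gather raw items from external sources (feeds, reports, filings)."""
    return [item for source in sources for item in source()]

def process(items):
    """Normalise and de-duplicate raw items."""
    return sorted({item.strip().lower() for item in items})

def analyse(items, keywords):
    """Keep only items relevant to the decision at hand (Hohhof's test)."""
    return [item for item in items if any(k in item for k in keywords)]

def distribute(intelligence, recipients):
    """Deliver the resulting intelligence to decision makers."""
    return {recipient: intelligence for recipient in recipients}


# Hypothetical usage: the "office party" item is filtered out as nice-to-know.
sources = [lambda: ["Competitor X cuts prices ", "competitor x cuts prices"],
           lambda: ["New telecom regulation drafted", "Office party planned"]]
relevant = analyse(process(collect(sources)),
                   keywords=["competitor", "regulation"])
print(distribute(relevant, recipients=["founder", "marketing lead"]))
```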

However, SITE (2007) considers the following challenges: how and what information should be
collected, how the information should appear when presented, where it should be distributed, and
who should have access to it.

Nowadays, Competitive Intelligence is becoming an essential part of any enterprise, even an SME or
an entrepreneurial venture in its early stage. According to Grabova (2010), this necessity is caused by
the increasing volume of data indispensable for decision making. Existing solutions and tools are
aimed at large-scale enterprises and are thereby inaccessible or insufficient for SMEs because of
high price, complexity and high infrastructure requirements. SMEs require light and affordable CI
system solutions.
4. Efficiency-driven economy factors and their impact on early-stage entrepreneurs: the case of Lebanon
According to Schwab (2011), Lebanon's economy is classified as in transition from the efficiency-driven
stage of development to the innovation-driven stage, in the three-stage classification of economic
development; and according to Dutta (2011), Lebanon was included for the first time in the ICT
readiness report and ranked 95th in the ICT readiness index among the 138 economies studied in the
2011 report.

As stated by BMI (2011), continuous growth of 12% in the Lebanese ICT market is forecast until 2015.
And among the seven countries studied from the MENA region, Lebanese adults are the most likely
to foresee good opportunities for starting a business, and to perceive that they have the knowledge,
skills and experience to do so (IDRC, 2010).

According to Porter (1980), entrepreneurs are attracted by the competitive forces present in certain
economic sectors, such as industry growth, concentration, rivalry and entry barriers. The continuous
economic growth of the ICT sector in the global economy attracts entrepreneurs, and there are
potential opportunities for ICT entrepreneurship in sectors such as telecommunications, banking,
utilities, real estate and government. Realising them depends, on one side, on the political stability
needed to implement economic reforms in the country and, on the other, on the capabilities of the
entrepreneur to survive the early-stage challenges in information evaluation and strategic
management and to recognise the important role of the Information System (IS) and of competitive
advantage, which is the issue of our study.
5. Qualitative research
One aspect of this qualitative study is to explore the role of information technology in facing the
information evaluation challenges and in supporting efficient strategic management. A qualitative
approach was selected because it better suits our field of study. The targeted actors were 60
early-stage entrepreneurs qualified for the second round of a competition for the best entrepreneurial
business plan in the north region of Lebanon; candidates undergo a rigorous selection process and
must meet specific criteria prior to being selected for the second round of a three-round competition.
24 of the 60 qualified entrepreneurs answered our questionnaire, half of them entrepreneurs in the
ICT sector.
6. Results
Decision makers need information to support their decision making, but a critical element of this study
involved exploring the strategic view of an early-stage entrepreneur: his awareness about collecting
information on the competitive context and on the supporting tools, and his capability to achieve
these complicated tasks. All the questions therefore followed the same logic, beginning with the
current Available Information (A.I.) the entrepreneur has about the specific topic, followed by the
Needed Information (N.I.) that the entrepreneur is searching for on the related topic, and then by the
Information Collection Methods and Tools (I.C.M.T.) that the entrepreneur intends to use.
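Purely as an organizational aid, each topic's three-part answers can be held in a uniform structure for analysis; the dictionary layout below is our assumption, and its content merely abbreviates the recapitulation that follows:

```python
# Sketch of a per-topic response structure for the A.I./N.I./I.C.M.T. logic.
# The layout is assumed; entries abbreviate Table 1 below.

questionnaire = {
    "market opportunities": {
        "available_information": "most target markets chosen without market research",
        "needed_information": ["market size", "market growth", "purchasing power"],
        "collection_methods_tools": ["surveys", "focus groups", "secondary data"],
    },
    "anticipation of market behaviour": {
        "available_information": "most cannot identify their information needs",
        "needed_information": ["economic statistics", "industry reports"],
        "collection_methods_tools": ["online questionnaires", "expert interviews"],
    },
}

for topic, answers in questionnaire.items():
    print(topic, "->", ", ".join(answers["collection_methods_tools"]))
```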

The table below recapitulates the information evaluation of the related topics as described by the
interviewed entrepreneurs.
Table 1: Questionnaire recapitulation
Information Evaluation (available, needed, collection methods and tools) and Analysis
1. Market opportunities
(A.I.): A minority of the entrepreneurs have not identified their target markets. On the other hand, for half of the
entrepreneurs the target markets are not adapted to their ideas. Only 9 entrepreneurs specify how they
identified their market niches, and 11 the markets to avoid. 13 of them do not explain on what basis they chose
their markets, and 21 of them have done no market research to select their target markets.
Consequently, we deduce a lack of objectivity among the entrepreneurs in their choice of target markets: they
did not rely on market research to verify that choice.
(N.I.): The majority of the entrepreneurs seek information about the market size, market growth, market needs
and life cycle, consumers' revenue, the social classes and their purchasing power, how to calculate risk, the
Lebanese governmental rules and regulations, etc. They also search for specific information related to their
fields of activity, such as: recycling factories and storage; the business model of interior engineering contractors;
the natural gas market in the region; etc.
(I.C.M.T.): The majority of the answers were as follows: surveys, interviewing experienced people, focus
groups, qualitative and quantitative customer studies, searching for information from associations, market
opportunity analysis, secondary data from existing surveys published online, attending events and forums,
phoning supplier and intermediary contacts, statistics, consultants, market research companies, and
observational research methods.
2. Anticipation of the market behaviour
(A.I.): The majority of the entrepreneurs are not able to identify their actual information needs; their replies have
nothing to do with the question. The same situation appears for the second part of the question: they are not
able to specify their information needs regarding the anticipation of the market behaviour. This suggests that
they have no awareness of the importance of an information system in the success of a company.
(N.I.): A few entrepreneurs give no answer. The majority of them specify many information needs related to
qualitative and quantitative market research, as follows: weekly and monthly economic statistics and surveys;
more information on Lebanese users online; industry behavioural reports over the last 5 years; how close the
product is to the client's need; the size of the market; the size of potential new entries; data concerning the
introduction of a new product on the market; etc.
(I.C.M.T.): Answers to this question are quite similar to the collection methods and tools for market
opportunities; moreover, some entrepreneurs added the following: monthly economic reports; direct contact
with customers; existing reading and research (secondary data) as well as online questionnaires and data,
follow-up by direct call, and interviews with specialists; contacting universities, since the majority of the start-ups
come from there; local marketing companies; etc.
3. Continuous improvement of skills on the acquired markets
(A.I.):
i. How you are commercializing your products
The majority of the entrepreneurs do not know what information to look for when marketing their products. This
means they have weaknesses in identifying the information needed at this phase; they ignore what type of
information they need for their success.
ii. The latest technologies in your business domain
A few entrepreneurs indicate their needs regarding the latest technologies in their business domain, as follows:
a phone application; a marketplace system implemented by WePay, with which they are in direct contact; the
new wireless perfume-mixing technology; an online website, an in-store PC and, later, iPhone and Android
applications.
iii. How market calls for proposals are identified
Very few entrepreneurs stipulate online research, university publications and governmental institutes; the other
entrepreneurs have no idea how to get the information.
iv. The credibility, solvency and reliability of your actual clients
Entrepreneurs have no idea where to get the needed information.
v. How new financial aids for SMEs are identified
Very few entrepreneurs know how to identify financial aid for SMEs: through bank reviews, which, according to
some entrepreneurs, usually provide good data, especially about SMEs. They also stipulated that it is identified
by meeting different financial companies to discuss the best financing solution, one with a low interest rate that
meets the entrepreneur's required budget. Others will get information from business seminars, social media,
newspaper advertisements and networking.
vi. The new laws on the actual markets
The majority of the entrepreneurs do not have accurate information about new laws on their actual markets, but
a very small number indicated how they would get the information, as follows: conduct market research; identify
products on the market, both online and in stores, and note which companies make them and where they are
sold; develop a great prototype which will evolve into a professionally made product; connect with other
entrepreneurs in the local area or online to share information; etc.
(N.I.): Entrepreneurs would like to keep up to date with new trends, new designers, new quality control
processes, new machinery that can speed up production and new solutions for saving while decorating and
creating fashion, as well as staying tuned to the recycling movement. They also need to be kept permanently
acquainted with the latest technologies, analyses, research and publications regarding their business, with
follow-up observations and consultations, and to have easy access to this research. Entrepreneurs seek
information about the market growth rate in Lebanon for each sector of activity; online consumer behaviour,
merchants' requirements and customer reporting needs; tracking changes in clients' needs; continuous review
of market trends and of the experience gained, following up with suppliers and customers; and information
about the laws on entering and exiting the market. In addition, they look for ministries' laws and standards and
for SME financial aid from the EU or the UN.
(I.C.M.T.): Entrepreneurs answered as follows:
The internet; factories; designers' and engineers' experience; following up, analysing and studying all information regarding their business, leading to a continuous amelioration of their skills in the acquired markets; reading and browsing new studies by universities, companies and chemical experts in the domain; implementing a usage-tracking system to detect the behaviour of merchants and shoppers, as well as following up with randomly selected clients to gather feedback; opening and participating in a forum to exchange information; continually searching the internet and other sources for new technologies and publications; seeking information at banks and at European Union or United Nations websites and offices; and always looking at what the competitors are offering. Finding the relevant partners for these tasks might not be easy in Lebanon; etc.
4. identification of new qualified human resources
(A.I.): According to the answers below, entrepreneurs possess little information in this domain.
The sources of new individual skills
Entrepreneurs' answers were as follows: universities; training centres and associations; studies; training and continuous training; learning from previous experience; existing friend networks; and social media. Entrepreneurs would like to contact students of the top universities in the Lebanese market, as well as reputable recruitment agencies.
The new organizational modes for production
Entrepreneurs have no definite opinion on this topic.
Protection from the internal skills of the competitors
Some entrepreneurs said that the best way to protect the company from negative competition is to stay ahead and strive for value production and excellence in service delivery. A few entrepreneurs would offer bonuses according to criteria related to grade, quality and time.
Training and continuous training
A few entrepreneurs indicated that a variety of centres can provide continuing training, including in-company training. Training is important for building a learning organization and keeping the company up to date; the company should regularly update the team with periodic training sessions and seminars.
The labour law (employees' status)
A few of them said they would attend a workshop related to the topic or take university courses. Some admitted to no deep understanding of labour law, while others said that the full Lebanese labour code is accessible on the internet and at the Labour Ministry.
(N.I): The majority of the entrepreneurs look for information about candidates' education levels and experience and about the salary range in each activity field; some seek records of the best graduates from local universities, ministries and specialized agencies in Lebanon, the most appropriate legal status for a candidate, and how to place job announcements in newspapers.
(I.C.M.T.): Some entrepreneurs would conduct behavioural interviews and CV discussions; others would interview, analyse and then decide. Some did not specify their needs, while some said they would ask third parties and friends, search online, or contact recruitment agencies, universities and head-hunters. Another would like to create a wide professional network.
5. partnership possibility with other companies
(A.I.): The majority of the entrepreneurs search for information regarding partnerships. They know these are useful because partners share revenues, experience and skills.
(N.I): Some of them do not know how to benefit from partnerships. Others seek information about local and foreign companies: the terms of partnership; the legal status and structure; the technological needs; etc.
(I.C.M.T): Some entrepreneurs did not specify their needs, while others indicated them as follows: meeting with potential partners; universities; research and development labs; etc.
6. partnership possibility with research and development labs, universities and institutions, and consulting companies
(A.I.): The majority of the entrepreneurs did not specify their needs regarding research and development labs, universities and institutions, and consulting companies. Some indicated that partnership with universities and institutions includes student internships, and that consulting companies help them through training, planning, coaching, assessments and market studies.
(N.I): The majority of the entrepreneurs did not know what to answer. A few need to know the added value for their business, the type of agreements they should sign, what the fees would be, and the related laws.
(I.C.M.T.): Some entrepreneurs said they would consult experienced people, universities, research agencies, consulting companies, online searches, the yellow pages, lawyers, audit experts and development labs.
7. foreign market opportunities
(A.I.):
Possibility of foreign market entry
The majority of respondents do not have a clear idea about foreign markets. A few indicated Germany, Dubai and Abu Dhabi.
Potential foreign markets
The majority of respondents do not know precisely which foreign markets hold potential for them. A few listed the following countries: Egypt, Jordan and all the Arab Gulf countries, as well as the USA, Germany, Canada, India, the UK, the Netherlands and Sweden. The majority of the entrepreneurs are aware that Lebanon is a small market, so foreign markets are essential in order to grow.
Call-for-proposal identification in foreign markets
The proposal stage still lies ahead for these entrepreneurs; they were unable to answer this question because they have no information about it.
(N.I): The information needed about foreign market opportunities was as follows: the competition; advertising and marketing companies' conditions and regulations; the financial system of each country; the markets with the highest customer potential; all the actors and factors that could influence decision making, supply and demand; and the market size, the size of the population, tourists, their incomes and their intention to purchase.
(I.C.M.T.): The majority of the entrepreneurs do not know how to obtain information about foreign markets. Some would seek information from consulting companies, traders, studies, online research, mobile market research, market research, calls to embassies, and friends, partners and contacts, as well as from law consulting firms and ministries.
8. amelioration of skills regarding the external market
(A.I.): Entrepreneurs need permanent studies of the external market, for which the amelioration of skills is in high demand through conferences, consulting and analysis. They said that a lot of information is available on the internet, without being able to specify what information is available about product technologies, quality and promotion methods.
(N.I): Entrepreneurs' answers were as follows: consulting and research; analysis of the external market, market size and potential segments; where, when, how and to whom to promote; the cost of TV advertisements in foreign countries; the latest technologies in science journals and publications; the latest results from research institutes; graduates from universities with degrees relevant to the business; how best to recruit qualified staff, and what their main strengths would be; etc.
(I.C.M.T.): Direct statistics; research; market research and intelligence; searching for a local representative to obtain all the required information; consultancy; following specialized blogs, websites and companies (mobile games and technology); downloading and testing existing games; etc.
9. adaptation constraints on the exportation markets regarding competitors
(A.I.): Strategic and political constraints; operational constraints; regulatory constraints; contractual constraints; the potential of the product; and language adaptation according to the country. Another means of assessing a company's export potential is to examine the unique or important features of its products: if those features are hard to duplicate abroad, the product is likely to be successful overseas. Entrepreneurs also want to know whether the product is patented and cannot be duplicated.
(N.I): Which products should be selected for export development? What modifications, if any, must be
made to adapt them for overseas markets regarding the competition? In each country, what is the basic
customer profile of our competition? What marketing and distribution channels are used by the competition
and what should be used to reach our customers? Etc.
(I.C.M.T.): Some entrepreneurs stated that they would first build a mini-project in the foreign country and observe how it works, followed by an economic analysis; if it works, they would make a direct investment. Others would conduct deep online searches, consult agencies, commission a market study in the foreign country, and approach sales-office representatives, embassies, the ministries of the foreign countries and the European Union office.
10. identification of competitors
(A.I.):
Competitor identity (Locals, Numbers, Volume)
Entrepreneurs have little information about their competitors' identity: they know only their names and lack other information related to volume, turnover and profitability.
New patents
Few entrepreneurs have information about their competitors' products, services and prices. The same applies to competitors' product technology and promotion methods, as well as to their distributors' lead times and their patents.
(N.I): The majority of the entrepreneurs search for information about the competition's size and market share and for details of their offers: products, services, technology, prices, distribution channels and promotional campaigns, as well as their strengths, weaknesses, objectives, strategies and their consumers' behaviour.
(I.C.M.T.): The majority of entrepreneurs would obtain information from surveys, market research, online searches and consulting. They would also contact friends in different local market-research companies, follow the competitors' progress and gather information about their clients. Others would obtain information from the industrial or commercial register.
11. competitor functional strengths and weaknesses
(A.I.): Entrepreneurs have very little information about their competitors.
(N.I): Entrepreneurs need to know their competitors' strategy, production capacity, organization, and financial and human-resource capabilities, and to evaluate their markets. They also need to know about new entrants, competitor mergers and acquisitions, and the main features and services that yield high income. Others require sales and profits by market and brand, market shares and size, distribution systems, profiles of senior management, advertising strategy and spending, consumer profiles and attitudes, and, furthermore, customer retention levels.
(I.C.M.T.): The same answers as for topic 10 apply; in addition, the majority of the entrepreneurs need to search for information about their competitors, and a few stated that they would buy competitors' products and analyse them.
Other entrepreneurs would collect the information they need as follows: recorded data: annual reports and accounts, press releases, newspaper articles, analysts' reports, regulatory reports, government reports, presentations and speeches; observable data: price lists, advertising campaigns, promotions, tenders, patent applications; opportunistic data: focus groups, meetings with suppliers, trade shows, sales-force meetings, seminars, conferences, recruiting ex-employees, discussions with shared distributors, social contacts with competitors.
12. adaptation capacity regarding environmental change
(A.I.): Personal experience; consultancy; developing partnerships with universities.
(N.I): How long does adaptation take and what does it cost? Entrepreneurs also need to know about cultural and technological changes in order to adapt to them.
(I.C.M.T.): Consultancy and checking statistics on all the markets; annual reports and accounts, press releases, newspaper articles, analysts' reports, regulatory reports, government reports, presentations and speeches.
13. latest information and communication technologies
(A.I.):
Information collection tools (internet, databases, etc.):
Using smarter technologies such as CRM, iOS, Android, Windows Phone, the internet, external databases and third parties; using client-server apps; etc. Other entrepreneurs specified personal interviews, self-administered surveys, focus groups, the internet, media outlets, questionnaires, databases, the press, exhibitions, contacts, etc.
Information treatment tools:
Some entrepreneurs added semantic, linguistic, statistical and neural-network methods.
Information storage (database, documentary database):
Some entrepreneurs mentioned databases and documentary databases, etc.
Information diffusion tools:
Some entrepreneurs mentioned MailChimp and Google services.
(N.I): The latest methods of information collection, treatment, storage and diffusion, and the capability to implement, store and use the information in a running business; new changes in communication protocols; new databases; the latest technological advances for the web and mobile, mainly in e-commerce security, as well as new programming languages; and internet growth and adaptability, cost and performance, including the latest versions of the .NET framework, mailing systems and phone applications.
(I.C.M.T.): Using internet websites, related blogs, newsletters and the Ministry of Telecommunications; visiting expositions; reading magazines on smarter technologies; signing up to news and professional sites to closely follow technological progress; taking special courses and training; taking advice from friends; and then first building a mini-project followed by economic observation and analysis.
7. Conclusion
From this qualitative study we can conclude the following about the entrepreneurs of SMEs, including those in the ICT sector:
 A lack of objectivity and an absence of market research among some entrepreneurs in their choice of target markets
 A lack of awareness of the importance of information-system support in facing information-evaluation challenges
 A lack of information about competitors' strategy, potential markets, production, organization, and financial and human-resource capabilities, together with weaknesses in identifying the sources of information related to the amelioration of skills in the acquired market in their early stage
 A need for continuous updates on the latest regulations, technologies, analyses, research and publications regarding their business, drawn from surveys, market research, online searches and consulting about competitors
 The importance of a market-behaviour information tracking system for survival
 Product and language adaptation constraints on the exportation markets regarding competitors
The results of this study highlight the important role of information systems in the implementation and development of the entrepreneur's business; as information technologies have brought more visibility to strategic management, more entrepreneurs have gained awareness of their survival conditions and competitive positioning.

There is also a need to create awareness of successful entrepreneurial processes and of Competitive Intelligence (CI) software solutions supporting entrepreneurial activities from the early stage. While existing solutions are mostly designed for large-scale enterprises, SMEs require CI solutions with light architectures that also consider the economic factors (language, sources of information, lack of skilled human resources in this domain) of the environment where the entrepreneurial process takes place.

To that end, an affordable software-solution architecture that SMEs can integrate from their early stage can profit from the advantages of a Service-Oriented Architecture deployed in a cloud-computing (SaaS, PaaS and IaaS) service environment.
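As an illustration only, and not part of the study itself, the following sketch shows in Python how such a light, service-oriented CI back end for an SME might be decomposed into small, independently deployable services behind a single entry point. All names (Signal, MarketWatchService, DiffusionService, CIFacade) are hypothetical and chosen for readability.

# Hypothetical sketch of a light SOA-style CI back end for an SME.
# Each service could be deployed independently (e.g., as a SaaS/PaaS
# component); the facade is the single entry point for the entrepreneur.
from dataclasses import dataclass
from typing import List

@dataclass
class Signal:
    """One piece of collected market or competitor information."""
    topic: str    # e.g., "competitor", "regulation", "market opportunity"
    source: str   # e.g., "survey", "online search", "bank review"
    content: str

class MarketWatchService:
    """Collects and stores signals (information collection and storage)."""
    def __init__(self) -> None:
        self._signals: List[Signal] = []

    def collect(self, signal: Signal) -> None:
        self._signals.append(signal)

    def by_topic(self, topic: str) -> List[Signal]:
        return [s for s in self._signals if s.topic == topic]

class DiffusionService:
    """Formats and diffuses collected information (information diffusion)."""
    def publish(self, signals: List[Signal]) -> str:
        return "\n".join(f"[{s.source}] {s.content}" for s in signals)

class CIFacade:
    """Single entry point composing the underlying services."""
    def __init__(self) -> None:
        self.watch = MarketWatchService()
        self.diffusion = DiffusionService()

    def report(self, topic: str) -> str:
        return self.diffusion.publish(self.watch.by_topic(topic))

if __name__ == "__main__":
    ci = CIFacade()
    ci.watch.collect(Signal("competitor", "online search",
                            "New entrant offering mobile payments"))
    print(ci.report("competitor"))

In such a decomposition, each service could run as a separate cloud component and be billed on a SaaS basis, which addresses the light-architecture and affordability constraints mentioned above.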
Method Engineering Approach to the Adoption of
Information Technology Governance, Risk and Compliance
in Swiss Hospitals
Mike Krey¹, Steven Furnell¹, Bettina Harriehausen² and Matthias Knoll²
¹Centre for Security, Communications & Network Research, Plymouth University, Plymouth, UK
²Department of Computer Sciences, Faculty of Economics and Business Administration, Darmstadt University of Applied Sciences, Darmstadt, Germany
Mike.Krey@zhaw.ch
Abstract: Against the background of current reforms and the aftermath of increasing regulation in the health care sector, hospitals are enhancing and integrating concepts of IT Governance, Risk and Compliance (IT GRC). Experience with isolated and often immature partial concepts in these fields shows that the major challenges for the adoption of IT GRC in hospitals are close-meshed organisational structures, legal restraints and IT systems that have grown increasingly heterogeneous over the years, just a few of the aspects that make hospitals a sensitive field for the implementation and governance of IT. In this paper a method that supports the adoption of integrated IT GRC concepts is developed. The proposed method comprises different method elements that support the relevant conceptual, organisational, technical and cultural aspects of the hospital environment.
Keywords: method, design science research, health care, IT governance, risk, compliance
1. Introduction
Like many developed countries, Switzerland is facing increasing expenditure on its health system: 11.4% of the Swiss gross domestic product (GDP) was spent on health care in 2009, while the OECD average was 9.6%. The Swiss health system is characterised by three key features: the first is its federal structure and a complex system of powers and responsibilities by level of government; the second is a political tradition of direct democracy and governance through consensus; and the third is an emphasis on managed competition within the health system rather than state-run (e.g. integrated health care model) arrangements. Legislative reform efforts therefore primarily aim at increasing productivity within health care while ensuring the quality of care. The central catalyst of these changes is the introduction of incentives in in-patient care through diagnosis-related groups (DRGs) in many European countries. Since 2009, hospitals in Switzerland have been transitioning to a new remuneration approach which provides case-based payments. This new system was introduced in 2012 and is becoming the dominant payment mechanism for hospitals in Switzerland. As in many other countries that have already introduced DRGs (e.g., Germany and Austria), the new tariff system promises transparency and comparability of in-patient services across hospitals. For the affected hospitals it is necessary to develop concepts and reforms in order to work more efficiently and keep control of their business (medical and administrative) processes. As in many other industry sectors, when it comes to the optimisation and reorganisation of processes, information technology (IT) can be an effective instrument for improvement (Berensmann 2005). IT can be a driver for diversification in competition and the creation of innovative strategic competitive advantages (Piccoli and Ives 2005). This endeavour requires, besides a thorough understanding of the related business processes and the ability to deal with innovation, a goal-oriented use of the given resources. This calls for IT governance in terms of an integrated and comprehensive approach to effectively align, invest in, measure, deploy and sustain the strategic and tactical direction and value proposition of IT in support of the business, with respect to the management of risk and compliance issues (IT GRC) (Weill and Ross 2004).
1.1 Problem statement and objectives
Although the paper at hand focuses on the Swiss health system, it may be applicable to other countries which share the same problems. In addition to the given fact of an ageing society, other challenges to be met in the endeavour to fundamentally reorganise the health care sector are, e.g., close-meshed organisational structures within the hospital and between its stakeholders, legal restraints based on a concurrently federal and decentralised structure, and IT systems that have grown heterogeneous over the years (VIG, 2005), which are just a few of the aspects that make the health care sector a sensitive field for the implementation and governance of IT (Porter and Teisberg 2004). In recent years a range of best practice models (e.g., Control Objectives for Information and Related Technology (COBIT) or the Information Technology Infrastructure Library (ITIL)) as well as proprietary frameworks (e.g., the Microsoft Operations Framework (MOF)) have been developed. These frameworks promise that they can be implemented independently of the size, industry sector and status of IT innovation of the enterprise (Johannsen et al. 2007). Representative studies conducted both in Switzerland and internationally suggest that the concepts and models meeting the multi-layered challenges in hospitals so far can be classified as inadequate and non-sustainable (Hoerbst et al. 2011; Krey et al. 2010; Koebler et al. 2010). The reasons for the limited spread of best practices in the hospital environment are as varied as the challenges to deal with. In the case of COBIT, the lengthy documentation of several hundred pages can be seen as an obstacle that quite simply asks too much of IT executives, as can the circumstance that a COBIT implementation approach starts from the premise that a basic understanding of IT governance principles and issues, and presuppositions such as an IT strategy, corporate goals or a process-oriented IT department, are already defined or in place. According to Krey et al. (2010), an inadequate maturity of these issues, right up to their complete absence, can be found in Swiss hospitals, which makes a one-size-fits-all approach questionable and calls for an anticipative and more fundamental approach to the implementation of IT GRC. The main objective of this research work is therefore to develop a design artifact for the adoption of IT GRC in hospitals, taking domain-specific requirements, limitations and considerations into account. Based on the previous problem statement, the following research question is addressed:
How can the adoption of IT GRC in hospitals, with respect to domain-specific
characteristics and requirements, be systematically supported?
The paper at hand is structured as follows: in the second section the methodology and research framework are presented. In Section 3 related work on the definition and implementation of IT GRC is discussed. The related domain-specific requirements for the adoption of IT GRC are presented in Section 4. Section 5 discusses the derived meta-model of the method and its method elements. In the final section, conclusions are drawn and an outlook on further research is given.
2. Methodology and research framework
The domain-specific problem and the objectives derived from it are assigned to the research discipline of information systems (IS). Although IS has been influenced by different sciences, a common opinion about the adequate research methods to be used is still missing. Hence, two research paradigms are currently discussed in IS: 1) behaviour-oriented research (BS) and 2) design science research (DSR). Whereas BS aims at the description and prediction of phenomena through the application of appropriate theories, and thus basically at understanding the truth in relation to a natural object of observation, DSR causes artificial changes on the research object itself (March and Smith 1995). March and Smith have proposed a list of four general artifacts which can be classified as DSR output: 1) constructs, 2) models, 3) methods, and 4) instantiations (p. 256). Methods describe a number of activities (e.g., guidelines or algorithms) used to carry out tasks or develop findings; they are goal-directed plans for manipulating constructs so that the solution statement model is realised (March and Smith 1995). Because of their problem-solving character, methods are considered an appropriate artifact for this research work. The method to be developed is thus the artifact of the development process in terms of DSR and is understood as systematic guidance which describes in a comprehensible way how to address IT GRC principles, considering the possibilities and constraints in hospitals. In order to systematise the DSR development process, the approach by Hevner et al. (2004) has been applied, and the following phases and their purposes have been identified for this research work (cf. Figure 1). Within the (1) problem identification phase, the weaknesses of existing approaches and the resulting possibilities and goals of new artifacts are derived. In this work, the lack of dissemination of IT GRC methods in the health care sector is the initial problem; the research gap is characterised by the lack of methods and practices that meet the identified characteristics and requirements. Phase (2) includes the requirements analysis and the review of alternative solutions. The actual development of the artifact, which includes the definition of related elements and the rigorous application of research methods, is part of the third phase, (3) method development. The testing of the proposed method aims to demonstrate its utility through suitable domain experts (hospital executives) and is part of the (4) evaluation phase. The (5) communication and reflection of the research results towards hospital executives is already part of the problem identification, method development and evaluation phases; towards the scientific community, the communication and reflection of the results take place through the publication of academic papers. The environment describes the subject area of research in the dimensions people, (business) organisation and technology; in the research work at hand, the environment is represented by the hospital and its particularities (cf. Section 4). The knowledge base provides the underpinnings and methodologies for the implementation of the research, such as the principles of method engineering (ME) (cf. Section 5).
Figure 1: Overall research process, according to Hevner et al. (2004)
The following section aims at the investigation of existing approaches and therefore reflects the first research phase, "problem identification" (cf. Figure 1). The review of the different IT GRC areas has been conducted with regard to their ability to provide methodical support for the adoption of IT GRC concepts.
3. IT governance, risk and compliance: a literature review
In this research work, the GRC context means IT-related governance and describes the topics and methods that IT executives need to address in order to govern IT within their hospital, taking imminent risks and legal requirements into consideration. As stated by PWC (2006), GRC is not new: as individual issues, governance, risk management and compliance have always been fundamental concerns of business and its leaders. What is new is an emerging perception of GRC as an integrated set of concepts that, when applied holistically within an organisation, can add significant value and provide competitive advantage. Driven by the current economic situation, an integrated approach to IT GRC is emerging rapidly (Gill and Purushottam 2008). This organisation-wide approach helps all stakeholders "collaborate effectively, reduce overall business risk, ensure better compliance and establish competitive advantage in the market place" (Gill and Purushottam 2008, p. 38). In the work by Gericke et al. (2009), 21 method fragments have been identified for the support of GRC solutions in large companies. These method fragments have been classified into conceptual, strategic, organisational, technical and cultural aspects, and finally roles have been defined and assigned to corresponding method fragments. Although each method fragment describes important activities, techniques and results, they lack a defined process model specifying the sequence of activities needed to maintain the GRC system. Furthermore, as part of the development process of situational methods, Mirbel and Ralyté (2006) called for the derivation of rules for the assembly of method fragments (a rule base). The approach by Gericke et al. (2009) did not meet this obligation, so it remains an open issue. In the following section, the different elements of IT GRC found in the literature are examined. This step can be understood as part of the requirements analysis described in Section 4.
3.1 IT governance
IT governance is recognised as an integral part of enterprise governance today. It "consists of the leadership and organisational structures and processes that ensure that the organisation's IT sustains and extends the organisation's strategies and objectives" (IT Governance Institute 2003, p. 10). IT governance has been defined in various ways (e.g., Haes and Grembergen 2008; Weill and Ross 2004). These definitions differ in some aspects; however, they focus on the same issues, such as management oversight, processes and rules for conducting various activities, and a measurement and reporting mechanism for status and quality. However, a variety of concepts have been developed in the field of IT governance and value-based management (e.g. Arnold and Davies 2000; Strenger 2004) without elaborating on implementation or adoption issues. COBIT is one of these concepts and seeks to ensure that IT resources are aligned with an organisation's business objectives in order to balance IT risk and returns. COBIT has been developed on the basis of practical experience from business, which has been consolidated into generally accepted rules, processes and characteristics. While a few method elements can be found in the COBIT framework (process, activity, role, goal, result, metric, etc.), it lacks a defined underpinning meta-model presenting the logical and semantic relationships between these elements. The existing structures and tools within the framework make it a holistic concept which provides the user with navigation through the different aspects, but at the same time they hamper the integration of new framework elements or organisation-specific issues. Such an approach was later performed by Looso (2010).
3.2 IT risk management
In the field of IT risk management, a few contributions can be found dealing with risk management frameworks which allow for the identification of and response to risks in organisations (e.g., Jallow et al. 2007; Steinberg et al. 2004). Well-known frameworks are, for example, the COSO framework and the risk management framework of the Software Engineering Institute (SEI). However, only a few sources (e.g., Bruehwiler 2003) address the implementation of such risk management frameworks and the challenges to overcome. These sources provide some valuable instructions but lack a structured and methodical contribution to problem-solving.
3.3 IT compliance
Recent regulations such as SOX, Basel II or Solvency II have driven research in the field of compliance management. Such research usually focuses on the identification of appropriate controls (Proctor, 2005) or on instructions on how to integrate controls into business processes (Rausch, 2006). However, only little advice is given on the integrated implementation of appropriate compliance concepts. The sharing of sensitive health information has been a concern for regulators for years, and different approaches have been taken: the U.S., for example, relies on sectoral regulation, which specifically protects health information through legislation such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) (USDHHS 2003). HIPAA is a regulatory framework that includes, among other things, a privacy standard as well as a security standard. Covered organisations are health care providers, health plans and health care clearinghouses, as well as their business associates. Many hospitals in Switzerland, however, are owned or partly financed by the government, and a stricter regulation applies to them. Data is protected in a similar manner to the U.S.: a distinction is made between sensitive data and profiles on the one hand and non-sensitive data on the other (EDOEB 2012). However, the Swiss health system is based on a federal and, at the same time, decentralised structure. The twenty-six cantons in Switzerland are responsible for implementing federal laws and ordinances, providing health care and covering parts of the hospitals' costs (Walter 2008). The cantons' high degree of independence enables them to supplement federal regulations with cantonal legislation. This partial autonomy has led to great fragmentation, with slightly different health systems across Switzerland, and has resulted in twenty-six health departments, which makes additional co-ordination efforts between the federal and cantonal entities necessary. For that reason, the health care sector in Switzerland needs to confirm the compliance of, e.g., IT policies, standards, procedures and methodologies with legal and regulatory requirements. Taking the characteristics of the IT GRC approaches and their methodical support into account, it becomes obvious that existing approaches from the disciplines of SME and best practice do not suffice for the defined problem at hand. The following section picks up the topic of approaches tailored to a given context and aims at the investigation of the hospital environment and IT GRC approaches; it therefore reflects the second research phase, "requirements analysis" (see Figure 1).
4. Requirements analysis
Since the method in this research work does not directly seek the design or reorganisation of the hospital, its departments or its organisational principles, but rather aims at supporting the adoption of IT GRC, requirements in this work describe given circumstances, principles and limitations in order to ensure the completeness and consistency of the method as such and of its elements, as well as to ensure the method's applicability and acceptability. Based on this understanding, requirements engineering (RE) can be described as the totality of all activities for finding out, analysing, documenting and checking requirements (Sommerville 2011, p. 83). The use of the term "engineering" is intended here to imply the systematic application of repeatable techniques to ensure that requirements are elicited completely, consistently and with high context relevance. A structured approach following RE principles is therefore crucial for this research work in order to guarantee, on the one hand, the completeness and relevance of the requirements in terms of their purpose and their application within the desired environment and, on the other hand, compliance with the demand for scientific rigor of the whole research project (cf. Figure 1). Following the approach by Sommerville (2011), the RE process of this research work consists of three high-level activities: (1) requirements elicitation, (2) requirements specification, and (3) requirements validation.
4.1 Requirements elicitation
(1) Requirements elicitation describes the discovery of the related characteristics and the organisation of this unstructured collection into coherent clusters. In doing so, domain-related restrictions and influences as well as peculiarities of hospitals (cf. Figure 1, "market view") are identified and applied to various enablers of and inhibitors to IT GRC approaches. The reviewed characteristics build the basis for understanding the challenges that a sustainable IT GRC adoption has to meet within hospitals. The following list of characteristics is not exhaustive but helps in understanding the specific challenges to an IT GRC adoption (cf. Table 1).
Table 1: Characteristics of IT GRC approaches for health care

C.01 Autonomy: A life of its own can be found in different organisational units within health care organisations, as a result of the specialisation and division of labour based on the complexity of the medical services provided by physicians (e.g., internal medicine, surgery, radiology, etc.). The autonomy of organisational units consequently leads to decentralised decision-making and decentralised management of structures, information and authorities (Rockwell and Johnson 2005).

C.02 Variety and complexity of IT GRC approaches: Several frameworks, reference models and best practices issued by both international standardisation organisations and private organisations exist in addition to the de facto standard COBIT for managing the different aspects of IT (Haes and Grembergen 2008).

C.03 Role of IT: The CIO must develop a trusted relationship with top management to succeed in this responsibility. The health care organisation is a political arena (Hoerbst et al. 2011), and an IT department has relatively little influence compared with other (medical) organisational units within the organisation. With this limited organisational influence, the IT department and the CIO must educate management on the necessity of an integrated IT architecture in order to avoid bounded rationality during IT development.
4.2 Requirements specification
(2) Requirements specification describes the process of putting the various characteristics into a standard form which guarantees the unambiguity, completeness and consistency of the derived requirements. This activity takes the investigated relations between hospitals with their particularities and IT GRC approaches with their characteristics, and analyses the requirements for the adoption of IT GRC in hospitals. As this research is assigned to the discipline of DSR, only a few contributions can be found dealing with the systematisation and provision of patterns/principles supporting the actual method development process. The following requirements analysis is based on the approach by Gericke (2009), who has expanded existing DSR patterns for artifact construction by including patterns from engineering. The derivation process consists of three steps and aims at taking all described characteristics into account. In the first step (1), the given similarities and different levels of granularity of the identified characteristics are balanced logically (generalisation). Generalisation is a common methodology in DSR, as it reduces on the one hand the complexity and number of relations between characteristics and guarantees on the other hand the validity and traceability of requirements through logically identifiable relationships. As a final outcome, a list of core statements (categories) is obtained. Based on the core statements, in the second step (2) the central requirements are derived. In the third step (3) it is examined whether each characteristic is addressed by at least one requirement; a small coverage-check sketch follows after Table 2. It is not mandatory, however, that all characteristics of a category address the particular derived requirement; it is more important that all characteristics are considered. As the output of this analysis, the following list of requirements to a method for the adoption of IT GRC within the hospital environment is obtained (cf. Table 2).
Table 2: Requirements to a method for the adoption of IT GRC within the hospital environment

R.01: The method has to take existing power structures and ways of decision-making adequately into consideration.
R.02: The method has to give strong support to the different maturity levels of the areas of IT governance, IT risk management and IT compliance by making them individually appraisable and improvements predictable (adjustment between AS-IS and TO-BE structures).
R.03: The method requires both an initial project phase and the establishment of, or transition into, defined sustainable structures (project to operational).
R.04: The method must be able to rely on existing, practical concepts and tools (use of tools within the method).
R.05: The method must support a stringent view on GRC while allowing appropriate use (flexibility in dealing with GRC).
R.06: The method must have adequate preconditions for its use, and these preconditions must not hamper the general discussion of IT GRC (the benefit contribution must be greater than the threshold; e.g. the lack of an IT strategy as a precondition of IT GRC may not in general prevent the examination of the topic as such).
R.07: The method must provide ways to take existing best practices already in use (e.g. ITIL, COBIT) adequately into account (openness of the method).
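Step (3) of the derivation process described in Section 4.2 amounts to a simple coverage check: every characteristic must be addressed by at least one requirement. Purely as an illustration, and under the assumption of an invented characteristic-to-requirement mapping (the paper does not publish the full mapping), such a check could look as follows in Python.

# Illustrative coverage check for step (3) of the requirements derivation:
# every characteristic must be addressed by at least one requirement.
characteristics = {"C.01", "C.02", "C.03"}

# Hypothetical mapping of requirements to the characteristics they address.
addresses = {
    "R.01": {"C.01", "C.03"},
    "R.02": {"C.02"},
    "R.07": {"C.02"},
}

covered = set().union(*addresses.values())
uncovered = characteristics - covered
if uncovered:
    print("characteristics not addressed by any requirement:", uncovered)
else:
    print("all characteristics are addressed by at least one requirement")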
4.3 Requirements validation
In the (3) requirements validation, the requirements derived from the specification are reviewed against existing approaches grounded in science and practice. As the research question and the defined requirements have a significant influence on the development of the method, this section has to be understood not only as a basic discussion but rather as the initial phase of the method development (cf. Figure 1).
5. Method development
Before the actual method for the adoption of IT GRC in hospitals is presented, a conceptual background to ME and the meta-model of the method are given. ME as a research discipline deals with the systematic development of methods. According to Braun et al. (2005), methods are characterised by a clear goal orientation, systematisation and repeatability. In addition, Greiffenberg (2003, p. 11) lays special emphasis on the guidance character of methods. Gutzwiller (1992, p. 12) has analysed numerous approaches to ME and has derived generally applicable method elements. According to his analysis, a method is represented through five generic elements: 1) activity, 2) result, 3) role, 4) technique and 5) data model. His approach is characterised on the one hand by universal usability (e.g., the conscious omission of cardinalities between the method elements allows a certain latitude in terms of the purpose of the specific method elements and their arrangement) and on the other hand by wide acceptance in the IS world, as it has been applied by various researchers for different purposes (cf. Gericke et al. 2009; Wortmann 2006); it therefore promises a successful implementation within the hospital environment. For the research work at hand, the contribution by Gutzwiller (1992) has been adapted and modified. To meet the requirements, the following adjustments have been carried out compared with the initial approach:
 Cardinalities have been used between the method entities in order to underline the significance and scientific rigor of the method.
 Additional method elements were added, such as "phase" and "process model", to express the stronger relation of a task to a process phase (cf. Figure 3).
 The definition of hierarchy conditions is deliberately omitted. According to the understanding of the present work, decomposed activities may simultaneously be part of different aggregated activities (reuse), which would contradict a strict hierarchy.
However, it seems useful and necessary to elaborate explicitly the meanings of the individual method elements as well as their interaction, in order to derive an underlying method understanding for this research work (Table 3).
Table 3: Definition of method elements for the adoption of IT GRC in the hospital environment

Activity: A task (e.g. assure support of top management) which creates certain results. A task uses various techniques and can be assigned to an IT GRC area. In order to integrate existing best practices, activities of the method can be mapped to given approaches. Each task has a sequence and is hierarchically structured; furthermore, each task is performed by a role and belongs to a project phase.

Role / Stakeholder: Describes which organisational unit or person applies the technique or activity. With regard to the hospital environment, the roles will be performed by members of the medical or administrative departments. As IT GRC plays a minor role in hospitals, it can be assumed that, e.g., a chief physician of a clinic, as a stakeholder, performs a temporary role as a project member. This element refers to requirement R.01.

Phase / Process Model: A phase is the result of bundled activities. The sequences of activities are assigned to self-contained phases, which allows a project-driven view on each IT GRC area independent of the maturity of that area. These elements refer to requirements R.02, R.03, R.05 and R.06.

Result: Results are recorded in previously defined and structured documents and can be decomposed. For the activity "assure support of top management", the result can be a reduction of resistance and increased project support.

Technique (Formal / Soft): Techniques are assigned to formal tools or soft factors used in specific situations, e.g., in order to deal with prolonged and hardened situations that require more sensitive tools. Techniques can be understood as detailed instructions for the development of a certain type of result. Techniques for the given activity "assure support of top management" are, e.g., "gain top management as project sponsor" or "establish regular meetings to elicit top management requirements". The distinction between formal and soft tools refers to requirements R.01 and R.04.

IT GRC Area: This element allows the hierarchical ordering of dedicated IT GRC areas in order to handle each aspect in more detail. E.g., in the area of IT governance, further aspects of strategic alignment, value delivery, resource management or performance measurement can be supported with each activity. This element refers to requirements R.05 and R.06.

Best Practice Model: This element refers to requirements R.04 and R.07 and allows, e.g., based on the approach by Looso (2010), the systematic adoption of existing best practices such as COBIT.
Figure 2 provides the meta-model of the method; the different method elements are modelled as a UML 2.0 class diagram.
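As a reading aid only, and not part of the authors' artifact, the core entities of this meta-model could be sketched in code roughly as follows. The class names mirror the method elements of Table 3, while the attribute choices and the cardinalities implied by the list types are illustrative assumptions; the authoritative definition remains the UML 2.0 class diagram in Figure 2.

# Illustrative sketch of the meta-model's core entities (cf. Table 3).
# Attribute choices are assumptions made for readability only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Technique:
    name: str   # e.g., "gain top management as project sponsor"
    kind: str   # "formal" or "soft" (refers to R.01 and R.04)

@dataclass
class Result:
    document: str   # results are recorded in structured documents

@dataclass
class Role:
    name: str   # e.g., a chief physician acting as temporary project member

@dataclass
class Activity:
    name: str                    # e.g., "assure support of top management"
    role: Role                   # each task is performed by a role
    techniques: List[Technique]  # a task uses various techniques
    results: List[Result]        # a task creates certain results
    grc_area: Optional[str] = None           # assignment to an IT GRC area
    best_practice_ref: Optional[str] = None  # mapping, e.g., to a COBIT process

@dataclass
class Phase:
    """A phase bundles a sequence of activities (refers to R.02 and R.03)."""
    name: str   # e.g., "Initiation"
    activities: List[Activity] = field(default_factory=list)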
Table 4 aims to give the reader an understanding of the practical relevance and the final outcome of the application of the meta-model. The different activities needed for the adoption of IT GRC are performed according to the spiral model proposed by Boehm (1986), as it combines the advantages of top-down and bottom-up approaches. At the same time, the iterative process model allows the hospital organisation to become accustomed to the changed situation, e.g. through quick wins and prototyping, and thereby reduces the risk of broad resistance or even a complete failure of the project. For the adoption of IT GRC, six phases have been defined. The first two phases of the process model represent an initial setting of the IT GRC topic. As the majority of Swiss hospitals have an underdeveloped maturity of IT GRC processes, the actual process model starts, amongst other things, with the establishment of a common IT GRC understanding and a detailed stakeholder analysis, including the various inhibiting and enabling power structures which might influence the adoption of IT GRC. Phase 1 (Initiation) therefore aims at a common understanding of IT GRC issues within the hospital IT department, seeks to mitigate project risks by conducting a structured stakeholder analysis, and furthermore aims at the official commitment of the hospital's senior management. After dealing with the topic internally, the second phase (Establishment) aims at the establishment of IT GRC issues with dedicated, favourable business partners from clinics or wards; this phase therefore includes project marketing and lobbying with the business. After these two initial phases the iterative part of the process begins. In the third phase (Problem Understanding & Analysis), the structured analysis of the different IT GRC areas and their maturities is conducted. Based on the agreed concept, phase 4 (Action Planning) aims at the planning of concrete actions in order to achieve the defined goals. The actual implementation of the agreed actions is performed in phase 5 (Realisation). Phase 6 (Operations) turns the project results into operations and guarantees the consistency and sustainability of the achieved results by providing the resources, in terms of organisational capabilities or budget, to make IT GRC an ongoing and lasting approach for hospitals. The innermost loop, however, is the actual starting point of the process, and thus emphasises the organisational, financial or even political considerations (uncertain power structures and resistance) which may influence the design of the first prototypes (quick wins). Each activity performed in one of the six phases described above is subdivided into tasks which themselves follow a defined sequence. The roles and responsibilities for each activity are documented in a responsibility assignment matrix (Jacka and Keller 2009). The actual RACI matrix has been expanded by adding a "support" view, which expresses the possibility that a role may assist in completing a task. The adoption of existing best practices such as COBIT 4.1 is guaranteed by mapping the IT GRC areas to related COBIT 4.1 processes.
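Purely as an illustration of the structures just described, and not as the authors' implementation, the six phases and the extended RACI assignment (often called RASCI) could be represented as follows; the phase names follow the text, while the sample activity and role assignments are invented.

# Sketch of the six-phase process model and the RACI matrix extended
# with a "support" view (RASCI). The assignment below is hypothetical.
from enum import Enum

class GrcPhase(Enum):
    INITIATION = 1
    ESTABLISHMENT = 2
    PROBLEM_UNDERSTANDING_AND_ANALYSIS = 3
    ACTION_PLANNING = 4
    REALISATION = 5
    OPERATIONS = 6

# RASCI codes: Responsible, Accountable, Support, Consulted, Informed.
RASCI_CODES = {"R", "A", "S", "C", "I"}

def assign(matrix: dict, task: str, role: str, code: str) -> None:
    """Record that `role` holds RASCI `code` for `task`."""
    if code not in RASCI_CODES:
        raise ValueError(f"unknown RASCI code: {code}")
    matrix.setdefault(task, {})[role] = code

if __name__ == "__main__":
    matrix: dict = {}
    # Hypothetical assignment for a Phase 1 activity:
    assign(matrix, "assure support of top management", "CIO", "R")
    assign(matrix, "assure support of top management", "hospital board", "A")
    assign(matrix, "assure support of top management", "chief physician", "S")
    print(GrcPhase.INITIATION.name, matrix)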
Figure 2: Meta-model of the method for the adoption of IT GRC in the hospital environment
6. Conclusion
As ascertained by Krey et al. (2010), the self-contained organisational responsibility, the complexity of the processes and the lack of cross-organisational coordination make an overall comparable approach to the management of assets, information and IT difficult. A great variety of responsibilities, notations, levels of abstraction, tools and terminologies is the result. The present paper provides a brief introduction to the key concepts of IT GRC and the current debate about the reforms within the Swiss health system. It shows the drivers and benefits of both and leads to the research objective, i.e., the application of IT GRC to the field of health care. The proposed meta-model is a first approach to the challenges at hand. However, its evaluation is still subject to further research (cf. Figure 1). It becomes obvious that the selection of relevant validation criteria and the use of an appropriate validation method are crucial for the result. On the other hand, the validation framework (cf. Figure 1) should not only include the review of the validity of the IT GRC method but furthermore evaluate the previously identified research gap and its correctness. The development of the IT GRC method refers to an identified problem in the real world; according to Cole et al. (2005), the proof of its utility can therefore be provided through its application in the real world.
Table 4: Phase, activity and further method elements according to the IT GRC area of business-IT alignment
References
Arnold, G. and Davies, M. (2000). Value-based Management: Context and Application. Wiley.
Berensmann, D. (2005). Die Rolle der IT bei der Industrialisierung von Banken. Gabler.
Boehm, B. (1986). A Spiral Model of Software Development and Enhancement.
Braun, Chr., Wortmann, F., Hafner, M. and Winter, R. (2005). Method Construction - A Core Approach to Organizational Engineering. St. Gallen.
Bruehwiler, B. (2003). Risk-Management als Fuehrungsaufgabe. Bern.
Bucher, T., Riege, Chr. and Saat, J. (2008). Evaluation in der gestaltungsorientierten Wirtschaftsinformatik - Systematisierung nach Erkenntnisziel und Gestaltungsziel. München.
Cole, R., Purao, S., Rossi, M. and Sein, M. (2005). Being Proactive: Where Action Research Meets Design Research. Las Vegas.
EDOEB, Eidgenoessischer Datenschutz- und Oeffentlichkeitsbeauftragter. Leitfaden zu den technischen und organisatorischen Massnahmen des Datenschutzes. http://www.edoeb.admin.ch/dokumentation/00445/00472/00935/index.html?lang=de, accessed 2.04.2012.
Gericke, A., Fill, H.-G., Karagiannis, D. and Winter, R. (2009). Situational Method Engineering for Governance, Risk and Compliance Information Systems. New York.
Gericke, A. (2009). Problem Solving Patterns in Design Science Research - Learning from Engineering. Verona.
Gill, S. and Purushottam, U. (2008). Integrated GRC - Is Your Organization Ready to Move?
Gutzwiller, T. (1992). Das CC RIM-Referenzmodell fuer den Entwurf von betrieblichen, transaktionsorientierten Informationssystemen. Heidelberg.
Greiffenberg, S. (2003). Methoden als Theorien der Wirtschaftsinformatik. München.
Haes, S. de and van Grembergen, W. (2008). An Exploratory Study into the Design of an IT Governance Minimum Baseline through Delphi Research. Communications of the Association for Information Systems, 22.
Hevner, A. R. et al. (2004). Design Science in Information Systems Research. MIS Quarterly.
Hoerbst, A., Hackl, W. O., Blomer, R. and Ammenwerth, E. (2011). The Status of IT Service Management in Health Care - ITIL in Selected European Countries.
IT Governance Institute (2003). Board Briefing on IT Governance. IT Governance Institute.
Jacka, M. and Keller, P. (2009). Business Process Mapping: Improving Customer Satisfaction. John Wiley and Sons.
Jallow, A. K. et al. (2007). Operational Risk Analysis in Business Processes. BT Technology Journal.
Johannsen, W., Goeken, M., Just, D. and Tami, F. (2007). Referenzmodelle fuer IT-Governance: Strategische Effektivitaet und Effizienz mit COBIT, ITIL & Co. 1st ed. Heidelberg: dpunkt.
Krey, M., Harriehausen, B., Knoll, M. and Furnell, S. (2010). IT Governance and its Spread in Swiss Hospitals. Proceedings of the IADIS International Conference e-Health 2010, p. 58 ff. Freiburg.
Koebler, F., Faehling, J., Leimeister, J. M. and Krcmar, H. (2010). IT Governance and Types of IT Decision Makers in German Hospitals.
Looso, St. (2010). Towards a Structured Application of IT Governance Best Practice Reference Models.
March, S. T. and Smith, G. F. (1995). Design and Natural Science Research on Information Technology. Decision Support Systems, 15.
Mirbel, I. and Ralyté, J. (2006). Situational Method Engineering: Combining Assembly-based and Roadmap-driven Approaches. Requirements Engineering.
OECD and WHO (2011). OECD Reviews of Health Systems. OECD Publishing.
Piccoli, G. and Ives, B. (2005). IT-dependent Strategic Initiatives and Sustained Competitive Advantage: A Review and Synthesis of the Literature. MIS Quarterly.
Porter, M. E. and Teisberg, E. O. (2004). Redefining Competition in Health Care. Boston, MA.
Proctor, P. E. (2005). Select and Implement Appropriate Controls for Regulatory Compliance.
PWC (2006). Governance, Risk and Compliance: An Integrated Approach for the Insurance Industry. Washington D.C.
Rausch, T. (2006). Holistic Business Process and Compliance Management. In: Pour, J. and Vorisek, J. (Eds.), 14th International Conference Systems Integration (SI 2006), Prague.
Schulz, R. and Johnson, A. C. (2005). Management of Hospitals and Health Services: Strategic Issues and Performance.
Sommerville, I. (2011). Software Engineering, 9th ed. Addison-Wesley.
Steinberg, R. M. et al. (2004). Enterprise Risk Management Framework.
U.S. Department of Health & Human Services (2003). Health Insurance Reform: Security Standards. HHS, Washington. http://aspe.hhs.gov/admnsimp/final/fr03-8334.pdf, accessed 21.04.2012.
VIG, Verein fuer Informatik im Gesundheitswesen (2005). E-Health-Strategie fuer die Institutionen im Gesundheitswesen des Kantons St. Gallen: Grundlagenpapier. Accessed February 2012.
Weill, P. and Ross, J. (2004). IT Governance: How Top Performers Manage IT Decision Rights for Superior Results. Boston, USA: Harvard Business School Press.
Wortmann, F. (2006). Entwicklung einer Methode fuer die unternehmensweite Autorisierung. Universitaet St. Gallen, St. Gallen.
417
Collaborative Methodology for Supply Chain Quality
Management: Framework and Integration With Strategic
Decision Processes in Product Development
Juan Camilo Romero¹,², Thierry Coudert¹, Laurent Geneste¹ and Aymeric De Valroger²
¹Laboratoire Génie de Production / INPT-ENIT / University of Toulouse, Tarbes, France
²AXSENS SAS, Toulouse, France
juan.romero@axsens.com
thierry.coudert@enit.fr
laurent.geneste@enit.fr
aymeric.devalroger@axsens.com

Abstract: The new generation of network-based organizations has triggered the emergence of distributed and more complex contexts for the analysis of firms' strategies. This gradual change in the way we understand enterprises has induced radical evolutions in the Quality Management domain. As a consequence, the Problem Solving Methodologies (PSM) widely used in industry, and positioned up to now as one of the key elements for achieving continuous improvement within local scopes, are now insufficient to deal with the major and distributed problems and requirements of this new environment. The definition of a generic and collaborative PSM well adapted to supply chain contexts is one of the purposes of this paper. Additional requirements arising from the introduction of a networked context within the methodology scope, the relational aspects of supply chains, the complexity and distribution of information, distributed decision-making processes and knowledge management challenges are some of the aspects addressed by the proposed methodology. A special focus is placed on the benefits obtained through the integration of those elements across all problem-solving phases, and in particular a proposal for multi-level root-cause analysis articulating both the horizontal and vertical decision processes of supply chains is presented. In addition to laying out the expected benefits of such a methodology in the Quality Management area, the article studies the reuse of all the quality-related evidence capitalized in the series phase as a driver for improving the upstream phases of product development projects. This paper addresses this link between series and development activities in light of the proposed PSM and intends to encourage discussion on the definition of new approaches for Quality Management throughout the whole product lifecycle. Some enabling elements in the decision-making processes linked to both problem solving in the series phase and the roll-out of new products are introduced.

Keywords: problem solving methodology, supply chain quality management, product development, experience
feedback, collaborative supply chains
1. Introduction
In the past, firms worked essentially towards the achievement of local objectives. This way of working produced short-sighted, standalone and conflicting strategies between firms and their stakeholders, which led to misalignment and poor global performance. In that context, quality was consequently managed within a reduced perimeter characterized by local continuous improvement efforts (Foster 2007). For instance, centralized PSM, well adapted to dealing with local problems, gained a place as a cornerstone element of firms' strategies for meeting quality requirements.

Nonetheless, higher levels of competition and the intensification of cost, quality and delivery requirements have forced enterprises to cross their own boundaries towards more collaborative models involving their stakeholders (Derrouiche 2008). Thereby, approaches based on the notion of the Extended Enterprise appeared, including principles such as objectives alignment, strategy synchronization, collaborative practices and common-to-all processes (Cao et al. 2011). This new environment led to the emergence of Collaborative Supply Chains, defined as self-organized networks of organizations acting as a whole and looking for the global fulfillment of final customer needs (Knowles et al. 2005).

The introduction of this concept has radically changed the quality management paradigms. Continuous improvement practices, such as the PSM, must now deal with networked, distributed and more complex environments involving, among other things, a larger number of partners, huge quantities of fragmented and distributed information, higher-impact problems defined at supply chain level, activities that are distributed rather than centralized, and collaborative aspects not addressed before.

The aim of this paper is to meet the challenge of successfully extending the PSM and their benefits to supply chain contexts by proposing a generic and collaborative methodology adapted to the new generation of network-based organizations. The proposed methodology aims to be a driver for the enhancement of quality management and continuous improvement efforts at supply chain level. It has been enriched with a two-layered structure for modeling all the technical and collaborative aspects of supply chains and synchronized with a distributed Experience Feedback System enabling the methodology to deal with knowledge management across distributed contexts. This methodology is presented in Section 2.

Once the proposed global methodology has been deployed across all the stages of the supply chain, a huge quantity of meaningful and structured quality-related information, capitalized for products and partners in the series phase, becomes available for exploitation. All the problem solving experiences can thus be reused not only to enhance the solving of new problems in the series phase but also as an input for improving new product designs in the development phase. In Section 3, general guidelines for a global continuous improvement approach linking series and development phases are presented.
2. A collaborative problem solving methodology adapted to supply chain
contexts
In order to deal with all the critical aspects of the new generation of network-based organizations, the proposed PSM has been defined as a whole solution composed of three cornerstone elements, as shown in Figure 1. The first element is the Methodology itself, which aims to structure the whole solving process. The second element is a Two-layered Model dealing with the study of distributed contexts. This model is composed of two complementary levels handling both the technical and collaboration dimensions of supply chains. The third pillar of the methodology corresponds to the distributed Experience Feedback Process dealing with the capitalization and reuse of problem solving experiences across networked and distributed contexts.


Figure 1: The global problem solving methodology
The Methodology considers that Problem Solving is a generic process that can be understood through a simplified approach with four phases: Context, Analysis, Solution and Lesson Learnt (Kamsu-Foguem et al. 2008). The fact that existing problem solving methodologies such as plan-do-check-act (PDCA), the 8 Disciplines (8D) and the six-sigma DMAIC (Define, Measure, Analyze, Improve and Control) can be expressed in terms of these four standard phases gives this choice a generic character, contributing to its adaptability and deployment in a wide range of industrial contexts (Jabrouni et al. 2011).
2.1 Specification of the problem context
The Context phase of the Methodology aims to delimit the problem by keeping only the relevant information contributing to problem understanding and providing meaningful evidence for the further solving phases. Due to the nature of the problems to be handled within the frame of network-based organizations, the quantity and quality of information become a critical issue (Cantor et al. 2009). The proposed methodology specifies a relevant problem context through the accomplishment of four steps: Formalization, Filtering, Pilot Team constitution and Reusing.
2.1.1 Formalization
When dealing with complex contexts characterized by the multiplicity, fragmentation, distribution and heterogeneity of information across a network, the first step is to define common-to-all frameworks and harmonized mechanisms for the formalization and sharing of that information (Buzon et al. 2007; Zhou et al. 2007). This first objective has been achieved by coupling the methodology with a proposal for the modeling, diagnosis and analysis of supply chains in the light of problem solving processes. This model corresponds to the second cornerstone element of the global methodology, as shown in Figure 1.

This approach is defined by two complementary levels addressing the technical and collaboration dimensions of supply chains through a single two-layered model. The first, technical layer addresses all the product, process and network related aspects, while the second completes the model by dealing with all the relational and collaboration aspects of supply chains. Each level is materialized in the proposed model through a breakdown structure splitting the technical and collaboration aspects into more manageable units. The Technical Breakdown Structure (TBS), or first layer, is thus defined by the set of all the Technical Packages (TP) summarizing the technical attributes of each node of the network, while the Collaboration Breakdown Structure (CBS), or second layer, is defined by the set of all the Collaboration Packages (CP) representing the clustering of partners across the same network for solving purposes. Unlike the first level, which is static and aims to provide robustness to the model, the second one is defined dynamically with regard to supply chain collaboration aspects, which evolve constantly over time and are specific to each problem. Aspects such as confidentiality, trust and power between partners are criteria critical to the effectiveness of supply chain practices. Another characteristic of this second level is that it can be modulated through multiple sub-levels representing nested and dynamic collaboration structures. This adaptability and flexibility allow the model to provide up-to-date and reliable contexts in the light of problem solving processes. Figure 2 synthesizes and positions the different aspects of the TBS+CBS model.


Figure 2: The two-layered approach for the modeling of supply chains
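As an illustration of the structure just described, the two layers can be represented as simple tree-shaped data classes. The following Python sketch is indicative only: all class and attribute names (TechnicalPackage, CollaborationPackage, TwoLayeredModel) are assumptions made for this example rather than part of the authors' specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class TechnicalPackage:
    """One TP: the static technical attributes of one node of the network."""
    node_id: str
    product_attrs: Dict[str, str] = field(default_factory=dict)
    process_attrs: Dict[str, str] = field(default_factory=dict)  # design, industrialization, fabrication, transport
    network_attrs: Dict[str, str] = field(default_factory=dict)  # partners and flows
    domains: List[str] = field(default_factory=list)             # pre-defined problem domains
    children: List["TechnicalPackage"] = field(default_factory=list)

@dataclass
class CollaborationPackage:
    """One CP: a dynamic cluster of partners attached to one cause or problem."""
    label: str
    coordinator: str
    members: List[str] = field(default_factory=list)
    sub_packages: List["CollaborationPackage"] = field(default_factory=list)  # nested CBS sub-levels

@dataclass
class TwoLayeredModel:
    tbs_root: TechnicalPackage                       # static layer, provides robustness
    cbs_root: Optional[CollaborationPackage] = None  # dynamic layer, rebuilt per problem
```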
The Formalization phase, enabling the specification of relevant contexts within networked environments, is based on: (1) the first level of the two-layered model, summarizing the technical context of the supply chain where problems are occurring, and (2) the entire Collaboration Knowledge generalized from past problem solving experiences. At this stage, the Technical Breakdown Structure (TBS) and Collaboration Knowledge (CK) provided by the model are defined at a global supply chain level and do not yet include the specificities of the current problems being solved.
2.1.2 Filtering
This phase focuses on filtering the networked contexts issued from the previous steps with regard to a specific problem, in order to keep only the relevant problem-related information. To do so, a mechanism has been specified that filters the context issued from the two-layered model and thus obtains a simplified TBS adapted to the current problem. This Filtering Mechanism (FM), aiming to keep the most relevant TPs for the current problem solving experience, is based on the assumption that when a problem appears, the partner directly concerned by the problem is able to define a reduced number of relevant problem-related criteria. These criteria are the inputs for the filtering mechanism and are formalized through a Preliminary Problem Context Record (PPCR) including:
Impacted element: the product on which the problem has been detected. Once it has been identified, the corresponding TP summarizing the product, process and network dimensions of this element can be retrieved.
Problem width: criterion based on the selection of the most relevant n-1 level TPs. It reduces the number of branches (or width) of the TBS to be analyzed.
Problem depth: criterion defined as a function of the number of levels to be included at the analysis and filtering stages. It reduces the depth of the TBS.
Problem domain: criterion characterizing the current problem with regard to pre-defined types of problems. The domains characterizing the current problem are then matched with the domains characterizing the TPs of the TBS in order to reduce the already filtered scope.
Relevant processes: criterion defined as a function of the processes considered relevant for the current problem solving experience. This choice is made taking into account the Design, Industrialization, Fabrication and Transport dimensions studied by the TPs. It reduces, for the already filtered TPs, the quantity and nature of the information to be kept.
The filtering process computes and matches all these problem-related criteria with the attributes characterizing the TPs across the whole TBS. The definition of these criteria is initialized on the basis of preliminary analyses performed with the problem evidence available at the early stages of the solving process. Nevertheless, it is completed throughout the methodology in order to improve the filtering results. The output of this phase is a simplified TBS adapted to the current problem (identified as TBS').
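The Filtering Mechanism can be pictured with a short sketch that prunes the TBS according to the PPCR criteria above, reusing the TechnicalPackage class from the earlier sketch. The matching rules shown (domain overlap, ranking of branches) are simplified assumptions; the paper does not prescribe a concrete matching algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PPCR:
    """Preliminary Problem Context Record (field names are illustrative)."""
    impacted_element: str  # used upstream to locate the root TP of the search
    width: int             # number of n-1 level branches to keep
    depth: int             # number of TBS levels to analyze
    domains: List[str]     # pre-defined problem types
    processes: List[str]   # e.g. ["design", "fabrication"]

def filter_tbs(tp: "TechnicalPackage", ppcr: PPCR,
               level: int = 0) -> Optional["TechnicalPackage"]:
    """Return a simplified copy of the TBS (TBS') matching the PPCR."""
    if level > ppcr.depth:                        # problem depth criterion
        return None
    if level > 0 and not set(tp.domains) & set(ppcr.domains):
        return None                               # problem domain criterion
    # Problem width criterion: rank branches by domain overlap, keep the best.
    ranked = sorted(tp.children,
                    key=lambda c: len(set(c.domains) & set(ppcr.domains)),
                    reverse=True)
    kept = []
    for child in ranked[:ppcr.width]:
        simplified = filter_tbs(child, ppcr, level + 1)
        if simplified is not None:
            kept.append(simplified)
    # Relevant processes criterion: keep only the selected process dimensions.
    processes = {p: tp.process_attrs[p]
                 for p in ppcr.processes if p in tp.process_attrs}
    return TechnicalPackage(tp.node_id, tp.product_attrs, processes,
                            tp.network_attrs, tp.domains, kept)
```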
2.1.3 Pilot team constitution
This phase aims to identify the actors that will pilot the solving efforts. Based on the previously simplified TBS, and taking into account that partners are linked to TPs, the model is able to identify all the relevant actors owning key processes and having problem-related competencies. This set of eligible contributors, grouping the partners well positioned across the network to potentially contribute to problem solving, can be updated manually throughout the methodology by adding actors as a function of problem-specific requirements (e.g. domain experts or authority representatives). The selection of the pilot team members from this set of eligible contributors is executed by the partner directly concerned by the problem through a Collaboration Mechanism (CM). This mechanism, aiming to optimize the pilot team constitution with regard to additional behavioral and collaborative aspects of the supply chains, led to the definition of the second level of the two-layered model. The inputs for this mechanism are, on the one hand, the Collaboration Knowledge generalized for the concerned supply chain and, on the other, the Partner Preferences Record (PPR) summarizing partners' preferences with regard to critical supply chain collaboration criteria such as power, trust, control, objectives alignment, information sharing and conflict (Cao et al. 2011). The output of this phase is the pilot team, which is positioned as the header element of the second layer (CBS₀) of the two-layered model (TBS+CBS).
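One possible reading of the Collaboration Mechanism is a weighted scoring of the eligible contributors against the PPR criteria listed above. The sketch below implements such a scoring; the weights and the team size are illustrative assumptions, not values given in the paper.

```python
from typing import Dict, List

# Illustrative weights over the PPR criteria named above; a negative weight
# penalizes a partner's recorded propensity for conflict.
CRITERIA_WEIGHTS = {"trust": 0.30, "information_sharing": 0.25,
                    "objectives_alignment": 0.20, "power": 0.10,
                    "control": 0.10, "conflict": -0.05}

def collaboration_score(ppr: Dict[str, float]) -> float:
    """Weighted preference score for one eligible contributor (PPR values in [0, 1])."""
    return sum(weight * ppr.get(criterion, 0.0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

def build_pilot_team(eligible: Dict[str, Dict[str, float]],
                     team_size: int = 5) -> List[str]:
    """Select the best-scoring partners as the pilot team (header of the CBS)."""
    ranked = sorted(eligible,
                    key=lambda partner: collaboration_score(eligible[partner]),
                    reverse=True)
    return ranked[:team_size]
```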
2.1.4 Reusing
This step intends to exploit the shared knowledge database. The purpose is to find the most relevant past problem solving experiences in order to enhance the current one. This phase is executed by the pilot team and is based on the completion of the previously defined problem-related criteria, this time through a Consolidated Problem Context Record (CPCR). This record is used as the input for the Reusing Mechanism (RM), whose output is the set of relevant past solving experiences having the highest similarity degree and providing meaningful information that can be adapted and reused in the current solving experience.
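As a sketch of how the Reusing Mechanism might rank past experiences, the snippet below scores each capitalized case against the CPCR using a simple per-criterion Jaccard similarity. The similarity measure and the threshold are assumptions made for illustration; the paper leaves the concrete measure open.

```python
from typing import Dict, List, Set, Tuple

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Set similarity in [0, 1]; 1.0 means identical criterion values."""
    return len(a & b) / len(a | b) if a | b else 0.0

def reuse_candidates(cpcr: Dict[str, Set[str]],
                     past_cases: Dict[str, Dict[str, Set[str]]],
                     threshold: float = 0.6) -> List[Tuple[str, float]]:
    """Rank past solving experiences by mean similarity over shared criteria."""
    scored = []
    for case_id, context in past_cases.items():
        shared = cpcr.keys() & context.keys()
        if not shared:
            continue
        score = sum(jaccard(cpcr[k], context[k]) for k in shared) / len(shared)
        if score >= threshold:
            scored.append((case_id, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```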

The output of the Context phase is a problem context characterized by the completeness, relevance, readiness and accuracy of its content. This reduced but relevant context provides problem solvers with meaningful, added-value information that enhances the decision-making processes all along the methodology. The possibility of narrowing a huge search space down to a reduced scope containing more relevant problem-related information, and the opportunity of reusing and adapting past problem solving experiences, are fundamental gains that can be achieved through the deployment of the proposed methodology. All the key elements of the Context phase are synthesized in Figure 3.


Figure 3: Key elements of the context phase
2.2 Multi-level root-cause analysis
The Analysis is performed on the basis of the context issued from the previous phase. The objective is to perform a deep analysis of the problem and the available information in order to find the root causes producing the problem. The identification of these causes is a critical factor within the quality domain, as it is the means of allowing the full eradication of problems (Wilson et al. 1993). Problem analysis within the frame of supply chains demands additional assets from the methodology, such as: (1) synchronization of distributed partners around common decision processes, (2) clustering in order to optimize competencies, promote synergies and mitigate risks, (3) establishment of shared processes in order to improve decision flows and (4) handling of the relational and collaboration aspects of the supply chains in order to create positive communication and establish intensive long-term relationships. Within this context, and as part of the current methodology, a proposal for a dynamic and multi-level root-cause analysis adapted to supply chain problems has been specified.

This proposal is based on the coupling of two elements: the causal tree of the problem and the two-layered model. The first element is widely used within the continuous improvement area, while the second aims to enhance distributed decision processes by incorporating the technical and collaborative dimensions of supply chains. The causal tree, breaking down problems until the root causes are found, is considered the driver of the proposed approach, while the two-layered model is considered the facilitator of this proposal within distributed contexts. The inputs for the analysis phase are the problem context, the pilot team and the set of eligible contributors issued from the previous phase. These elements are materialized in the two-layered model by the TBS' and the header level of the CBS containing the pilot team.

For each level of the causal tree, a corresponding collaborative structure able to deal with the analysis of the elements included at that level must be defined. Each collaborative structure, corresponding to one level of the CBS and defined by a set of interconnected CPs, must fulfill the whole set of supply chain requirements. As a consequence, during the clustering of partners for problem analysis purposes, not only the technical issues and the competencies owned by actors but also all the relational and collaboration aspects must be considered. For the header level of the causal tree dealing with the problem (H₀), this requirement has been fulfilled through the application of the Collaboration Mechanism (CM) enabling the constitution of the pilot team. This team, as shown in Figure 4, is positioned at the header level of the CBS (CBS₀) and has the accountability of analyzing the corresponding header level of the causal tree (H₀).

Once the pilot team has analyzed the problem level in the light of the available technical context issued from the TBS', it must break the problem down into more manageable level-1 causes (H₁) potentially producing this problem. These are the causes answering the question: why has the current-level cause (H₀) occurred? When the level-1 causes (H₁) are identified, the first level of the CBS (CBS₁) is initialized. This level includes one Collaboration Package (CP) for each cause on the corresponding level of the causal tree. For the definition of the team members belonging to each CP, the same Collaboration Mechanism (CM) used for the constitution of the pilot team is deployed. This mechanism, computing the Partner Preferences Records (PPR) of the partners included in the eligible contributors set, optimizes the distribution of resources and the clustering of partners with regard to the analysis and validation of the current problem causes and behavioral aspects. After each CP is defined, the accountabilities and the different roles inside the package are distributed according to agreed Collaboration Agreements governing CP operation. To favor communication between the different levels of collaborative structures, the CP coordinator at one level must be at least a CP member at the n-1 level. At this stage, the CPs are ready to start the analysis of the level-1 causes (H₁) in the light of the available technical context issued from the TBS'. From this point, all the steps are reproduced within the frame of a recursive approach that breaks down, in a synchronized way, the problem and the collaborative structures until the root causes (level n) are finally found. The principles of this collaborative approach, positioned as a driver for the analysis of complex problems in extended and distributed contexts, and its integration with the two-layered model (TBS + CBS) are illustrated in Figure 4.
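The synchronized, recursive breakdown of the causal tree and the collaborative structures can be sketched as follows. The two callbacks stand in for work that the methodology assigns to the teams and to the Collaboration Mechanism; the names and the depth guard are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CauseNode:
    """One node of the causal tree (an H-level entry) with its attached CP."""
    description: str
    team: List[str] = field(default_factory=list)  # members of the CP
    sub_causes: List["CauseNode"] = field(default_factory=list)

def analyse(node: CauseNode,
            breakdown: Callable[[CauseNode], List[str]],
            staff: Callable[[str], List[str]],
            max_depth: int = 5, depth: int = 0) -> None:
    """Top-down flow: break each cause down, staff one CP per sub-cause, recurse.

    `breakdown` answers "why has the current-level cause occurred?" and returns
    candidate sub-causes (an empty list once a root cause is reached); `staff`
    applies the Collaboration Mechanism to cluster partners for one cause.
    """
    if depth >= max_depth:
        return                                     # safety guard only
    for cause in breakdown(node):
        child = CauseNode(cause, team=staff(cause))
        # Communication rule: the coordinator of a CP must also be a member
        # of the CP one level up, so co-opt a parent member if necessary.
        if node.team and not set(child.team) & set(node.team):
            child.team.insert(0, node.team[0])
        node.sub_causes.append(child)
        analyse(child, breakdown, staff, max_depth, depth + 1)
```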

A top-down and a bottom-up flow covering the causal tree, together with the recursive definition of dynamic teams or Collaboration Packages (CP) across the CBS, are the backbone of the global decision process of this model. The top-down flow analyzes and distributes causes through the CBS levels, while the bottom-up flow validates the analyses and provides consolidated results. In both the descending and ascending flows, collaborative teamwork is deployed in order to align and coordinate efforts. After both flows have been completed, a consolidated causal tree listing all the relevant root causes producing the current problem is capitalized into the global knowledge database. The synchronization of supply chain and local decision processes is included as a way to promote the enhancement of Learning Organizations. Through this approach, the resources participating in the problem analysis process are optimized and both individual and network competencies and knowledge are consolidated.

From a more global point of view, the possibility of having a central repository containing all the root causes of problems detected on products moving through supply chains, together with the corresponding collaborative structures performing these analyses, provides a very important competitive advantage with regard to strategic decision processes at supply chain level. All this quality-related information can consequently be exploited to enhance Supply Chain Quality Management (SCQM) through the definition of generalized continuous improvement efforts tackling and eradicating recurrent and high-impact supply chain problems.



Figure 4: A multi-level approach for root-cause analysis on distributed contexts
2.3 Solution and lesson learnt phases
In this phase, the two-layered model allows the team's collaborative efforts to be focused on the definition of an action plan addressing the root causes. The same top-down and bottom-up flows can now be used to deploy an action plan distributed horizontally through the different stages of the network and vertically through the different organizational decision levels. A global and aggregated approach synchronizing the vertical and horizontal flows of the supply chain ensures the effectiveness of the solutions deployed and contributes to achieving the global continuous improvement objectives defined within the frame of the SCQM efforts (Harland et al. 2004).

The Lesson Learnt phase intends to encapsulate the whole PSM, its related activities and the associated knowledge in one individual experience capitalized into the shared knowledge database. At this stage, great knowledge management benefits can be obtained because both global and local quality-oriented competencies are created, shared and distributed across the network. This allows higher performance to be gained and superior supply chain competitiveness levels to be reached.
3. The proposed methodology as a driver for enhancing strategic decision-making processes during new product development
New product development projects include some strategic activities having important impacts over the whole product lifecycle. Two of these crucial activities are, on the one hand, the system design process, which aims to define and freeze product plans through both preliminary and detailed design reviews (Mavris et al. 2011) and, on the other, the supplier selection phase, which aims to identify, evaluate, and contract with suppliers (Beil 2010). These two phases, being part of classical product development approaches, have been retained for the purposes of this article as they highlight some concrete links between the proposed global methodology for quality management in the series phase and the roll-out of new products in the development phase (see Figure 5).
3.1 System design process
The system design process is the process during which a new product is brought from the conceptual stage to readiness for full-scale production. All along this process, which includes preliminary, detailed and critical reviews, the system structure evolves through different maturity stages, with different kinds of business, technical, industrial, quality and risk factors being leveraged (Handfield 1999). In addition, the lessons learnt from previous development projects are sometimes also included throughout these stages to improve the current system specification (Vareilles et al. 2012). Nonetheless, the information provided by past development experiences and by classical product development approaches can be completed and enhanced through the application of structured knowledge processes defined in a larger scope including the whole product lifecycle. For instance, when the system is being specified, it could be useful to have access to all the quality-related information capitalized for similar and/or same-family systems in the series phase through the proposed PSM.


Figure 5: Integration of the PSM into strategic decision-making in development phases
As detailed in Section 2, after the global methodology has been deployed across all stages of the supply chain, a central knowledge base including meaningful information with regard to new product development is available for exploitation. The structured, added-value and meaningful information available from this central quality repository includes, but is not limited to:
All the problems detected for constituents and for similar and/or same-family systems,
The relevant root-cause analyses used for solving those problems,
The solutions proposed and the related action plans,
The generalized knowledge issued from the lesson learnt phases and,
The relevant technical contexts (TBS') and collaboration structures (CBS) deployed.
All this globally relevant information encapsulated in the problem solving experiences can be exploited through the application of the Reusing Mechanism (RM) defined in Section 2.1.4. A Design Requirements Record (DRR), characterizing the criteria specific to design/development activities, enables the reusing/adaptation process. This process aims to keep only the most relevant information in order to: (1) highlight risks not previously considered for the current system development, (2) improve the current design by leveraging all the problems that occurred on similar or same-family systems/components, (3) justify functional and structural choices of materials and/or components in the light of proven performance, (4) find design alternatives for the evaluation of economic scenarios and finally (5) boost the supplier selection phase.
3.2 Supplier selection
During the supplier qualification/selection process, it could be useful to access relevant information measuring partner involvement in the collaborative practices already deployed across the whole supply chain. The proposed approach analyzes relevant information capitalized through the global PSM in order to derive two complementary indicators aiming to measure this involvement and enabling decision-making at this early stage of supplier qualification/selection:
A first collaborativity index intends to define the degree of involvement, adherence and alignment of partners with the quality strategies of collaborative supply chains. The more a firm works in partnership with its stakeholders to solve supply chain problems, and the more it deploys the proposed PSM, the more its collaborativity index improves. This index, measuring the willingness of a partner to work in a collaborative way, can be derived from past problem solving experiences including, among other things, the entire CBS and CK context.
A second, risk-oriented index, defined for each supplier/product couple, can be aggregated from the information available in the central knowledge repository. The number of high-impact problems detected for this couple and the related analyses performed can represent meaningful evidence during supplier assessment. This measure can be aggregated in order to define a supplier risk index reflecting the trustworthiness associated with partners and their already industrialized products. The risk-oriented index must be analyzed in parallel with the collaborativity index in order to favor the involvement of partners. Increasing cooperation with other partners to solve problems allows firms to consolidate competencies and enhance local processes, which in the long term is a driver for improving product quality and reducing risks. The involvement and engagement of partners with the proposed methodology therefore represents a double gain.
The proposed measures enhance the supplier qualification process by providing additional meaningful information that has not been deeply analyzed before, due to: (1) the complexity of gathering this kind of information in distributed and more complex contexts such as supply chains and (2) the lack of maturity of the links existing between the series and development phases in the quality management area. These new indicators must be understood as two complementary and interdependent elements for assessing suppliers' performance in problem solving and continuous improvement processes. Finally, this set of indices can be articulated by buyers with more global sourcing strategies in order to promote higher involvement of suppliers in supply chain collaborative practices.
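As an indication of how the two indicators could be aggregated from the central repository, the sketch below computes a simple participation ratio and an impact-weighted problem count. Both formulas and all field names are illustrative assumptions; the paper defines the indices conceptually rather than numerically.

```python
from typing import Dict, List, Optional, Tuple

def collaborativity_index(experiences: List[Dict]) -> float:
    """Share of relevant solving experiences in which the partner joined a CP."""
    if not experiences:
        return 0.0
    joined = sum(1 for exp in experiences if exp.get("joined_cp"))
    return joined / len(experiences)

def supplier_risk_index(problems: List[Dict],
                        impact_weight: Optional[Dict[str, float]] = None) -> float:
    """Impact-weighted count of problems recorded for one supplier/product couple."""
    impact_weight = impact_weight or {"low": 0.2, "medium": 0.5, "high": 1.0}
    return sum(impact_weight.get(p.get("impact", ""), 0.0) for p in problems)

def assess(partner_experiences: List[Dict],
           couple_problems: List[Dict]) -> Tuple[float, float]:
    """Both indices are returned together: they must be read in parallel."""
    return (collaborativity_index(partner_experiences),
            supplier_risk_index(couple_problems))
```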
4. Conclusion
In this paper, a collaborative methodology for problem solving adapted to supply chain contexts has been defined. It has been positioned as a key driver for achieving effective Supply Chain Quality Management. The methodology addresses the technical and collaboration aspects of supply chains and deals with knowledge management across distributed contexts. This proposal contributes to the achievement of supply chain continuous improvement objectives and promotes the emergence of Learning Supply Chains.

This paper has shown that the quality-related information capitalized through this methodology is very useful not only for the enhancement of future quality efforts in the series phase but also for enabling and improving system definition and supplier selection during the roll-out of new products. The risk and collaborativity indices enhance the decision-making processes in these phases and allow the synchronization of SCQM with new product development projects.
References
Beil, D. (2010) Supplier Selection, Wiley Encyclopedia of Operations Research and Management Science [Electronic], Available: Wiley Online Library - DOI: 10.1002/9780470400531.eorms0852 [15 June 2010]
Buzon, L., Ouzrout, Y. and Bouras, A. (2007) Structuration of the Knowledge Exchange in a Supply Chain Context, Paper read at the 4th IFAC Conference on Management and Control of Production and Logistics, Sibiu, Romania, September.
Cao, M. and Zhang, Q. (2011) Supply Chain Collaboration: Impact on Collaborative Advantage and Firm Performance, Journal of Operations Management, Vol 29, pp 163-180.
Cantor, D. and MacDonald, J. (2009) Decision-making in the supply chain: examining problem solving approaches and information availability, Journal of Operations Management, Vol 27, pp 220-232.
Derrouiche, R., Neubert, G. and Bouras, A. (2008) Supply chain management: a framework to characterize the collaborative strategies, International Journal of Computer Integrated Manufacturing, Vol 21, No. 4, pp 426-439.
Foster, S.T. (2007) Towards an understanding of supply chain quality management, Journal of Operations Management, Vol 26, pp 461-467.
Handfield, R., Ragatz, G., Petersen, K. and Monczka, R. (1999) Involving Suppliers in New Product Development, California Management Review, Vol 42, No. 1, pp 59-82.
Harland, C., Zheng, J., Johnsen, T. and Lamming, R. (2004) A Conceptual Model for Researching the Creation and Operation of Supply Networks, British Journal of Management, Vol 15, pp 1-21.
Jabrouni, H., Kamsu-Foguem, B., Geneste, L. and Vaysse, C. (2011) Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback, Engineering Applications of Artificial Intelligence, Vol 24, No. 8, pp 1419-1431.
Kamsu-Foguem, B., Coudert, T., Beler, C. and Geneste, L. (2008) Knowledge formalization in experience feedback processes: An ontology-based approach, Computers in Industry, Vol 59, pp 694-710.
Knowles, G., Whicker, L., Heraldez, J. and Del Campo, F. (2005) A conceptual model for the application of Six Sigma methodologies to supply chain improvement, International Journal of Logistics: Research and Applications, Vol 8, pp 51-65.
Mavris, D. and Pinon, O. (2011) An Overview of Design Challenges and Methods in Aerospace Engineering, Paper read at the 2nd International Conference on Complex Systems Design and Management, Paris, France, December.
Vareilles, E., Aldanondo, M., Codet de Boisse, A., Coudert, T. and Geneste, L. (2012) How to take into account general and contextual knowledge for interactive aiding design: Towards the coupling of CSP and CBR approaches, Engineering Applications of Artificial Intelligence, Vol 25, Issue 1, pp 31-47.
Wilson, P., Dell, L. and Gaylord, F. (1993) Root Cause Analysis: A Tool for Total Quality Management, ASQ Quality Press, United States of America.
Zhou, H. and Benton, W. (2007) Supply chain practice and information sharing, Journal of Operations Management, Vol 25, pp 1348-1365.
Work in Progress Papers
Recording our Professional Development Just Became
Easier: Using a Learning Management System
Mercy Kesiena Clement-Okooboh
University of Bolton, Institute for Educational Cybernetics, UK
Mkc1iec@bolton.uk.ac

Abstract: This study presents an evolving learning management system (LMS) used as a tool to record continuing professional development (CPD) data. It is a critical component of Company X's initiative to promote the continuous improvement of its services to its internal and external clients through the enhanced skills, knowledge and competencies of its employees. Changes in workplace learning have challenged learning and development practitioners to rethink ways of improving work-based practices within the organization by implementing processes and procedures to capture and analyse CPD data across the different sites of the multinational organization. The mode of recording this large amount of data is as important as the learning intervention itself, and understanding the benefits of capturing large data sources in a central system is critical. The purpose of the LMS is consolidation: providing a single, common infrastructure to manage and track learning and development initiatives across the multiple organization sites.

Keywords: learning management system (LMS), continuing professional development (CPD), summative
evaluation, formative evaluation, emerging technologies, learning technologies
1. Introduction
This study explores how employees in a service industry respond to a Learning Management System tool developed to record learning hours as part of their continuing professional development (CPD). A Learning Management System plays a key role in helping organisations cope with the informational deluge (Kriegel, 2011), as it helps organizations store large amounts of persistent data safely and efficiently across different parts of the organization. Pina et al. (2008) determine that learners place a high value on the usability experience. The use of a learning management system in organizations collates large data sets into a single platform that is accessible to anyone (Dabbagh and Bannan-Ritland, 2005; Ullman and Rabinowitz, 2004). As with most new and evolving technologies, the LMS is not without its limitations and disadvantages: Ioannou and Hannafin (2008) and Pina (2007) reported that many users of LMS found them slow, confusing, and focused more on administrative needs than on learners' needs.
2. Literature review
A Learning Management System is defined as an information system that administers instructor-led and e-learning courses and keeps track of learners' progress (Brown and Johnson, 2007). Used internally by large enterprises for their employees, an LMS can be used to monitor the effectiveness of the organization's education and training. Learning technology is defined as any application of technology for the enhancement of teaching, learning and assessment; an essential component of a learning technology is the ease with which the learner can interact with the system, termed human-computer interaction. Pierre (2007) describes an LMS as a web-server-based software application that provides administrative and data tracking functions.

Szabo and Flesher (2002) advocate, from a business perspective, that LMS are essentially infrastructures that support the delivery and management of instructional content, the identification and assessment of individual and organizational learning goals, and the management of progress towards meeting those goals. Gilhooly (2001) reaffirms that the function of the LMS goes beyond content delivery to course administration: registration, tracking, reporting and skills gap analysis. Neto and Brasileiro (2007) further describe the LMS as a technology necessary for supporting the educational needs of the information age.
2.1 The impact of learning management system (LMS)
Silber and Foshay (2010) argue that scalable enterprise consumption of e-learning became possible through the use of LMS and their ability to plug content into them. Currently, there is rapid growth in the use of learning management systems to record data in organizations. A learning management system helps to bridge this gap by leveraging strategic learning management solutions that link strategic functions to better capture, manage, and improve the knowledge, skills and attitudes of every employee across the organisation. An LMS ensures that useful and important data are captured as learning takes place, and that the data are available to all authorized users at any time, making information easily accessible.

Nonaka (2008) advocates that an organization's ability to create, store and disseminate knowledge is crucial for staying ahead of the competition in the areas of quality, speed, innovation and price. The use of technology to manage knowledge within an organization can lead to long-term productivity gains and help it stay in business. The use of an LMS as a knowledge management tool in organizations is very popular. Marquardt (2011) asserts that knowledge enables key stakeholders to assign meaning to large data and thereby generate information; having a comprehensive system in place to manage organizational knowledge involves six subsystems, namely acquisition, creation, storage, analysis and data mining, transfer and dissemination, and application and validation. Technology has therefore become a large part of the learning function in organizations (Brandon Hall Research, 2008).

Bersin and Associates (2006) assert that organizations that invest in an LMS have much higher efficacy and efficiency levels, with the LMS acting as the central learning repository for storing data and information. According to Brandon Hall Research (2008), over 65% of organizations have an LMS; this highlights the significance of the LMS as a learning tool in organizations. Schmidt (2003) outlines the reasons why organizations utilise LMS: namely, to reduce the cost of training, personalise learning content and make delivery accessible to learners. An LMS can facilitate the tracking of employee training records and send training notifications to learners, administrators and line management when required.
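As a minimal illustration of the record-keeping and tracking functions attributed to LMS in the literature above, the following sketch stores CPD entries in one central structure and aggregates hours per employee. The schema is hypothetical and far simpler than any production LMS.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class CPDEntry:
    """One continuing professional development record for one employee."""
    employee_id: str
    activity: str        # e.g. "external seminar", "e-learning module"
    hours: float
    completed_on: date

@dataclass
class SimpleLMS:
    """Single central store: every site records into the same system."""
    entries: List[CPDEntry] = field(default_factory=list)

    def record(self, entry: CPDEntry) -> None:
        self.entries.append(entry)

    def hours_per_employee(self) -> Dict[str, float]:
        """Tracking/reporting function: total CPD hours by employee."""
        totals: Dict[str, float] = {}
        for entry in self.entries:
            totals[entry.employee_id] = totals.get(entry.employee_id, 0.0) + entry.hours
        return totals
```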
3. Research methodology
This is case study research, a useful research strategy for studying the dynamics of technology implementation. McCutcheon and Meredith (1993) and Benbasat et al. (1987) advocate the case study as a valid research strategy in management information systems. Wilson and Woodside (1999) and Hillebrand et al. (2001) also assert that case study research is a strategy useful for theory testing. Hence, this study employs a case study research method for the exploration and explanation of the implementation process of the new Learning Management System (LMS) used to record the continuing professional development (CPD) data of the employees in the organization.

In this study, data was gathered using formative and summative evaluation approaches; these approaches were carried out through survey questionnaires and semi-structured interviews with a selected sample of employees and line managers from the different location sites.
4. Results and discussion
4.1 Company X context
Company X is an Energy and Utilities Service Company based in Dublin, Ireland. Its parent company is a large French multinational with operations in 42 countries. The company has 411 employees in Dublin and Belfast and has been operating in Ireland since 1990. Company X has a range of different portfolios that include energy, business development, pharmaceutical, technical, central services, human resources, alternative energy, industrial and information technology. Each portfolio has a director and a client operations manager who oversee activities in the portfolio.

The company's purpose is to "improve the operational and environmental efficiency of utilities and energy by co-designing and delivering enhanced solutions in a safe, compliant and sustainable way in the Pharmaceutical, Healthcare, Industrial, Food & Beverage, ICT and Public Sectors".
4.1.1 Use of learning management system (LMS) in Company X
The recording of CPD is a critical component of our initiative to promote the continuous improvement of our services to our customers through the enhanced skills, knowledge and competencies of our employees. For this reason, the organisation decided to use our new LMS online tool to ask employees to record their CPD data, to ensure that we are fostering and cultivating a learning environment and also capturing large amounts of data in one central system. In the past, internal classroom learning and development was monitored and recorded through our HR database by the Learning & Development Team. However, a great deal of employee development takes place outside of the formal classroom-based learning scenario, and this is not always captured, recorded or evaluated.
4.1.2 Benefits of using a learning management system to record CPD hours
Emerging technologies and changing workplace demands are some of the factors that now challenge organisations to extend the scope of their practice to efficiently and effectively deliver and manage the learning experiences of their employees, moving from a preconceived way of training to one of cultivating a learning culture and the learning of individuals, teams, and organisations. To be able to do this, organisations must ensure that the learning experiences of their workforce are accessible and easily tracked by key stakeholders. The benefits of a learning management system are namely:
Centralized learning environment to ensure consistency across the organisation
Tracking and reporting for enhanced performance of the learning and development team
Immediate capabilities evaluation
Regulatory and legal compliance
4.2 What worked well?
The implementation of the LMS has created a culture of self-directed learning amongst employees in the organisation. Self-directed learning can be described as self-reflective learning (Mezirow, 1985); this is similar to the process described by Argyris (1992) as double-loop learning. It is a process of recording achievement and action planning. Participants surveyed were asked what worked well with the new learning management system. One response:
"The introduction of the new LMS has enabled us to take full ownership of our learning development, we can now easily track our development regularly; the access to a personal CPD log will enhance our career progression."
The new system improved the workflow between the learning and development team, line management and senior management teams, as essential training data was captured and analyzed quickly. The additional add-on component of a personal space area allotted to individual learners in the LMS enables individuals to take control of their personal development by recording continuing professional development hours.
4.2.1 What did not work?
The number of people involved in the project slowed the implementation process. During the implementation stage, the engineer from the system provider left the company, which meant that the new engineer had to go over the process again because there was no proper handover. The situation was the same in the organisation, as the internal project co-ordinator went on maternity leave and the new project co-ordinator did not receive a proper handover.
4.2.2 What could have worked better?
Some users felt the interface was dull and rigid. Siemens (2004) similarly noted that the LMS interface is not user friendly and should be simplified and made more intuitive.
5. Conclusion
This paper supports and focuses on a practitioner perspective of a learning management system as a vehicle for recording and tracking professional development hours. Firstly, this study demonstrated the importance of using the new LMS to track the learning participation and effectiveness of employees in one system. Secondly, employees felt it gave them more autonomy to take their learning development into their own hands. Thirdly, the system reduced the time spent by the learning & development team to manage, store and analyse big data from different sources across the organisation. Lastly, the LMS steered a self-driven learning management process amongst employees and provided data for annual statistics.

The case study has highlighted the importance of a learning management system in an organisation. The evaluation of a learning management system requires the integration of different aspects such as interoperability, technical characteristics, usability, scalability, high availability, user experience, cost and effectiveness, stability and security. The findings from these approaches helped to establish employees' perceptions and experiences in relation to the ease of use of the system for recording their professional development data. The feedback derived from this approach was relayed to the IT department and the LMS provider to take corrective actions and amend some functionalities to enable users to log their learning data more easily.
6. Limitations and further research
This research study was limited to a single organisation; further research may be conducted on more
case studies to evaluate the impact of LMS implementation in organizations. It is expected that
supplementary research studies could focus on collecting more data and incorporating empirical data
from large organizations by comparing the results with findings of this study.
References
Argyris, C (1992) On Organizational Learning, Blackwell, Cambridge, MA
Benbasat, I., Goldstein, D.K., and Mead, M. 1987, the case research strategy in studies of information systems.
MIS Quarterly, 11(3):369-386
Boud, D., & Garrick, J. (1999) Understanding Learning at Work: London, Routledge.
Bramley, P. (2003) Evaluating Training. (2
nd
ed.), Chartered Institute of Personnel and Development. www.cipd.ie
Brown, M.I., Doughty, G.F., Draper, S.W., Henderson, F.P., McAteer, E. (1996) "Measuring learning resource
use" submitted to this issue of Computers & Education
Gilhooly, M., (2001) Quality of life and real life cognitive functioning. Growing older programme Newsletter, 2;6
Hillebrand, B., Kok, R.A.W., and Biemans, W.G. 2001, Theory-testing using case studies: a comment on
Johnston, Leach, and Liu. Industrial Marketing Management, 30: 651-657
Ioannou, A., and Hannafin, R., (2008). Deficiencies of Course Management Systems: Do students care?
Quarterly Review of Distance Education, 9 (4).
Johnson, A. and Ruppert, S. (2002). An evaluation of accessibility in online learning management systems, in
Library Hi Tech, vol. 20 (4), pp. 441-451
Kiergel, A. (2011) Discovering SQL: a hands-on guide for beginners, Wiley, 2011.
Klinger et al (2010) Learning Management System Technologies and Software Solutions for online Teaching:
tools and applications. Yurchak printing Inc., U.S.A
Marquartd, M.J (2011) Building the learning organization: achieving strategic advantage through a commitment to
learning. 3
rd
Edition, Nicholas Brealey Publishing Inc.
McCutcheon, D.M. and Meredith, J.R. 1993, Conducting case study research in operations management. Journal
of Operations Management, 11: 239-256
Mezirow, JA (1985) A critical theory of self-directed learning, in Self-directed Learning: From theory to practice,
ed S Brookfield, Jossey-Bass, San Fransisco, CA
Neto Mendes Milton, Francisco and Brasileiro, Francisco (2007) Advances in Computer-Supported Learning.
Information Science Publishing, U.S.A.
Nonaka, I., and Takeuchi, H., (2008) The Knowledge creating company: How Japanese companies create the
dynamics of innovation. Oxford University Press, New York
Pettigrew, A and Whipp, R (1991) Managing Change for Competitive Success, Blackwell, Oxford
Pierre, S (2007) E-Learning Networked Environments and Architectures. A knowledge processing perspective,
Springer Publications, London
Pina, A.A., Green, S., and Eggers, M.R., (2008) Learning Management Systems: Lessons from the front lines.
Paper presented at the annual technology in education (TechEd) conference, Ontario, CA.
Sibler, K.H, & Foshay, W.R., (2010) Improving Performance in the workplace. Volume 1: Instructional Design
and Training Delivery. Pfeiffer Publications
Siemens, G (2004) Learning Management Systems: The wrong place to start learning Retrieved 28 July, 2012
from http://www.elearnspace.org/Articles/lms/htm
Stoner, G. (1996) LTDI Implementing Learning Technologies, [online], Scottish Higher Education Funding
Council, http://www.icbl.hw.ac.uk/ltdi/implementing-it/implt.pdf (accessed on 18th April 2012)
Ullman, C., & Rabinowitz, M. (2004) Course management systems and the reinvention of instruction.
Technological Horizons in Education Journal Retrieved 28 July, 2012 from
http://www.thejournal.com/articles/17014
Wilson, E.J. and Woodside, A.G. 1999, Degrees-of-freedom analysis of case data in business marketing
research. Industrial Marketing Management, 28: 215-229
Is More Data Better? Experiences From Measuring
Academic Performance
Harald Lothaller
University of Music and Performing Arts Graz, Graz, Austria
harald.lothaller@kug.ac.at

Abstract: At the University of Music and Performing Arts Graz (Austria), we developed and implemented a comprehensive online system to measure artistic and scientific performance in 2007. Since the roll-out in 2008, more than 20,000 entries have been made by staff members. The main topic in the beginning was getting staff members to use the new tool. Now, we have to deal with different topics: we have to keep them motivated to enter further data, we have to handle the big amount of data, and we have to increase the benefits of collecting data for staff members, departments, and the university. The paper presents some of our approaches to generating these benefits and thereby also meeting the other requirements. For instance, an easy-to-handle reporting and exporting solution is now available for persons and departments. Indicators for quality management and evaluation purposes shall be derived from the tool in the near future. Administrative workload analyses based on our tool influence the allocation of human resources to departments. In any case, there is a strong need to reduce complexity, and we must not spread the whole data set over different purposes. We conclude that more data might be better with regard to reporting, rankings, or funding, but also raises problems that have to be solved.

Keywords: online tool, evaluation, quality management system, artistic performance, research performance,
practical experiences
1. Introduction
At the University of Music and Performing Arts Graz (Austria), we have developed and implemented a comprehensive online system to measure performance that represents the wide range of activities of our artists and researchers in particular, and of other staff members too (presented at ECIME 2009). Since the roll-out in 2008, more than 20,000 entries have been made, and the data are mainly used for statistics and reporting. We have now reached the point of raising the questions of (1) how to keep staff members motivated to enter data, (2) how to handle the big amount of data, and (3) how to increase the benefits of collecting tons of data beyond fulfilling the reporting that is legally required of Austrian universities.
2. Work in progress
The background of question 1 was completely different at the time of roll-out, when we had to convince staff members to use the new system and enter data. Meanwhile, most people have entered data and some seem to be getting tired of entering additional data. We have seen a decrease in the number of entries in the last year compared to the years before. This might be due to the amount of data already entered and the feeling that one entry more or less is negligible in a 20,000-entry system.

Question 2 is associated with the quality assurance of the data. The narrower the variety and the lower the number of entries, the easier it is to keep an eye on single entries and to assure the meaningfulness of the data. In the beginning, we had internal peer reviewing within the staff, when persons were aware that their colleagues would inspect their entries. But now, persons might assume that even junk entries could slip through their peers' eyes within the whole system.

The answers to question 3 will hopefully become the solutions to the two other questions too. We have to further increase the benefits of our existing system for single persons, departments, and the university. We present some of our respective activities below.

The initial reason for developing our tool was the legal requirement to report the performance and activities of the university's staff members in the Intellectual Capital Report (ICR) to the ministry of science and to the public once a year. We therefore developed a system to measure a widespread range of different kinds of activities and tried to get our staff members to enter every completed activity. We count all of them, and the activities are presented quantitatively, at a highly aggregated level, in several tables in the ICR. Qualitative information can only be derived from the headings of the tables or from some subcategories within them, but this is usually only information about the kinds of activities, not about their quality itself. Additional internal reports correspond to the ICR approach and present highly aggregated data, differentiated by our university's departments, every six months. This internal
information is somewhat closer to the staff, but still suffers from a lack of qualitative information. Both ways of reporting fulfill their purpose of providing up-to-date numbers of activities, but cannot go beyond that.

Providing qualitative information goes along with splitting the whole data set into manageable parts. This splitting is done in reference to persons, to departments, or to the kinds of activities.

Splitting in reference to persons leads to an individual list of the data entries. At the beginning of 2012, we developed a query tool that enables each person to generate this pre-designed list online and both view it on screen and save it as an editable file. The query tool allows the user to select the time period and the kinds of activities to be shown (a minimal sketch of this kind of selection is given after this paragraph). Thus, every person can now extract her own list of activities and use it further at the individual level, for purposes such as preparing a self-report or a list of publications.

Splitting in reference to departments leads to a list of the data entries of an organizational unit. On the one hand, we also offer the query tool to the heads of departments for generating lists at this level. This might be useful at the organizational level, for instance within evaluation processes concerning the unit, parts of it, or individual persons, or for one-to-one performance reviews with staff members of the department. On the other hand, we are now developing a way of transferring data from our Oracle-based tool into the TYPO3-based websites of the departments. The departments have different requirements as well as different artistic and scientific emphases. One department with a strong scientific emphasis only wants to present its publications on its website, for instance, but this would not fit other departments with artistic emphases. It is therefore necessary to offer different subsets of the data corresponding to their needs. In any case, the data must be updated continuously to keep the websites up-to-date, and the response times of the websites must be very short, so the amount and format of the data must be such that it can be transferred to and processed within the websites efficiently.

Splitting in reference to the kinds of activities is connected to the differences between the departments and is done within this project. Furthermore, it leads us to quality management purposes that shall be supported by data from our tool in the future.
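
To illustrate the kind of selection the query tool performs, consider the following minimal sketch. It is written in Python with an invented in-memory data structure purely for illustration; the actual tool is Oracle-based, and all field and function names below are our assumptions, not the tool's real schema or interface.

```python
import csv
from datetime import date

# Hypothetical entries; the real tool stores these in an Oracle database.
entries = [
    {"person": "A. Artist",  "kind": "concert",     "title": "Recital",       "date": date(2012, 3, 14)},
    {"person": "A. Artist",  "kind": "publication", "title": "Journal paper", "date": date(2011, 11, 2)},
    {"person": "B. Scholar", "kind": "publication", "title": "Book chapter",  "date": date(2012, 5, 20)},
]

def query(entries, start, end, kinds):
    """Select entries within the period [start, end] whose kind is in `kinds`."""
    return [e for e in entries
            if start <= e["date"] <= end and e["kind"] in kinds]

def export_csv(rows, path):
    """Save the selection as an editable file (here CSV) for further use."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["person", "kind", "title", "date"])
        writer.writeheader()
        writer.writerows(rows)

# Example: a personal publication list for 2012.
selection = query(entries, date(2012, 1, 1), date(2012, 12, 31), {"publication"})
export_csv(selection, "my_activities.csv")
```

The essential point is that the user only chooses a time period and a set of activity kinds, while the layout of the resulting list is pre-designed, which keeps the exported lists uniform across persons and departments.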

In anticipation of an institutional quality audit of our university by an external agency, we are developing a comprehensive quality management system. In it, we are including our previous activities in the field of quality management. But we also have to bring together different activities that had previously been somewhat isolated from each other, as well as fill some gaps in our system. One current gap is the systematic evaluation of artistic and scientific work based on our university's quality aims. The university management and deputies of the faculty staff are together breaking down each single aim into both qualitative and quantitative indicators to be observed systematically and periodically. Several of these indicators should be derived from our tool, as we do not only measure the number of activities: each entry is accompanied by useful information concerning the respective activity (a sketch of how such an indicator might be derived follows this paragraph). The tricky thing is that there are no established standards for the evaluation of artistic activities in particular, and of some scientific disciplines too. What would be an indicator for our aim of preparing young artists to be competitive in their community on the basis of an autonomous artistic identity? Or what would be an indicator for our aim of having excellent artistic and scientific work done by our staff members that is internationally visible and remarkable? Each person and each artistic or scientific field might have a different definition of what is excellent, of what is visible and remarkable, or of what constitutes an autonomous artistic identity. But we need objectivity, comparability, and acceptance of our indicators across the whole university, as well as the possibility of internal benchmarking over time periods. This will be a project for the end of this year and beyond. There are many ideas about what might additionally become possible afterwards, such as automatically matching persons for evaluation and reviewing processes, or bringing together staff members from different departments for completely new interdisciplinary work based on analyses of the indicators. But it will be a long way before this comes true.
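
Since the indicators themselves are still being defined, the following is only a sketch of the mechanism, not of any actual indicator. Assuming, hypothetically, that each entry carried a department, a year, and a flag for international visibility, one could compute the share of international activities per department and year, which would support internal benchmarking over time periods:

```python
from collections import defaultdict

# Invented example entries; each entry carries additional information
# beyond the mere count, e.g. whether the activity was international.
entries = [
    {"department": "Strings",    "year": 2011, "international": True},
    {"department": "Strings",    "year": 2011, "international": False},
    {"department": "Musicology", "year": 2012, "international": True},
]

def international_share(entries):
    """Share of international activities per (department, year)."""
    totals = defaultdict(int)
    international = defaultdict(int)
    for e in entries:
        key = (e["department"], e["year"])
        totals[key] += 1
        international[key] += e["international"]  # True counts as 1
    return {key: international[key] / totals[key] for key in totals}

for (dept, year), share in sorted(international_share(entries).items()):
    print(f"{dept} {year}: {share:.0%} international")
```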

In some projects, our tool has already proved helpful, although their aims were not within its primary focus. In the area of human resources, one project investigated the workload of the administrative staff members in the academic departments. These persons support the artistic, scientific, and teaching staff in fulfilling their duties. Among other things, several workload indicators are periodically derived from our tool to evaluate the administration's workload and to make comparisons between departments as well as over time periods. Resources are allocated based on these facts. As departments that are shown to be under-staffed can get additional resources and better administrative support, all staff members of a department can see the benefits of the project and of our tool. Another project does not use our data, but rather the technical basis of our tool for its own purposes. With a few adaptations, the event unit of our university is now able to collect all the necessary information on nearly
1,000 different events per year. All entries are automatically inserted into the unit's online calendar of events and into announcements on websites. In addition, event data can easily be analyzed, for instance for peak times or room use, as sketched below. We assume that similar projects in other areas might follow.
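
As an indication of the sort of analysis meant here (again a sketch with invented data; the event unit's actual data model is not shown), one can aggregate event entries by weekday and by room to spot peak times and heavily used rooms:

```python
from collections import Counter
from datetime import date

# Invented event entries of the kind collected by the event unit.
events = [
    {"title": "Orchestra rehearsal", "date": date(2012, 9, 10), "room": "Main hall"},
    {"title": "Chamber concert",     "date": date(2012, 9, 10), "room": "Main hall"},
    {"title": "Lecture recital",     "date": date(2012, 9, 12), "room": "Room 101"},
]

# Count events per weekday to find time peaks ...
weekday_load = Counter(e["date"].strftime("%A") for e in events)
# ... and per room to analyze room use.
room_load = Counter(e["room"] for e in events)

print(weekday_load.most_common())  # e.g. [('Monday', 2), ('Wednesday', 1)]
print(room_load.most_common())     # e.g. [('Main hall', 2), ('Room 101', 1)]
```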
3. Lessons learnt
We have learned that we cannot increase benefits by simply spreading the whole data set over various purposes. There is a strong need to reduce complexity when generating benefits, because neither individuals nor units can deal with the full number of entries and the full variety of activities in our measurement tool for their practical needs. We therefore often have to select rather few aspects from the comprehensive approach of measuring the performance of university staff members. Consequently, we have again seen the need for a bottom-up approach, which means going back to the individuals and units that had been involved in the development of our tool some years ago. Back then, they had been asked to name the various kinds of activities to be measured, as our aim was to represent all of our staff members' work in the tool, and consequently we added more and more aspects to it. This time, we asked them to select those parts of the tool that are relevant to their specific purposes, and we now pick out individual aspects of the tool accordingly. Some aspects are used several times and for different purposes. But we now also see that some aspects are not used at all, not even by those who had asked for them some years ago. In addition, some discussions about academic quality, quality management, and evaluation have emerged at a more general level and have spread these thoughts within the university.
4. Conclusion
Of course, when reporting to the public, counting more and more activities is alluring. University rankings, benchmarking approaches, and funding discussions often refer exclusively to the counted performance, but not to its quality. Apart from that, however, our practical experiences have made us rethink in some ways. What is needed in addition is a clear focus instead of a broad approach, and the highlighting of certain aspects instead of a search for ever-increasing counts. We will certainly not rebuild and shrink our comprehensive system. Still, we would perhaps take a slightly different approach now, as we have seen that more data can be nice and can open up opportunities, but can also raise problems of manageability and are not necessarily better.
