International Journal of Accounting Information Systems 8 (2007) 92–116
Abstract
Technological changes and audit firm mergers over the last decade raise the question as to whether the
decision aids reported in prior research are representative of the types of decision support currently employed
in audit firms. To address this issue, a study was conducted of the audit support systems used at five
international audit firms and the types of decision support embedded within their audit support systems. The
concepts of system restrictiveness and audit structure were combined to develop a definition of audit support
system restrictiveness, and the firms' systems were classified using this definition. Substantial differences in
audit support system restrictiveness were found to be associated with the type of decision support embedded
within these systems. In order to guide future research, existing audit decision aid studies were mapped to the
types of embedded decision support and several future research opportunities were identified.
© 2007 Elsevier Inc. All rights reserved.
1. Introduction
Audit support systems are the key technology application deployed by audit firms to facilitate
efficient and effective audits. These systems include electronic workpapers, extensive help files,
accounting and auditing standards, relevant legislation, and decision aids. Although several
studies have investigated issues surrounding auditor use of decision aids (e.g., Anderson et al.,
1995, 2003; Bedard and Graham, 2002; Boatsman et al., 1997; Eining et al., 1997; Jennings et al.,
1993; Kachelmeier and Messier, 1990; Lowe and Reckers, 2000; Lowe et al., 2002; Mueller and
Anderson, 2002; Murphy and Brown, 1992; Murphy and Yetmar, 1996; Swinney, 1999; Ye and
⁎ Corresponding author. Tel.: +61 3 8344 3415; fax: +61 3 9349 2397. E-mail addresses: carlin@unimelb.edu.au (C. Dowling), saleech@unimelb.edu.au (S. Leech).
1467-0895/$ - see front matter © 2007 Elsevier Inc. All rights reserved. doi:10.1016/j.accinf.2007.04.001
C. Dowling, S. Leech / International Journal of Accounting Information Systems 8 (2007) 92–116 93
Johnson, 1995), limited evidence of the decision aids used in audit practice is available, and most
of this is dated and/or firm specific (e.g., Abdolmohammadi and Usoff, 2001; Abdolmohammadi,
1987, 1999; Bell et al., 2002; Brown and Murphy, 1990; Brown and Phillips, 1991; Connell,
1987; Graham et al., 1991; Shpilberg and Graham, 1986). The aim of this study is to support
future research by documenting the audit support systems and decision aids employed in audit
firms and mapping these to prior research. Although it is important for epistemological reasons
that future research is informed by and extends existing research, the transferability of decision
aid research findings to practice is likely to be enhanced by studies investigating issues relevant to
the types of decision aids used in practice and the issues encountered in their use.
This study documents and compares the audit support systems deployed in five international
audit firms. The systems are compared according to their location of development, use policy, the
extent to which a client's file is manually or automatically tailored, the extent to which decision
outcomes in an earlier audit phase are manually or automatically integrated into later audit phases,
and the role of the audit system within the audit process. Significant differences in audit support
system design were found, including the extent to which an audit support system is viewed as an
enabler or enforcer of a firm's audit process.
The findings reported in this study have the potential to significantly influence the direction of
future audit support system and audit decision aid research. In addition to identifying several
research opportunities, this study highlights the importance of system restrictiveness, a construct
that has not been extensively investigated in the literature. This study combined the concepts of
system restrictiveness (from the information systems literature) and audit structure (from the audit
literature) to develop a definition of audit support system restrictiveness. This is an important
contribution to the literature because, as the findings reported in this study indicate, the extent of
system restrictiveness is an important design feature of audit support systems associated with the
type of decision support embedded within a system and the way in which an audit firm views its
audit support system. The significant differences in audit support system design documented in
this study provide important insights that can be used to generalize and integrate the findings of
studies investigating audit support systems.
The remainder of this paper is divided into three sections. The first section presents the
research method, compares the audit support systems and discusses the types of decision support
embedded in these systems. In the second section, opportunities for future research are identified
by mapping extant studies to the types of decision aids currently deployed in audit firms. In the
third section, the implications for research and the limitations of the study are discussed.
Semi-structured interviews were conducted with four partners and four managers from five
audit firms: the Big 4 and one large mid-tier international audit firm.¹ The purpose was to obtain
information regarding the audit support systems and the decision aids deployed in these firms
during 2004/2005. Each of these firms has a proprietary audit support system with decision
aids embedded within it. All of the partners and managers interviewed have extensive
¹ To satisfy confidentiality agreements each firm is referred to as A, B, C, D, or E. These alpha codes are used in the text and tables to facilitate cross-referencing.
knowledge of their firm's audit support system² and decision aids. In three firms, the partners and/
or managers were members of their audit firm's global or national audit technology development
group. The partners and managers interviewed in the other two firms were responsible for
deployment and staff training of audit technologies.
Each interview began by defining a decision aid to ensure that the interviewers and the
interviewees shared a common understanding. The partners/managers were then asked to
systematically walk through the decision aids used in each stage of the audit process, the typical
users and the output obtained. Questions and prompts for further information were asked
throughout the interviews.³
All interviews were voice recorded and transcribed. The interview transcriptions were used to
code the data for each audit firm.⁴ For each firm, tables were compiled that contained an overview
of the firm's audit support system, the decision aids used in each phase of the audit, and the
perceived benefits and limitations of using decision aids. The interviewees were sent the tables for
their firm and asked to confirm the contents or make any necessary adjustments.⁵
Table 1 provides a comparison of the audit support systems deployed in the five audit firms.
The key similarities and differences across the audit firms are discussed below.⁶
² The terms audit support systems and system are used interchangeably for ease of readability.
³ Some firms provided less information than others, despite prompting from the researchers. This variability is reflected in the analysis throughout the paper.
⁴ One researcher coded the data and compiled summary tables. The compiled tables were independently reviewed by the other researcher then forwarded to the audit firms for verification. The coding process was very objective because of the descriptive nature of the data. It was considered redundant for two researchers to independently code the data to obtain a measure of inter-rater reliability because independent verification was obtained from the audit firms.
⁵ A change in audit technology committee membership in one firm meant that a different individual from the one we interviewed verified our tables. For all other firms, at least one interviewee confirmed the data. The audit firms did not verify the extent to which their firm's system was classified as restrictive. System restrictiveness is a relative measure (Silver, 1988a,b). To be meaningful, assessment requires knowledge of the other firms' systems, which the interviewees did not have access to.
⁶ To satisfy confidentiality agreements, screen shots of the decision aids cannot be provided because they would identify the audit firms.
⁷ In 2006, this firm rolled out an enhanced audit support system. Use of the enhanced system was mandatory during the planning phase of the audit.
Table 1
Comparison of audit support systems

Feature | Firm A | Firm B | Firm C | Firm D | Firm E
Developed globally | Yes | Yes | Mostly | Yes | Yes
Supplemented with other tools | No | No | Yes | Yes | No
Use policy for large clients | Mandatory | Mandatory | Generally mandatory | Voluntary | Mandatory
Use policy for small clients | Voluntary | Mandatory | Some aspects mandatory | Voluntary | Mandatory
Industry tailoring | Yes | Yes | Yes | Yes | Yes
Tailoring of client file | Predominately manual | Automatic | Manual | Manual | Automatic
Extent of automated integration across audit phases (a) | Medium | High | None | Low | High
System viewed as enabler of audit process | Yes | Yes | Yes | Yes | Yes
System enforces compliance with audit methodology | No | Yes | No | No | Yes
Decision aids embedded | Yes | Yes | Yes | Yes | Yes
Extent of automated decision support (b) | Low | High | Low | Low | High
Extent of system restrictiveness (c) | Low | High | Low | Low | High

(a) High = output in one phase of the system is automatically integrated as input into a following phase; Medium = combination of manual and automatic integration within the system; Low = integration is mostly completed manually; None = all integration is completed manually.
(b) High = system provides recommendations based on user input; Low = embedded decision aids generally only prompt users (e.g., checklists).
(c) High = system significantly restricts the extent to which the user is free to choose how the audit is performed and to make certain judgments, by influencing how the user interacts with the system and conducts the audit; Low = system does not significantly constrain a user's interaction with the system or enforce how the audit should be performed.
⁸ The manager at one of these firms described the questions as having two levels, and auditors are free to answer either level: “Auditors can choose to answer the high level primary questions or they can drill down to sub-questions which can help them answer the primary level questions”.
The examples above indicate that the five audit firms hold differing views on whether the
role of their audit support system is to enforce their firm's audit methodology. From Table 1 it
can be seen that these differences are correlated with other design differences. As summarized
in Table 1, Firms B and E, the two firms that view their audit support system as an enforcer of
their firm's audit methodology, are also the only two firms in which system use is mandatory
for all clients, the system provides a high level of automated decision support, automatically
tailors a client's engagement file and is highly integrated across the various audit phases. In
contrast, the systems used at the other three firms (A, C, and D), which do not view their system
as an enforcer of their firm's methodology, provide relatively low levels of decision support,
require auditors to manually tailor the file and are not automatically integrated across the audit
phases.
The differences in audit support system design highlighted in Table 1 and discussed
above are akin to the differences in audit firm structure (see for example, Bowrin, 1998;
Cushing and Loebbecke, 1986; Prawitt, 1995). A structured audit approach is “a systematic
approach to auditing characterized by a prescribed, logical sequence of procedures, de-
cisions, and documentation steps, and by a comprehensive and integrated set of audit
policies and tools designed to assist the auditor in conducting the audit” (Cushing and
Loebbecke, 1986:32). Audit support systems have become the “face” of a firm's audit
methodology and thus the firm's audit approach; as one manager said “auditors don't read
the methodology, they don't need to, the system informs them”. To the extent audit support
systems reflect a firm's audit structure, the differences in system design discussed above are
contrary to the belief that audit firms have converged on the adoption of similar semi-
structured audit approaches (Bowrin, 1998). Although all of the audit support systems
described in this study enable and (to some extent) structure the audit process, the audit
support systems used at two of the five firms have been clearly designed to structure and
control the audit process to a greater extent than the audit support systems deployed at the
other three firms.
Audit support system restrictiveness limits the degree to which an auditor is able to choose how the audit is performed
and the degree to which the auditor is free to make certain judgments, such as choosing which
audit tests should be performed. Based upon the design features documented in Table 1 and the
previous discussion of the key features of these firms' systems, the audit support systems used at
Firms B and E are classified as having a “high” level of restrictiveness and the audit support
systems used at the other three firms (A, C and D) are classified as having a “low” level of
restrictiveness.
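The classification applied above can be expressed as a simple decision rule over the design features summarized in Table 1. The following Python sketch is purely illustrative; the attribute names and the rule itself are our own encoding of the observed pattern, not part of any firm's system:

```python
from dataclasses import dataclass

@dataclass
class AuditSupportSystem:
    """Design features drawn from Table 1 (illustrative encoding only)."""
    mandatory_for_all_clients: bool
    automatic_tailoring: bool
    automated_integration: str       # "none", "low", "medium", or "high"
    automated_decision_support: str  # "low" or "high"

def restrictiveness(system: AuditSupportSystem) -> str:
    """Classify a system as 'high' or 'low' restrictiveness, mirroring
    the pattern observed across Firms A-E: only systems that are
    mandatory, automatically tailored, highly integrated and highly
    automated were classified as 'high'."""
    if (system.mandatory_for_all_clients
            and system.automatic_tailoring
            and system.automated_integration == "high"
            and system.automated_decision_support == "high"):
        return "high"
    return "low"

# Firm B's Table 1 profile classifies as "high":
firm_b = AuditSupportSystem(True, True, "high", "high")
print(restrictiveness(firm_b))  # -> high

# Firm A (voluntary for small clients, manual tailoring) classifies as "low":
firm_a = AuditSupportSystem(False, False, "medium", "low")
print(restrictiveness(firm_a))  # -> low
```

Because restrictiveness is a relative measure (Silver, 1988a,b), the conjunction of features here should be read as a summary of how these five systems happened to cluster, not as a general threshold.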
Although decision aids are embedded within all firms' audit support systems, the type of
decision support provided varies across the firms. These differences are correlated with the extent
to which a system is classified as having a “high” or “low” level of system restrictiveness. Table 2
provides an overview of the decision support provided during the key audit phases (Panels A to D)
for the audit support systems classified as having a “high” or “low” level of system
restrictiveness.
Consistent with the “high” level of audit support system restrictiveness in the systems used at
Firms B and E, the decision support embedded within these systems is more structured and
prescriptive than the decision support embedded in the systems classified as having a “low” level
of system restrictiveness (Firms A, C and D). The type of decision support embedded within the
“low” restrictive systems is predominately checklists that do not provide recommendations. These
checklists are designed to structure a user's information search by prompting users to consider
certain items. For example, during the client acceptance and understanding the client phase, the
checklists used at Firm C structure the information to be obtained by prompting a user to consider
the client's integrity, the risk profile and the industry in which the client operates. In contrast, the
decision aids embedded in the “high” restrictive audit support systems (Firms B and E) use an
auditor's responses to questions in checklists as the basis to tailor the audit file and provide
recommendations.
From the classification scheme used in Table 2 (Panels A to D) it can be seen that for the
majority of decisions across the various audit phases, systems classified as having a “high” level of
system restrictiveness contain decision aids that provide recommendations, including identifying
the risks, recommending the audit strategy, identifying the control objectives, recommending
relevant control tests, assessing the effectiveness of the controls, and recommending substantive
audit tests. In contrast, the decision aids embedded within the “low” restrictive systems generally
do not provide recommendations. For example, during the control and substantive testing phase
(Table 2, Panel C), all five firms make extensive use of test banks that contain lists of
recommended audit procedures. The test banks in the “high” restrictive systems (B and E) are used
by the audit support system to recommend the appropriate audit steps based upon the auditor's
input. In contrast, manual test banks are used at the three firms classified as having “low”
restrictive systems (A, C and D). These test banks include a minimal number of mandatory audit
steps and several other steps from which auditors manually select and import the applicable tests
into a client's file.
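The two test-bank mechanisms described above can be sketched schematically: in a “high” restrictive system the bank filters itself from the auditor's risk input, while in a “low” restrictive system the auditor selects tests, subject to a minimal mandatory core. All test names and tags below are hypothetical; this is a sketch of the mechanism, not any firm's implementation:

```python
# Hypothetical test bank: each entry lists the risk tags it addresses
# and whether it is mandatory regardless of the client's risk profile.
TEST_BANK = [
    {"test": "Confirm receivables",      "risks": {"revenue"},   "mandatory": True},
    {"test": "Inventory observation",    "risks": {"inventory"}, "mandatory": False},
    {"test": "Revenue cut-off testing",  "risks": {"revenue"},   "mandatory": False},
    {"test": "Payroll recalculation",    "risks": {"payroll"},   "mandatory": False},
]

def automated_recommendation(identified_risks):
    """'High' restrictive pattern (Firms B and E): the system itself
    selects the applicable tests from the auditor's risk input."""
    return [t["test"] for t in TEST_BANK
            if t["mandatory"] or t["risks"] & identified_risks]

def manual_selection(auditor_choices):
    """'Low' restrictive pattern (Firms A, C and D): the auditor imports
    chosen tests; only the minimal mandatory steps are enforced."""
    mandatory = [t["test"] for t in TEST_BANK if t["mandatory"]]
    return mandatory + [c for c in auditor_choices if c not in mandatory]

print(automated_recommendation({"revenue"}))
# -> ['Confirm receivables', 'Revenue cut-off testing']
```

The design trade-off is visible in the signatures: the automated path constrains the auditor to the system's mapping from risks to tests, while the manual path leaves coverage of the identified risks to the auditor's judgment.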
An exception to the pattern for firms whose audit support systems are classified as
having “low” levels of system restrictiveness is the automated decision support recom-
mendation in Firm A's system. This decision aid provides a standardized risk measure (Z
score) that informs the auditor's decision to accept/reject the client and influences the audit
approach by generating a list of relevant generic risks. However, this decision aid's output
Table 2
Comparison of decision support provided by audit support system classification and audit phase
(High level of system restrictiveness: Firms B and E; low level of system restrictiveness: Firms A, C and D. Firm letters follow Table 1.)

Panel A: Decision support provided for client acceptance/understanding the client
Types of decision aids:
- Firm B: Risk identification questionnaire; confirmation of independence requirements
- Firm E: Eligibility to act checklist; need for second partner review
- Firm A: Automated risk assessment
- Firm C: Guide to structure information to be attained
- Firm D: Acceptance and retention checklist

Panel C: Decision support for control and substantive testing
Effectiveness of controls assessed by decision aid:
- Firm B: Yes, for CIS (used by IT specialists); Firm E: Yes; Firms A, C and D: No
Automated recommendation of audit tests:
- Firms B and E: Yes; Firms A, C and D: No

Panel D: Decision support provided for reviewing the audit file and forming an audit opinion
Types of decision aids:
- Firm B: Electronic ‘file check’ highlights incomplete areas
- Firm E: Disclosure checklist
- Firm A: Disclosure checklist
- Firm C: Standardized reporting templates
- Firm D: Disclosure checklists
is not binding; clients identified as high-risk can be accepted and the audit approach
tailored accordingly. The relevant risk factors are imported into the audit support system
and auditors are required to outline the procedures which will be used to address the risks.
The partner reported that while this aid was reliable and stable, it is limited in that the
standard questions do not always cover all risks for a specific client. Therefore, users need
to consider each client's specific circumstances and whether other risks not covered by the
decision aid need to be considered. This limitation is also likely to be applicable to the
risks and audit strategy recommended by the systems used at Firms B and D. A partner at
Firm B stressed that the pervasive risks identified by the system from an auditor's
responses to the standard questions need to be supplemented with auditor-designated client-
specific risks.
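The study does not disclose which model underlies Firm A's standardized Z score. As an illustration only, a widely known model of this kind is Altman's (1968) Z score, which combines financial ratios into a single measure; the coefficients and zone thresholds below are Altman's, used purely as a stand-in for whatever proprietary model the firm's aid actually applies, and the generic risk descriptions are hypothetical:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Altman's (1968) Z score -- shown only as an example of a
    standardized risk measure; not Firm A's actual model."""
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_value_equity / total_liabilities
            + 1.0 * sales / total_assets)

def risk_flags(z):
    """Non-binding output: a high-risk client can still be accepted,
    but the score seeds a list of generic risks (hypothetical labels)
    for the auditor to address in the audit approach."""
    if z < 1.81:   # Altman's "distress" zone
        return ["going concern", "management incentive to misstate"]
    if z < 2.99:   # "grey" zone
        return ["heightened financial-statement risk"]
    return []      # "safe" zone: no generic flags raised

z = altman_z(120, 300, 90, 800, 1500, 1000, 400)
print(round(z, 2), risk_flags(z))  # -> 3.56 []
```

As the partner's comment above notes, any such aid is limited to the risks its standard inputs capture; client-specific risks outside the model still require the auditor's judgment.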
The other significant difference in the pattern of decision support provided by systems
classified as having “high” vs “low” levels of system restrictiveness is that none of the five
firms has a decision aid which forms the audit opinion for a client; this is clearly an area
requiring application of an auditor's professional judgment. The firms do, however, use decision
aids which assist auditors in reviewing the audit file for completeness, and tools for determining that
the financial statements include the required disclosures. Red flags embedded within the
system highlight key areas not completed and/or reviewed in the systems used at Firms A, B, D
and E. Consistent with providing a high level of structure and prescription, the completeness
tools embedded within the audit support systems of Firms B and D enable auditors to match the risk
Table 3
Perceived benefits and limitations of providing automated decision support

Benefits:
- Enhances audit quality through compliance with auditing standards and audit methodology
- Increases audit efficiency
- Consistent audit approach across clients
- Improves risk management
- Facilitates documentation
- Controls junior staff

Limitations:
- Auditors can over rely on recommendations made by the system
- Mechanistic behavior (emphasis on ticking the box rather than judgment)
- Significant amount of training required
- Stability of technology
- Not cost efficient on very small jobs
- Perceived complexity of the system can result in auditors not adopting the technology, or working around it by using Word documents
factors identified up front in the audit process with how they have been addressed throughout
the audit.
2.3.2. Benefits
The partners identified that decision aids can enhance audit quality through promoting
compliance with accounting standards and the firm's methodology. Although no mention was
made of whether the use of decision aids increases the degree of structure of the audit process, as
suggested by Ashton and Willingham (1988), the classification of the audit support systems in
the previous section suggests that the type of decision support provided and how it is integrated
within a firm's audit support system increases the structure of the audit process. The benefit of
consistency across clients and facilitation of documentation is consistent with claims that
documentation, as one aspect of justifying decisions, may lead to increased consistency (Ashton
and Willingham, 1988). The partners indicated that embedding decision aids within audit
support systems that tailor the client's audit program improves audit efficiency, controls junior
staff and improves risk management. These benefits are consistent with prior claims that decision
aids can improve efficiency through decreasing decision time and structuring the information
search to gather facts specifically relevant to each audit engagement (Ashton and Willingham,
1988; Elliott and Kielich, 1985). However, the benefits of “controlling junior staff and
improving risk management” are not explicit in the prior literature.⁹ The benefits of using
decision aids to improve knowledge sharing and staff training mentioned in the prior literature
(Ashton and Willingham, 1988; Elliott and Kielich, 1985) were not mentioned by the partners/
managers.
⁹ In a subsequent interview with one of the participating managers, the manager raised the concern that staff shortages are driving an increasing need for tools to be developed that will structure the audit process further to enable audit firms to employ para-professionals to complete basic audit tasks.
2.3.3. Limitations
The risks associated with the use of decision aids identified by the partners include the
potential for auditors to over rely on system recommendations, mechanistic behavior, and the
time decision aids consume, both on the job (they are often not suitable for small audits) and in
training users. The dangers of mechanistic use and over-reliance have
been noted before in the literature (for example, Arnold and Sutton, 1998; Ashton and
Willingham, 1988; Rose, 2002). Consistent with previously stated concerns regarding the
Table 4
Prior research which has examined decision aids similar to those currently used in practice

Client acceptance and understanding the client
- Automated and manual checklists, including: acceptance/retention; eligibility to act; independence; risk identification; need for IT specialists (Bedard and Graham, 2002; Boatsman et al., 1997; Eining et al., 1997; Johnson and Kaplan, 1996; Lowe et al., 2002; Pincus, 1989)
- Materiality calculator (Jennings et al., 1993)
- System recommended audit strategy
- Guide to structure information to be attained

Understanding the control environment
- System recommended control objectives and/or testing
- IS control assessment tool
- Documentation tool
- Questionnaires and checklists (Bonner et al., 1996)
- Risk factor identification (Eining and Dorr, 1991; Hornik and Ruf, 1997; Mascha, 2001; Murphy and Yetmar, 1996; Odom and Dorr, 1995; Pei et al., 1994; Smedley and Sutton, 2004; Steinbart and Accola, 1994)
- Need for IT specialists

Control and substantive testing
- Audit program recommended by system
- Sample selection tools, manual and computerized (Kachelmeier and Messier, 1990; Messier et al., 2001)
- Evaluation calculator (Butler, 1995)
- Ratio calculation
- Analytical procedure tools (Anderson et al., 1995, 2003; Kaplan et al., 2001; Mueller and Anderson, 2002; Swinney, 1999; Ye and Johnson, 1995)
- Attribute testing support
- Test banks: guidance for how to test controls; industry or generic processes
- Audit program templates

Review and forming an audit opinion
- Disclosure/compliance checklists (Lowe and Reckers, 2000; Murphy, 1990)
- Identification of completed/uncompleted workpapers
- Identification of the review status of workpapers
- Extraction of identified risk factors and how they have been addressed
development and maintenance costs (for example, Elliott and Kielich, 1985), the partners raised
the question of the stability of the technology given changes in hardware, software, auditing
standards and methodology. They also implied that the perceived complexity of the system may
lead auditors to not embrace the technology, which can result in inappropriate use, including
auditors finding ways to work around the system. This concern is consistent with concerns about
users “circumventing the aid” (Ashton and Willingham, 1988), “working backwards” (Messier
and Hansen, 1987; Messier et al., 2001), or “working around” (Bedard et al., 2005a,b) the
system. Although the prior literature has raised concerns regarding the long-term use of decision
support, including the de-skilling of auditors' abilities (Arnold and Sutton, 1998), increased
competition from non-accountants and a decline in the demand for junior staff (Ashton and
Willingham, 1988; Elliott and Kielich, 1985), these concerns were not mentioned by the partners
and managers.
The discussion presented in this section has highlighted that the type of decision support
embedded within a firm's audit support system is an important design feature associated
with the extent to which a system is classified as having a “high” or “low” level of system
restrictiveness. In the next section, the implications of these findings for future research are
discussed.
3. Mapping extant research to current audit decision aids: opportunities for future research
To identify future research opportunities the extant research on audit decision aids is mapped
to the types of decision support used in practice. This mapping is summarized in Table 4, and a
detailed summary of the prior studies is provided in Appendix A. Because audit tasks and the
type of decision support provided vary across audit phases, the mapping is undertaken using
the four major phases of an audit: client acceptance and understanding the client, understanding
the control environment, control and substantive testing and audit review and opinion
formulation. This mapping is followed by a discussion of studies that have investigated audit
support systems.
The extant studies reported in Table 4 that have examined decision aid use in the client
acceptance and understanding the client phase have predominately used manual checklists.
Although manual checklists are predominately used in audit support systems classified as having
“low” levels of system restrictiveness, audit support systems classified as having “high” levels of
system restrictiveness typically incorporate automated checklists that provide a recommendation
based upon a user's input. Differences in reliance and decision performance have been found to be
associated with different types of decision aids (Eining et al., 1997). This raises the question of
whether users rely differently on manual compared to automated checklists depending upon the
restrictiveness of the audit support system in which they are embedded. Future research could also
investigate what factors influence an audit firm's decision to provide automated or manual
checklists. For example, how important is a firm's client portfolio in this decision? Audit support
systems cannot be designed to meet the needs of all audit engagements. Therefore, do the client
portfolios of firms that deploy highly restrictive systems differ from firms that deploy less
restrictive systems?
Overall, very few studies have investigated the types of decision aids used during the client
acceptance and understanding the client phase. The current study identified that three of the five
audit firms (A, B and E) use decision aids to inform the client acceptance decision and assess client
risk. However, the type of output provided by the decision aids differs across the firms. In one firm
the aid provides a standardized risk measure (Z score), whereas in the other two firms a decision to
reject/accept the client is provided. These differences in output suggest opportunities for future
research. For example, what are the implications of providing a Z score, a one-line recommendation
or a more extensive recommendation? The provision of explanations has been found to increase
user acceptance of decision aids (Ye and Johnson, 1995). Future research could investigate
whether the provision of extensive recommendations influences user acceptance, reliance and/or
decision making outcomes.
Audit support systems also differ in terms of whether the system identifies and recommends the
risks. Understanding the relevant risks for a client's engagement is an important first step within an
audit which significantly influences the audit approach. The differences in the design of the
systems used at the participating audit firms provide opportunities to test concerns that long-term
use of decision aids has a de-skilling effect on auditor expertise (Arnold and Sutton, 1998).
Although several studies have investigated short-term learning transfer from decision aids to users
(predominately using student subjects and internal control assessment decision aids) (e.g., Eining
and Dorr, 1991; Odom and Dorr, 1995; Smedley and Sutton, 2004), no known published study has
investigated the long-term effects of decision aid use on auditor knowledge acquisition. Because
the audit support systems and decision aids reported in this study have been deployed for a
reasonable period of time, researchers are now in a position to begin investigating any long-term
effects of providing decision support. For example, a study could be designed to investigate
whether the design differences of audit support systems identified in this study are associated with
differences in auditors' knowledge.
Although several studies have used decision aids to evaluate internal controls (e.g., Eining
and Dorr, 1991; Hornik and Ruf, 1997; Mascha, 2001; Murphy and Yetmar, 1996; Odom and
Dorr, 1995; Pei et al., 1994; Smedley and Sutton, 2004; Steinbart and Accola, 1994), the
focus has been on investigating knowledge transfer from expert systems to novice users. As
such, these studies provide limited insights into the use of decision aids during this audit
phase.
Although all of the five audit firms provide their auditors with decision support for assessing
a client's control environment, the decision support provided and the way in which it is
incorporated within the firms' audit support systems differ in terms of whether a client's file is
tailored by the user (i.e., manually) or by the system (i.e., automatically). These design
differences, which are associated with the extent of system restrictiveness embedded within a
firm's system (Table 1), provide opportunities for future research. For example, does manual versus
automatic tailoring affect auditors' knowledge and their ability to develop an appropriate audit
strategy when the auditor does not have access to the firm's audit support system? If so, what
are the implications when auditors review workpapers? O'Leary (2003) found that staff and
manager level auditors make different environmental assessments which result in differences in
user inputs. Does this mean that reviewers who have been trained on a system that tailors a
client's file automatically are less able to identify incorrect inputs by subordinates and
inappropriate audit plan tailoring? Or do they over-rely on the system's automatic tailoring?
Manual audit file tailoring also has risks which present opportunities for future research. For
example, are manually tailored audit programs less efficient than automatically tailored
C. Dowling, S. Leech / International Journal of Accounting Information Systems 8 (2007) 92–116 105
programs? And are risks adequately covered or do auditors tend to adopt a ‘same as last year’
approach?
Several studies were identified that used decision aids similar to those used in practice
during the control and substantive testing phase. Two studies (Kachelmeier and Messier, 1990;
Messier et al., 2001) that examined the use of sample selection tools found that auditors ‘work
backwards’ by adjusting inputs to obtain desired outputs. The current study identified
differences in the way decision aids assist auditors in determining an appropriate sample size. For
example, the tool used at Firm A is a manual set of tables from which auditors select the
appropriate sample size from suggested “ranges” depending on the extent of “comfort”
required. In contrast, the tools provided in two systems classified as having a “high” level of
system restrictiveness are computerized. For example, at Firm E, auditors input the population
size, key items, materiality level, control and inherent risk into the sample selection tool. The
tool uses these inputs to recommend the exact sample size. Auditors at this firm also have
access to a tool which assists in the extrapolation of the sample test results to the population.
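To make the contrast between the two designs concrete, the following sketch illustrates a computerized tool of the kind described for Firm E, which takes the stated inputs and returns an exact recommended sample size. The confidence-factor mapping and the formula are a generic, textbook-style monetary-unit-sampling calculation assumed for illustration only; they are not the firm's actual proprietary logic, and all names are hypothetical.

```python
import math

# Hypothetical confidence factors for the assessed combined risk level.
# Real tools would use firm-specific statistical tables.
CONFIDENCE_FACTORS = {"low": 1.2, "moderate": 1.9, "high": 3.0}

def combined_risk(control_risk: str, inherent_risk: str) -> str:
    """Collapse two 'low'/'high' risk assessments into one combined level."""
    if control_risk == "high" and inherent_risk == "high":
        return "high"
    if control_risk == "low" and inherent_risk == "low":
        return "low"
    return "moderate"

def recommend_sample_size(population_value: float, key_item_value: float,
                          materiality: float, control_risk: str,
                          inherent_risk: str) -> int:
    """Return an exact recommended sample size for the residual population.

    Key items are examined 100%, so their value is removed from the
    population before sampling (a common audit-sampling convention).
    The result equals residual / (materiality / factor), i.e. the residual
    population divided by the sampling interval.
    """
    residual = population_value - key_item_value
    factor = CONFIDENCE_FACTORS[combined_risk(control_risk, inherent_risk)]
    return max(1, math.ceil(residual * factor / materiality))

# Higher assessed risk yields a larger sample from the same inputs.
print(recommend_sample_size(1_000_000, 100_000, 50_000, "high", "high"))  # 54
print(recommend_sample_size(1_000_000, 100_000, 50_000, "low", "low"))    # 22
```

A “low restrictiveness” analogue of Firm A's manual tables would instead present the auditor with a range of acceptable sizes and leave the final choice, and hence the judgment, to the user.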
Future research could investigate whether the type of output (“range” vs “exact sample size”)
impacts the propensity of auditors to “work backwards” or the ease with which they can “work
backwards”. A manual sample size calculation was used in Kachelmeier and Messier (1990)
and Messier et al. (2001). The open nature of this type of decision aid increases a user's ability
to understand how the aid works and thus how inputs can be altered to achieve the desired
output. In contrast, many of the sample size selection tools used in practice are electronic and
provide an exact recommended sample size. This raises the question of whether such a tool
prevents or merely hinders a user from “working backwards”, for example because the calculation is
‘hidden’ from the user, or whether it simply takes a user longer to understand the
relationship between the aid's inputs and its output well enough to acquire the knowledge to
“work backwards”.
Decision tools used during analytical procedures have been extensively investigated in the
prior literature. The focus has been on understanding auditor reliance on these aids. Studies
have found that auditors over-rely on erroneous or insufficient decision aid outputs (Anderson
et al., 2003; Swinney, 1999). One reason for this is that auditors perceive intelligent decision
aids to be ‘experts’ that incorporate audit firm knowledge and therefore they must be relied
upon (Swinney, 1999). However, other studies have found that under-reliance is also a concern.
Factors influencing under- and over-reliance on decision aids have been examined extensively in
the decision aid literature. Although not all of these studies have used auditors or examined
auditor judgments, the overall findings are potentially relevant for audit decision aid research.
Reliance on a decision aid has been found to be influenced by incentives (Ashton, 1990),
feedback (Ashton, 1990), disclosure of the aid's predictive ability (Kaplan et al., 2001), the
decision aid's face validity (Ashton, 1990), the user's locus of control (Kaplan et al., 2001),
and the provision of explanations (Ye and Johnson, 1995). Reliance on decision aids is an
important issue for audit firms, because underutilization of decision aids increases perceptions
of auditor liability (Anderson et al., 1995).
Studies have also identified that the design of an aid is important. Mueller and Anderson
(2002) examined the way goal framing influences how auditors use a decision aid. They found
that when an “inclusion” goal frame is built within the instructions, auditors select a smaller set
of relevant explanations from a provided list than auditors instructed with an “exclusion” goal
frame. The current study has documented significant differences in the prescriptiveness of the
decision support embedded within the five firms' audit support systems. Future research could
investigate if such design differences and the extent of system restrictiveness embedded within
an audit support system are associated with differences in perceptions of auditor liability if
auditors underutilize these aids. The outcomes of such research have the potential to provide
important information to audit firms as they continually develop and refine their firm's audit
support systems.
Although this study identified that test banks are used extensively in practice, no previously
published study was identified that had examined their use. In practice, test banks are used
manually (where auditors select from lists of possible tests) or they are automated (where the test
bank recommends the relevant tests based upon an auditor's responses to standard questions). The
test banks also differ in relation to the way in which the information is organized, with some test
banks grouped according to industry and/or processes. There are potentially many research
opportunities related to test banks. For instance, does the design/grouping of tests, such as by
industry, process or class of transactions, influence audit efficiency and/or effectiveness? Are there
audit efficiency and/or effectiveness implications when using a manual test bank? And, compared
with an automated test bank, does a manual test bank increase the likelihood that the tests
will be the ‘same as last year’?
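The manual/automated distinction described above can be sketched in a few lines. In the automated case the system maps an auditor's responses (here, industry and in-scope processes) directly to recommended tests; in the manual case it merely returns the grouped lists and leaves selection to the auditor. All questions, tests, and groupings below are invented for illustration; real test banks are firm-specific.

```python
# Hypothetical test bank grouped by (industry, process), as described
# in the text. Contents are illustrative, not any firm's actual tests.
TEST_BANK = {
    ("manufacturing", "inventory"): [
        "Observe physical inventory count",
        "Test standard cost build-up",
    ],
    ("manufacturing", "revenue"): [
        "Test shipping cut-off",
        "Vouch sales to dispatch notes",
    ],
    ("banking", "revenue"): [
        "Recalculate interest income on a sample of loans",
    ],
}

def recommend_tests(industry: str, processes: list[str]) -> list[str]:
    """Automated tailoring: responses to standard questions drive
    the recommended tests; nothing is left to ad hoc selection."""
    tests = []
    for process in processes:
        tests.extend(TEST_BANK.get((industry, process), []))
    return tests

def manual_test_bank(industry: str) -> dict:
    """Manual tailoring: return every grouped list for the industry and
    leave selection to the auditor (the 'same as last year' risk)."""
    return {proc: tests for (ind, proc), tests in TEST_BANK.items()
            if ind == industry}

print(recommend_tests("manufacturing", ["inventory"]))
```

The research questions above amount to asking whether the grouping key chosen for `TEST_BANK` (industry, process, or class of transactions) and the choice between the two functions have efficiency and effectiveness consequences.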
Two prior studies which have examined decision aids used during the audit review and
opinion formulation phases were identified. Both of these studies examined the use of
compliance checklists. Lowe and Reckers (2000) found that the design of a compliance
checklist alters an auditor's thought processes to be more aligned with how the decision would
be evaluated ex post. The design has a framing effect, which can decrease the potential legal
liability implications in the case of audit failure (Lowe and Reckers, 2000). Murphy (1990)
examined whether the design of an expert system, developed by a Big 6 audit firm to assess
compliance with an accounting standard, impacted learning. The results indicated that learning
is highest when a semi-structured, non-automated aid is used compared with an expert system
(with or without explanations). No study was identified which has investigated the use of red
flags during the audit review process. However, our study found that these decision aids are
used extensively in practice. Future research could investigate the implications of using red
flags during the review process. For example, what are the consequences of closing a red flag
before it is actioned? Can reviewers identify inappropriate red flag/review note closures? Or do
they over-rely on the automatic red flag check? Prior studies have found that the review
process is more complex when completed electronically (Bedard et al., 2005b; Rosman et al.,
2005) and that reviewers identify fewer seeded errors in an electronic environment (Bible et al.,
2005). There are future research opportunities to investigate whether red flags or other
decision aids can be designed to decrease the complexity of completing an audit review
electronically.
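The “automatic red flag check” discussed above can be thought of as a completion gate at sign-off, which is exactly where the over-reliance risk arises: the system verifies that flags are closed, not that the underlying issues were actioned. The data model and sign-off rule below are hypothetical illustrations of that behaviour, not any firm's actual system.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """A red flag raised in the electronic workpapers (hypothetical model)."""
    note: str
    closed: bool = False
    actioned: bool = False  # was the underlying issue actually addressed?

def can_sign_off(flags: list[Flag]) -> bool:
    """The automatic check: every red flag must be closed before sign-off.
    Note it inspects only the 'closed' status."""
    return all(f.closed for f in flags)

def unactioned_closures(flags: list[Flag]) -> list[Flag]:
    """Flags closed without being actioned: the closures a reviewer who
    over-relies on the automatic check would fail to catch."""
    return [f for f in flags if f.closed and not f.actioned]

flags = [
    Flag("sample size override", closed=True, actioned=False),
    Flag("unusual journal entry", closed=True, actioned=True),
]
print(can_sign_off(flags))              # the gate passes...
print(len(unactioned_closures(flags)))  # ...despite an unactioned closure
```

The design question raised in the text is whether the gate could instead be built on something closer to `actioned`, reducing the complexity the reviewer must handle manually.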
The highly integrated nature of audit support systems has led researchers to begin inves-
tigating the use of the entire systems. Banker et al. (2005) examined the economic impact of
investing in audit support systems and found that firms benefit from efficiency gains driven by
4. Conclusion
The aim of this study was to support future research in audit support systems and audit
decision aids by documenting and comparing the types of systems and aids deployed in
five major international audit firms. Significant differences between these firms' audit
support systems were identified. The concept of system restrictiveness (Silver, 1988a,b,
1990) was combined with Cushing and Loebbecke's (1986) definition of audit structure to
develop a definition of audit support system restrictiveness. Five firms' audit support
systems were then classified as having either a “high” or “low” level of audit support
system restrictiveness. The audit support systems classified in the “high” category are
viewed by their audit firm as an enforcer of their firm's audit methodology. Use of these
systems is mandatory for all clients, client files are automatically tailored, the work
components across the audit phases are automatically integrated and high levels of automated
decision support are embedded within these systems. Although all firms view their system as
an enabler of the audit process, the systems classified as having “low” audit support system
restrictiveness are not viewed as an enforcer of their firm's audit methodology. Use of the
systems in the “low” restrictiveness category is voluntary for small engagements, auditors
manually tailor the client files, the work components predominantly require manual
integration, and manual checklists are the main type of decision support embedded in these
systems. The differences in audit support systems and the types of decision support embedded
within these systems provide many opportunities for future research, many of which were
identified in this study by mapping the extant decision aid literature to the types of decision
support used in practice.
Several limitations of this study need to be taken into account. The use of semi-structured
interviews with the partners and managers of the five firms limited the amount of information
collected on the audit support systems and decision aids. Data were not collected from users.
Even so, we are confident that we gained access to the best experts in the firms since the
partner/managers were members of their audit firm's global or national audit technology
development group or responsible for deployment and staff training related to audit
technologies. The purpose was to gain an insight into the audit support systems and the
types of decision support embedded within them, not whether or how these systems are used. A
second limitation is that the study was limited to five firms. However, they are the biggest
international audit firms and were selected because they developed and deployed their own
audit support systems and decision aids. Finally, the mapping of prior studies to current audit
decision aids has its limitations because many of the studies used decision aids with task
content that varies from current aids.
Given the above caveats, this study provides crucial guidance for future research in audit
support systems and the types of decision support embedded in these systems. The
comparison of the audit support systems reported in this study clearly indicates significant
differences in the types of systems used in practice. This finding has important implications
for past and future research. Although several studies have investigated audit support systems
(e.g., Banker et al., 2005; Bedard et al., 2005a; O'Donnell and Schultz, 2003; Rosman et al.,
2005), with the exception of O'Donnell and Schultz (2003), these studies have investigated a
single audit support system. The fact that audit firms deploy different kinds of audit support
systems limits the generalizability of studies investigating a single support system. This is not
to imply that insights cannot be gained from studies investigating a single support system,
and depending upon the research question, controlling for system design could be vital.
However, as the body of literature investigating audit support systems grows, it is important
that links are made between the studies so that generalizable conclusions can be drawn. The
audit support systems used by the five international audit firms in this study provide a common
basis from which prior and future studies investigating audit support systems can be
classified. This study documented that significant differences in the role of audit support
systems and the types of decision support embedded within these systems are correlated with
whether an audit support system is classified as having a “high” or “low” level of system
restrictiveness. This definition and the criteria developed in this study can be used to
classify the types of audit support systems investigated in prior and future studies, as a basis
for generalizing research findings and for enhancing the transferability of a study's findings to
audit practice.
Acknowledgements
Appendix A. Summary of prior audit decision aid studies (⁎ = significant independent variable)

Anderson et al. (1995). Design: experiment (between subject). Decision aid: analytical diagnostic tool that recommends procedures to investigate an unusual ratio. Participants: 45 judges. Task: assess auditor liability; financial statements overstated by 9% due to an inventory error. Dependent variable: perceived liability of auditor. Independent variables: full utilization of aid vs no decision aid; full utilization vs under utilization of decision aid⁎. Findings: under utilization of a decision aid increases perceptions of auditor liability.

Anderson et al. (2003). Design: experiment (between subject). Decision aid: participants provided with decision aid output. Participants: 51 Big 5 auditors. Task: provided with explanations for unusual fluctuations from either the firm's decision aid or a client, together with information that all explanations were insufficient. Dependent variable: sufficiency of provided explanations. Independent variables: source of explanation (aid or client)⁎; prior ratio analysis experience⁎. Findings: insufficient explanations are accepted from a decision aid more than from a client, suggesting auditors rely upon an aid's output.

Ashton (1990). Design: experiment (between subject). Decision aid: output of aid provided; users informed the aid was accurate in 50% of cases. Participants: 182 KPMG auditors. Task: bond rating predictions for 16 firms. Dependent variable: classification accuracy. Independent variables: financial incentives [tournament] (FI); performance feedback (PF); justification requirement (JR); presence of decision aid (DA)⁎; FI × DA⁎; PF × DA⁎; JR × DA⁎; FI × PF × DA; face validity⁎. Findings: external factors influence use of an aid; average performance decreased in the presence of an apparently valid aid when tournament financial incentives, feedback and justification were required; lower face validity decreases reliance.

Bedard and Graham (2002). Design: experiment (matched pair). Decision aid: client risk assessment tool. Participants: 46 auditors. Task: identify risk factors, assess risk levels, and plan audit tests; substantive testing performed. Dependent variable: number of negative risk factors identified. Independent variables: orientation of decision aid (negative [emphasis on client risk factors] or positive [no emphasis]); risk level of client (high or low); orientation × risk level⁎; prior engagement experience with the client⁎. Findings: a higher number of risk factors are identified when a negatively orientated decision aid is used for high-risk clients.

Boatsman et al. (1997). Design: experiment (within and between subject). Decision aid: fraud assessment tool that assessed the level of management fraud. Participants: 118 senior auditors. Task: audit planning. Dependent variable: comparison of the initial assessment (no aid) with the final decision (made after the aid's recommendation was provided). Independent variable: decision consequences (severity of monetary payoffs/penalties)⁎. Findings: higher non-reliance or shifting away from the aid's recommendation in the presence of decision consequences.

[Remaining appendix entries (pp. 110–113 of the original) were not recoverable from the source.]
References
Abdolmohammadi MJ. Decision support and expert systems in auditing: a review and research directions. Account Bus
Res 1987:173–85 [Spring].
Abdolmohammadi MJ. A comprehensive taxonomy of audit task structure, professional rank and decision aids for
behavioral research. Behav Res Account 1999;11:51–92.
Abdolmohammadi MJ, Usoff C. A longitudinal study of applicable decision aids for detailed tasks in a financial audit. Int J
Intell Syst Account Financ Manag 2001;10:139–54.
Anderson JC, Jennings M, Kaplan SE, Reckers PM. The effect of using diagnostic decision aids for analytical procedures
on judges' liability judgments. J Account Public Policy 1995;14:33–62.
Anderson JC, Moreno KK, Mueller J. The effect of client vs decision aid as a source of explanations upon auditors'
sufficiency judgments: a research note. Behav Res Account 2003;15:1–11.
Arnold V, Sutton SG. The theory of technology dominance: understanding the impact of intelligent decision aids on decision
makers' judgments. Adv Account Behav Res 1998;1:175–94.
Ashton RH. Pressure and performance in accounting decision settings: paradoxical effects of incentives, feedback and
justification. J Account Res 1990;28:148–80.
Ashton RH, Willingham JJ. Using and evaluating audit decision aids. Auditing Symposium IX: Proc of the 1988 Touche
Ross/University of Kansas Symposium on Auditing Problems. Kansas: School of Business, University of Kansas; 1988.
Banker RD, Chang H, Kao Y. Impact of information technology on public accounting firm productivity. J Inf Syst
2005;16:209–22.
Bedard JC, Graham LE. The effects of decision aid orientation on risk factor identification and audit test planning. Audit J
Pract Theory 2002;21:40–56.
Bedard JC, Jackson C, Ettredge ML, Johnstone KM. The effect of training on auditors' acceptance of an electronic work
system. Int J Account Inf Syst 2003;4:227–50.
Bedard JC, Ettredge ML, Jackson C, Johnstone KM. Electronic media in the audit work process: perceptions, intentions
and use. Working paper. Boston: Northeastern University; 2005a.
Bedard JC, Ettredge ML, Johnstone KM. Adopting electronic workpaper systems: task analysis, transition and learning
issues, and auditor resistance. Working Paper. Boston: Northeastern University; 2005b.
Bell TB, Bedard JC, Johnstone KM, Smith EF. KRisk℠: a computerized decision aid for client acceptance and continuance
risk assessments. Audit J Pract Theory 2002;21:97–113.
Bible L, Graham L, Rosman A. The effect of electronic audit environments on performance. J Account Audit Financ
2005;20:27–42.
Boatsman JR, Moeckel C, Pei BKW. The effects of decision consequences on auditors' reliance on decision aids in audit
planning. Organ Behav Hum Decis Process 1997;71:211–47.
Bonner SE, Libby R, Nelson MW. Using decision aids to improve auditors' conditional probability judgments. Account
Rev 1996;71:221–40.
Bowrin AR. Review and synthesis of audit structure literature. J Account Lit 1998;17:40–71.
Brown CE, Murphy DS. The use of auditing expert systems in public accounting. J Inf Syst 1990:63–72 [Fall].
Brown CE, Phillips ME. Expert systems for internal auditing. Intern Aud 1991:23–8 [August].
Butler SA. Application of a decision aid in the judgmental evaluation of substantive test of details samples. J Account Res
1985;23:513–26.
Connell NAD. Expert systems in accountancy: a review of some recent applications. Account Bus Res 1987;17:221–33.
Cushing BE, Loebbecke JK. Comparison of audit methodologies of large accounting firms. USA: American Accounting
Association; 1986.
Davis FD, Bagozzi RP, Warshaw PR. User acceptance of computer technology: a comparison of two theoretical models.
Manage Sci 1989;35:982–1003.
DeSanctis G, Poole MS. Capturing the complexity in advanced technology use: adaptive structuration theory. Organ Sci
1994;5:121–47.
Eining MM, Dorr PB. The impact of expert system usage on experiential learning in an auditing setting. J Inf Syst 1991:1–16.
Eining MM, Jones DR, Loebbecke JK. Reliance on decision aids: an examination of auditors' assessment of management
fraud. Audit J Pract Theory 1997;16:1–19.
Elliott RK, Kielich JA. Expert systems for accountants. J Account 1985:126–34 [September].
Graham LE, Damens J, Van Ness G. Developing Risk Advisor℠: an expert system for risk identification. Audit J Pract
Theory 1991;10:69–96.
Hornik S, Ruf BM. Expert systems usage and knowledge acquisition: an empirical assessment of analogical reasoning in
the evaluation of internal controls. J Inf Syst 1997;11:57–74.
Jennings M, Kneer DC, Reckers PM. The significance of audit decision aids and precase jurists' attitudes on perceptions of
audit firm culpability and liability. Contemp Account Res 1993;9:489–507.
Johnson VE, Kaplan SE. Auditors' decision-aided probability assessments: an analysis of the effects of list length and
response format. J Inf Syst 1996;10:87–101.
Kachelmeier SJ, Messier WF. An investigation of the influence of a nonstatistical decision aid on auditor sample size
decisions. Account Rev 1990;65:209–26.
Kaplan SE, Reneau JH, Whitecotton S. The effects of predictive ability information, locus of control and decision maker
involvement on decision aid reliance. J Behav Decis Mak 2001;14:35–50.
Lowe DJ, Reckers PM. The use of foresight decision aids in auditors' judgments. Behav Res Account 2000;12:97–118.
Lowe DJ, Reckers PM, Whitecotton S. The effects of decision-aid use and reliability on jurors' evaluations of auditor
liability. Account Rev 2002;77:185–202.
Lynch A, Gomaa M. Understanding the potential impact of information technology on the susceptibility of organizations
to fraudulent employee behaviour. Int J Account Inf Syst 2003;4:295–308.
Mascha MF. The effect of task complexity and expert system type on acquisition of procedural knowledge — some new
evidence. Int J Account Inf Syst 2001;2:103–24.
Messier WF, Hansen JV. Expert systems in auditing: the state of the art. Audit J Pract Theory 1987;7:94–105.
Messier WF, Kachelmeier SJ, Jensen KL. An experimental assessment of recent professional developments in
nonstatistical audit sampling guidance. Audit J Pract Theory 2001;20:81–96.
Mueller JM, Anderson JC. Decision aids for generating analytical review alternatives: the impact of goal framing and
audit-risk level. Behav Res Account 2002;14:157–77.
Murphy D, Brown CE. The use of advanced information technology in audit planning. Int J Intell Syst Account Financ
Manag 1992;1:187–93.
Murphy DS. Expert system use and the development of expertise in auditing: a preliminary investigation. J Inf Syst
1990:19–35 [Fall].
Murphy DS, Yetmar SA. Auditor evidence evaluation: expert systems as credible sources. Behav Inf Technol
1996;15:14–23.
Odom MD, Dorr PB. The impact of elaboration-based expert system interfaces on de-skilling: an epistemological issue.
J Inf Syst 1995;9:1–17.
O'Donnell E, Schultz J. The influence of business-process-focused audit support software on analytical procedures
judgment. Audit J Pract Theory 2003;22:265–79.
O'Leary DE. Auditor environmental assessments. Int J Account Inf Syst 2003;4:275–94.
Pei BKW, Steinbart PJ, Reneau JH. The effects of judgment strategy and prompting on using rule-based expert systems for
knowledge transfer. J Inf Syst 1994;8:21–42.
Pincus KV. The efficacy of a red flags questionnaire for assessing the possibility of fraud. Account Organ Soc
1989;14:153–63.
Prawitt DF. Staffing assignments for judgment-oriented audit tasks: the effects of structured audit technology and
environment. Account Rev 1995;70:443–65.
Rose JM. Behavioral decision aid research: decision aid use and effects. In: Arnold V, Sutton SG, editors. Researching
accounting as an information systems discipline. Florida, USA: AAA; 2002.
Rosman A, Bible L, Graham LE, Biggs S. Investigating auditor adaptation to changing complexity in task environments: the
case of electronic workpapers. Paper presented at American Accounting Association annual conference, August 7–10,
San Francisco; 2005.
Salisbury D, Stollak MJ. Process restricted AST: an assessment of group support systems appropriation and meeting
outcomes using participant perceptions. Proc of the twentieth international conference on information systems.
Charlotte, North Carolina; 1999.
Shpilberg D, Graham LE. Developing ExperTAX℠: an expert system for corporate tax accrual and planning. Audit J Pract
Theory 1986;6:75–94.
Silver MS. Descriptive analysis for computer-based decision support. Oper Res 1988a;36:904–16.
Silver MS. User perceptions of decision support system restrictiveness: an experiment. J Manage Inf Syst
1988b;5:51–65.
Silver MS. Decision support systems: directed and undirected change. Inf Syst Res 1990;1:47–70.
Smedley GA, Sutton SG. Explanation provision in knowledge-based systems: a theory-driven approach for knowledge
transfer design. J Emerg Technol Account 2004;1:41–61.
Steinbart PJ, Accola WL. The effects of explanation type and user involvement on learning from and satisfaction with expert
systems. J Inf Syst 1994;8:1–17.
Swinney L. Consideration of the social context of auditors' reliance on expert system output during evaluation of loan loss
reserves. Int J Intell Syst Account Financ Manag 1999;8:199–213.
Wheeler BC, Valacich JS. Facilitation, GSS, and training as sources of process restrictiveness and guidance for structured
group decision making: an empirical assessment. Inf Syst Res 1996;7:429–50.
Ye LR, Johnson PE. The impact of explanation facilities on user acceptance of expert system advice. MIS Quart
1995:157–72 [June].