
What methodological issues are raised by researching the effectiveness of technology-based learning?

MA in Education and Technology in Clinical Practice
Duncan Boyd
Student ID No:

Education and Technology in Clinical Practice: perspectives and issues
Supervisor: Caroline Pelletier

Submission Date:
Word Count: 5065

Introduction

In the modern clinical world, technological progress has revolutionized practice in all areas of work. Almost all areas of clinical practice are now enmeshed with mechanical and information-technology-based systems. In parallel with these developments in patient care, medical education is also increasingly embracing the opportunities offered by new technologies (Issenberg, 2005). Within medical education, these technologies are hoped to improve the learning experience, but they are also recognized to have a degree of political value. There are increasing demands on clinicians' time, driven by measures such as the European Working Time Directive (Banford and Banford, 2008) and a national shortage of trained staff (Torjesen, 2009), coupled with a political will for medicine to be perceived to be delivered by specialists rather than doctors in training (memorandum by the Royal College of Physicians, 2010). In view of this, there is a will to find ways to deliver better, more clinically relevant medical training within a shorter working day. For some, technology is seen to offer a solution to this challenging demand for more training in less time. In this essay, I aim in particular to consider the impact of simulation-based training. This encompasses a disparate variety of technologies: some as basic as an inert mannequin with a trainer shouting out instructions, and some fully immersive, high-fidelity learning environments where clinical staff can enact situations almost identical to those they could experience in real life, with the simulation technology feeding back real-time and realistic responses to interventions (Andreatta, 2010). From my point of view, however, these retain the basic similarity that they all try to provide the clinician with a safe environment in which to re-enact work-based practices in a non-clinical situation.
Although my argument may be generalized to all simulation training, my own experience, which informs this essay, stems mainly from the use of resuscitation training scenarios in which expert tutors guide a group of multidisciplinary professionals through the techniques of resuscitation with the assistance of an interactive mannequin. This is often supported by complex computer algorithms driving the model's responses to provide what Roger Kneebone (2003) would describe as a hybrid simulation, involving both computer software and the tactile and/or haptic feedback of a mannequin. Like all simulation, this is seen as a way of mimicking the traditional mode of medical education, in which skills would be learned in an apprenticeship model, by creating an environment where work practices can be enacted, repeated and analysed in a more controlled way and without the possibility of patients suffering from any errors. A common theme in descriptions of the benefits of simulation technology is that it is considered a place where the learner can "fail safely" (Andreatta, 2010; Clark, 2010; Scalese, 1999; Okuda, 2009). It is hoped that the use of such technologies will form a platform on which to build clinical skills rapidly, skills which can then be easily transferred to real-world environments.

In an area where millions can be spent developing training techniques, and where the consequences of inefficient training could be damaging for both patients and the profession, there is a need for robust evidence that the profession is addressing its training needs in the correct manner. However, when it comes to education and learning, evidence can be difficult to acquire and to process. There are numerous methodological challenges involved in educational research. These can be magnified when the research is done in a clinical situation, where there may be a degree of conflict between the positivist, reductive tradition of evidence-based medicine and the more interpretivist, holistic empirical traditions often used in educational research. Moreover, medical research is often focused on a very clear and easily measurable end point, such as survival or improvement in a quantitative aspect of a patient's health (van Hove, 2010; Miskovic, 2010). These are very appropriate gold standards for any clinical intervention, including educational tools, but consideration must be given to their validity and to the outcomes often used as surrogates for them. In this essay, I will particularly explore the concept of defining end points while researching simulation-based learning strategies and how these end points relate to the traditions underlying research. I will examine the appropriate unit of analysis in such research and challenge the utility of using individual skill as a surrogate for success in educational interventions. Ultimately, I hope to demonstrate that clinical success is predicated on team interactions and that more empirical emphasis should therefore be placed on the educational impact of simulation on communities rather than individuals.

The Problem with Positivism

Much medical clinical research is predicated on a tradition of positivist, evidence-based medicine. Within educational research, this is often manifest in empirical designs in which a quantitative question is examined surrounding the introduction of a new simulation model; for instance, asking if trainees find one simulated model more realistic than an alternative (Mishra, 2010). This is a model with which clinicians are familiar, but it is one which poses numerous challenges in educational research. The routine gold-standard outcome of patient survival or cure is difficult to relate to educational interventions and is poorly suited to educational research (Okuda, 2009). In most cases, a large change in patient outcome would not be anticipated from a new simulation technology, as clinicians are hopefully already being trained to a reasonable degree of competence with pre-existing training strategies. Moreover, given the huge number of competing factors which influence patient care, it would be extremely difficult to control for all other variables and isolate the effectiveness of an educational strategy. Both of these problems would require studies of huge patient numbers in order to detect any meaningful outcome based on patient epidemiology. Such studies have not been performed and are likely to be impossible to perform, as simulation training is usually highly individualised, based on the skills of the tutors and the groups being trained (McGaghie, 2009). As such, large-scale studies are problematic due to variations between the training environments offered at different sites, and most research is based on relatively small numbers of individuals (Issenberg, 2005). Surrogates for patient improvement are usually used as outcome measurements, and the commonest tends to be to look for improvement in clinicians' knowledge and skills (Okuda, 2009). I will later challenge this as a meaningful surrogate measure but, even accepting its validity, there are methodological difficulties in its use. It is extremely difficult to create a valid quantitative assessment of a clinician's skill at even a simple procedure in a real situation (Kogan, 2009; van Hove, 2009), and it is unclear if demonstrating an isolated skill in a simulation scenario is indicative of the ability to transfer this skill to clinical reality (Smith, 2010). A variety of different measures have been used to attempt to assess individuals' skills. McGaghie (2009) collates and criticizes these:

1) Observational ratings of trainee performance (during simulation): this is open to many sources of bias.

2) Trainee responses: these are post-course assessments, usually testing theoretical knowledge learned during simulation training. This tends to show more reliability than observational ratings, but it is unclear how well it relates to real clinical situations.

3) Haptic feedback from the simulation: this can assess the trainee's performance during simulation but is a technology currently in its infancy and may be best suited to a fairly narrow field of training simulations.

Self-reported questionnaires asking if trainees felt that simulation training was beneficial are also commonly reported in the literature.
However, here, even more than with observational ratings, we are clearly blurring the lines of the positivist traditions of this research. Such questionnaires do not provide data based on a measurable external reality but instead describe students' instinctive and emotional responses to a lived experience. As such, they may be better considered within an interpretivist framework, and inelegantly fitting them to this model of research does little to provide a valid endpoint.

The Issue with Interpretivism

The consideration of context within a training environment is crucial, especially as much simulation-based training involves educating multidisciplinary teams to respond to emergencies as a well-functioning unit. Social interactions within the group are as vital as the individual skills of any given member. Equally, with professionals from different backgrounds often training together, there are complex power relationships within the group (Clark, 2010). When dealing with observational ratings or self-reported questionnaires, the social context of the environment is crucial. All of these fall largely outwith the scope of most positivist research but lend themselves well to the interpretivist tradition. However, though this tradition seems to better encapsulate the complexity of simulation training, it can struggle to reach conclusions which can be usefully generalised. The literature may describe the interactions within a group and how a training environment develops them, but it can lack a clear direction as to how to improve the outcome of the training. At worst, it can appear as simply a description of current practice without any insight into the validity of the technique (Ferroli, 2009). What's more, the same problem of defining a valid endpoint occurs once more. An interpretivist position may be better able to analyse the biases contributing towards trainees' self-reported ratings, but this does not necessarily mean that these ratings have any relevance to success in later, real clinical situations.

The Value of the Individual as a Unit of Analysis

Up until now, I have accepted the commonly made assumption that improving individual clinicians' skills will result in improved patient outcomes. However, as I have described, patient care is a much more complex gestalt, depending on team working, social organisation, good management and a multitude of other factors (Liberating the NHS: Developing the Healthcare Workforce, Department of Health Consultation, 2011). Indeed, much resuscitation-based simulation training is deliberately based on building teamwork skills. As such, it is questionable whether the focus on the learned skills of the individual clinician is a useful end point in educational research. Within socioculturalist research, there is a controversy over what best constitutes a unit of analysis or, in other words, the most basic subject that research is directed at analyzing. There is a perception that, as Matusov (2007) puts it, the individual is "a traditional but wrong unit of analysis", and multiple other options have been presented by a variety of researchers. Within this field, there are recognized to be extremes of reductionism, looking at the minutiae of a scenario without paying full attention to the embedded context, and of holism, trying to analyse complex processes as a whole and running the risk of obscuring the importance of the component parts. A useful unit of analysis must be something which can be analysed by study but which must also provide information which can be generalized to the system as a whole (Babbie, 2010; Matusov, 2007). It can be questioned whether the study of the individual's response to simulation training is truly a generalisable property of the training as a whole. Emergent phenomena (commonly explained by the example of a bowl of water having the property of wetness, a property which could not be applied to an individual molecule within it) are known to exist within teams (Rothwell, 2005).
Even if no individual clinician found simulation to improve their individual skills, it is plausible that, by training groups to all use the same skill sets, the improved group co-operation could improve patient outcome. Conversely, the benefit derived from skills gained by a trained individual could be swamped by the problems caused by dysfunctional group working in the clinical environment. The individual clinician is always embedded in a team, and the solutions to problems may often be found in the systems surrounding individuals, not within the individuals' skills themselves. To paraphrase Vygotsky's work on disabilities (1993), the inability to speak English would be an insurmountable problem for a French clinician trying to work in Britain but would not be an issue at all for him practicing in his native France. Equally, if clinical education aims to improve patient outcomes, then any analysis of its success must take account of its ability to change and improve the systems of practice surrounding the individuals who have been trained in it. It is well recognized that medicine today relies on extremely complex interactions between multidisciplinary teams and that no one individual can possibly hold all the skills required to deal with all patient needs. Moreover, in addition to pure technical skill, it is known that factors relating to communication, empathy, team working, leadership and a myriad of other more subtle skills also contribute greatly towards outcomes (Department of Health, 2003; Ingram, 1999). As such, it would seem that the clinical team is at least as valid a unit of analysis as the individual clinician. Of course, the opposite extreme to reductionism is by no means perfect either. Holistic units of analysis tend to broaden the scope of research and widen the variables involved. However, this leads to the problem of knowing when to stop. It is impossible to manage a study examining everything in a system, and any artificial limitations often simply create increased awareness of the influencing factors outwith your field of study.
Matusov describes how Engeström has progressively broadened his area of interest from activities in 1988, through the activity system in 1999, to activity systems in 2004. A fine line needs to be drawn between a reductionist approach, where too much information is lost, and a holistic approach, where so much information is considered that it cannot be processed. If I contend that the individual is not the most useful unit of analysis in simulation-based educational research, I need to find a way of clarifying what broader term is useful and how it can best be measured. In doing this, I need to consider simulation less as a training system to impart individual skills and instead look at it as a practice of work, relating how the activities enacted within it contribute to its educational success.

Social Theories

Looking at workplace-based learning from the perspective of social anthropology has created a movement that tends to look at networks of shared experiences rather than at individuals. From this perspective, workplaces are composed of groups who share tasks. Learning may be viewed as how individuals are integrated into these networks, rather than the more behaviourist view we have mainly been discussing, of specific facts and skills being acquired by individuals (Staddon, 2001). There are a wide variety of models of networks to be considered within this framework but, for the purposes of this essay, it is useful to first consider Lave and Wenger's view of communities of practice, which they described as follows: "Communities of practice are formed by people who engage in a process of collective learning in a shared domain of human endeavour" (Wenger, 2007). These are considered networks of people whose jobs and roles share a common background and purpose. Crucially, the network will have certain set characteristics which can be used to define its limits. For an oversimplified example, the community of practice of operating theatre staff practice their work within an operating theatre. They will have certain conventions and traditions which are shared within the network. The network will also have a degree of self-perpetuation, with the individuals involved in it recognizing their membership and reinforcing it with social interaction (Lave and Wenger, 1991). Lave and Wenger sought to understand how these communities became established and how newcomers were able to join the community. They described a system of legitimate peripheral participation whereby newcomers entering an established community of practice would be able to participate in low-level activities within the community and then build up skills through the active mentorship of more senior colleagues. Throughout this process, knowledge of the traditions of the community would be established, and the newcomers would ultimately become part of the established community.
This relates to Vygotsky's classic works on social development, where he argues that most learning stems from the internalization of social phenomena; that education is best viewed as students interacting with what he terms More Knowledgeable Others and, through the social process of combined working, integrating the other's skills into their own (Vygotsky, 1978). In some ways, simulation could be seen as an attempt to accelerate this process. Where a new skill or skill set needs to be developed, rather than waiting for this to slowly seep through multiple pre-established communities, simulation learning could be viewed as creating a new community built around artificially selected traditions. The simulation training itself generates a community of simulated practice, such that the groups who undergo training should leave with a new shared experience. It is well recognized that simulation training works best when led by a group of experts in the field to guide students, and these experts could be perceived as the More Knowledgeable Others described by Vygotsky (McGaghie, 2009; Issenberg, 2005; Kneebone, 2005; Dieckmann, 2009). Within this model, the instructors are seen as the mentors in the community, and the trainees are taught the traditions and histories outlined in the learning package. For instance, within the community of neonatal life support, it becomes a tradition that the baby is dried thoroughly at birth before any further resuscitation is started (Resuscitation Council UK, 2006). These traditions are then shared throughout multiple simulated training scenarios and brought back into the workplace until there develops a community of practice based around the taught ideal of neonatal resuscitation.

Lave and Wenger were mainly interested in the observational description of work practices; however, by introducing the concept that such communities may be targeted towards a specific goal, I begin to move more towards the activity theory principally developed by Engeström (2000). This adds, amongst other things, the component of an object to the practice. It draws its roots from Russian psychological theories describing the activities of organisms as being driven by needs. Many actions were described which meet these needs (at a simple level, you eat an apple because you are hungry); however, later researchers such as Leontev observed that many actions do not immediately service a need. Instead, they proposed that these actions may be part of a larger network of actions which, only when viewed as a whole, services a need (Leontev, 1978). For instance, someone performs a job in order to acquire money in order to be able to go and buy an apple to service his hunger. These action networks are described as activities. They can occur at multiple levels of complexity and may involve only the actions of the individual or, especially in the work of Engeström, may involve the actions of others combining to meet a shared need. Within this philosophical tradition, communities of practice could be seen as action networks or activities. Activity theory then relates these activities to an object and attempts to describe how the activity achieves the object. For instance, the activity of newborn resuscitation may be related to the object of making the baby survive or be healthy. Moreover, unlike the work surrounding communities of practice, activity theory also describes how an activity system can be altered or optimized to better achieve the object, or even to achieve an entirely new object. Engeström envisages means by which so-called Boundary Crossing Laboratories allow groups to meet to refocus objectives and devise the best strategies to meet them (Engeström, 2000, 2001).
Within activity theory, for these laboratories to operate, the whole working group needs to be involved in the process, agreeing to the object and understanding the changes to be made to the activity. Obviously, within large organizations, such involvement is rare for staff, but simulation can be seen as a means of disseminating the results of such laboratories. The object of simulation training in medicine (such as better infant survival) is usually well accepted by staff, and the simulated environment allows experts in the new activity to meet and explore its practice with staff. It could be argued that the simulation is a Boundary Crossing Laboratory in microcosm, with the aim of developing a new activity network within the student group. Although the expected activity network is organized prior to the simulation training, and so the situation does not have quite the fluid dynamism which is encapsulated in Engeström's laboratory, I feel that the analogy is valid. Both activity theory and communities of practice deliberately reject individualism and the mentalistic view of training as transferring knowledge from one individual to another. Instead, they view the workplace as a gestalt where learning can only be analysed on a collective basis. This seems, to me, a better model of how the modern health service operates on a practical level. Changes in our practice, mediated through tools such as simulation, may be more effectively viewed through the effects of such changes on communities rather than through the prism of looking for individuals' improvement.

The Communal Effectiveness of Simulation

Of course, the idea that simulation can be viewed as a community of practice in itself creates an interesting dilemma. Within medicine, simulation should not be seen as separate from the general practice of work, and its aim should be to generate skills which can be transferred back into the normal workplace. Roger Kneebone (2005) describes how the artificiality of simulation and close supervision by expert others can put a false value on the skills being taught in that environment. Following from the concepts of situated learning, he raises concerns that the practices learned in simulation may become isolated to the simulator setting and that simulation becomes a hyperreal simulacrum distanced from reality (Baudrillard, 1981). There is a growing trend for simulation to use evolving technology to make the learning experience as realistic as possible, using more viscerally accurate mannequins or more completely representing the response to interventions. This is seen as a way to make the simulation more akin to real working life and hence to make the transfer of skills easier. However, at present, there is little evidence clearly showing an advantage to such so-called high-fidelity models (Bligh and Bleakley, 2006). Indeed, though instinctively appealing, the argument that their use makes skills more easily transferable to real situations ignores the psycho-social aspect of the simulated environment. It is quite probable that the artificiality of the simulated environment is a psychological phenomenon constructed from the abstract knowledge that the situation is unreal. No matter the fidelity of the simulation, the student will always have a different emotional response to real working situations. As such, Kneebone (2003) asserts that the skills and techniques taught in the simulated environment need to be practiced in reality as well in order to be consolidated.
Within my model, the community of practice needs to overlap between simulation and real work, such that the traditions and histories being established are firmly enmeshed in both situations. In order to be effective, the simulated skills, and the communities established around them, must form part of daily life. Once this is established, we must then return to the problem of how to assess effectiveness.

How to Assess Communities?

I have now, hopefully, established that an individualistic, behaviourist approach to learning clinical skills is a very limited viewpoint. There is perhaps more value in viewing the topic from a more constructivist perspective, examining the effects on communities and social networks. In modern times, most academic views of workplace learning adhere to this latter perspective. However, as I outlined at the start of this essay, most assessments of the effectiveness of simulation are focused on the individual and on the mentalistic transfer of knowledge (Farrow and Norman, 2003; McGaghie, 2009). This is, I suspect, simply because the individual is an easier unit of analysis than the group. Matusov (2007) describes the overuse of reductive units as being analogous to a man only searching for his lost wallet under a street lamp, not because he thinks it is there but because it is the easiest place to search. If we accept that social theories are more useful descriptions of the process of learning, we must also seek a way to broaden the unit of analysis to find outcomes relating to communities. Within the social sciences, there are numerous research methodologies adapted for looking at such broad pictures. These are often based in the interpretivist tradition and involve researchers spending significant time with the communities involved, paying attention to the dynamics of groups and to how these are created, shaped and changed by experience (Greenhalgh, 2009). However, these methodologies are not perfect. They are labour-intensive and time-consuming to perform, which is a particular problem in an area where the drive of technological progress means that educational simulators change and develop very rapidly (Kneebone, 2003). Moreover, such studies are often highly descriptive about processes but can find it difficult to determine if an observed situation is objectively beneficial or not. Again, we come back to the problem of end points. If we accept that improved patient well-being should be the ultimate end point of any clinical education, but also accept that this is an end point which cannot be meaningfully correlated with individual educational strategies, then we must find valid surrogates for it. I have already rejected the idea that the transfer of individual skills is a valid surrogate and have suggested that improvements in communities of practice would be more valid. How, then, to assess the communities? It could be argued that a cohesive and mutually supportive community functions well together, and so an interpretivist analysis of the stability of social networks following the introduction of simulator packages could have validity in terms of describing their effectiveness.
This seems weak to me, however, as it is unclear how it relates directly to patient outcomes. Moreover, on a practical level, the descriptive tone of such research often seems less strident and convincing than the quantitative claims more commonly seen in the clinical literature. Similarly, some studies have been performed using videotaped sessions and objectively defined criteria of what constitutes good team work. A team of experts would watch a clinical scenario and score the socialisation skills on display within the team (Okuda, 2009). This would also generate quantitative data that is simple to analyse. As before, though, it is difficult to demonstrate that this technique provides outcomes which are relevant to patient wellbeing. Another option, which may ameliorate this issue, is to look at outcome measures based on peer review. Within medical assessment in the last five years, there has been an emphasis on developing tools for measuring the skills of clinicians. Of these, the one with the most empirical support is Multi-Source Feedback, whereby work colleagues are asked to assess individuals' skills and attitudes. It has been shown to be particularly powerful at providing a quantifiable description of clinical and interpersonal skill in a variety of settings (Archer, 2005; Archer, 2010; Maylett and Riboldi, 2007; Joshi, 2004). This would appear to be a potentially powerful tool in assessing the development of communities around simulation training. Questionnaires are already routinely used asking how individuals feel that simulation has improved their skills (McGaghie, 2009), and it would seem very easy to adapt these to ask groups to describe how their skills have improved; how they feel their colleagues have developed; and whether they feel more supported and confident as a team. This would allow a better connection to patient outcomes, in that experienced clinicians would be assessing whether they think they have improved as a group. It would also allow a simpler quantitative data set to be created, with a more objective and demonstrable concept of improvement than the descriptive studies described above. Of course, such an outcome measure is open to bias and is, at base, not far removed from the individual assessments I have previously dismissed. However, I feel that the emphasis on networks is of vital importance in understanding how clinical work is improved. Moreover, as Matusov (2007) warns, in the search for holistic units of analysis there is the danger of trying to spread the research so thin that it loses all focus. Matusov describes Barbara Rogoff's (1995) concept of planes of analysis as a way of trying to avoid this pitfall. In this model, no matter what unit of analysis is chosen, the context must be considered. Taking my suggestion of modified Multi-Source Feedback within a group: though this may be the primary mode of information gathering, with the unit of analysis defined as group function in a given situation, at all times consideration must also be given to the individuals within the group and to the wider context of the hospital in which they function.
Other research may focus on these as primary units of analysis, but all studies are incomplete without at least envisaging the influences of each different level upon the whole.

Conclusion

Within the field of clinical educational research, there is often a scarcity of evidence for the effectiveness of interventions and, where evidence exists, it is often inconclusive. This has serious repercussions for a rapidly developing field on which the future skills of clinicians depend. I believe that one of the major issues with this research is that it is often unclear what outcomes, if any, are being studied and, indeed, which outcomes should be studied. I contend that there is a predilection within the published research to look at the individual as the primary unit of analysis, which I find out of place in an area where social anthropology has led the field in modern theories of how learning is contextualized in the workplace. I believe that more emphasis needs to be placed on communities as the unit of analysis and that research into the improved functioning of social networks would be a more valid marker of the effectiveness of simulation training. In describing Rogoff's model of planes of analysis, Matusov likens it to radiotherapy, whereby multiple different strands of research with different foci and backgrounds converge on a topic to illuminate it most fully. Within simulation, I fear that we are overexposed to the individual at the expense of the wider group and, much like radiotherapy, this asymmetrical exposure is not only inefficient in its goal, it may also be actively harmful to our understanding.

Sources

Andreatta, P.B., Bullough, A.S. and Marzano, D. (2010) Simulation and team training. Clinical Obstetrics and Gynecology, Vol 53(3), pp.532-544.
Archer, C.A., Norcini, J. and Davies, H.A. (2005) Use of SPRAT for peer review of paediatricians in training. BMJ, Vol 330, pp.1251-1253.
Archer, C.A., McGraw, M. and Davies, H.A. (2010) Assuring validity of multisource feedback in a national programme. Archives of Disease in Childhood, Vol 95, pp.330-335.
Babbie, E. The Practice of Social Research. Wadsworth Cengage Learning. ISBN 9780495598411.
Baudrillard, J. (1981) Simulacra and Simulation. Éditions Galilée (original French). Translation via: http://files.meetup.com/1392983/Baudrillard,%20Jean%20%20Simulacra%20And%20Simulation.pdf
Bligh, J. and Bleakley, A. (2006) Distributing menus to hungry learners: can learning by simulation become simulation of learning? Medical Teacher, Vol 28(7), pp.606-613.
Clark, E.A.S., Fisher, J., Arafeh, J. and Druzin, M. (2010) Team training/simulation. Clinical Obstetrics and Gynecology, Vol 53(1), pp.265-277.
Department of Health (2003) Practitioners with Special Interests: Implementing a scheme for allied health professionals with special interests.
Department of Health (2011) Liberating the NHS: Developing the healthcare workforce. A consultation on proposals. Accessed via: http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_122933.pdf
Dieckmann, P., Gaba, D. and Rall, M. (2007) Deepening the theoretical foundations of patient simulation as a social practice. Simulation in Healthcare, Vol 2(3), pp.161-163.
Dieckmann, P. (2009) Simulation for education, training and research. Accessed via: http://www.goinginternational.eu/pdfs/fachartikel/dieckmann_rall_oestergaard.pdf
Dieckmann, P. (2009) Using Simulation for Education, Training and Research. Psychology Books. ISBN 9783899675399.
Engeström, Y. (2000) Activity theory as a framework for analyzing and redesigning work. Ergonomics, Vol 43(7), pp.960-974.
Engeström, Y. and Hasu, M. (2000) Measurement in action: An activity-theoretical perspective on producer-user interaction. International Journal of Human-Computer Studies, Vol 53(1), pp.61-89.
Engeström, Y. (2001) Expansive learning at work: towards an activity-theoretical reconceptualisation. Journal of Education and Work, Vol 14(1), pp.133-156.
Farrow, R. and Norman, G. (2003) The effectiveness of PBL: the debate continues. Is meta-analysis helpful? Medical Education, Vol 37(12), pp.1131-1132.
Ferroli, P., Tringali, G., Acerbi, F., Aquino, D., Franzini, A. and Broggi, G. (2010) Brain surgery in a stereoscopic virtual reality environment: A single institution's experience with 100 cases. Neurosurgery, Vol 67 (ONS Suppl 1), ons79-ons84.
Greenhalgh, T., Potts, H.W.W., Wong, G., Bark, P. and Swinglehurst, D. (2009) Tensions and paradoxes in electronic patient record research: A systematic literature review using the meta-narrative method. The Milbank Quarterly, Vol 87(4), pp.729-788.
Issenberg, S.B., McGaghie, W.C., Petrusa, E.R., Gordon, D.L. and Scalese, R.J. (2005) Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher, Vol 27(1), pp.10-28.
Joshi, R., Ling, F. and Jaeger, J. (2004) Assessment of a 360-degree instrument to evaluate residents' competency in interpersonal and communication skills. Academic Medicine, Vol 79(5), pp.458-463.
Kneebone, R. (2003) Simulation in surgical training: educational issues and practical implications. Medical Education, Vol 37(3), pp.267-277.
Kneebone, R. (2005) Evaluating clinical simulations for learning procedural skills: A theory-based approach. Academic Medicine, Vol 80(6), pp.549-553.
Kogan, J.R., Holmboe, E.S. and Hauer, K.E. (2009) Tools for direct observation and assessment of clinical skills of medical trainees. JAMA, Vol 302(12), pp.1316-1326.
Lang, M. (2007) Aspects of Boundary Crossing in Education: Summaries and sources of selected literature. Crossnet Project.
Lave, J. and Wenger, E. (1996) Situated Learning: Legitimate Peripheral Participation. Cambridge University Press. ISBN 0521423740.
Leont'ev, A.N. (1978) Activity, Consciousness and Personality. Prentice Hall. Accessed via: http://marxists.org/archive/leontev/works/1978/index.htm
Matusov, E. (2007) In search of the appropriate unit of analysis for sociocultural research. Culture and Psychology, Vol 13(3), pp.307-333.
Maylett, T.M. and Riboldi, J. (2007) Using 360° Feedback to Predict Performance. Training + Development, September, pp.48-52.
McGaghie, W.C., Issenberg, S.B., Petrusa, E.R. and Scalese, R.J. (2010) A critical review of simulation-based medical education research: 2003-2009. Medical Education, Vol 44, pp.50-63.
Memorandum by the Royal College of Physicians (PEX 16), September 2010. Accessed via: http://www.publications.parliament.uk/pa/cm201011/cmselect/cmhealth/512/512vw13.htm
Mergel, B. (1998) Instructional design and learning theory. Accessed via: http://www.usask.ca/education/coursework/802papers/mergel/brenda.htm
Mishra, S., Kurien, A., Ganpule, A., Muthu, V., Sabnis, R. and Desai, M. (2010) Percutaneous renal access training: content validation comparison between a live porcine and a virtual reality (VR) simulation model. BJU International, Vol 106, pp.1753-1756.
Miskovic, D., Wyles, S., Ni, M., Darzi, A.W. and Hanna, G.B. (2010) Systematic review on mentoring and simulation in laparoscopic colorectal surgery. Annals of Surgery, Vol 252(6), pp.943-951.
Okuda, Y., Bryson, E.O., DeMaria Jr, S., Jacobson, L., Quinones, J., Shen, B. and Levine, A.I. (2009) The utility of simulation in medical education: what is the evidence? Mount Sinai Journal of Medicine, Vol 76, pp.330-343.
Potter, T.B. and Palmer, R.G. (2003) 360-degree assessment in a multidisciplinary team setting. Rheumatology, Vol 42(11), pp.1404-1407.
Resuscitation Council (UK) (2006) Newborn Life Support. 2nd Edition. Resuscitation Council (UK). ISBN 1190381216X.
Rogoff, B. (1993) Observing sociocultural activity on three planes. In: Wertsch, J.V., del Río, P. and Alvarez, A. (eds.) Sociocultural Studies of Mind. New York: Cambridge University Press, pp.139-163.
Rothwell, W. and Sullivan, R. (2005) Practicing Organizational Development: A Guide for Consultants. Pfeiffer Books. ISBN 0787962384.
Scalese, R.J., Obeso, V.T. and Issenberg, S.B. (2007) Simulation technology for skills training and competency assessment in medical education. Journal of General Internal Medicine, Vol 23, Suppl 1, pp.46-49.
Smith, C.C., Huang, G.C., Newman, L.R., Clardy, P.F., Feller-Kopman, D., Cho, M., Ennacheril, T. and Schwartzstein, R.M. (2010) Simulation training and its effect on long-term resident performance in central venous catheterization. Simulation in Healthcare, Vol 5, pp.146-151.
Staddon, J.E.R. (2001) The New Behaviorism: Mind, Mechanism and Society. Psychology Press. ISBN 1841690147.
Torjesen, I. (2009) Doctor shortages. BMJ Careers, 28 February 2009.
Van Hove, P.D., Tuijthof, G.J.M., Verdaasdonk, E.G.G., Stassen, L.P.S. and Dankelman, J. (2010) Objective assessment of technical surgical skills. British Journal of Surgery, Vol 97, pp.972-987.
Vygotsky, L.S. (1978) Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
Wenger, E. (2007) Communities of practice: A brief introduction. Accessed via: http://www.ewenger.com/theory/