The need for needs assessment in continuing medical education
     1 Building T-13, McMaster University, 1280 Main St W, Hamilton, ON, Canada L8S 4K1

    Correspondence to: G R Norman norman@mcmaster.ca

    Introduction

    Maintenance of professional competence is a critical component of professionalism. However, traditional methods, which rely on individual self assessment, are inadequate, while legislated recertification programmes are difficult to individualise and can be perceived as draconian. Better methods of standardised individual needs assessment are required. We suggest some possible strategies.

    Background

    Like all professions, medicine is granted professional autonomy by society on the assumption that its practitioners will be competent on entry into practice and will maintain competence for as long as they practise. Traditionally, it has been the responsibility of the individual practitioner to do whatever is necessary to remain competent.

    In the past, maintaining one's competence was not problematic because relevant knowledge accreted slowly. Today, however, without a programme of active learning no doctor can hope to remain competent for more than a few years after graduation. One response to this challenge has been for education programmes, particularly problem based ones such as our own, to focus on developing self assessment and self directed learning skills in order to equip graduates to maintain competence. The evidence, however, while not abundant, suggests that this was a quixotic quest. The evidence that graduates of problem based learning are better at "keeping up" is weak.1 2 Moreover, many studies have shown that self assessment is far more difficult than we thought.3 Finally, self assessment skills do not emerge after graduation as a consequence of the demands of changing practice. Sibley et al observed that practitioners tend to pursue education on topics they are already good at while avoiding areas in which they are deficient and where there is room for improvement.4 Self monitoring programmes such as the maintenance of competence (MOCOMP) programme,5 which leave practitioners to their own devices, may therefore be hopelessly optimistic. According to Norcini, all attempts at voluntary recertification initiated by members of the American Board of Medical Specialties have failed and been replaced by mandatory procedures.6

    As a consequence, much effort has been invested in more formal approaches to maintenance of competence. Several models exist—for example, formal peer review and, where necessary, remediation7 or formal written recertification examinations.6 All are expensive. Peer review processes, which are necessarily individualised, are difficult to implement on the scale needed to ensure adequate monitoring in large jurisdictions. Further, as Norcini points out,6 standardised audits can be applied only to relatively common conditions, yet much of specialist practice is devoted to diagnosis and management of rare, but clinically important, conditions. Examinations have an economy of scale, but their relevance to actual competence or individual practice needs is often challenged, and they remain costly to create and maintain. Finally, both of these strategies tend to operate under the long arm of the licensing law and are therefore unlikely to be adopted voluntarily.

    Summary points

    Traditional approaches to continuing education that rely on self assessment and self learning are likely to be ineffective

    Centralised methods, such as regular relicensure or recertification examinations, are difficult to tailor to the characteristics of individual practices and are perceived as threatening

    Innovative strategies for needs assessment are needed

    Several strategies, including practice audits and standardised assessments, are reviewed

    What is required is something between the anarchy of self assessment and the "Big Brother" approaches: strategies that would be seen as supportive and individualised yet objective. The recent focus in continuing medical education (CME) on assessment of learning needs shows promise.

    Why assess needs in continuing medical education?

    The ultimate goal of continuing medical education is to improve outcomes for patients by changing doctors' practice behaviours. Evidence from systematic reviews shows that programmes in continuing medical education that are predicated on well conducted needs assessments are effective in changing doctors' behaviours.8 In line with these findings, a shift has occurred recently in thinking about continuing medical education, its purpose, and the keys to its effectiveness.9 Rather than being passive recipients of offerings of continuing medical education, often provided by enterprises with non-educational goals, doctors are encouraged to choose actively what to learn and how to learn it, and to reflect on the implications of what has been learned. To meet the needs of this self directed learning approach, formats of continuing medical education have broadened to include workshops; small, practice based study groups; individualised programmes delivered on CD; interactive computer programs; websites; workbooks; journals; practice guidelines; peer consulting; and "academic detailing." This proliferation of formats, however, still represents only a diversity of methods for delivering knowledge. Without grounding and justification of the content through a specific needs assessment, offerings are unlikely to be effective, regardless of the presentation format.

    Learning needs differ from educational needs

    A fundamental gap remains between the learning needs of the individual practitioner and the priority educational needs identified by bodies for continuing medical education when planning course offerings. The two are not synonymous. Learning needs are personal, specific, and identified by the individual learner through practice experience, reflection, questioning, practice audits, self assessment tests, peer review, and other sources.10 Although, in theory, doctors should use these methods to create self directed learning plans, there is no evidence that this actually happens for most doctors. In contrast, educational needs can be defined as the interests or perceived needs of a whole target audience and can be identified through surveys, focus groups, analysis of regional practice patterns, and evaluations of CME programmes. They are necessarily more general than learning needs and may be directed as much by participants' curiosity and academic interests as by well defined individual learning needs. An example of an educational needs assessment is a discrepancy or "gap" analysis, in which current practice behaviour is compared with an ideal or accepted standard of practice.11 In contrast, an exploration of the issues that created the gap, in individual cases, would identify the learning needs.
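
    To make the distinction concrete, a discrepancy or "gap" analysis reduces computationally to comparing audited practice rates with an accepted standard. The Python sketch below illustrates the idea; the indicator names, audited rates, and targets are invented for illustration and are not drawn from the article or from any real guideline.

        # Hypothetical gap analysis: compare audited practice rates against
        # guideline targets. Indicator names and values are invented.

        # Audited rates from a (fictional) chart review, as proportions.
        audited = {"hba1c_tested_past_year": 0.71, "retinal_exam_past_year": 0.48}

        # Accepted standards, e.g. from a practice guideline (fictional values).
        targets = {"hba1c_tested_past_year": 0.90, "retinal_exam_past_year": 0.80}

        for indicator, target in targets.items():
            gap = target - audited[indicator]
            if gap > 0:
                print(f"{indicator}: current {audited[indicator]:.0%}, "
                      f"target {target:.0%}, gap {gap:.0%}")

    Exploring why each gap exists in individual cases is what would turn this educational needs assessment into a statement of learning needs.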

    What is the way forward?

    If doctors cannot assess their own learning needs reliably and large scale surveys can assess only educational needs, what is the way forward? We need to increase the objectivity of learning needs assessments while making the process simple enough for doctors to carry out regularly. Little research directed at improving the technology of needs assessment has been reported.

    Proved strategies

    One approach is more structured practice audit. Perol et al showed that doctors who kept an office visit diary of learning issues were able to generate more specific learning objectives than those who did not.12 This observation matters because, as Bergus et al found, the structure of clinical questions is key to obtaining useful answers from consultants.13 Kiefe et al found that practice audit feedback, combined with comparison to a group of exemplary peers and accompanied by "achievable" benchmarks, improved the quality of diabetes care in a cohort of family doctors, internists, and endocrinologists.14
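
    As we understand Kiefe et al's "achievable benchmark," it is, in outline, the pooled performance of the best performing providers who together care for at least 10% of the eligible patients, with small panels damped during ranking by an adjusted fraction. The Python sketch below is a simplified reconstruction under those assumptions, with invented data; it is illustrative, not a statement of their exact algorithm.

        # Simplified sketch of an "achievable benchmark of care" style
        # calculation: the benchmark is the pooled rate of the top ranked
        # providers who together account for >= 10% of all patients.
        # All data are invented for illustration.

        providers = [
            # (provider id, patients meeting the indicator, eligible patients)
            ("A", 18, 20), ("B", 30, 40), ("C", 12, 30), ("D", 45, 60), ("E", 5, 25),
        ]

        # Rank by an adjusted fraction, (numerator + 1) / (denominator + 2),
        # which damps the influence of very small panels (an assumption here).
        ranked = sorted(providers, key=lambda p: (p[1] + 1) / (p[2] + 2), reverse=True)

        total_patients = sum(d for _, _, d in providers)
        num = den = 0
        for _, n, d in ranked:
            num, den = num + n, den + d
            if den >= 0.10 * total_patients:  # stop once 10% of patients covered
                break

        print(f"Achievable benchmark: {num / den:.0%}")

    Feeding an individual doctor's own audited rate back alongside such a benchmark is what distinguishes this approach from comparison with a simple average.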

    Alternatively, better identification of learning needs may be derived from standardised assessment exercises. Cohen et al15 devised an exercise consisting of 10 stations and taking 25 minutes, modelled on the objective structured clinical examination. Their physician assessment in medical practice (PAMP) exercise, which was designed to reflect actual community clinical practice issues, showed acceptable psychometric properties and cost about US$250 (£135; €203) per doctor. This approach can provide rich feedback to doctors about their performance but must be narrowly focused in scope to be affordable and logistically manageable. Global assessments, from which doctors could formulate their learning needs, can be obtained through so called 360 degree appraisals. Mason et al described a pilot study of this approach involving 20 doctors.16 A wide range of comments about the participants was obtained, but the cost, at some five hours per doctor, could prove prohibitive when scaled up to a large hospital.

    Potential strategies

    These needs assessment strategies have been evaluated sufficiently to show promise. We believe that other strategies are also worthy of study. Electronic medical records with the capability to analyse the profile of a practice are one example. With appropriately designed systems, doctors could conduct their own objective analyses of their diagnostic habits and therapeutic patterns. The process could be automated so that profiles on selected clinical problems are generated at preset intervals. Such analyses could be submitted to local academic units for continuing medical education or to a regulatory body as evidence of assessment and for continuing education credits. Where the analyses raise purely informational issues, the practice profiles might be matched electronically to relevant practice guidelines or review publications and the relevant documents sent to the doctor. For unique learning issues, local academic units for continuing medical education might help doctors develop personalised educational plans. Using electronic medical records to generate such analyses has the advantage of efficiency and objectivity, as the analysis is based on all of the relevant patients.
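
    A minimal sketch of what such an automated profile might look like is given below, assuming a flat export of visit records. The field names, diagnoses, and the guideline index are invented for illustration; a real system would have to work with the record system's own schema and coding, and the URLs are placeholders.

        # Hypothetical sketch: build a periodic practice profile from a flat
        # export of visit records and match frequent problems to guideline
        # documents. All field names, codes, and links are invented.
        from collections import Counter

        visits = [
            {"diagnosis": "hypertension", "rx": "thiazide"},
            {"diagnosis": "hypertension", "rx": "beta-blocker"},
            {"diagnosis": "type 2 diabetes", "rx": "metformin"},
        ]

        # Fictional index mapping problems to relevant guidelines or reviews.
        guideline_index = {
            "hypertension": "https://example.org/guidelines/htn",
            "type 2 diabetes": "https://example.org/guidelines/dm2",
        }

        # Profile: how often each problem is seen, and with what treatments.
        profile = Counter(v["diagnosis"] for v in visits)
        for problem, count in profile.most_common():
            treatments = Counter(v["rx"] for v in visits if v["diagnosis"] == problem)
            link = guideline_index.get(problem, "no matched guideline")
            print(f"{problem}: {count} visits; treatments {dict(treatments)}; see {link}")

    Run at preset intervals, such a report could be sent both to the doctor and, as evidence of assessment, to an academic unit or regulatory body.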

    As an extension of Perol et al's practice diary method, doctors could maintain notes on their more interesting or problematic "sentinel" patients. Software tools could be developed to capture essential information about these patients in a structured way and to code the records for later retrieval. A motivation for maintaining such records would be continuing education credits simply for recording patients, with further credits earned by developing and pursuing relevant patient related questions.
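
    The sketch below shows one way such a structured, retrievable sentinel patient note might be represented in Python; the fields and tags are our own invention, not a description of any existing tool.

        # Hypothetical structured "sentinel patient" note with simple tags
        # so records can be retrieved later. Fields and tags are invented.
        from dataclasses import dataclass, field


        @dataclass
        class SentinelNote:
            patient_ref: str          # local pseudonymous reference, not identifying
            summary: str              # what made the case interesting or problematic
            question: str             # the patient related learning question raised
            tags: list[str] = field(default_factory=list)


        notes = [
            SentinelNote("pt-014", "Atypical chest pain, equivocal ECG",
                         "When is a stress test preferable to CT angiography?",
                         tags=["cardiology", "diagnosis"]),
            SentinelNote("pt-027", "Anticoagulation in a frail elderly patient",
                         "How should bleeding risk be weighed here?",
                         tags=["geriatrics", "therapy"]),
        ]

        # Retrieval by tag, e.g. when preparing a learning plan on diagnosis.
        for note in notes:
            if "diagnosis" in note.tags:
                print(note.patient_ref, "-", note.question)

    Well structured questions of this kind are exactly what Bergus et al found to be key to obtaining useful answers from consultants.13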

    Potential strategies to identify learning needs

    Periodic internal audits by using electronic office records

    Individualised audit results compared with current literature or practice guidelines

    Individualised audit results compared with exemplary peers (benchmarking)

    Single issue audit tools developed by local academic units for continuing medical education

    Facilitated notekeeping and reflection around sentinel patients

    Academic units for continuing medical education could create brief, single topic practice audit tools that doctors or their office staff could complete. A profile, again matched to relevant high quality reviews or practice guidelines, could be fed back to the doctor. Continuing education credits could be earned simply by completing the audit tools, with additional credits earned through the follow up exercises. This method might provide an alternative where peer comparison data are not available.

    These approaches represent attempts to make learning needs assessments more objective yet relatively easy and inexpensive to do. There will certainly be other approaches that we have not anticipated. A different role for academic units for continuing medical education is implied. As an adjunct to traditional courses, academic units might shift their efforts to helping doctors assess their learning needs by using valid tools and to developing learning plans. Funding for this might be shared between doctors, their regulatory bodies or specialty societies, and government. In some jurisdictions, the regulatory bodies may wish to maintain only the regulatory role while leaving the educational roles to be handled by the academic units for continuing medical education. In others, regulatory bodies may wish to do both. An approach that respects the practicalities of geography and numbers of doctors while conforming to regulatory statutes would have to be worked out in each jurisdiction.

    The authors have been heavily involved in needs assessment, continuing education, and recertification and relicensure. GRN has developed individual relicensure methods. SIS has been involved for many years in the development of innovative individualised continuing education strategies. MLM is the current assistant dean for continuing education at McMaster.

    Contributors: GRN, SIS, and MLM all contributed individual sections to the writing of the paper.

    Competing interests: None declared.

    References

    Shin JH, Haynes RB, Johnston ME. Effect of problem-based, self-directed undergraduate education on life-long learning. CMAJ 1993;148: 969-76.

    Norman GR, Shannon SI. Effectiveness of instruction in critical appraisal (evidence-based medicine) skills: a critical appraisal. CMAJ 1998;158: 177-81.

    Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med 1991;66: 762-9.

    Sibley JC, Sackett DL, Neufeld V, Gerrard B, Rudnick KV, Fraser W. A randomized trial of continuing medical education. N Engl J Med 1982;306: 511-5.

    Parboosingh JT, Thivierge RL. The maintenance of competence (MOCOMP) programme. Ann R Coll Physicians Surg Can 1993;26: 512-7.

    Norcini JJ. Recertification in the United States. BMJ 1999;319: 1183-5.

    Page GG, Bates J, Dyer SM, Vincent DR, Bordage G, Jacques A, et al. Physician-assessment and physician-enhancement programs in Canada. CMAJ 1995;153: 1723-8.

    Fox RD, Bennett NL. Learning and change: implications for continuing medical education. BMJ 1998;316: 466-9.

    Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance: a systematic review of the effect of continuing medical education strategies. JAMA 1995;274: 700-5.

    Slotnick HB. How doctors learn: physicians' self-directed learning episodes. Acad Med 1999;74: 1106-17.

    Grant J. Learning needs assessment: assessing the need. BMJ 2002;324: 156-9.

    Perol D, Boissel J-P, Broussolle C, Cetre J-C, Stagnara J, Chauvin F. A simple tool to evoke physicians' real training needs. Acad Med 2002;77: 407-10.

    Bergus GR, Randall CS, Sinift SD, Rosenthal DM. Does the structure of clinical questions affect the outcome of curbside consultations with specialty colleagues? Arch Fam Med 2000;9: 541-7.

    Kiefe CI, Allison JJ, Williams OD, Person SD, Weaver MT, Weissman NW. Improving quality improvement using achievable benchmarks for physician feedback: a randomized controlled trial. JAMA 2001;285: 2871-9.

    Cohen R, Amiel GE, Tann M, Shechter A, Weingarten M, Reis S. Performance assessment of community-based physicians: evaluating the reliability and validity of a tool for determining CME needs. Acad Med 2002;77: 1247-54.

    Mason R, Zouita L, Ayers B. Results from a pilot study using portfolio and 360° questionnaire. BMJ 2001;322: 1600.