This posting is the third in a series on distinguishing instructional designers using competencies, statements, and similar tools. In the first posting, “Labels Do Matter”: A Brief Survey of Instructional Design Job Posting Titles, I presented a litany of “labels” that I found on sites such as HigherEdJobs, The Chronicle, Monster, Indeed, and LinkedIn (not linked, as it requires authentication) when searching with the term “instructional.” In the second posting, “Labels Do Matter”: A Brief Look at IBSTPI Competencies for Instructional Designers, I discussed the background of the International Board of Standards for Training, Performance and Instruction (IBSTPI) and some competencies that can be used to differentiate instructional designers.
The title for this posting is derived from an article I remember reading a few years ago, Labels Do Matter, by my colleague Patrick Lowenthal. Practicing instructional designers do want to advance, yet what do these titles really mean? Do you agree with me that perhaps these labels don’t really matter? Is it the responsibilities and projects that count?
Considering the nature of the instructional designers in the department where I work (we all graduated from instructional design and technology (IDT) graduate programs, if not doctoral programs), is IBSTPI the best tool for us? Many professionals in our field practice as instructional designers but may lack formal training. I also ask because I came across an article, Performance Improvement Competencies for Instructional Technologists (2004), by Dr. James D. Klein and Dr. Eric Fox. In it they discuss evaluating graduates of these programs against some generic statements from the Human Performance Technology (HPT) model in an effort to improve course offerings at those institutions. I find many of these statements to be of great value to our needs, particularly:
- Distinguish between performance problems requiring instructional solutions and those requiring non-instructional solutions.
- Conduct a performance analysis for a specific situation to identify how and where performance needs to change (performance gap).
- Evaluate a performance improvement intervention to determine whether or not it solved the performance problem.
- Conduct a cause analysis for a specific situation to identify factors that contribute to the performance gap.
- Select a range of possible performance interventions that would best meet the need(s) revealed by the performance and cause analyses.
- Assess the value of a performance improvement solution in terms of return on investment, attitudes of workers involved, client feedback, etc.
- Identify and implement procedures and/or systems to support and maintain performance improvement interventions.
The HPT model is also heavily utilized by the International Society for Performance Improvement (ISPI), which has developed its own Certified Performance Technologist (CPT) certification. When I reviewed the CPT standards, I found the list above to be very similar, and perhaps a simplified version, but I encourage you to take a look and find the outlined skills, examples, and other details that meet the needs of your organization. If you are creating your own evaluation tool, I don’t see why you couldn’t pick and choose, eliminating the items that are not relevant to your organization and its needs.
Let’s take a look at ISPI CPT Standard 5: Determine Need or Opportunity, as it is very similar to the second generic HPT statement above.
Competent practitioners design and conduct investigations to find out the difference between the current and the desired performances (the performance gap). They:
- Facilitate discussions with clients to clarify intent of the investigation.
- Determine the scope of the investigation.
- Choose the appropriate method of analysis.
- Decide on how to best get the data.
- Gather the data.
- Analyze the data.
- Determine the magnitude of the gap.
- Report the findings with recommendations.
- Interpret the findings for the client.
Examples: In collaboration with your client, you might:
- Identify the objectives of the analysis, who to involve, what data you require, how best to get the data, how to best use the data, who will use the data, and when you want to begin and end.
- Interview stakeholders, observe job processes, and examine existing documentation.
- Determine which needs or opportunities are worth pursuing further.
These supporting examples and deliverables would obviously be very useful in creating such assessment tools. Imagine an evaluation of instructional designers based on the above competencies that included samples of work, feedback from clients and managers, written reflections from the designer, and so on. Our learners are often assessed using these forms of evaluation, but are we?
Another angle that I hope to blog about is the use of the supporting principles of the eLearning Manifesto, a recent movement in eLearning started by some key members, or “instigators,” of our field: Michael Allen, Julie Dirksen, Clark Quinn, and Will Thalheimer. Currently there are 22 guiding principles that should be considered when developing “Serious eLearning.” What an amazing group of professionals in the field of instructional design and eLearning, banding together to cause some “disruption” to the current state of eLearning. I wonder if they ever thought their principles would be used in this way, but perhaps they did…