Feeding into Feedback
Alright y'all....
I have no energy left. I am just copying and pasting a discussion I spent so much time on. I feel that if I do anything, I have to give it my all. So even though I am now four weeks behind on assignments, I MUST get my voice outside of the constraints of the originating course. If I somehow find the time and energy, I will return to moisten and season the overcooked abomination. Okay, ENJOY!
1.
· What information do you want to collect?
For my level 1 evaluation, I will want to collect my student-technicians' reactions to a couple of forms of digital service literature. I will explain what I mean by this here as well as below. As my students work their way through the automotive program, I will introduce them to literature which details a myriad of important automotive service information. Thus, said repositories for this content have been dubbed "service literature." The two sources of literature that I will be collecting audience reactions on are called "ProDemand" and "Identifix." These are digital forms of service literature and may potentially only be accessible with an internet connection.
The data I will collect concerning these service literatures will be my student-technicians' perceived value of ProDemand and Identifix. I will collect this data by asking how valuable each literature is when locating and utilizing a few different categories of information. These categories are: locating component specifications [considered to be low in complexity], locating service procedures [e.g., how to fix or remove and replace something — generally considered moderately complex], and locating diagnostic information [high complexity]. I will then pair these data points with questions about the perceived quality of the information if and when technicians locate what they need.
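To make the collection scheme concrete, here is a minimal sketch of how responses for the two literatures across the three complexity categories might be tabulated. All names and sample ratings here are hypothetical, purely illustrative; a real evaluation would pull responses from the LMS survey export rather than hard-coding them.

```python
# Sketch: averaging perceived-value ratings per literature/category pair.
# Sample data is invented for illustration only.
from statistics import mean

LITERATURES = ["ProDemand", "Identifix"]
# Ordered low -> moderate -> high complexity, per the categories above.
CATEGORIES = ["specifications", "service procedures", "diagnostics"]

# Each response: (literature, category, perceived value on a 1-5 scale).
responses = [
    ("ProDemand", "specifications", 5),
    ("ProDemand", "diagnostics", 3),
    ("Identifix", "specifications", 4),
    ("Identifix", "diagnostics", 5),
]

def average_value(responses, literature, category):
    """Mean perceived-value rating for one literature/category pair, or None."""
    ratings = [r for lit, cat, r in responses if lit == literature and cat == category]
    return mean(ratings) if ratings else None

for lit in LITERATURES:
    for cat in CATEGORIES:
        avg = average_value(responses, lit, cat)
        if avg is not None:
            print(f"{lit} / {cat}: {avg:.1f}")
```

The same structure extends naturally to the paired perceived-quality question: add a fourth field to each response tuple and average it the same way.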
· Why do you want to collect this information?
The topic of the data is extremely relevant to the industry and the program because technicians MUST Read The *Freaking Manual before any service/diagnosis/procedure/etc. [RTFM]. Because of this, I want to inform my approach to introducing and implementing technical resources like those listed above. With this, I will consider how much or little focus to place on one literature, both, or neither. This will help students because I hope to curb aversion to some literatures and over-reliance on others. Each has its value, and it is best that technicians be proficient in more than just one form of service literature.
· Who will use this information?
I, as the course designer and facilitator, will be using the information obtained from this level one evaluation. In addition, any other instructors of the automotive program will use the information if they deem it useful or necessary.
· When will you do your evaluation?
The evaluation will take place during the third week of the first four-week course of the automotive degree [Auto-112 is the first four-week session]. At this juncture, technicians will have been introduced to the service literatures by me and will have some experience utilizing both. Students will very likely have developed biases and formed decisions on where to build their literature habits.
· How will you distribute your evaluation? (e.g., paper, online survey, email, other?)
For initial distribution, I will deliver the evaluation online via a Learning Management System [LMS], e.g., Blackboard Ultra or Canvas. Students with access to the LMS will have their accounts linked with their institutionally issued student email [CWI]. This will be used as a recovery method and/or delivery method, depending on LMS functionality. Distribution of the resulting data will take place over a database, with access to said database emailed out on a need-to-know basis.
2. What were the two or three most important ideas you learned about Level 1 evaluations? Why were these ideas important to you? How do these ideas relate to your current use (or experience) with Level 1 evaluations? Level 1 evaluations are the most frequently used type of evaluation. Why do you think that is? Are there times when it is inappropriate to use a Level 1 evaluation? If so, when?
1)
What: If you, the facilitator of the training and/or evaluation, won't use the responses or data, don't ask the question!
Why: It is important to be efficient with your own time, as well as students' time, as a facilitator. Asking relevant questions whose answers will actually be used preserves the value of the responses for those who review the data from these sorts of evaluations. Lastly, for those who must tabulate or even analyze the data, asking only questions that will be used makes that work easier.
How: Student workload and tight timeframes aggressively constrain what students assign worth and time to. As for myself, I may become too caught up in the creation and ideation process and overproduce prompts. [I am sure this surprises none of you.]
Why-how: I think this is the case because of how enrollment, income, and community college culture shape the student experience. For me, I am inexperienced but highly driven, which, at times, can be a detriment.
When bad: One situation in which a level 1 evaluation is not appropriate, or does not offer what is needed, is when skill transfer on the job must be assessed.
2)
What: It is important to customize generic, or "off-the-shelf," evaluations in order to better suit the needs of yourself/students/org/etc. I already knew not to just throw generic forms at students. However, there was a lot of nuance I have now picked up on. I will detail below:
Why: This is important because well-suited evaluations increase student buy-in and possibly even the validity of the data. Just as importantly, if I were to better utilize off-the-shelf evaluations, I would not have to start from scratch when creating my own evaluations, as I have already done so many times.
How: These ideas relate to my current use because I implement both learner and instructor focused evaluations into my courses. When I do so, I often start from scratch. Additionally, the end-of-course evaluations in which data is shared with me are extremely generic and do not change as students move through the program.
Why-how: I think that this is the case because of data-tabulation needs at an institution of this size. The current infrastructure allows only a certain kind of data tabulation, and changing it would require too much investment and time for the return it might offer.
When bad: Another scenario in which level one evaluation would be inappropriate would be when a client needs to analyze or determine the return on investment of a training program.
3)
What: Lastly, the third important idea that I learned was that evaluations function much more effectively when they are linked with, and build off of, each other.
Why: This idea is important to me because I want myself and my organization to have a more complex and well-rounded understanding of the operations that we run.
How: These ideas relate to my current use of evaluations because I use multiple evaluations, yet they are not linked or really compounding.
Why-how: I think this is the case because both my skills with relationship-building and with outcome-alignment could use improving.
When bad: A final example of when level one evaluations are not appropriate is when the same method or format of evaluation is applied across multiple, especially dissimilar, courses or training programs.
3. The Kirkpatrick article for this week compares an instructor-centered and a learner-centered approach to each evaluation level. What is the difference between instructor-centered and learner-centered evaluations? What are the advantages of a more learner-centered approach to the levels of evaluation?
Oh my, the assignment is not done yet...
Instructor-centered evaluations involve the perception of the instructor from the perspective of the participants of the training program. That perception affects what value the participants place on the information the instructor has delivered. Therefore, instructor-centered evaluation homes in on how the facilitator captivates, influences, and involves their audience.
Learner-centered evaluations focus on students performing a self-audit of their experience, thoughts, and feelings. In particular, students reflect on their level of confidence with any number of topics, skills, and the like.
With instructors, the focus is on how they are performing on the job. With learners, the focus is on how they are progressing toward knowledge and skill acquisition in their learning environment.
The advantage of learner-centered evaluation is that it leaves much more room for improvement by the learners. It also makes space for the instructor to pay better and more attention to where their learners are in their learning journey.
...aaaaaaaaaaaaaaand done.