Using Technology for Intentional Student Evaluation and Program Assessment
Authored by: George E. Steele
2015
"Excellence is never an accident. It is always the result of high intention, sincere effort, and intelligent execution; it represents the wise choice of many alternatives—choice, not chance, determines your destiny.”—Aristotle.
If Aristotle is correct, then we in academic advising can always use assistance to be as intentional as possible in our practice. Advising as teaching is a paradigm that has been advocated by many authors (Lowenstein, 2005; Hemwall & Trachte, 1999, 2003), but intentionally identifying what students need to learn is critical. As Martin (2007) stated, “Learning objectives answer the question: what should students learn through academic advising?” Likewise, Steele (2014) argued for the intentional use of technologies as tools.
Tools are designed for specific uses, and technologies serve advising best when their capabilities align with our advising goals. This article draws on elements of Steele's model for the intentional use of technology and combines them with elements of the curriculum development model Understanding by Design to help advisors achieve better student learning outcomes and improve program assessment. Integrated, these two models offer a conceptual way to reconsider how learning outcomes and program assessment are organized.
Curriculum Development Stages Useful to Assessment
Intentionality is a critical component of the curriculum development model Understanding by Design (UbD), created by McTighe and Wiggins (2005, 2011). In synopsis, their UbD model contains three main stages:
Stage 1: Identify desired results by identifying what students should know, understand, and be able to do.
Stage 2: Determine assessment evidence by identifying how we will know if students have achieved the desired results.
Stage 3: Plan learning experiences and instruction that address the transfer of learning, meaning-making, and acquisition.
McTighe and Wiggins's primary focus has been on K–12 education; however, their model has much to offer college educators. This article outlines the advantages their model offers academic advisors seeking to develop a teaching and learning approach to advising.
Understanding by Design—the Model
Stage 1: Identifying Desired Results
While the UbD model offers more in-depth considerations and guidelines than can be presented here, several of its key points are helpful for our purpose. In its crucial first stage, the UbD model offers insights into identifying desired results: what students should know, understand, and be able to do. The model suggests that each learning outcome be examined in terms of the key understandings it entails and the essential questions related to it. For our purpose, as an example, an essential academic advising goal is engaging in self-assessment to develop an academic and career plan. Understandings and essential questions related to this goal are suggested in the table below.
General goal | Understandings | Essential questions
Students will engage in self-assessment and educational and career planning. | Satisfaction and success in education and career endeavors are more likely when there is compatibility between one's interests, abilities, and values and one's college major and future work. | How can students discover the relationship between their self-knowledge and educational and career possibilities?
By determining which learning outcomes are essential, versus those that are not essential or merely "nice" to know, we are forced to make value decisions about what we believe is critical for our students to learn. For example, which is the more critical learning outcome: having a student complete a successful educational and career planning project, or having a student show up to a scheduled meeting on time? By determining the essential academic advising learning outcomes in this fashion, we can also better align them with our institutional mission and work collaboratively with the campus community to achieve them (Campbell, 2008; Robbins & Zarges, 2011).
Stage 2: Determining Assessment Evidence
The second stage McTighe and Wiggins identify is determining assessment evidence. For our purposes, let's call this student learning evaluation to better align it with the nomenclature of academic advising. At this stage, it is also important to consider which technology tools can best assist with capturing evidence of student learning. This decision should be guided by the levels of learning we seek.
McTighe and Wiggins (2012) present their own taxonomy of terms to describe different levels of understanding. In order of complexity, it begins at the simple level of explaining and moves through interpreting, applying, demonstrating perspective, and displaying empathy, arriving finally at self-knowledge. For them, this last level is a meta-cognitive level of awareness in which one uses productive habits of mind, developed through prior learning, to reflect on the meaning of current learning and experiences. Those familiar with Bloom's taxonomies (Bloom, 1956; Anderson et al., 2001) can see parallels with McTighe and Wiggins's taxonomy in terms of defining learning outcomes. Both taxonomies move from simple to more complex functioning; while McTighe and Wiggins present a single classification taxonomy, Bloom and his colleagues defined three taxonomy domains: cognitive, affective, and psychomotor.
Stage 3: Designing and Planning Activities
Stage three of McTighe and Wiggins's approach is to plan learning experiences and instruction that address the transfer of learning, meaning-making, and acquisition. At this stage, advisors can consider which activities might help achieve the desired learning outcomes. Consider the example from stage one: self-assessment and exploration. This stage might include the following activities:
- online self-assessment tools,
- resources for informational interviews with students and faculty in specific academic programs,
- lists of career options related to particular academic majors, and
- students' reflective writing assignments focused on their academic and career goals.
A quick scan of these suggested activities shows that they include both virtual and traditional experiences. The advantages described here are maintained as long as the evaluation of learning relies on digital technologies. By evaluating digitally, advisors can capitalize on the key to McTighe and Wiggins's (2012) approach, which "encourages teachers and curriculum developers to first think like assessors before designing specific units and lessons" (p. 7).
Applying Steele’s Intentional Use of Technology
The intentional use of technology model identifies three categories of technologies used in advising: service, engagement, and learning (Steele, 2014). The service category highlights tools that provide institutional services through personalized student accounts, such as student information systems and degree audits. The engagement category identifies tools used to inform and build communities with students and others at the institution, such as social media, static websites, and many electronic communication tools. The learning category includes tools such as e-Portfolios, learning management systems, interactive video conferencing, and early alert systems. A key element in this category is a means to digitally evaluate student learning, whether through mastery of content, skills developed, a project produced, a plan submitted, or a demonstration of reflection on a topic or issue. While it seems self-evident that tools in the learning category can assist with achieving learning outcomes, integrating tools from all three categories can create a more robust and enriched digital learning environment, one that helps advisors address both student learning and program assessment.
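To make these categories concrete, the brief Python sketch below illustrates how an advising unit might inventory its tools by Steele's three categories when planning an integrated digital learning environment. The tool names and the helper function are hypothetical illustrations, not part of Steele's model.

```python
# A minimal sketch: inventorying advising technologies by Steele's (2014)
# three categories. The specific tools listed are hypothetical examples.
advising_technologies = {
    "service": ["student information system", "degree audit"],
    "engagement": ["social media", "static advising website", "email and texting tools"],
    "learning": ["learning management system", "e-Portfolio", "video conferencing", "early alert system"],
}

def tools_for(categories):
    """List every tool in the requested categories, e.g., when an activity
    combines service and learning technologies for one learning outcome."""
    return [tool for category in categories for tool in advising_technologies[category]]

# Example: a degree-audit reflection assignment draws on both service and
# learning technologies.
print(tools_for(["service", "learning"]))
```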
Learning Taxonomies and Student Tools for Evaluation
Regardless of the taxonomy used, the central point is that different types of evaluation tools are better at evaluating different levels of student learning. Using the general learning goal presented earlier, the table below suggests relationships among 1) assessment evidence, 2) technologies identified in the learning category of Steele's model, and 3) specific digital assessment tools within those technologies.
Assessment evidence | Technology | Assessment tools within selected technology
Demonstrates an understanding of the Holland Code | Learning Management System (LMS) | Quizzes: fill-in-the-blank or matching terms to definitions
Can describe the relationship between self-assessment and educational and vocational options | LMS | A rubric to assess the quality of the student's response in a written paper submitted to the LMS dropbox
Can provide reasons for the choice of educational major and vocational direction | e-Portfolio | Rubrics to assess 1) the relationship between students' self-assessments and their selection of major and vocational direction, 2) the quality of the presentation, and 3) the quality of the explanation of choices
The progression illustrated in this table shows that technology tools for evaluating students differ in their effectiveness at different levels of student learning. Alignment with student learning outcomes can begin with the simple tools found in most learning management systems (e.g., quizzes, fill-in-the-blank, matching, and short-answer responses). Evaluation tools at this level tend to address the lower levels of the learning taxonomies: recalling information, comparing, or explaining. More complex learning outcomes require more sophisticated technology tools, such as those that allow students to write papers demonstrating careful reflection on the exploration process or to submit an academic and career exploration project in an e-Portfolio. These more sophisticated efforts often rely on rubrics for evaluating student learning, and the tools that support them address the higher levels of the learning taxonomies, which seek demonstrations of meta-cognitive awareness.
Such approaches to evaluating students are familiar to academic advisors in traditional learning settings. When advisors use appropriate technologies to help evaluate student learning, two critical advantages emerge. First, content modules related to specific learning outcomes are aligned with evaluation tools through an LMS or e-Portfolio. Second, the results of student learning are created digitally in relation to specific learning outcomes.
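As a hedged illustration of the second advantage, the sketch below shows one way a digitally captured rubric score might be stored in relation to a specific learning outcome. The criteria, scale, outcome label, and threshold are assumptions invented for this example and are not features of any particular LMS or e-Portfolio product.

```python
# A minimal sketch of rubric-based evaluation producing a digital record tied
# to a specific student learning outcome. All criteria, scales, and thresholds
# are hypothetical.
from dataclasses import dataclass, field
from datetime import date

RUBRIC_CRITERIA = [
    "connects self-assessment results to choice of major",
    "quality of presentation",
    "quality of explanation of choices",
]

@dataclass
class RubricResult:
    student_id: str
    outcome: str          # e.g., "SLO-1: educational and career planning"
    scores: dict          # criterion -> score on an assumed 1-4 scale
    evaluated_on: date = field(default_factory=date.today)

    def total(self) -> int:
        return sum(self.scores.values())

    def meets_outcome(self, threshold: int = 9) -> bool:
        """Judge whether the total rubric score meets a locally set threshold."""
        return self.total() >= threshold

# Example: one student's e-Portfolio project scored against the rubric.
result = RubricResult(
    student_id="A123",
    outcome="SLO-1: educational and career planning",
    scores=dict(zip(RUBRIC_CRITERIA, [4, 3, 3])),
)
print(result.total(), result.meets_outcome())
```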
Integrating Technologies for Improving Understanding
Technologies found in the learning component of Steele’s model are not the only technologies that can advance learning. For instance, degree audits are categorized under “services.” We can ask students to produce a degree audit as one measure of a student learning outcome. More importantly, we can have students interpret what the degree audit means to them through a reflection paper submitted in the LMS or e-Portfolio.
Many of the technologies in the engagement category exist outside the institution's student portal, beyond what Steele called the "FERPA line," where institutional security protections do not apply. Thus, advisors must use these technologies with caution. Still, we can suggest that students use services such as LinkedIn to connect with alumni who participate because they want to share their career experiences with currently enrolled students. Students could then reflect on their interactions with these alumni in their e-Portfolio project or in a reflection paper submitted through the LMS. Students who engage in these types of activities demonstrate higher-order thinking through reflection.
Implications for Using Technology in Program Assessment
The use of models from both McTighe and Wiggins and Steele can help advisors with program assessment in significant ways. One advantage is that the outcomes identified for advising can be classified as process/delivery outcomes (PDOs) for the advising program and student learning outcomes (SLOs) for students (Campbell, 2008; Robbins & Zarges, 2011). Creating SLOs is necessary for stage one of the UbD model. When technology is used to evaluate student learning outcomes, students' results are in a digital format that can be easily aggregated. Thus, advisors can more efficiently determine how well a program's activities helped students achieve desired SLOs.
Another benefit is that an integrated approach encourages advising programs to use technology for both content and service delivery, so students use technologies in different ways. When students use technologies located within the institution's student portal (service and learning technologies), it becomes easier to track their use. When students use a degree audit report to help explain their choices for educational and career planning, their use of the degree audit produces PDO data. If they write a paper describing why they chose a particular program, the document can serve as an activity associated with an SLO. Aggregating both SLO and PDO data in this manner creates rich sources for program assessment.
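To suggest how such aggregation might look in practice, the short sketch below rolls up hypothetical, digitally captured results: rubric totals tied to an SLO and degree-audit events treated as PDO evidence. All records, field names, and the threshold are invented for illustration.

```python
# A minimal sketch: aggregating digitally captured SLO and PDO evidence for
# program assessment. All records and the threshold are hypothetical.
slo_results = [
    {"student_id": "A123", "outcome": "SLO-1", "rubric_total": 10},
    {"student_id": "B456", "outcome": "SLO-1", "rubric_total": 7},
    {"student_id": "C789", "outcome": "SLO-1", "rubric_total": 11},
]
pdo_events = [
    {"student_id": "A123", "event": "degree audit generated"},
    {"student_id": "C789", "event": "degree audit generated"},
]

# SLO summary: share of students whose rubric total meets a locally set threshold.
THRESHOLD = 9
met = sum(1 for r in slo_results if r["rubric_total"] >= THRESHOLD)
print(f"SLO-1: {met}/{len(slo_results)} students met the threshold ({met / len(slo_results):.0%})")

# PDO summary: how many evaluated students also generated a degree audit.
audited = {e["student_id"] for e in pdo_events}
overlap = sum(1 for r in slo_results if r["student_id"] in audited)
print(f"PDO: {overlap}/{len(slo_results)} evaluated students generated a degree audit")
```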
Together, curriculum planning and the intentional use of technology help advisors evaluate essential learning, demonstrate how advising contributes to student learning, and support program assessment.
George E. Steele
Retired, The Ohio State University
[email protected]
References
Anderson, L.W., Krathwohl, D.R., Airasian, P.W., Cruikshank, K.A., Mayer, R.E., Pintrich, P.R., Raths, J., & Wittrock, M.C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's Taxonomy of Educational Objectives. New York, NY: Pearson, Allyn & Bacon.
Bloom, B.S. (1956). Taxonomy of educational objectives, handbook I: The cognitive domain. New York, NY: David McKay Co Inc.
Campbell, S. M. (2008). Vision, mission, goals, and program objectives for academic advising programs. In V. N. Gordon, W. R. Habley, & T. J. Grites (Eds.), Academic advising: A comprehensive handbook (2nd ed., pp. 229–241). San Francisco, CA: Jossey-Bass.
Hemwall, M. K., & Trachte, K. C. (1999). Learning at the core: Toward a new understanding of academic advising. NACADA Journal, 19(1), 5–11.
Hemwall, M. K., & Trachte, K. C. (2003). Academic advising and a learning paradigm. In M.K. Hemwall & K. C. Trachte (Eds.), Advising and learning: Academic advising from the perspective of small colleges and universities (NACADA Monograph No. 8, pp. 13–20). Manhattan, KS: National Academic Advising Association.
Lowenstein, M. (2005). If advising is teaching, what do advisors teach? NACADA Journal, 25(2), 65–73.
Martin, H. (2007). Constructing Learning Objectives for Academic Advising. Retrieved from NACADA Clearinghouse of Academic Advising Resources. Website: http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Constructing-student-learning-outcomes.aspx
McTighe, J., & Wiggins, G. (1999). Understanding by design professional development workbook. Alexandria, VA: ASCD.
Robbins, R., & Zarges, K.M. (2011). Assessment of academic advising: A summary of the process. Retrieved from NACADA Clearinghouse of Academic Advising Resources website:
http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Assessment-of-academic-advising.aspx
Steele, G. (2014). Intentional use of technology for academic advising. Retrieved from NACADA Clearinghouse of Academic Advising Resources website:
http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Intentional-use-of-technology-for-academic-advising.aspx
Wiggins, G., & McTighe, J. (2005). Understanding by design (expanded 2nd edition). Alexandria, VA: ASCD.
Wiggins, G., & McTighe, J. (2011). The understanding by design guide to creating high-quality units. Alexandria, VA: ASCD.
Cite this using APA style as:
Steele, G. (2015). Using Technology for Intentional Student Evaluation and Program Assessment. Retrieved -insert today's date- from NACADA Clearinghouse of Academic Advising Resources website: http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Using-Technology-for-Evaluation-and-Assessment.aspx