Ilene M. Gilborn, Mount Royal College
Gambling on academic advising success? Using the balanced scorecard approach could lead to advising windfalls.
A couple of years ago, when I was asked to advise for some of our programs at the Bissett School of Business, I thought, “Why not? How hard can it be? Students come into my office and I tell them what courses to take; by the end of the semester, if I don’t hear from the Dean, I guess I’ve done a good job!”
Being an accountant, I was reminded of the simple way Henry Ford ran his company a hundred years ago. Ford manufactured the cars, people bought them for more than they cost to produce, and he knew he had done well if there was money in the bank at the end of the day. In those days, most consumers had (and wanted) little choice. In fact, Ford has frequently been quoted as saying, “The customer can have any color of Model-T he wants, as long as he wants black!”
But that was then. Modern companies are more complex as a result of global competition, vast diversification, and discerning and savvy customers. Not surprisingly, it took only one or two advising appointments for me to realize academic advising is very similar.
Since the time of Henry Ford, most companies have measured performance using summative profitability measures such as operating profit, return on investment, and earnings per share. These have provided, and still do provide, an excellent summary of the bottom line. Yet, by themselves, financial measures are woefully inadequate for telling the whole story about a company’s operations. Here’s why.
Financial measures report past experiences. This is troublesome for two reasons. Firstly, it causes our responses to be reactive. By the time the bank calls us to say we are overdrawn, it is too late to employ preventative measures, such as a line of credit. And because we cannot change the past, we must live with the consequences—such as a poor credit rating. These types of measures are called “lag” indicators.
Secondly, we don’t get enough information from financial measures. Shareholders might be happy with only knowing the earnings per share. Managers, on the other hand, also need to know things like market share, production quality and employee turnover, so operations can flow smoothly and profits can be generated. It is obvious that a drop in product quality will lead to reduced market share and ultimately, diminished profits. Happy customers, efficient processes and a skilled workforce are all essential to successful operations—yet none of the performance measures in these areas show up on the balance sheet. These so-called “lead” indicators help managers pinpoint and resolve problems before they affect the bottom line.
In the early 1990s, two management consultants, Robert Kaplan and David Norton, recognized the need for an integrated performance measurement system that would incorporate both financial and non-financial results, lead and lag indicators, and internal and external measures. They also reasoned that because profits are linked to customer satisfaction, product quality, and skillful, happy employees, each of these areas should be represented by the measures selected. After considerable research, they determined their performance measurement model should be limited to 20 to 24 different measures spread evenly among four or five critical areas. Because of this “balanced” approach, they called their model “The Balanced Scorecard” (BSC).
Many companies worldwide have successfully implemented the balanced scorecard for measuring their business performance. (A list of companies can be found at the Balanced Scorecard Collaborative website.)
Now that I have three years of academic advising under my belt, it has occurred to me that the “balanced scorecard” could be utilized for advising assessment. Academic advising is a multi-faceted activity that has clear linkages among learning outcomes, retention rates, student satisfaction, program design and delivery, and the training and development of advisors. So, as illustrated by companies that are using the BSC, a balance of lead and lag indicators equally distributed among these five “perspectives” would not only tell us how well we are doing but also provide insight into where we could proactively improve our programs and prevent problems from occurring.
Implementing a balanced scorecard for academic advising would not be easy; as much as we might prefer a turn-key package, each organization must use the model as a template, customizing it to fit its particular program. In addition, a BSC is not static. As our students, programs, and people change, so too must our performance measures. The dynamic nature of the scorecard ensures we are always on top of our program delivery despite any changes to our academic environment.
Development of a BSC begins with a clearly stated mission or purpose, measurable objectives and strategies to meet those objectives. Then under each perspective, specific outcomes are identified for each strategy. For example, if one of our strategies is to utilize faculty advisors, then a desired outcome might be the development of faculty’s advising skills.
The next step would be to determine how each outcome will be measured. This is frequently the most difficult part of developing a BSC, particularly when it applies to an abstract product like advising. At the same time, it gives us an opportunity to study and analyze our programs while looking for measurable attributes.
Lastly, we need to establish some numerical criteria against which we will measure our actual performance. A measurable outcome for faculty development would be the number of faculty who complete the development program. Specifically, we might say our target is to have 15 faculty members complete the training program.
Once the balanced scorecard is in place, we would follow a regular cycle of use. We would need to gather evidence (results) on a regular basis—possibly for each month, each semester or each year. Integrated computer systems make this task less daunting. We would compare the results to our targets (calculate scores), and interpret them in terms of our performance. Following that, in addition to making any indicated changes, we would go back to review our strategy. This is an important part of the cycle because our “scores” may indicate weaknesses in our strategy rather than our performance.
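As a very rough sketch, the mechanics described above (outcomes grouped under perspectives, numerical targets, and a comparison of actual results to those targets) could be modeled in a few lines of code. The perspective names, measures, and target values below are illustrative assumptions, not part of any actual advising scorecard.

```python
# A minimal sketch of one scorecard cycle for advising assessment.
# All perspectives, measure names, and numbers are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    target: float   # the numerical criterion we establish in advance
    actual: float   # the evidence gathered this month, semester, or year

    def score(self) -> float:
        """Actual performance expressed as a fraction of the target."""
        return self.actual / self.target

# A few measures grouped under two of the five possible perspectives.
scorecard = {
    "Advisor training and development": [
        Measure("Faculty completing advising training", target=15, actual=12),
    ],
    "Student satisfaction": [
        Measure("Mean advising survey rating (out of 5)", target=4.0, actual=4.2),
    ],
}

# Compare results to targets; a shortfall prompts a review of either
# our performance or the strategy behind the measure.
for perspective, measures in scorecard.items():
    for m in measures:
        status = "target met" if m.actual >= m.target else "review needed"
        print(f"{perspective} | {m.name}: {m.score():.0%} of target ({status})")
```

Running such a sketch each cycle would flag, for example, that only 12 of a hoped-for 15 faculty members completed training, prompting the strategy review the cycle calls for.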
Finally, we would examine our current outcomes and associated measures from each perspective to make sure they are still the best indicators of performance and will provide us with the information we need for evaluation and decision-making. With our revised scorecard, we are ready to begin the cycle again.
The following illustration provides a very rough idea of what a balanced scorecard might look like if we were to apply it to academic advising assessment.
This is a very brief introduction to the BSC and how it might be applied to advising programs. For more in-depth information, please visit the following website:
http://www.balancedscorecard.org/
Ilene M. Gilborn Mount Royal College Calgary, Alberta, Canada [email protected]
Cite this article using APA style as: Gilborn, I. (2006, February). Balanced scorecard approach. Academic Advising Today, 29(1). Retrieved from [insert url here]