
Voices of the Global Community


Victoria A. McGillin, Advising Assessment Commission Member; NACADA Research Committee Past Chair

One frequent question heard from NACADA members is, "What's the difference between research and assessment?" The following is an effort to articulate both the overlap and the distinctions between the two.

In our workshop on advising research and grant proposal development, the NACADA Research Committee discusses the similarities and differences between research and assessment. The following is a synopsis.


The goals of experimental research and program assessment differ significantly. Research focuses on creating new knowledge, testing an experimental hypothesis, or documenting a phenomenon, while assessment and evaluation focus on program accountability, program management, decision-making, and budgeting.

That is, research is designed to document or measure a phenomenon not formerly recorded, e.g., applying a new theory to an advising encounter and documenting how well a model 'explains' what is going on between advisor and advisee. Program assessment, by contrast, provides information to your campus about whether you are achieving prescribed goals, expending resources wisely, or meeting a documented campus need.


While the methods employed in good program assessment and evaluation may be similar to those used in good research, they need not be. Methods in both may range from subjective field observations to objective questionnaires. If your key assessment question is how your campus advising compares to national data on advising (such as the ACT survey), the use of a nationally-standardized, reliable, and valid instrument would be crucial to answering that question. However, nationally-standardized instruments may not always 'fit' your campus, as they may refer to differently-named services or to institutional structures not present on your campus. When an existing measure just won't do, good research AND good assessment practices call for the development of a reliable and valid new measure. We must be wary of developing a 'quick and dirty' measure in an effort to just 'get a quick answer' to our questions, without taking the time to ensure our measures are reliable and valid for our own campuses.

One major methodological difference between research and assessment is that researchers will 'experimentally manipulate a variable' (for example, randomly assigning students to one model of orientation or another), while program evaluation tends to be non-random (we rarely have the luxury of such random manipulation of our students). At best, assessment looks at 'natural' differences that emerge, such as comparing students who chose one orientation event over another.
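The contrast between random assignment and 'natural' self-selected groups can be sketched in a few lines of Python. This is purely illustrative; the student roster, group names, and self-selection choices are invented for the example:

```python
import random

# A hypothetical roster of students.
students = [f"student_{i}" for i in range(20)]

# Experimental research: randomly assign each student to one of two
# orientation models. Because assignment is random, observed group
# differences can be attributed to the intervention itself.
random.shuffle(students)
midpoint = len(students) // 2
model_a, model_b = students[:midpoint], students[midpoint:]

# Program assessment: compare the 'natural' groups that emerge when
# students self-select an orientation event. Here the choices are
# simulated; on a real campus they would come from registration data.
# Self-selection, not the program, may explain any differences observed.
choices = {s: random.choice(["evening_session", "weekend_session"])
           for s in students}
evening = [s for s in students if choices[s] == "evening_session"]
weekend = [s for s in students if choices[s] == "weekend_session"]
```

Note that the two 'natural' groups may end up unequal in size and systematically different in composition, which is exactly why assessment conclusions stay local while experimental results can generalize.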


Just as experimental research and program assessment differ in their goals, they also differ in the use of their results. Research results are expected to be generalizable beyond one's own campus, with implications for similar institutions or similar populations. Program assessment results are applicable only to one's own campus. While both are of great value, research should contribute new knowledge to the field. When opening the NACADA Journal, you expect documentation of research that began as an advising question and culminated in statistically significant findings about an advising method, theory, or programmatic intervention that you can apply with some assurance of success.

Conversely, program assessments are designed to be site-specific and crucial for campus decision-making. Good program assessment ensures that you are responsive to the changing (or unchanging) needs of your populations. You may want to reuse the same measure each year to document the high level of program success over time. Your results may be particularly appropriate for the NACADA Journal's Tool Box section that highlights examples of best advising practices that link to current research in the field.

Finally, while research should provide possible answers to identified questions, it should also generate new research questions from the results. For example, if one's data showed that both male and female students were more critical of male advisors than female advisors, the researcher would want to explore this research question further. Assessment, however, looks for answers. Viewed from an assessment standpoint, such results might lead to interventions, such as additional training for male advisors and the desire to assess the effectiveness of that intervention on one's campus.


Given these differences, it is not surprising that there are vastly different audiences intended for program assessments, as compared to research. Assessment results are targeted for the key decision-makers on your campus. When budgets are cut, new programs proposed or accreditation rolls around, assessment/evaluation reports help you make a case for your program. As I am fond of saying, 'Whoever gets to the table with numbers first, wins.' The ability to produce an executive summary of key assessment findings (no more than 2 pages) documents the effectiveness of your work and moves your programs to the top of the funding lists, ahead of those supported only by anecdotal information.

In contrast, research is intended for the field of advising and higher education as a whole. Your results will be read by many, debated and critiqued, copied and expanded upon to generate even newer knowledge. While a one-page executive summary submitted to your dean may get you funding for a new advising initiative, your colleagues outside your institution look for full documentation of the research that led you to this question, the literature review of the theory that guided your process, details on the methods you used, the results (strengths and weaknesses) of the study, and the conclusions you drew from your research. The 15-20 pages with bibliography necessary for a published article would only gather dust if submitted as part of a funding request to most deans or VPs.

Connecting It All

Let me conclude by emphasizing the most crucial point of connection between assessment and research. Good assessment/evaluation can be expanded into good research. Good research should lead to even better assessment procedures. Good assessment makes use of the best conceptual and theoretical models and the best research measures or methods. With valid and reliable measures, campus-specific questions may have national implications. A phenomenon identified on your own campus may be the cutting edge for an issue of significant importance.

Finally, find significant resources on advising assessment on the Assessment of Advising Commission Web Page.

We urge you to consult with the NACADA Research Committee. They seek cutting-edge proposals. Your assessments may lead to a critical (and fundable) piece of research!

Victoria A. McGillin
Wheaton College

Cite this article using APA style as: McGillin, V. (2003, December). Research versus assessment: What's the difference? Academic Advising Today, 26(4). [insert url here]


Academic Advising Today, a NACADA member benefit, is published four times annually by NACADA: The Global Community for Academic Advising. NACADA holds exclusive copyright for all Academic Advising Today articles and features. For complete copyright and fair use information, including terms for reproducing material and permissions requests, see Publication Guidelines.