Measures of Student Learning

A measure is the process by which data is collected and evaluated to determine whether students are achieving learning outcomes. There are two types of measures: direct and indirect.

Direct measures require students to demonstrate their competency or ability in some way that is evaluated for measurable quality by an expert, such as an instructor, internship supervisor, or industry representative.

Indirect measures provide secondhand information about student learning. Whereas direct measures are concerned with the quality of student work as it demonstrates learning, indirect measures are indicators that students are probably learning. Often, indirect measures are too broad to represent achievement of specific learning outcomes, but they may provide useful supplemental information.

For academic programs, direct measures of student learning should be prioritized. For academic and administrative educational support units, indirect measures may provide sufficient information to determine whether objectives have been met.

Examples of Direct Measures:

  • Written assignments, oral presentations, or portfolios of student work to which a rubric or other detailed criteria are applied
  • Exam questions focused on a particular learning outcome or content area
  • Scores on standardized exams (e.g., licensure, certification, or subject area tests)
  • Employer, internship supervisor, or committee chair evaluations of student performance
  • Competency interviews
  • Evaluations of student teaching and classroom observation
  • Other assignment grades based on defined criteria

Examples of Indirect Measures: 

  • Survey questions asking students about their perceptions of their own abilities
  • Tasks tracked by recording completion or participation rates
  • Completion of certain degree requirements
  • Number of students who publish manuscripts or give conference presentations
  • Job placement data

These short videos, created by the Office of University Assessment at the University of Kentucky, summarize key information and highlight important considerations when determining measures for collecting assessment data.

Video: Assessment Tools (7 Minutes)

Video: Artifact Mapping (6 Minutes)

Video: Data Collection (5 Minutes)

Programs and departments are encouraged to use and/or revise existing rubrics to fit their needs. The American Association of Colleges & Universities (AAC&U) VALUE rubrics were created specifically for this purpose, and many of them have been used as the foundation for Core Curriculum assessment rubrics currently used at Texas A&M International University. Attached to each AAC&U rubric is a cover page that provides a definition of the learning outcome, framing language, and a glossary of key terminology used in the rubric.

Benchmarks

Once assessment data is collected and analyzed, the next step is to determine what the findings mean. Was the outcome achieved or not, and to what degree? It is important to set benchmarks ahead of time for each assessment measure you are using. 

The strongest benchmarks are those based on prior information, such as past assessment findings or other observed performance or achievement. The faculty/staff group should collectively decide what benchmark to set. Benchmarks should be clear, specific, and aligned with the language of the measure.

Benchmarks may be quantitative, qualitative, or multi-faceted, combining both types of data. Below are some examples; a brief illustrative calculation follows the quantitative examples.

Quantitative Targets

  • 85% of students will earn at least 7 out of 10 points on the critical thinking essay question.  
  • 90% of students will achieve the “Competent” threshold on the WIN rubric criterion. 
  • 70% of students will score above the 80th percentile on the NCLEX standardized exam. 
  • 80% of students will select “Agree” or “Strongly Agree” that the training improved their mentoring skills.  
  • 75% of service requests will be acknowledged within 24 hours. 
  • Women students’ enrollment in this activity/event will increase 15% from last year (= 250).
  • 90% of reports will be submitted on time. 
  • Demographics of students participating in this experience will match the demographics of students on the TAMIU campus (list percentages).
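
As a purely illustrative sketch, and not part of any TAMIU procedure, the Python snippet below shows how a quantitative target such as the first example above might be checked once scores are collected. The score values, the 85% target, and the 7-point threshold are hypothetical stand-ins.

    # Hypothetical rubric scores (out of 10), for illustration only
    scores = [9, 6, 8, 10, 7, 5, 8, 9]
    target_pct = 85   # benchmark: 85% of students
    threshold = 7     # minimum points on the critical thinking essay question
    met = sum(1 for s in scores if s >= threshold)
    achieved_pct = 100 * met / len(scores)
    print(f"{achieved_pct:.0f}% met the threshold; "
          f"benchmark {'met' if achieved_pct >= target_pct else 'not met'}.")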

Qualitative Targets

  • Each submitted developmental portfolio will demonstrate growth (as defined by the program) in incorporating credible research sources.  
  • When asked open-ended questions about their experience with the service/support, users in focus groups will mention keywords or synonyms related to the unit’s purpose and/or mission statement (e.g., belonging, inclusion, safe space, etc.).   
  • Each debriefing session with clients will indicate that they are satisfied with the team’s pre-event communication.
  • Open-ended survey questions will reveal favorable overarching themes.  

Contact

Office of Institutional Assessment, Research and Planning
5201 University Boulevard, Sue and Radcliffe Killam Library 434, Laredo, TX 78041-1900
Phone: 956.326.2275