Means of Assessment

Selecting an appropriate assessment method for data collection is an important step in the learning outcomes assessment process. To get started, ask yourself the following questions:

  • What type of data is needed?
  • Is there existing data that can be used?
  • Is there an opportunity to collaborate with another program or department to assess similar learning outcomes?
  • How should a department go about collecting the data?

Can I use grades to assess my students?

It is not recommended to use grades to assess learning outcomes for two reasons:

UNRELIABLE: Although grades share some overlapping properties with assessment (e.g., rubrics, criteria), they are not necessarily a reliable measure of skills or abilities. A course grade represents an instructor’s evaluation of an individual student’s performance and achievement, and grading criteria may include components that do not measure learning outcomes, such as homework completion, effort and progress, positive attendance, and participation. In addition, when learning outcomes are assessed across sections and across a program, individual instructors’ grading practices may vary and therefore may not accurately reflect learning.

NOT ENOUGH INFORMATION: Grades tell us that learning took place, but they do not provide enough information about the specific strengths and improvement areas of student learning; it is hard to determine what, or how much, learning occurred. For example, students who earned a B on a course or assignment may still have low competency in one skill area. Assessment goes beyond grades and is improvement-focused: it informs instructional decisions, such as curricular revisions, learning strategies and teaching methods, rethinking Guided Pathways program mapping, and professional development opportunities.

Assessment Methods

Once learning outcomes are developed for a course, degree program, certificate, or student support service or program, faculty and staff should determine which assessment method(s) will best measure students’ level of performance. Assessment methods are commonly grouped into two categories:

  • Direct assessment methods require students to demonstrate the application of knowledge, skills, and abilities through student-produced work (e.g., papers, portfolios). Examples of external direct assessment methods include licensure or certification exams. With direct assessment, student behavior is observable and measurable, which allows faculty to measure expected learning outcomes directly and provide concrete evidence.

    Course-Embedded Direct Assessment

Course-embedded direct assessment uses student work that is already part of the curriculum and already being done for a grade (e.g., artistic performances, test items). Faculty should bear in mind that assessment is a separate process from grading: it involves ungraded measures of student learning. When possible, embedded direct assessments are recommended to avoid adding another layer of work for faculty members. Provided the assessment tool is relevant, valid, and reliable, faculty can adapt a tool so that it assesses a learning outcome and is still used for grading. For departments measuring learning outcomes across many courses, more initial time may be devoted to collaboratively developing and validating a rubric. This approach can provide meaningful evidence about the strengths and areas of improvement of a particular program.

Examples of Direct Assessment Methods

Audio recording
Capstone projects
Case study analyses
Clinical experience
Conference papers/presentations
Embedded assignments or test items, given they are valid and reliable
Essays, research papers
Exhibitions
Field work
Lab reports
Music or theatrical performances
Oral presentations (individual, group, debate)
Portfolios
Pretests and posttests
Program standardized tests
Service learning projects
Simulations
Stage plays

Do we have to do a pretest-posttest assessment?
Pretest assessments are administered before students learn the subject matter or receive services; they provide a baseline, or benchmark, against which to measure growth in knowledge or skills over time. Posttest assessments are administered at the end of a course or program to determine achievement of learning outcomes. A single assessment is usually adequate to determine whether students can demonstrate content knowledge or skills upon completing a learning module, training session, course, or program. However, faculty and student support professionals who want to monitor student progression or pinpoint the impact of a program or service on student learning are encouraged to use a pretest-posttest design (a minimal growth calculation is sketched below).
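
For those who adopt a pretest-posttest design, the growth calculation itself is simple arithmetic: subtract each student’s pretest score from the posttest score and summarize the gains. The Python sketch below uses hypothetical score lists; the variable names and data are illustrative only.

    # Minimal sketch: computing growth from a pretest-posttest assessment.
    # The score lists are hypothetical; substitute your own data.

    pretest  = [55, 62, 48, 70, 66]   # each student's pretest score (out of 100)
    posttest = [78, 81, 65, 88, 79]   # the same students' posttest scores

    # Raw gain for each student: posttest minus pretest.
    gains = [post - pre for pre, post in zip(pretest, posttest)]

    average_gain = sum(gains) / len(gains)
    print(f"Per-student gains: {gains}")        # [23, 19, 17, 18, 13]
    print(f"Average gain: {average_gain:.1f}")  # 18.0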

  • Indirect assessment methods do not measure learning directly; they rely on reports of perceived learning, asking students to reflect upon their knowledge, skills, and abilities (e.g., surveys, focus groups). An example of an external indirect assessment method is the Community College Survey of Student Engagement.

    Indirect methods are particularly helpful when interpreting the findings of direct methods: they add to that understanding by uncovering implicit qualities of students’ learning experiences, such as values, perceptions, and attitudes.

    Some programs, particularly student support services and programs, may use indirect assessments to discover a program’s or service’s impact on students, or students’ perceptions of its usefulness and accessibility. Programs may also draw on indirect assessment data collected by Chaffey’s Office of Institutional Research.

Examples of Indirect Assessment Methods

Annual reports (institutional benchmarks, retention rates, graduation rates)
Entrance and exit exams
Focus groups and interviews
Job placement data
Participation in service learning or internships
Reflective essays
Student participation rates
Surveys (e.g., student perception surveys)
Transcript studies

Outcomes Assessment: A Summative Approach

The primary focus of outcomes assessment is to demonstrate mastery of course or program knowledge or skills in a summative manner. A summative assessment process uses, in most cases, direct methods to measure the overall level of student learning at the end of a course or after completing a series of courses. For example, to assess a course learning outcome, a final project or portfolio (direct assessment) represents the cumulative learning achieved by the end of the semester. Conversely, a formative process may focus on learning objectives, which are assessed throughout the semester or program and allow faculty to monitor students’ learning progress; it asks, “What do students know at this particular point in time?” Both approaches help faculty improve teaching methods, but a summative approach also provides feedback for improving curriculum and course offerings.

Data Collection

Data from direct or indirect measures may be quantitative or qualitative.

  • Quantitative data is expressed numerically and is used to draw conclusions from a representative data set. It lends itself to statistical analysis and can measure attributes and characteristics of student learning.
  • Qualitative data, also called categorical data, is in-depth, exploratory data aimed at increasing understanding of a learning experience. This type of data is generated through texts, documents, interviews and focus groups, images, observations, and audio or video recordings. Qualitative data can be made quantifiable (coded quantitatively); for example, faculty could use rubrics to score student video presentations or performances. The chart below illustrates the relationship between direct/indirect assessment methods and quantitative/qualitative data.
                      Direct Assessment          Indirect Assessment
                      (demonstrates learning)    (describes learning)

Quantitative Data     Embedded test items        Surveys
(numeric data)        Licensure exams
                      Pretest-posttest

Qualitative Data      Performances               Interviews
(textual, images,     Portfolios                 Focus groups
audio)                Research papers            Reflection papers

Rubrics

One common quantitative data collection tool used for outcomes-based assessment is the rubric. A rubric is a scoring tool that typically resembles a grid with the following components:

  • Criteria: the skills, knowledge, or performance students can demonstrate (e.g., organization, atomic and molecular structure, systems of equations, critical thinking).

  • Levels of Performance: the different degrees of demonstrated performance within each criterion (e.g., beginning, basic, proficient, advanced). Each level of performance is associated with a specific score, so each criterion can be rated numerically.

  • Descriptors: descriptions of the performance levels for each criterion (e.g., Carefully proofreads and edits content. Message is clear and concise. Adapts content to the audience’s level of understanding and interests). Although there are different types of rubrics, explicit descriptions of performance levels are recommended to increase interrater reliability among scorers (a simple agreement check is sketched after this list).
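
Interrater reliability can be gauged quickly as the percentage of exact score matches between two scorers rating the same set of student work. The Python sketch below uses hypothetical score lists; more robust statistics (e.g., Cohen’s kappa) exist, but percent agreement is a common starting point.

    # Minimal sketch: percent exact agreement between two raters.
    # Both lists are hypothetical rubric scores for the same ten papers.

    rater_a = [4, 3, 2, 4, 1, 3, 4, 2, 3, 4]
    rater_b = [4, 3, 3, 4, 1, 3, 4, 2, 2, 4]

    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    agreement = matches / len(rater_a)
    print(f"Exact agreement: {agreement:.0%}")  # 80% for these scores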

There are two general types of rubrics: holistic and analytic. The nature of the learning outcome that is being assessed will determine which type of rubric is most appropriate.

Holistic rubrics assess one attribute of student learning based on a single criterion. A single score (usually on a three- to six-point scale) identifies overall achievement or performance. Use a holistic rubric when a single dimension is adequate to describe the learning outcome being assessed. Holistic rubrics can be scored quickly, but they do not provide detailed feedback, and assigning one overall score can be challenging, especially on lower-point scales (i.e., 2- or 3-point scales). See the table below for an example of a holistic rubric with explicit descriptors.

Criterion: Assertiveness

Very Highly Skilled (5): Communicates viewpoints, feelings, or perspectives clearly and directly. Speaks in a calm and neutral tone. Uses “I” statements and appropriate words in conversation. Nonverbal communication aligns with spoken words.

Highly Skilled (4): Communicates viewpoints, feelings, and perspectives clearly and directly. Uses “I” statements and appropriate words in conversation, but nonverbal communication (e.g., stern look, looks down, slouches, speaks loudly, speaks softly) is not always in alignment with spoken words.

Medium Skilled (3): Communicates viewpoints, feelings, and perspectives clearly. Uses appropriate words in conversation, but nonverbal communication (e.g., stern look, looks down, slouches, speaks loudly, speaks softly) is not always in alignment with spoken words.

Low Skilled (2): Viewpoints, feelings, and perspectives are not well communicated; speaks apologetically and softly OR speaks impulsively (e.g., interrupting frequently).

Unskilled (1): Does not communicate viewpoints, feelings, or perspectives by either (a) failing to assert themselves OR (b) criticizing and blaming others, while speaking loudly and/or interrupting frequently.


Analytic rubrics are multi-dimensional scales that assess several different criteria, describing each criterion separately with explicit descriptions of the attributes or skills. Sub-scores for each criterion and an overall score can be calculated (a scoring sketch follows below). Although analytic rubrics are more time-consuming to create and score, they provide detailed feedback on student learning, including both strengths and areas for improvement. View an example of an analytic rubric (.PDF).
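
To make the sub-score and overall-score arithmetic concrete, here is a minimal Python sketch. The criteria, weights, and scores are hypothetical; a department would substitute its own rubric and scale. An unweighted average is the common default, and weights are shown only to illustrate that criteria need not count equally.

    # Minimal sketch: scoring one student's work on a hypothetical
    # four-criterion analytic rubric (4-point scale per criterion).

    # Criterion -> (weight, score earned). Weights sum to 1.0.
    scores = {
        "Organization": (0.25, 3),
        "Evidence":     (0.25, 4),
        "Analysis":     (0.30, 2),
        "Mechanics":    (0.20, 4),
    }

    # Sub-scores are the earned scores per criterion;
    # the overall score is the weighted average.
    overall = sum(weight * score for weight, score in scores.values())

    for criterion, (weight, score) in scores.items():
        print(f"{criterion:12}: {score}/4 (weight {weight:.0%})")
    print(f"Overall: {overall:.2f}/4")  # 3.15/4 for these values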

Examples to Describe Levels of Performance
3-Point Scales:
  • Exemplary, Competent, Developing
  • Excellent, Good, Poor
  • Master, Apprentice, Beginner
  • Mastery, Satisfactory, Unacceptable
  • Sophisticated, Competent, Needs Work
  • Strong, Medium, Weak

4-Point Scales:
  • Advanced, Proficient, Basic, Beginning
  • Exemplary, Accomplished, Developing, Beginning
  • Fully Met, Met, Partially Met, Not Met
  • High Pass, Pass, Low Pass, Fail
  • Exceptional, Good, Fair, Poor
  • Sophisticated, Highly Competent, Competent, Not Yet Competent
  • Distinguished, Proficient, Basic, Unsatisfactory

5-Point Scales:
  • Exemplary, Proficient, Acceptable, Emerging, Insufficient
  • Excellent, Very Good, Good, Limited, Poor
  • Master, Distinguished, Proficient, Intermediate, Novice

AAC&U VALUE Rubrics

The Association of American Colleges & Universities (AAC&U) website provides access to VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics for faculty and student support professionals. The 16 VALUE rubrics represent 16 core learning outcome proficiencies that undergraduate students should develop in their programs of study.

Registration is required before downloading the free rubrics. Faculty members and student support professionals can use the rubrics as-is or modify the achievement descriptors and/or criteria to measure learning outcomes. The VALUE rubric categories are listed below.

  • Intellectual and Practical Skills
  • Personal and Social Responsibility
  • Integrative and Applied Learning

Access the 16 VALUE rubrics.