Glossary

Assessment: The systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development. (Palomba & Banta, 1999)

Assessment is the ongoing process of:

  • Establishing clear, measurable expected outcomes of student learning
  • Ensuring that students have sufficient opportunities to achieve those outcomes
  • Systematically gathering, analyzing, and interpreting evidence to determine how well student learning matches our expectations
  • Using the resulting information to understand and improve student learning (Suskie)

Bloom’s Taxonomy of Cognitive/Affective/Psychomotor Objectives (Bloom, 1956):

Six levels of cognitive objectives arranged in order of increasing complexity (1=low, 6=high):

  1. Knowledge: recalling or remembering information without necessarily understanding it.
  2. Comprehension: understanding learned material.
  3. Application: putting ideas and concepts to work in solving problems.
  4. Analysis: breaking down information into its component parts to see interrelationships among ideas.
  5. Synthesis: putting parts together to form something original. It involves using creativity to compose or design something new.
  6. Evaluation: judging the value of evidence based on definite criteria.

Five levels of affective objectives:

  1. Receiving: willingness to receive or attend to particular phenomena or stimuli
  2. Responding: active participation on the part of the learner
  3. Valuing: seeing worth or value in the subject, activity, or assignment
  4. Organization: bringing together a complex of values, possibly disparate values, resolving conflicts among them, and beginning to build an internally consistent value system
  5. Characterization by a value or value complex: internalized values have a place in the individual’s value hierarchy; behavior is pervasive, consistent, and predictable

“Closing the Loop”: Use of assessment results to inform decision-making.

Direct/Indirect Methods:

Direct method – Gathering evidence about student learning from student performance that demonstrates the learning itself. Can be value-added, related to standards, qualitative or quantitative, embedded or not, and based on local or external criteria. Examples include written assignments, classroom assignments, presentations, test results, projects, logs, portfolios, and direct observations. (Leskes, 2002)

Indirect (perceptual) method – Acquiring evidence about how students feel about learning and their learning environment rather than actual demonstrations of outcome achievement. Examples include surveys, questionnaires, interviews, focus groups, course evaluations, and reflective essays. (Eder, 137)

Embedded Assessment: A means of gathering information about student learning that is built into, and a natural part of, the teaching-learning process. Often uses classroom assignments that are already evaluated for a grade. Can assess individual student performance, or aggregate results to describe the course or program; can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy). (Leskes, 2002)

Evaluation: Applying value judgments to assessment data to determine the effectiveness of a teaching strategy or program.

External/Internal Audiences:

Internal – faculty committees, Faculty Council, Advancement Office

External – accrediting associations, prospective students, donors, employers, community leaders, foundations

Formative/Summative Assessment:

Formative – Gathering information about student learning during a course or program, usually repeatedly, in order to improve the learning of those same students. Example: reading the first lab reports of a class to assess whether some or all students in the group need a lesson on how to make them succinct and informative. (Leskes, 2002)

Summative – Gathering information at the conclusion of a course, program, or undergraduate career to improve learning or to meet accountability demands. When used for improvement, the results benefit the next cohort of students taking the course or program. Example: examining student final papers in a course to see if certain specific areas of the curriculum were understood less well than others. (Leskes, 2002)

Learning Outcomes/Competencies:

Learning Outcome – A statement that describes what learners are expected to know and be able to do by a specified time (e.g., by junior year, by graduation).

Competency – A statement that specifies the measurable evidence of performance(s) needed to meet the learning outcome.

Mission Statement: A statement of the core values that guide an institution or program; the purpose the institution or program serves for its stakeholders. Usually describes what the organization/program does, who it serves, and what makes it unique. The mission statement is the central and guiding force in creating a plan to assess student learning, as illustrated below (from most general to most specific).

Mission Statement > Learning Outcome > Learning Competency

Example (Union’s mission statement): Union Institute & University empowers adults to acquire and apply advanced knowledge through interdisciplinary, flexible, and collaborative programs focusing on social relevance and personal enrichment.

Performance Criteria: The standards by which student performance is evaluated. Performance criteria help assessors maintain objectivity and provide students with important information about expectations, giving them a target or goal to strive for. (New Horizons for Learning)

Portfolio: A systematic and organized collection of a student’s work that exhibits to others the direct evidence of a student’s efforts, achievements, and progress over a period of time. The collection should involve the student in selection of its contents, and should include information about the performance criteria, the rubric or criteria for judging merit, and evidence of student self-reflection or evaluation. It should include representative work, providing documentation of the learner’s performance and a basis for evaluation of the student’s progress. Portfolios may include a variety of demonstrations of learning and have been gathered in the form of a physical collection of materials, videos, CD-ROMs, reflective journals, etc. (New Horizons for Learning)

Reliability: A property of the scores or assessment data derived from using an instrument rather than the instrument itself. An instrument yields reliable data to the extent that the variance in scores is attributable to actual differences in what is being measured, such as knowledge, performance, or attitude. (Palomba & Banta, 88)
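
Classical test theory offers a standard formalization of this idea (added here for illustration; not part of the cited glossary): reliability is the proportion of observed-score variance attributable to true differences, ρ = σ²_T / σ²_X = σ²_T / (σ²_T + σ²_E), where σ²_T is true-score variance, σ²_E is measurement-error variance, and σ²_X = σ²_T + σ²_E is observed-score variance. Scores approach perfect reliability (ρ = 1) as error variance shrinks toward zero.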

Rubric: A specific set of criteria that clearly defines for both student and teacher what a range of acceptable and unacceptable performance looks like. The criteria provide descriptors of ability at each level of performance and assign a value to each level. The levels are proficiency levels describing a continuum from excellent to unacceptable work. (System for Adult Basic Education Support)

Sampling Plan: A process for selecting a predetermined subset of student work so that faculty can verify, consistent with Union’s mission commitment, that students are learning.

Standards: A level of accomplishment all students are expected to meet or exceed. Standards do not necessarily imply high-quality learning; sometimes the level is a lowest common denominator. Nor do they imply complete standardization in a program; a common minimum level could be achieved by multiple pathways and demonstrated in various ways. (Leskes, 2002)

Validity: An overall evaluative judgment of the adequacy and appropriateness of the inferences and actions based on scores. (Messick, 33) In short, does an instrument measure what we want it to measure? (Palomba & Banta, 90)

Value Added: The increase in learning that occurs during a course, program, or undergraduate education. Can focus either on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers, in the aggregate, demonstrate more sophisticated writing skills than freshman papers). Requires a baseline measurement for comparison. (Leskes, 2002)
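
As a worked illustration (the numbers are hypothetical, not from the cited source): if a cohort averages 62 on a program writing rubric at entry and 78 on the same rubric at graduation, the value added is 78 − 62 = 16 rubric points; without the entry baseline of 62, no gain could be computed.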

SOURCES

Allen, Mary; Noel, Richard C.; Rienzi, Beth M.; & McMillin, Daniel J. (2002). Outcomes Assessment Handbook. Long Beach, CA: California State University, Institute for Teaching and Learning.

Angelo, Tom (1995). Reassessing (and defining) assessment. AAHE Bulletin, 48(2), 7-9.

Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. White Plains, NY: Longman.

DeMars, C. E., Cameron, L., & Erwin, T. D. (2003). Information literacy as foundational: determining competence. JGE: The Journal of General Education, 52(4), 253.

Eder, D. J. (2004). General education assessment within the disciplines. JGE: The Journal of General Education, 53(2), 135.

Leskes, Andrea (2002). Beyond confusion: an assessment glossary. Peer Review, 4(2/3).

McTighe, J., & Ferrara, S. (1998). Assessing Learning in the Classroom. Washington, D.C.: National Education Association.

Messick, S. (1988). “The Once and Future Issues of Validity: Assessing the Meaning and Consequences of Measurement.” In H. Wainer & H. Braun (Eds.), Test Validity. Hillsdale, NJ: Erlbaum.

National Center for Research on Evaluation, Standards & Student Testing (CRESST). Glossary.

National Teaching & Learning Forum, Classroom Assessment Techniques.

New Horizons for Learning. (2002). Glossary of Assessment Terms.

Palomba, C., & Banta, T. (1999). Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education. San Francisco: Jossey-Bass.

Smith, K., & Tillema, H. (2003). Clarifying different types of portfolio use. Assessment & Evaluation in Higher Education, 28(6), 625.

Suskie, Linda. (2004). Assessing Student Learning: A Common Sense Guide. Bolton, MA: Anker Publishing Company, Inc.

System for Adult Basic Education Support. Glossary of Useful Terms.

[Adapted from American Public University Glossary, College of Mount St. Joseph Proposed Glossary, and Dr. Shirley Piazza Draft Glossary]