
Reflections from a Director of Assessment

I am presently in my sixth year as Director of Assessment at Rhode Island College's Feinstein School of Education and Human Development (FSEHD). Serving in this capacity has been an enriching and rewarding experience for me. It has also allowed me to work with and learn from many highly talented, enthusiastic, and insightful faculty members and administrators. Since 2006, FSEHD has been engaged in an ongoing process of evaluating the capacity and effectiveness of our assessment systems for initial teacher preparation and advanced programs and making changes consistent with the evaluation results. Since that time, the entire initial programs assessment system at FSEHD has been revised, as have significant portions of the assessment system for advanced programs. We have "closed the loop" (i.e., completed a full cycle of field testing and implementation) in some areas and are still in the planning stages in others. We have prepared for and participated in more state program review and national accreditation visits than I care to count. As we prepare for yet another set of accreditation and program review visits, I have been reflecting on some of the challenges I have encountered and the lessons I have learned in my role as Director of Assessment. In particular, I would like to describe some inherent conflicts or tensions in higher education assessment (and particularly assessment in teacher education), ways to circumvent these conflicts, and lessons that I would offer to others attempting to develop or revise a higher education assessment system.

Inherent Conflicts

In order to understand some of the conflicts, or tensions, inherent in higher education assessment, it is important to understand the origins of the higher education assessment movement. Through the 1970s, "assessment" per se was not of much interest in higher education; the value of a post-secondary education was assumed, and institutions were not expected to reveal to external audiences what was happening in their classrooms (Huba & Freed, 2000). Additionally, external accrediting bodies evaluated institutions on the basis of inputs (e.g., financial resources and faculty qualifications) rather than outputs (McPherson, 2007). Institutional assessment of student outcomes was on few people's horizons.

Some scholars and historians trace the prominence of higher education assessment to the 1983 publication of A Nation at Risk, which focused on the failings of elementary and secondary education but also pointed out that American colleges and universities were responsible for producing "failing" public school teachers. This gave rise to calls for "accountability" at all levels of education and to demands that educational institutions provide clear, unequivocal evidence that they were functioning properly (Wall-Smith, 2011). In this context, assessments aligned with clear educational standards were expected to exert positive effects on student achievement and on education overall.

Almost simultaneously, the 1984 Study Group on the Conditions of Excellence in Higher Education produced a report entitled Involvement in Learning which recommended that institutions establish high student expectations, create active learning environments, provide students with prompt, useful feedback, and use institutional performance data for improvement (Banta & Associates, 2002). Subsequently, attendees at the First National Conference on Assessment in Higher Education in 1985 participated under dual auspices. Banta writes, "Many were there under the banner of Involvement in Student Learning..., seeking reasonable and valid ways to gather information to improve curriculum and pedagogy. At least as many (and probably more) were there in response to a brand new mandate" (2002, p. 8).

Soon thereafter the National Governors Association report, Time for Results (1986), recommended that states require colleges to assess what students actually learn while in college. In 1988, the U.S. Secretary of Education's Procedures and Criteria for Recognition of Accrediting Agencies specified that accrediting agencies were required to evaluate whether or not institutions maintained clearly specified objectives, documented the educational achievement of students, publicized objectives and results, and used assessment information for improvement. The 1990s witnessed a burgeoning of state-level accountability requirements for higher education. Furthermore, the U.S. Department of Education's Commission on the Future of Higher Education (aka the Spellings Commission) concluded in 2006 that "improved accountability is vital" and that "student achievement, which is inextricably connected to institutional success, must be measured on a 'value-added' basis" (p. 4).

From the 1990s forward, the terms "assessment" and "accountability" have often been used together, if not interchangeably. However, the two terms have very different origins. "Assessment" comes from the Latin ad+sedere, "to sit beside," and originally described the process of giving guidance and feedback to students on their learning. "Accountability," on the other hand, is derived from the Latin accomptare (to account), a prefixed form of computare (to calculate), which in turn stems from putare (to reckon). The concept of accountability carries with it the notion of being subject to an obligation to report, explain, or justify something (dictionary.com). At their essence, these two terms are in conflict.

Indeed, Banta (2002) postulates that assessment, as it is currently used, has three meanings: 1) determining and providing continuous feedback on an individual's abilities; 2) evaluating programs to improve curricula and pedagogy; and 3) benchmarking institutional and/or system performance. Banta's third definition corresponds closely to an accountability framework for assessment. Kuh and Ewell (2010) further posit that "assessment," as it is understood in the United States, has largely lost its original focus on the individual and has instead come to denote the process of gathering evidence about the performance of groups of people in the aggregate. These distinctions highlight the differences between using assessment for improvement and using it for accountability.

In 2011, institutions of higher education, and particularly schools of education and other professional schools and programs, are operating within the conflicting paradigms of assessment for improvement and assessment for accountability (see Table 1). On the one hand, we are asked to implement assessments and assessment systems that enable us to give targeted feedback to guide individual student learning and program improvement. At the same time, our assessment systems must be designed to allow for aggregation of standardized assessment data and public reporting. The data yielded by our assessment systems must have both formative and summative aspects, guiding continued growth and improvement while also providing evidence of minimal competency. Multiple sets of targets guide our assessments and the design of our assessment systems; some are unique to the institution and created collaboratively by faculty, while others are established by the state or other outside entities. We entreat faculty to engage in an endeavor with which they are at the same time required to comply. Our assessments are required to be meaningful to students, yet standardized across users and contexts. In teacher preparation and other professional programs, the dichotomy between these two paradigms is often reflected in the distinction between accreditation and state program approval.

Table 1: Distinctions between Assessment for Improvement and Assessment for Accountability
Dimension | Assessment for Improvement | Assessment for Accountability
Intent | Formative (improvement) | Summative (judgment)
Purpose | Continued growth and improvement (provide evidence of overall program quality and improvement) | Licensure/certification (establish minimal competency)
Theoretical framework | Constructivist (meaning varies across individuals, over time, and with purpose) | Positivist (meaning is constant across users, contexts, and purposes)
Perspective | Internal (reflects learning from the institution's and student's perspective) | External (reflects outside standards and interests)
Fundamental motivation | Engagement | Compliance
Guise | Accreditation | State program approval
Based on Ewell (2009) and Barrett & Wilkerson (2004)

Validity and Logistical Issues

Assessment validity refers to "the extent to which an assessment measures what it is supposed to measure, and the extent to which inferences and actions on the basis of test scores are appropriate and accurate" (CRESST, 1999). The use of assessments for two distinct purposes therefore poses a validity concern: "The more uses a test is put to, the greater the strains on its validity and the more expensive to determine, since each individual use must be validated separately" (Rabinowitz, n.d.). Wilkerson & Lang (2004) elaborate:

The validity problem in teacher assessment begins with a common confusion about assessment purpose. Colleges of education need to respond to accreditation and approval requirements that are based on different purposes, and these purposes often remain undifferentiated. NCATE accredits units, looking for evidence of overall program quality. That is their purpose. States approve programs, looking for evidence that individual teachers are minimally competent...NCATE conceptual frameworks focus on the unique aspects of graduates of an accredited program; state expectations focus on the consistency of graduate qualifications. While both types of agencies review results for teachers on the same or similar sets of teaching standards, they look at them through a different lens because their purposes are different. The conflicting paradigms of ensuring minimal competence (protecting the public from unqualified practitioners) from the state perspective and preparing unique practitioners from the NCATE perspectives create a potential validity conflict. (Wilkerson & Lang, p. #).

The issues described above are likely applicable to all professional programs that fall under both institutional accreditation and professional program approval by the state or other government entity.

Circumventing the Conflicts

So what is the solution to designing and implementing assessment systems that serve different and often competing functions? Should we design parallel assessment systems, one serving assessment for learning and continuous improvement and the other serving accountability purposes? This would no doubt be extremely burdensome and expensive for all involved. Additionally, assessment experts tend to agree that assessment for learning and assessment for accountability need not be separated and can co-exist in a single, balanced assessment system. Ewell (2002, p. 7) asserts that "both periodic judgment and continuous feedback are important in occasioning institutional learning." Chappuis, Stiggins, Arter, & Chappuis (2005) similarly advocate that a true, high-quality assessment system takes advantage of both assessment for learning and assessment for accountability, as both make important contributions. The challenge, therefore, is to build a balanced assessment system.

A balanced assessment system is a set of interacting assessments focused on serving the needs of different consumers of assessment information for the common purpose of improving education. A balanced system is not a system with an equal number of tests of each kind or in which each assessment carries the same weight. What makes an assessment system balanced is the alignment of different assessments to the different consumers' information needs such that the needs of all consumers are met (Nichols, 2010, p. 1).

Strategically combining assessment types and purposes in a higher education system increases the likelihood that the inherent tensions of such a system can be accommodated and balanced. Mandatory accountability demands and questions will be attended to. At the same time, there will be opportunities to engage faculty and students in the learning and assessment process, incorporate already existing assessments, and focus on areas of growth and improvement. This, in fact, is what we have tried to do for the past five years as the FSEHD unit assessment system was examined, revised, piloted, and refined.

The following six features describe a balanced assessment system and were used to guide the development of FSEHD's unit Assessment System and the plan for its implementation:

  1. The assessments collectively are relevant to announced learning targets.
  2. Each assessment has an announced purpose.
  3. The assessments are conducted at multiple time points.
  4. The system is made up of assessments that are initiated at multiple levels.
  5. Candidates are allowed multiple opportunities to demonstrate knowledge, understanding, and skill development.
  6. The assessments draw on multiple formats—"traditional" and "alternative" alike (Coladarci, 2002, pp. 73-74; Maine Comprehensive Assessment System Technical Advisory Committee, 2000, pp. 3-4).

The assessments are relevant to announced learning targets

The FSEHD Assessment System was specifically designed to provide evidence of student achievement of essential learning targets as identified by the school and the state. These include the unit's Conceptual Framework, the Advanced Competencies (linked to the unit's Conceptual Framework), the Unit Dispositions, and the Culturally Competent Teaching Areas, as well as the Rhode Island Professional Teaching Standards developed by the state. The targets of each component of the assessment system are public, and the rubrics/criteria for judging student performance on each learning target are explicit. Because various assessments within the system serve different functions, it is not essential that each assessment align with every set of learning targets. Instead, the degree to which each set of learning targets is addressed is balanced across the assessments in the system.

Each assessment has an announced purpose

The FSEHD Assessment System has been explicitly designed to make clear the purpose each assessment has within the system. Each assessment within the system serves one of the following purposes:

  • Admission: Evaluation of candidate qualifications to enroll in an FSEHD advanced program.
  • Formative: Evaluation of candidates as they proceed through their programs, identifying weaknesses in candidates and programs so that student remediation or program improvements can be offered in a timely fashion.
  • Summative: Evaluation of candidates at the end of an advanced program to ensure that candidates are qualified to graduate and to identify strengths and weaknesses of the programs and the unit.
  • Post: Post-graduation evaluation of candidates, used for program and unit evaluation.

In addition, there are at least six potential audiences for assessments in the balanced assessment system. Assessment data can be utilized to address questions and concerns relevant to students, faculty, programs, the institution, the state, and accrediting bodies. Examples of questions and concerns pertaining to various audiences over time include (but are not limited to):

  • Students: Am I improving over time? Am I succeeding at the level that I should be? What help do I need?
  • Faculty: Does this candidate meet the admissions or exit criteria for our program? Which candidates need help? What grades should candidates receive? Are my instructional strategies working?
  • Program: Is our program effective? How can it be improved? Which candidates are making adequate progress? Are our candidates ready for the workplace or the next step in learning?
  • Institution: Who is applying to our programs? Are programs producing the intended results? How should we strategize to achieve success? Which programs need/deserve more resources? (Stiggins, 2001, pp. 11-12)
  • State: Do teacher candidates have the required basic skills? Are our teacher graduates competent beginning teachers? Have they mastered the competencies required by the state? Will they cause no harm and have a positive impact on student learning?
  • Accreditors: Does the assessment system reflect the institution's unique conceptual framework? Does the system include measures that are of sufficient quality to inform the important aspects of faculty, curriculum, instruction, and candidate performance? Can the system regularly and systematically collect, compile, aggregate, summarize, and analyze candidate assessment data to improve candidate performance, program quality, and institutional operations? (NCATE, 2008)

The assessments are conducted at multiple time points

As part of a balanced system, the FSEHD Assessment System includes four checkpoints at which knowledge, skills, and dispositions are assessed: admission, formative (e.g., mid-point), exit, and post-graduation. This allows students, faculty, and administrators to monitor candidate progress toward mastery of relevant learning targets and to identify areas for instructional and program improvement. Furthermore, assessments and rubrics at the various checkpoints are aligned so that the progress of the same students can be tracked more easily over time.

The system is made up of assessments that are initiated at multiple levels

According to the Standards for Educational Accountability Systems established by the Center for Research on Evaluation, Standards, and Student Testing, high quality assessment systems "include data elements that allow for interpretations of student, institution, and administrative performance" (Baker et al., 2002, p. 2). Including assessment data from multiple levels (e.g., classroom, program, institution) facilitates the process of identifying areas for improvement at each level (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999) and allows institutions to respond to accountability mandates. Consequently, the assessments in the FSEHD Assessment System are initiated at the individual course (I), program (P), unit (U), and state or national (SN) levels. The system includes GPA and grades that are derived from course-specific assessments (I); however, many courses also include assessments that are common across a particular program (P) or the unit (U). Several major performance assessments are required at the program level (P) yet are assessed with a unit-wide rubric (U). Candidate self-evaluations and faculty evaluations of candidates, on the other hand, are initiated at the unit level (U). Furthermore, state or national professional licensure/certification exam results (SN) provide additional information regarding the achievement of initial and advanced program graduates. Graduate follow-up and employer surveys are initiated at the unit level (U) but are disaggregated at the program level (P). The use of multiple measures allows students, programs, and the unit to be assessed through multiple lenses and allows for the triangulation of evidence used to make inferences about student achievement and program effectiveness. This, in turn, increases the validity of such inferences.
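
To make the idea of initiating data at one level and disaggregating it at another more concrete, here is a minimal sketch in Python, using entirely hypothetical survey records and field names (the article does not describe FSEHD's actual data tools), of how unit-level employer survey ratings might be summarized for the unit as a whole and then broken out by program.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical unit-level (U) employer survey records: each rating is tagged with
# the graduate's program so the same data can answer unit- and program-level questions.
survey_responses = [
    {"program": "Elementary Education", "item": "Plans instruction effectively", "rating": 4},
    {"program": "Elementary Education", "item": "Plans instruction effectively", "rating": 3},
    {"program": "Secondary English", "item": "Plans instruction effectively", "rating": 5},
    {"program": "Secondary English", "item": "Plans instruction effectively", "rating": 4},
]

def mean_rating(responses):
    """Average rating across a set of survey responses."""
    return mean(r["rating"] for r in responses)

# Unit-level (U) summary: a single aggregate figure for accountability reporting.
print(f"Unit mean rating: {mean_rating(survey_responses):.2f}")

# Program-level (P) disaggregation: the same records, grouped by program,
# so each program can examine its own results for improvement purposes.
by_program = defaultdict(list)
for response in survey_responses:
    by_program[response["program"]].append(response)

for program, responses in sorted(by_program.items()):
    print(f"{program}: mean rating {mean_rating(responses):.2f} (n={len(responses)})")
```

The design point illustrated here is simply that data collected once at the unit level can serve both paradigms, provided each record carries the identifiers needed to disaggregate it later.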

Candidates are allowed multiple opportunities to demonstrate knowledge, understanding, and skill development

The design of the FSEHD Assessment System affords candidates multiple opportunities to demonstrate their growth in the learning targets identified by their programs and the unit. Additionally, the use of common learning targets, criteria, and rubrics as candidates progress through their programs clarifies expectations and enables faculty and candidates to observe candidate growth as they participate in multiple opportunities to demonstrate their knowledge, understanding, and skill development over time. The use of multiple assessments with multiple formats, as opposed to a single, "one-shot" assessment, increases the validity of the inferences subsequently made regarding the knowledge, skills, and dispositions of advanced programs candidates.

The assessments draw on multiple formats—"traditional" and "alternative" alike

There are many methods for assessing learning, yet no single assessment format is adequate for all purposes (American Educational Research Association, 2000). Consequently, the FSEHD Assessment System is balanced in that it allows candidates to demonstrate their knowledge, skills, and dispositions using a variety of methodologies. The assessment methodologies used in the system are classified as follows:

  • Selected Response and Short Answers: Assessments that ask candidates to choose from pre-selected responses, such as multiple choice, true/false, or matching questions. Short answer questions are also included here. These assessments are a good match for evaluating content knowledge and to a lesser extent for the application of knowledge to solve problems.
  • Constructed Response: Assessments that require substantial responses that candidates construct for themselves on paper. Included here are essays, graphic representations, case studies, and other ways for candidates to demonstrate their knowledge and skills on paper. This method of assessment is often a good match for evaluating content knowledge and the application of knowledge to solve problems.
  • Performance Tasks: Assessments that require candidates to provide evidence of their knowledge or skills by demonstrating them "in the moment" or by creating artifacts that are similar to those created by professionals in their area of interest. Included here are projects, presentations, and exhibitions. This method is a good match for evaluating candidates' skills as practitioners in their field.
  • Observation and Personal Communication: Assessments that classroom faculty carry out as part of their daily teaching and assessment repertoire as they observe and communicate with candidates, including formative assessments such as checklists, anecdotal records, conferencing, journal entries, and guided conversations. This method also includes candidate self-evaluation, as candidates reflect on their experience and learning and evaluate their own strengths and weaknesses. This method is a particularly good match for evaluating the dispositions of candidates. (Smith & Miller, 2003, p. 17)

All four assessment formats are utilized across the four assessment checkpoints. This attempt to "balance" assessment methods yields diverse and redundant forms of evidence that can be used to check the validity and reliability of the resulting judgments and decisions (Wiggins, 1998).

Lessons Learned

Creating and implementing a balanced assessment system that addresses both learning/improvement and accountability functions is no easy feat. It takes time, effort, patience, and cooperation among many stakeholders who must also accept that the system will never be perfect and will always be subject to revision. Along the way, I have learned some important lessons that may be useful to others who are poised to engage in a process of designing or revising a higher education assessment system:

Faculty want assessment to be meaningful

Most faculty members are willing to comply with external mandates and accountability mechanisms. However, they deeply desire for assessment to be meaningful to them and their students. At the beginning of our assessment revision process, many faculty members were asking to revisit the assessment system in order to make it more meaningful for them, their students, and their programs. Throughout the process, they were also extraordinarily helpful, collaborative, and generous with their time: offering feedback, piloting new assessments and assessment practices, and demonstrating patience with the technological and logistical challenges and oversights associated with the new assessment system.

Don't make accreditation the primary focus

Accountability demands are externally imposed and are ever present in education. However, they should not dictate the design or content of an entire higher education assessment system. The assessment for learning/improvement component of an assessment system presents faculty with the opportunity to ask and answer their own questions, such as: "What do we want to know about our students/ourselves/our program/our institution?" and "What evidence do we need to answer these questions?" Focusing on issues that are relevant to the context and important to stakeholders helps ensure greater buy-in to the assessment system and process. Moreover, if the assessment system is truly balanced, assessments that respond to accountability concerns will have a place and will in many cases yield data that answer questions faculty themselves want answered.

Start with the end in mind

In redesigning our assessment system, we have found it useful to use a "backward design" approach, starting with assessments that provide us with evidence about what we would like to know about our students at the end of their program. Consequently, as we have re-examined and redesigned the assessment system, we started with the exit time point and worked our way forward to admission. This helped provide us with a clear picture of the purpose and expectations of candidate preparation. It also aided us in uncovering how we expected students to grow and develop in our programs, allowing us to instill the necessary consistency within the assessment time points built into the system.

The process must be participatory

Designing a balanced assessment system is not the work of a single individual. A committed, knowledgeable, and open-minded faculty assessment committee is essential to moving forward in the task of establishing a quality assessment system. Committee members are familiar with the unique situations, needs, and constraints of their programs and are crucial to arriving at solutions that will meet the needs of all students, faculty, and programs. They are also often better equipped and more credible when it comes to communicating about the assessment system to their colleagues in their programs and departments. However, participation in the process must extend beyond the assessment committee, to include opportunities for wider faculty input on new assessments and assessment processes via presentations and discussions at retreats, faculty meetings, debriefing meetings, and other venues, as well as opportunities for faculty to provide feedback or express concern through face-to-face meetings, email, and surveys.

Phase things in

It is neither desirable nor feasible to make several large changes at once. It is preferable to start slowly, allow early adopters to pilot new assessments and processes, collect feedback, revise the assessment/process, and repeat this process until the assessment or process is of sufficient quality and faculty have gradually grown more comfortable with the idea and content of changes in their practice. This sequence also enables more reluctant faculty to hear firsthand from their colleagues who have tried the new assessments or processes. Hearing what works, that a proposed change is not excessively burdensome or difficult, or that students reacted positively to a change can be a powerful motivator to hesitant faculty members. The implementation of FSEHD's revised assessment system consisted of this iterative process which took time but also allowed faculty and the institution to pilot and phase in changes before making them final or mandatory.

Relationships matter

Ultimately, the success of a change process rests largely on the quality of the relationships among those involved in implementing the change. A participatory, collaborative assessment system design and implementation process helps foster relationships among individuals with diverse and often conflicting viewpoints. It also establishes trust where suspicion or resentment may originally have been present. Meeting face to face with a faculty member with whom one has exchanged terse emails can "humanize" an interaction and allow diverse parties to understand and relate to those with whom they disagree. Admitting where mistakes have been made, attempting to respond to difficulties/problems in a timely fashion, and acknowledging the frustration that faculty members are undoubtedly experiencing also go a long way toward establishing relationships that will in the long run build good will and support for change.

Quality matters, too

Throughout the design and implementation process, it is important to monitor and act on findings related to assessment quality. Analyses must be conducted to examine the fairness, accuracy, and freedom from bias of the various components of the assessment system. These procedures permit valid inferences to be made regarding students, faculty, programs, and the institution. Further, reliability (the consistency of scores across raters, over time, or across different tasks or items that measure the same thing) needs to be examined continuously, as it is a necessary condition for validity. It is unethical to act as if these characteristics of an assessment system do not matter or are automatically present. It is also unfair to judge students, or to ask faculty, programs, or institutions to change their practice, based on data from assessments or processes that may be deeply flawed. While some faculty and administrators are uncomfortable with this aspect of the assessment conversation, it cannot be ignored.
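
As a simple illustration of the kind of inter-rater reliability check described above, the brief sketch below (a minimal Python example using hypothetical rubric scores and a basic percent-agreement index; the article does not specify which reliability statistics FSEHD uses) compares two raters' scores on the same set of candidate work samples.

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of work samples on which two raters assigned the same rubric score."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same set of work samples.")
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

# Hypothetical rubric scores (1-4 scale) from two raters on eight work samples.
rater_a = [3, 4, 2, 3, 4, 1, 3, 2]
rater_b = [3, 4, 2, 2, 4, 1, 3, 3]

print(f"Exact agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 75%
```

In practice, chance-corrected indices such as Cohen's kappa are generally preferred to raw percent agreement, since two raters can agree fairly often by chance alone; the point here is only that rater consistency is something that can, and should, be quantified and monitored.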

Support from leadership is critical

As cliché as it may sound, leadership support for the establishment of a balanced assessment system is key. Administrative leadership and assessment personnel need to share a common vision and communicate a consistent mission to audiences within and outside the institution. This can make the difference between building positive momentum with faculty support and stalling in the middle of the process or, even worse, backtracking and confusing or frustrating faculty who thought they understood where they were headed.

Data must be accessible and user friendly

Access to assessment data is crucial to fostering faculty engagement and consistent data exploration and use. Research has also demonstrated that educators who have ready access to data tend to use data more frequently and more effectively. In addition, educators who explore their own data "invariably want more detailed data, or want data presented in different ways, than paper reports typically provide... Preformatted data reports, while useful, cannot be cross-analyzed or connected with other data." (McLeod, 2005, p. 2) This underscores the continued need for data that faculty, programs, and institutions can "get their hands on." This is an ongoing challenge in a balanced assessment system. Strauss (2010) notes, "One knock on this [balanced assessment] approach is that these [multiple, diverse] types of evidence can't be processed and reported as quickly as test scores." The lesson that I have learned here is that this is an important, yet challenging aspect of a balanced assessment system. We have experimented with a combination of traditional and electronic data collection and reporting strategies, and have still not arrived at the ideal solution for providing stakeholders with the data they need in the time and format in which they require it.

You're never finished

The assessment system can always be improved. People will notice things that were overlooked, technical studies will reveal weaknesses in the system, and key components will need to be revised. Institutional and political contexts and policies may change, necessitating small or even radical revisions. Continual improvement is part of the status quo.

Conclusions

Building, refining, and implementing a higher education assessment system is fraught with tensions associated with conflicts between differing paradigms, purposes, and audiences. In particular, it is challenging to operate an assessment system that serves both assessment for learning/improvement and accountability purposes. However, attempting to balance the content and purposes of an assessment system makes it easier for institutions of higher education to meet both learning/improvement purposes and accountability demands. FSEHD's attempt to build a "balanced assessment system" has yielded important lessons that will potentially be useful to others in the same position.


References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: Authors.

Banta & Associates. (2002). Building a scholarship of assessment. San Francisco: Jossey-Bass.

Baker, E. L., Linn, R. L., Herman, J. L., & Koretz, D. (2002). Standards for educational accountability (Policy Brief 5). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing.

Barrett, H.C. & Wilkerson, J. (2004). Conflicting Paradigms in Electronic Portfolio Approaches: Choosing an Electronic Portfolio Strategy that Matches your Conceptual Framework. Available: http://electronicportfolios.com/systems/paradigms.html

Center for the Study of Evaluation & National Center for Research on Evaluation, Standards, and Student Testing (CRESST). (1999). CRESST assessment glossary. Los Angeles, CA: CRESST/UCLA. Available: http://cresst96.cse.ucla.edu/CRESST/pages/glossary.htm

Chappuis, S., Stiggins, R., Arter, J., & Chappuis, J. (2005). Assessment For Learning: An Action Guide for School Leaders. Portland, OR: Assessment Training Institute.

Ewell, P. (2009). Assessment, accountability, and improvement: Revisiting the tension. Occasional Paper #1. Champaign, IL: National Institute for Learning Outcomes Assessment.

Ewell, P. (2002). Perpetual Movement: Assessment after Twenty Years. Boulder, CO: National Center for Higher Education Management Systems.

Haessig, C.J. & LaPotin, A.S. (2007). Lessons Learned in the Assessment School of Hard Knocks. Irvine, CA: Electronic Educational Environment, UC Irvine. Available: http://eee.uci.edu/news/articles/0507assessment.php

Huba, M.E. & Freed, J.E. (2000). Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Boston: Allyn & Bacon.

Kuh, G.D. & Ewell, P.T. (2010). The state of learning outcomes assessment in the United States. Higher Education Management and Policy, 22 (1).

McLeod, S. (2005). Data-driven teachers. Minneapolis: School Technology Leadership Initiative, University of Minnesota. Available at: www.scottmcleod.net/storage/2005_CASTLE_Data_Driven_Teachers.pdf

McPherson, M. (2007). Assessment and Accountability in Higher Education. Cambridge, MA: Forum for the Future of Higher Education.

Measured measures: Technical considerations for developing a local assessment system. (2005). Augusta, ME: Maine Department of Education.

National Council for Accreditation of Teacher Education. (2008). Professional standards for accreditation of schools, colleges, and departments of education. Washington, DC: Author.

Nichols, P. (2010). What is a balanced assessment system? Testing, Measurement & Research Services Bulletin, 11.

Rabinowitz, S. (n.d.) The Integration of Secondary and Post-secondary Assessment Systems: Cautionary Concerns. San Francisco, CA: WestEd.

Rhode Island Department of Education. (2009). Rhode Island program approval process: Educator preparation program approval guidelines. Providence, RI: Author.

Smith, D. & Miller, L. (2003). Comprehensive local assessment systems (CLASs) primer: A guide to assessment system design and use. Gorham, ME: Southern Maine Partnership, University of Southern Maine.

Stiggins, R.J. (2001). Leadership for Excellence in Assessment: A Powerful New School District Planning Guide. Portland, OR: Assessment Training Institute.

Strauss, V. (2010, May 21). How to combine learning, assessment, accountability. Washington Post.

Wall-Smith, S. (2011). A History of Higher Education Assessment. Fitchburg, MA: Fitchburg State University.

Wiggins, G. (1998). Educative assessment. San Francisco, CA: Jossey-Bass.

Wilkerson, J.R. & Lang, W.S. (2004). A standards-driven, task-based assessment approach for teacher credentialing with potential for college accreditation. Practical Assessment, Research & Evaluation, 9(12). Retrieved October 27, 2011 from http://PAREonline.net/getvn.asp?v=9&n=12

