STANDARD 2. ASSESSMENT SYSTEM AND UNIT EVALUATION
The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the unit and its programs.
Overview of the Assessment System
Our assessment system was designed by an assessment committee through faculty collaboration and is guided by the Unit’s Conceptual Framework and by professional, program, and state standards. The system uses technology to collect and analyze data on candidate performance and unit operations and has three components.
The first component is the context in which the Unit operates. Operation of the Unit is based on the Conceptual Framework, national and program standards, and state-level expectations. These and other institutional documents (e.g., mission statements, bylaws, strategic plans) provide that context, and candidate proficiencies and Unit performance are evaluated in terms of the principles and expectations in these documents.
Component II consists of the artifacts collected at the five transition points of the assessment system: program entry, professional sequence, professional performance, program completion, and post-completion. At transition points 1-4, candidates must meet expectations in order to proceed in a program. At transition point 5, assessment data are used to evaluate programs and make changes.
The third component is the assessment of the artifacts collected at the five transition points using agreed-upon rubrics. Data from these rubrics, together with survey results, help determine levels of candidate, program, faculty, and unit effectiveness. Rubrics and assessment tools are reviewed to ensure fairness, consistency, and reliability and to eliminate bias, and they are aligned with the Unit’s Conceptual Framework and with national, program, and state professional standards. Most data on candidate artifacts are collected by faculty through critical or advanced critical tasks embedded in their courses, by individuals who supervise field experiences, and, in the case of advanced programs, by advisors who supervise candidate research projects. Data for initial licensure programs are collected and stored in LiveText and other Unit databases; data for advanced programs are stored in departmental databases. In addition, the University’s PeopleSoft system is the primary storage system for candidate grades and GPAs and the primary database for demographic information on students and faculty.
Candidate preparedness to advance to the next point in the program is monitored through assessments such as GPA, course grades, field dispositions, student teaching evaluations, and graduation audits. If data indicate a candidate is not prepared to transition, the candidate may work with his or her faculty advisor or the Associate Dean for Teacher Education, or be referred to the Teacher Education Student Review Committee. Possible actions include repeating courses and field experiences, denial of advancement to the next transition point, support in seeking other career options, academic probation, or academic dismissal. In the case of advanced programs, the Graduate School and the faculty advisor monitor candidates’ progress through their programs of study.
2a. Assessment System
2a.1-2 Evaluation/Refinement/Collection of Information
At the time of our last NCATE visit in October 2001, the BOE report cited the following weakness in the unit assessment system:
The assessment system does not include a plan or data system from which the unit will be able to engage in program evaluation and improvement.
Rationale: Although the Unit has given some thought to developing an evaluation system, there is no conceptual design for relating the various categories of data currently being collected. The Unit should begin to develop an information system architecture that will be flexible enough to meet internal as well as external reporting needs (p. 18).
In response to this evaluation, the Unit began a major effort to revise the Assessment System in the fall of 2002. Responsibility for oversight of the unit assessment system now rests with the unit head, specifically with the Associate Dean for Teacher Education, who is also the NCATE Coordinator. While each department is responsible for the collection, analysis, evaluation, and use of data for improvement of its individual programs, the Office of the Associate Dean tracks progress and, where needed, coordinates efforts to ensure that each element of Standard 2 is met. In addition, the Teacher Education Committee serves in an advisory capacity to the unit head and Associate Dean in matters relating to candidate and unit assessment.
Initial Programs. Initial efforts focused on the undergraduate initial licensure programs and involved examining ways to “link the component parts (of the unit assessment system) in a meaningful way”. A committee of faculty was formed to align professional and state standards by creating a “crosswalk” between the standards and the portfolio-based system of assessing teacher candidates. In the fall of 2003, changes to the portfolio system were discussed (e.g., having students create electronic portfolios and assessing student work using rubrics). As the planning evolved, it became evident that faculty were loyal to the portfolio system but felt the changes under discussion might undermine the philosophical purposes of the portfolio (i.e., to have students articulate their professional development over time).
Faculty agreed to move forward and create a new plan to assess candidates, recognizing that it was unclear how the previous portfolio review system would be incorporated. In the fall of 2003, a team attended AACTE/NCATE and upon return, proposed to undergraduate program faculty in the Unit that a new assessment system be developed based on a common rubric approach. The rubrics would assess candidate performance on critical tasks, would be aligned with the Unit’s conceptual framework, professional, state, and program standards, and would draw upon a web-based electronic management system, LiveText, for data collection and analysis. The extant portfolio system would remain in use, and a task force was convened by the Department of Teaching and Learning’s Chair and charged to study ways the system might be revised and integrated into the new assessment plan.
The common rubric system with critical tasks became partially operational in fall 2004, underwent revision in fall 2005, became functional in spring 2006, and became fully functional in spring 2007 (related departmental meeting minutes E-exhibit 2a.1-2.1, T&L Assessment Committee minutes E-exhibit 2a.1-2.2, complete plan E-exhibit 2a.1-2.3). By the fall of 2007, the Portfolio Task Force had completed its work (related minutes E-exhibit 2a.1-2.4). The revised portfolio system was presented to and accepted by the faculty, and implementation of the new system was set into motion (E-exhibit 2a.1-2.5). It was decided that the portfolio and its presentation would become the final critical task in the candidates’ program, and the first cohort of candidates to present portfolios under the new system would be those student teaching in the spring of 2008. Because of these changes, we do not present the results of former portfolio reviews in this report; we do have data should members of the review team wish to see it. We refer to portfolios where appropriate to supplement evidence of meeting standards, and a number of candidates’ portfolios are available in the Hard Copy Exhibit Room under Standard 2 and in LiveText, an online data collection and management system.
Special Program Accreditation (SPA). The Special Education Program Area in the Department of Teaching and Learning prepares candidates for licensure in the field of special education at the graduate level. The program report was received on October 18, 2007. Two standards were met with conditions; the remaining standards were not met. The primary concern related to the need for the program to revise assessments in order to provide evidence that standards were met for each disability area (rather than for the program as a whole). The program faculty are in the process of preparing a follow-up report. For detailed assessment information on that program, please refer to the CEC SPA report.
Advanced Programs for Teachers. Assessment plans for the advanced programs for teachers have also undergone complete revision since the previous NCATE visit. In fall 2004, graduate directors were convened to collaborate on outlining a new assessment plan. Candidates would be assessed on advanced critical tasks and other performances using rubrics aligned with the Unit’s Conceptual Framework and the National Board for Professional Teaching Standards (NBPTS). While initial implementation was directed toward admission and the final project, additional assessments were piloted in the spring and summer of 2007. In the fall of 2007, the assessment plan was presented to and accepted by the graduate directors; it will become fully operational in spring 2008 (E-exhibits 2a.1-2.6 and 2a.1-2.7). Because the plan is new, the majority of the data for Standard 1 of this report are drawn from the previous assessment process, under which candidates were assessed at entrance, at the mid-point, and at the end of their program. It is possible, however, to view results of some of the new assessments within particular advanced program state reports, in particular the Reading Specialist Report (see the report under ESPB Program Reports on the webpage).
Programs for Other School Professionals. The Departments of Educational Leadership and Counseling Psychology and Community Services have assessment plans (E-exhibits 2a.1-2.8, 2a.1-2.9, and 2a.1-2.10). In addition, information related to each program’s assessment system is outlined in each program re-approval report (see the related report under ESPB Program Reports on our webpage). The Instructional Design and Development Program within the Department of Teaching and Learning has developed an assessment system in response to AECT (Association for Educational Communications and Technology) standards (E-exhibit 2a.1-2.11). The graduate program in the Department of Communication Sciences and Disorders is accredited by ASHA, and information related to its assessment plan may be viewed in its accreditation materials (Hard Copy Exhibit 2a.1-2.1). The Reading Specialist program’s assessment plan parallels that of the advanced programs and so is addressed as an Advanced Program throughout the institutional report.
A great deal of work has been done since the previous visit to evaluate and refine the overall assessment system so that the collection and analysis of data on applicant qualifications, candidate and graduate performance, and unit operations lead to improvement of the unit and its programs. For a graphic representation of the unit assessment system, please see 2a.1.12 in the E-exhibit room.
2a.3-4 Key Assessments and Transition Points
The unit’s assessment system includes evaluation measures to monitor candidate performance across the programs as indicated in Table 2a3-4.1.
2a.5 Fairness, Accuracy, Consistency, and Non-bias in the Assessment System
The following list describes the procedures used to establish fairness, accuracy, and consistency and to eliminate bias in programs and the assessment system.
2a.6 Unit Operations Assessment and Improvement
The Office of the Associate Dean for Teacher Education periodically monitors assessment activities to improve programs and unit operations. The Teacher Education Committee, convened twice yearly by the Associate Dean, reviews aggregated findings from candidate assessments (including Praxis scores) and from graduate and advisement surveys. In addition, recommendations and requirements for policy or program change are brought forward for discussion. This body represents stakeholders in teacher education both within and external to the university and approves all program and policy changes.
2b. Data collection, analysis, and evaluation
2b.1-2: Unit’s Process and Timeline to Collect, Summarize, and Analyze Data
The Unit Assessment System includes the collection and analysis of data, and the Unit uses this information for candidate, program, and unit evaluation for initial and advanced programs. Internal (candidate and university) and external sources of data are used at various transition points to assess candidate knowledge, skills, and dispositions, which are aligned with the Unit Conceptual Framework, professional and state standards, and program goals. The data are also used to improve programs and the unit.
Multiple forms of assessment data are collected routinely from candidates, faculty, University supervisors, cooperating teachers, University staff, institutional offices (e.g., the Registrar), and accrediting agencies, as detailed in Table 2b.1-2.1. Table 2b.1-2.2 describes the databases, where they are housed, and how the data are used. Data are analyzed routinely to monitor student progress and to evaluate programs and unit operations.
Initial Programs: Candidate Level. A profile of each candidate is compiled in a cumulative folder and kept in the offices of the Associate Dean. At Transition Point 4, cumulative folders include, but are not limited to, the following performance information for completers of an initial licensure program:
The cumulative files for both initial licensure and advanced program candidates are reviewed by the candidate’s academic advisor, who uses the information to monitor candidate preparedness. Additionally, performance data are monitored by staff in the OAD (or by Graduate School staff in the case of advanced programs for teachers). Files and data that reveal insufficient performance by an initial licensure candidate usually prompt a review of the performance with the advisor or, if more appropriate, the Student Review Committee (E-exhibit 2b.1-2.1). Insufficient performance can result in retaking a course or field experience, developing an improvement plan, seeking a new major, referral to various student services, dismissal from the program, or other appropriate interventions.
Initial Programs: Program Level. Excel/Access spreadsheets, statistical software packages, the University PeopleSoft System, and LiveText reports are used to compile and analyze data. Key components of program review for initial programs include:
The Undergraduate Assessment Committee in the Department of Teaching and Learning conducts an initial analysis of all data in preparation for the annual faculty assessment retreat. The committee summarizes the data related to critical tasks and prepares charts and tables for faculty review as well as the agenda for the sessions. The first meeting was held in May of 2006 (E-exhibit 2b.1-2.2) and the second in April of 2007 (E-exhibit 2b.1-2.3). After that session, the Committee decided to move the retreat to January to allow faculty the full spring semester to complete the necessary work for program change such that new plans could be implemented the following semester.
Completer and administrator survey data provide descriptive statistical information (e.g., percentages) related to graduates’ and administrators’ perspectives on the unit’s teacher preparation programs. The Office of the Associate Dean collects the data annually in collaboration with BESAR (Bureau of Educational Services and Applied Research). Once the reports are submitted to the Associate Dean, they are routed to program area coordinators, who review the data with faculty and prepare a summary report (E-exhibit 2b.1-2.4). Complete data related to the survey reports of 2006 and 2007 are available in the hard copy exhibit room under 2b.1-2.2.
The Advising Satisfaction Survey (E-exhibit 2b.1-2.5) data are compiled using Excel (as of spring 2008) and provide descriptive information related to students’ perceptions of advising in the unit. Results are shared with advisors in the Office of Advising and Admissions and with the appropriate program area faculty, who interpret the data and plan data-driven program improvements. Full survey reports for spring 2007 are available in the hard copy exhibit room under 2b.1-2.3. For a summary report, see E-exhibit 2b.1-2.6.
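As a purely illustrative sketch (the Unit compiles these data in Excel; the survey item names and ratings below are hypothetical stand-ins), the kind of descriptive percentage summary described above can be expressed as:

```python
from collections import Counter

def summarize_responses(responses):
    """Compute descriptive percentages for Likert-scale survey items.

    `responses` maps a survey item to a list of ratings (e.g., 1-5).
    Returns, per item, the percentage of respondents selecting each
    rating, rounded to one decimal place.
    """
    summary = {}
    for item, ratings in responses.items():
        counts = Counter(ratings)
        total = len(ratings)
        summary[item] = {
            rating: round(100.0 * count / total, 1)
            for rating, count in sorted(counts.items())
        }
    return summary

# Hypothetical advising-survey responses: item -> list of ratings
data = {
    "advisor_availability": [5, 4, 4, 3, 5, 4, 2, 5],
    "advising_accuracy": [4, 4, 5, 5, 3, 4, 4, 5],
}
print(summarize_responses(data))
```

The same percentage tables could, of course, be produced with spreadsheet pivot functions; the sketch only makes the aggregation step explicit.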
Advanced Programs: Candidate Level. A profile of each candidate in advanced programs is compiled in a cumulative folder and kept in the offices of the departments in which the programs are administered, as well as in the Office of the Graduate School. For completers of an advanced program, the Transition Point 4 cumulative folder includes the following:
The cumulative files for advanced program candidates are reviewed by the candidate’s academic advisor, who uses the information to monitor candidate preparedness. Additionally, performance data is monitored by the Graduate School staff.
Advanced Programs: Program Level. Excel/Access spreadsheets and the University PeopleSoft System are used to compile and analyze data. The Excel database is configured so that data related to distance programs can be disaggregated. In the previous assessment system, once candidates were admitted to the program, GPA and the assessment of the scholarly project were used for program evaluation. Currently, advanced program candidates’ performance on Advanced Critical Tasks, knowledge of research, and internships/practica are used. Data from common rubrics and assessment tools, which are aligned with the Unit Conceptual Framework and national and state standards, are tabulated and summarized to reveal program strengths and areas that need improvement.
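The disaggregation described above can be sketched in outline as follows. This is an illustration only: the field names (`delivery_mode`, `score`) are hypothetical stand-ins for columns in the Unit’s Excel database, not its actual layout.

```python
from statistics import mean

def disaggregate_scores(records, key="delivery_mode"):
    """Group rubric-score records by a field (e.g., delivery mode)
    and report the count and mean score per group, so that distance
    and on-campus cohorts can be compared separately.
    """
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(rec["score"])
    return {
        group: {"n": len(scores), "mean": round(mean(scores), 2)}
        for group, scores in groups.items()
    }

# Hypothetical critical-task scores for two delivery modes
records = [
    {"delivery_mode": "on-campus", "score": 3},
    {"delivery_mode": "on-campus", "score": 4},
    {"delivery_mode": "distance", "score": 4},
    {"delivery_mode": "distance", "score": 3},
]
print(disaggregate_scores(records))
```

Grouping on a single column in this way mirrors what a spreadsheet filter or pivot table does when distance-program rows are pulled out for separate reporting.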
The Graduate Program Directors in the Department of Teaching and Learning are responsible for the collection and analysis of all critical task data. They also engage in an annual assessment review related to advanced programs. Faculty in programs in Educational Leadership and School Counseling are responsible for the collection and analysis of data as described in their assessment system plans (Please see related state program reports under ESPB Program Reports on the UND NCATE website.)
2b.3: Records of Student Complaints and Resolutions
Faculty advisors and advisors in the Office of the Associate Dean are committed to helping candidates resolve issues early on. The Unit monitors and addresses complaints in two ways. When general complaints come directly to the Office of the Associate Dean for Teacher Education, written documentation is retained in a file along with notes indicating what has been done to resolve the difficulty (Hard Copy Exhibit 2b.1-2.4). Generally, the Associate Dean reviews the letter or e-mail, contacts the appropriate person(s) who may assist the candidate, and follows up to ensure that the difficulty has been resolved. In the case of academic complaints related to grades or discrimination, candidates are referred to the grievance policies outlined on the college’s webpage (E-exhibit 2b.3.2).
2c. Use of Data for Program Improvement
2c.1. Assessment Data Indications about Candidate Performance
Data are collected in a variety of ways across and within programs. Overall findings indicate that candidates in our initial programs know their content, know how to teach it, and have the dispositions to help all students learn. Candidates are, for the most part, satisfied with their programs, as are those employing them. As faculty reviewed the data, concerns arose related to the assessment of students and to working with diverse learners, and action plans have been developed to address these concerns.
Candidates in advanced programs are able to study and explore content and pedagogy through a variety of courses. They have successfully demonstrated their skills, knowledge and dispositions in course related assignments, action research projects and the culminating research project.
Graduate Directors have begun discussions centered on the assessment of dispositions and the establishment of a data collection process that will offer more direct measures of candidates’ knowledge and abilities while still providing maximum choice in the program of study, a hallmark of the graduate degree program.
2c.2.-3 Data Use to Improve Performance
The Unit has an assessment system that promotes on-going improvement in candidate, faculty, program, and unit performance. Candidates have access to the results of nearly all performance information (with the exception of the raw data from their admissions application). Candidates have opportunities to improve their performance by revising assignments, completing additional or extended (highly supervised) field experiences, and retaking courses. In addition, the Director of Field Placement, cooperating teachers, and University supervisors work closely with student teachers to help them understand student teaching performance criteria and to apply, throughout the semester, the continuous assessment feedback on knowledge, skills, and dispositions.
Faculty analyze course evaluation (University Student Assessment of Teaching [USAT]) data and make changes to course assignments, pedagogy, course materials, or other aspects of course design. The annual faculty evaluation processes are designed to be supportive of improving teaching. Based on teaching data (USATs, formative assessments, or other information), faculty are encouraged to set teaching goals and to develop course changes that align courses with the adopted national standards, state expectations and program goals.
The faculty peer evaluation process is designed to support faculty in achieving departmental goals for tenure and promotion. If the evaluation process reveals ineffective teaching, suggestions for improvement by the peer committee are to be taken under consideration by the department chair. The department chair works with the faculty member to plan expectations for improvement. If progress towards expectations is not sufficient, the chair works with the College dean to determine the next steps.
When assessment data drive program or course changes (to content, methods, or field courses), program area faculty discuss and initiate the changes, completing the appropriate curriculum forms. The changes are brought to elected committees at the department, college, and university levels (and to the State Board of Higher Education, if the change process requires).
Individual departments develop their own plans for routine review of assessment data. At least once per year, departments present the results of their review process at an all-college Assessment Day.
2c.4 Data-Driven Changes
Extensive effort has gone into developing and organizing the unit assessment system. Changes within the assessment system over the last three years include, but are not limited to:
Other Program and Unit Improvements
Data-driven program decisions are now being made on a routine basis. A sampling of those changes is offered below:
A technology-supported connection between the Education Building on campus and an elementary classroom in the Grand Forks public schools was developed. Its purpose is to let teacher candidates study, through a live interactive video feed, how skilled elementary teachers manage their classrooms. A classroom on campus is connected to a school classroom in which candidates can see and interact with the teacher and students involved in a range of management decisions, including classroom routines, student groupings, transitions, and engaged learning for all students. In October 2007, at an elementary education program area meeting, faculty were scheduled to report back on this initiative.
The field experience that is co-requisite with the methods block is active and sustained. Candidates are in the field for three consecutive weeks and candidates are assessed on teaching four lessons in the field (at least one enhanced with technology). One lesson, chosen by the candidate, is assessed as a Critical Task.
2c.5. Sharing Assessment Data
Assessment information is regularly shared with stakeholders throughout the year. Candidates are informed about assessment and receive on-going feedback about their performance level through grades and personal LiveText messages reporting performance on Critical Tasks. In addition, candidates participate in dispositions and student teaching evaluation assessments.
Faculty have timely access to course evaluation (USAT) data. Also, faculty peer evaluation committees review the work of the faculty regularly and department chairs are required to review faculty performance annually.
Assessment committees in collaboration with the Office of Associate Dean for Teacher Education share candidate performance data with faculty at annual retreats. The Teacher Education Committee regularly reviews unit level data to support and recommend changes to improve the unit. The extended faculty (faculty in the College of Arts and Sciences and the College of Business and Public Administration) receive Praxis II scores from the Office of the Associate Dean. In addition, they review the Lesson Plan, a critical task embedded in methods courses, and can use the tools in LiveText to analyze and interpret candidates’ scores. The Director of Field Placement and Student Teaching hosts collaborative meetings with supervisors and cooperating teachers and regularly shares data related to the assessment of candidates. Advanced Program faculty will begin annual assessment retreats in the spring of 2008.
Finally, for the last two years, the Dean of the College of Education and Human Development has hosted an annual assessment program during the final all-college meeting of the spring semester, at which each department presents assessment-related work accomplished during the year. To view an example of the Department of Educational Leadership’s reports, see 2c.5.1 in E-exhibits.
Hard Copy Exhibits in Support of Standard 2
2.1: Advanced Programs Graduate Exit Survey
2a.1-2.1HC: Communication Sciences and Disorders ASHA Accreditation Materials
2b.1-2.2HC: 2006, 2007 Completer and Administrator Survey Results
2b.1-2.3HC: Advisement Satisfaction Survey Report for Spring 2007
2b.1-2.4HC: Documents File Related to Processing of Student Complaints