
One Interpreter Education Program, Two Sites: A Comparison of Factors and Outcomes

Volume 1 ~ November 2009

ISSN # 2150-5772 – This article is the intellectual property of the authors and CIT. If you wish to use this article in your teaching or in another format, please credit the authors and the CIT International Journal of Interpreter Education.

Karen Petronio and Kimberly Hale
Eastern Kentucky University, United States

Introduction

When recent bylaw changes enacted by the Registry of Interpreters for the Deaf (RID) in the United States become effective, they will require that, as of July 1, 2012, all hearing candidates who sit for the performance portion of certification tests have a bachelor’s degree. Although the requirement does not stipulate that the degree must be in interpreting, many predict this will cause further growth in the already increasing number of interpreter education programs (IEPs) offering a four-year degree. To illustrate the growth, consider the 43 four-year programs that are listed on the website of the National Consortium of Interpreter Education Centers (NCIEC); 1 this is quite an increase from only 12 four-year programs in the late 1980s and 25 programs in 2006 (Peterson, 2006). Although this growth has resulted in an increase in student enrollment, there is little research that investigates how well students are faring in four-year programs. As several authors have noted, there are very few research-based articles to guide new programs in best practices for the field of interpreter education (Cokely, 2005; Dean & Pollard, 2001; Shaw, 2007). In order to make effective curricular and program decisions, it is essential that we have a better understanding of the characteristics of currently available programs and the factors that affect student outcomes.
In an effort to begin describing and analyzing data related to student outcomes in four-year IEPs, we focus on and describe one IEP. This IEP offers a unique research opportunity because over a ten-year period, the program was offered at two different sites. As each site used the same curriculum, had the same prerequisites, and had equally qualified faculty, this provided the opportunity to compare the two sites and examine other factors that could impact students. The goals of this article are twofold: (a) to describe this IEP and present data on student outcomes and (b) to provide this information as a starting point that other programs can use to build upon and contribute additional knowledge as we seek to find the best ways to educate IEP students.
We begin with a brief overview of the field of interpreting and interpreter education in the United States and then proceed to describe the IEP that was offered at the two sites. After describing several factors of each site in detail, we then present the results, including student outcome data on graduation rates, sign language assessment scores, certification rates, and the types of certifications achieved. After discussing our findings, we conclude by identifying those factors—or more specifically, a combination of factors—that we believe led to higher student outcomes at one of the sites. Future research directions are provided.

A review of the field of signed language interpreter education in the United States

The passage of the Vocational Rehabilitation Act of 1965 is often noted as the beginning of the signed language interpreting profession in the United States. This act, which identified interpreting as a service to be provided to deaf clients, was the first federal legislation to set a precedent for the payment of signed language interpreters (Humphrey & Alcorn, 1995). Those initial interpreters learned ASL through their involvement with the deaf community, usually from deaf parents and siblings. These early interpreters were selected by the deaf community and deemed to be trustworthy and skilled enough to interpret (Cokely, 2005). As further federal legislation was passed, the demand for interpreters soon surpassed the supply of qualified interpreters, and that gap has continued to grow.
Whereas the deaf community initially had the primary role in selecting and grooming interpreters, when the need grew more quickly than the community was able to produce interpreters, outside sources became involved (Cokely, 2005; Peterson, 2006). Just as federal legislation played a major role in creating the demand for interpreting services, it also provided much of the initial funding for training interpreters. This started with the funding of short workshops and courses in the mid-1960s and led to the establishment of the National Interpreter Training Consortium (NITC) in the 1970s. The NITC, which originally included six IEPs, represented the first national effort to establish regional standardized training (Frishberg, 1990). In the 1980s, the original six programs increased to ten regional programs, which provided interpreter training as well as technical assistance and resource development to emerging IEPs in their respective regions. Currently, federal funding supports the NCIEC, which directly and indirectly provides resources and support to IEPs. An understood goal of the NCIEC, as well as individual IEPs, is to find effective ways to prepare students to become qualified, competent and ethical interpreters. 2
Over the years, expectations about the length of time necessary to prepare interpreters have increased. The earliest training opportunities, in the mid-1960s, were workshops or very short courses of two to six weeks (Humphrey & Alcorn, 1995). Eventually IEPs—which offered two-year associate’s degrees—became the norm. Later, recognition of the complexity of interpreting and a corresponding increase in the number of students entering with little or no prior exposure to ASL or the deaf community resulted in longer programs. The first four-year IEP was established during the mid-1970s at Maryville College in Tennessee. Today, although two-year associate’s degree programs are still the most prevalent type of program, the number of four-year programs continues to increase. 3  As stated earlier, this increase in four-year programs has not been accompanied by parallel increases in descriptions or publications on the characteristics or outcomes of programs. One goal of the remainder of this paper is to serve as a springboard for such research by describing characteristics, issues, and outcomes of one four-year IEP.

Description of an interpreter education program: A case study

The IEP under study is housed within the College of Education at Eastern Kentucky University, a regional state university. Until recently, when the program formally became a separate department, it was part of the Department of Special Education.
This program was established as a two-year associate’s degree program in 1987 and changed to a four-year bachelor’s degree program in the late 1990s. The bachelor’s degree program was designed for students to take prerequisite lower-level core classes, support classes, and the majority of their general education courses during their first two years. Students applied to the IEP in the spring semester of their second year and, if accepted, started the following fall semester. During their final two years, they took the majority of their IEP core courses, including a full-time practicum during their final semester. Prerequisites to the IEP consisted of four semesters of classes, including American Sign Language (ASL), Professional Ethics & Issues in Interpreting (ITP 215), and Processing Skills for Interpreters (ITP 220).
The bachelor’s degree requires 128 semester hours of coursework. This includes 46 hours of general education courses, 18 hours of ASL, 55 hours of interpreting-related courses, and 9 hours of supporting courses. The core and supporting courses are listed in Table 1, with the number of credits in parentheses.

General education (46 credits)

Core courses (73 credits)

- ASL 1–ASL 6 (18)
- ITP 115: Heritage and Culture of the Deaf (3)
- ITP 210: Application of Fingerspelling and Number Systems (3)
- ITP 215: Professional Ethics & Issues in Interpreting (3)
- ITP 220: Processing Skills for Interpreters (3)
- ITP 310: Interpreting in Private Practice (1)
- ITP 320 and ITP 420: Voice to Sign I and II (6)
- ITP 325 and ITP 425: Sign to Voice I and II (6)
- ITP 350: Historical Perspectives on the Deaf Community (3)
- ITP 370 and ITP 430: Interpreting in Specialized Settings I and II (6)
- ITP 390 and ITP 490: Linguistics and ASL I and II (6)
- ITP 470: Practicum in Interpreting I (3)

Supporting courses (9 credits)

- ANT 120: Introduction to Cultural Anthropology (3)
- SED 104: Introduction to Special Education (3)
- SED 337: Education of the Deaf and Hard of Hearing (3)

Table 1: The IEP curriculum

At the same time the IEP was transitioning to a four-year degree, the administration was in the process of establishing a satellite site that would help increase the number of interpreters in the state. In 1998, a memorandum of agreement (MOA) was signed with another public university located two hours from the main campus. 4 The MOA allowed IEP faculty to have offices, ASL lab facilities, and the use of classrooms at the satellite university. Students who were accepted at the satellite location took all of their coursework on that campus. Their general education coursework was taken through courses offered by the satellite university; IEP faculty who were housed at the satellite location taught their ASL and interpreting-related courses. The original main campus university conferred the final degree. Because of issues related to ownership and regional accreditation, the MOA was not renewed in 2006. The satellite location accepted its last group of students at the start of the 2007 academic year, and its final class graduated in May 2009.

Methodology

The data presented in this article come from students who were accepted into the IEP at either the main campus or the satellite location. Conforming to a long-established worldwide demographic in the signed language interpreting field (see, for example, Bontempo & Napier, 2007; Brien, Brown & Collins, 2002; Cokely, 1981; McIntire, 1990; Napier & Barker, 2003), the students were predominantly female; only 10% of graduates from the main campus and 12% of graduates from the satellite location were male. Both sites included both traditional and non-traditional students, and at least 85% of the students from both sites worked while they went through the program. We do not have demographic data on the ages or ethnicity of the participants, although these data should be collected in future studies.
Several variables that may have contributed to student outcomes are discussed in more detail in the sections that follow. These variables were chosen because they were the salient factors that differed between the two sites. They include (a) the student selection process, (b) student progression through the program, (c) the physical proximity of facilities, and (d) faculty and staff.
The results presented later are based on measurable student outcomes, including successful completion of the program, sign language assessment scores, and national certification achieved by graduates. Data on student graduation rates came from the university’s registrar office. Data on language assessment came from summary data provided to the departments by the state Sign Communication Proficiency Interview (SCPI) 5 coordinator and from self-reports. Certification data were obtained from self-reports and the searchable RID member database. These data were collected and compared across sites.

Variables that differed between the two sites

While there were many similarities between the two sites, there were also differences. The following sections discuss differences related to how students were selected and progressed through the program, the proximity of facilities, and some characteristics of faculty and staff.

Student selection process

 

The application process at both sites required students to submit transcripts, a letter of intent, and at least two letters of recommendation. Each site accepted students independently; therefore, students applied only to the site(s) they wanted to attend. The minimum requirements for admission to the IEP at both sites included an overall grade point average (GPA) of 2.5 or higher, a passing score on a reading test administered by the IEP, and a minimum grade of C in all of the prerequisite courses (ASL 1–4, ITP 215: Professional Ethics and Issues in Interpreting, and ITP 220: Processing Skills for Interpreters). Students at both sites were commonly enrolled in ASL 4, ITP 215, and ITP 220 during the semester in which they applied to the program. If they were accepted into the program, their acceptance was contingent upon receiving a C or higher in each of these courses.
Although both sites required the same prerequisites and application materials, there were differences in how and when students were accepted at each site. The satellite location accepted students every year. Interested students applied in the spring and, if accepted, started the program that autumn. Between 1999 and 2006, all applicants who submitted the required documentation and met the minimum requirements (as described previously) were accepted at the satellite site. The satellite site implemented a formal screening process in 2006.  This screening process (modeled after the process used at the main campus) is described in the paragraphs that follow. Although screening interviews were used, no students were eliminated during the screening process in 2006, and only one student was eliminated in 2007. At the satellite location, the number of students accepted each year ranged from 10 to 23.
In contrast to accepting students every year, at the main campus applications were accepted in the spring semester of even-numbered years. After students submitted all of the required documentation and met the requirements, an interview was scheduled for each applicant. The screening interview, which typically lasted 40 minutes, was viewed as an essential component of the selection process by the faculty at the main campus. All interviews were held during one week of the spring semester. The applicant was interviewed in ASL by a team of at least six IEP faculty and staff members.
Prior to the interview week, faculty and staff members decided on a series of questions to be asked of the applicants. Questions were designed to provide the interview team with insight into the applicants’ content knowledge, previous experiences, personal traits, and opinions. During each interview, the faculty and staff paid particular attention to the applicants’ content knowledge, signing skills, and disposition or goodness of fit (GOF), that is, whether the applicant would benefit from, and be a good fit for, the program, and vice versa. At the conclusion of each interview, interviewers individually rated the applicant on a 5-point Likert-type scale for each of those three categories: content knowledge, signing skills, and GOF (see appendix for the rating rubric). After completing individual ratings, the interviewers then held a discussion in which they could share other relevant information about the applicant; this could include any concerns or commendations related to the applicant’s class work, performance in the ASL lab, or other attributes that were not apparent during the screening interview. On the basis of the discussion, each interviewer then determined a final GOF score for the applicant, which could be the same as the interviewer’s initial score or could be adjusted on the basis of additional perspective gained during the discussion.
Upon completion of all interviews, a final score was calculated for each applicant on the basis of weighted scores in the following areas: GPA, reading test score, content knowledge, signing skills, and GOF. The final weighted score was used to rank students. The top-ranked students were selected in the spring and entered the program the following fall semester. Typically, 27–32 students applied and 18–21 were accepted during each cycle. Although the procedure became more refined through the years, a similar interview procedure and ranking process was used with each new group of applicants.
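As an illustration, the weighted ranking process described above could be sketched as follows. The weights and applicant scores here are purely hypothetical; the article does not report the actual weights the program used.

```python
# Hypothetical sketch of the applicant ranking procedure described in the text.
# The weights below are invented for illustration only; each component score
# is assumed to be pre-scaled to a common 0-5 range.

WEIGHTS = {"gpa": 0.2, "reading": 0.1, "content": 0.2, "signing": 0.25, "gof": 0.25}

def final_score(applicant):
    """Combine an applicant's component scores into one weighted score."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def rank_applicants(applicants, n_accept):
    """Return the top n_accept applicants, ranked by weighted final score."""
    return sorted(applicants, key=final_score, reverse=True)[:n_accept]

applicants = [
    {"name": "A", "gpa": 4.0, "reading": 4.5, "content": 3.0, "signing": 4.0, "gof": 5.0},
    {"name": "B", "gpa": 3.5, "reading": 4.0, "content": 4.5, "signing": 3.5, "gof": 4.0},
    {"name": "C", "gpa": 3.0, "reading": 3.0, "content": 2.5, "signing": 2.0, "gof": 3.0},
]
accepted = rank_applicants(applicants, n_accept=2)  # top two of three applicants
```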

Student progression through the program

In addition to the differing selection processes, the sites also differed in terms of the options available for students to complete the program. On the main campus a cohort approach was used; the students entered as a group, attended full-time, and remained with their group for two years. 6 A cohort approach was not used at the satellite site, as students could attend either full time or part time.
At the main campus, students completed the required coursework and internship in a prescribed sequence over the two years following acceptance; this is known as a closed cohort model (Maher, 2004; Reynolds & Hebert, 1998). Under the cohort approach, the selected students entered the program as a group and moved through the entire sequence of coursework together. Because of the two-year acceptance cycle, each cohort had full faculty attention for the two-year period; one cohort group completed their internship and graduated in the spring before a new cohort group started in the fall. Because skill classes were divided into smaller sections of approximately 10 students, cohort members did not necessarily have the same classmates for each course. However, multiple sections were scheduled during the same time period, thus allowing the sections to occasionally meet together as one group. Although not all students graduated at the same time, because some needed additional time to complete general education requirements, they all completed the advanced IEP coursework together over the same two-year period.
In contrast, the satellite location did not use a cohort approach. Students, once accepted, could attend either full time or part time. Although the courses were taken in a prescribed sequence, they were not taken in a prescribed time-frame. Students took classes with a wide variety of other students; some of their classmates could have been accepted the same year they were, whereas others could have been accepted several years earlier or later. Some students attended full time, whereas others took one or two classes per semester. After acceptance, the time for students to complete the program at the satellite site ranged from two to seven years.

Physical proximity of facilities

Although both sites had faculty offices, an ASL lab, a media room, and classrooms, the physical proximity of the facilities differed. On the main campus, all of the facilities are close to one another; at the satellite location, often they were not.
On the main campus, all of the faculty offices and the ASL lab are located off the same hallway. With the exception of one classroom (which is one floor above), all of the classrooms typically used by the IEP are located on the same floor and are parallel with the office hallway. The multimedia room that interpreting students use to watch video material and to complete interpreting-related homework is accessible only via the ASL lab. This lab, staffed with full-time and part-time employees who are deaf, is an ASL-only zone and a place of socializing for many ASL users on campus. Because all of the facilities are physically close to one another, students have the opportunity to regularly see and interact with a variety of ASL users—ASL students, interpreting students, deaf students as well as faculty and staff.
At the satellite location, the facilities were not in close proximity. Faculty offices and the multimedia room were housed in the same building; however, the multimedia room was located on the first floor, and faculty offices were located on the third floor. The ASL lab and classrooms were housed in other buildings some distance from the building that housed the offices and multimedia room. Because of the distance between facilities, it was less likely that students would have readily available access to impromptu interactions with ASL users, including faculty, staff, and other students.

Faculty and staff

Although there have been changes in the teaching faculty between 1999 and 2009, the main campus usually had the equivalent of five teaching positions, whereas the satellite program had four teaching positions. This included both tenure-track and non-tenure track positions at both sites. With very few exceptions, faculty at both sites had RID and/or American Sign Language Teachers Association (ASLTA) certification. Typically, teachers at the main campus included two or three hearing individuals (including Codas) and two or three deaf faculty members. The satellite location typically had one or two hearing faculty members and two or three deaf faculty members. Faculty members at both sites were considered well qualified and were respected in the field and in the community. Academic degrees held by faculty members on the main campus range from bachelor’s to doctoral degrees. Faculty members at the satellite location held either a bachelor’s or master’s degree. Both sites also included deaf staff members who worked in the ASL lab and multi-media room.
Although the main campus had the equivalent of one more teaching position and some faculty had higher degrees, we believe there was another factor that created a more important difference in terms of faculty and staff. On the main campus, the IEP is housed in the same building as an interpreter outreach program, a Center on Deafness (COD) and a deaf education program that offers both bachelor’s and master’s degrees. The COD and the interpreter outreach program were located on the same hallway as IEP faculty offices and the ASL lab. Thus, students on the main campus also had regular exposure to, and additional opportunities to interact with, other faculty and staff that are involved in the fields of interpreting or deafness, including interpreters, deaf education teachers, and other deaf professionals.

   In summary, the factors identified as variables that differed between the two sites were the student selection process, student progression through the program, proximity of facilities, and additional faculty and staff members on campus. Given the nature of a case study, it is impossible to directly link any one factor to specific outcomes; however, in the sections that follow, we provide a range of student outcome data by site. The outcomes can be viewed as outputs resulting from the composite inputs of the variables at each site.

Results: Student outcomes

This section presents comparative data from the two sites. Data described include graduation rates, sign language assessment scores, certification rates, and the type of certification achieved by graduates.

Graduation rates

Between 1999 and 2009, the graduation rate at the main campus was consistently higher than that of the satellite location. Of the students accepted at the main campus, 97.5% have graduated, compared with 73% of the students accepted at the satellite site. 1 Table 2 shows the number of students accepted by year and then, of those accepted for that year, the number who graduated. (Recall that students were only accepted in even-numbered years at the main campus.)

| Year accepted | Main campus: number accepted | Main campus: number graduated | Satellite site: number accepted | Satellite site: number graduated |
|---|---|---|---|---|
| 1999 | 0 | 0 | 10 | 9 |
| 2000 | 18 | 18 | 11 | 7 |
| 2001 | 0 | 0 | 18 | 13 |
| 2002 | 20 | 19 | 11 | 7 |
| 2003 | 0 | 0 | 16 | 12 |
| 2004 | 21 | 20 | 23 | 18 |
| 2005 | 0 | 0 | 16 | 14 |
| 2006 | 21 | 21 | 13 | 6 |
| 2007 | 0 | 0 | 10 | 7 |
| Totals | n = 80 | n = 78 | n = 128 | n = 93 |
| Percentage | | 97.5% | | 73% |

Table 2: Graduation rates at the two sites (1999–2009)

During the ten-year period that the program ran concurrently at the two sites, 128 students were accepted at the satellite location, with 93 graduating. A few students are expected to finish their general education classes and graduate in 2010. If we include these students, the predicted number of total graduates from the satellite site would be 98.

  In 2008, 23 additional students were accepted at the main campus. These students are not included in Table 2 because they will not graduate until 2010. The 80 students who are listed in Table 2, plus the 23 students accepted in 2008, result in a total of 103 students accepted at the main campus between 1999 and 2009. On the basis of the graduation rate at the main campus, we can predict that at least 22 of these 23 students should graduate in 2010, thus bringing the predicted number of total graduates to 100. Even though the satellite program has accepted more students over the ten-year period (128 students at the satellite site compared to 103 at the main campus), it is expected that slightly more students will have graduated from the main site—a predicted 98 students at the satellite site and 100 students at the main campus.
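The projection above is simple arithmetic: the main campus's historical graduation rate, applied to the 2008 cohort. A minimal sketch, using only figures reported in the text and Table 2:

```python
# Applying the main campus's historical graduation rate (78 of 80 accepted
# students graduated, per Table 2) to the 23 students accepted in 2008.

historical_rate = 78 / 80        # 97.5%
cohort_2008 = 23

# Truncating gives the conservative "at least 22" figure cited in the text.
predicted_new_grads = int(historical_rate * cohort_2008)   # 22
predicted_total = 78 + predicted_new_grads                 # 100 predicted graduates
```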
As detailed in the previous section, students at the satellite site could attend either full or part time and the time period to complete the program ranged from two to seven years. For the 93 students who graduated from the satellite program, Table 3 shows how many students graduated within the different time-frames. The bottom row shows the percentage of students who completed the program in the respective number of years.

| Number of years to complete the program after acceptance | 2 yrs. | 3 yrs. | 4 yrs. | 5 yrs. | 6 yrs. | 7 yrs. |
|---|---|---|---|---|---|---|
| Number of students (n = 93) | 51 | 29 | 7 | 3 | 1 | 2 |
| Percentage of students | 55% | 31% | 8% | 3% | 1% | 2% |

Table 3: Number of years to graduate after acceptance at the satellite location

   Table 3 shows that 55% of those who graduated from the satellite program attended full time and finished in two years, and another 31% completed the program in three years. In general, as the number of years to complete the program increased, the number of students who graduated from the satellite site decreased.

Sign language assessment

Scores from the SCPI were used as a measure of ASL proficiency. Although we do not have scores for every student, we have scores for 74 students who took the SCPI between 2003 and 2008. The majority of these students took the SCPI during the fall semester of their final year in the program, whereas the others took it the following spring. The number of students receiving the different scores is shown in Table 4.

| SCPI scores | Survival | Intermediate | Intermediate+ | Advanced | Advanced+ | Superior | Superior+ |
|---|---|---|---|---|---|---|---|
| Main campus (n = 41) | 0 | 3 | 15 | 15 | 8 | 0 | 0 |
| Satellite site (n = 33) | 1 | 11 | 14 | 4 | 0 | 1 | 2 |

Table 4: SCPI scores for students between 2003 and 2008

At the main campus, 23 of 41 students (56%) received a score of advanced or higher. One of the students who received an Advanced+ was a hearing native signer. At the satellite site, 7 of the 33 students (21%) received an advanced or higher score; the top three scores were received by either deaf or hearing native signers. Excluding the native signers, the scores received by students at the main campus were notably higher than scores from the satellite site. 2 The range of scores was much wider at the satellite site; scores ranged from survival to superior plus.

National certification

When reporting on certification data in this section, we include only students who have graduated two or more years ago (by 2007). The decision to use this time-frame is based on two factors. First, two years is often assumed to be the expected gap between graduation and certification (Witter-Merithew & Johnson, 2005). The second (and more important) reason is due to state licensure. The state in which the program resides requires interpreters to be licensed. Graduates of a bachelor’s degree IEP can apply for a two-year temporary license. After that time, they must either have national certification or request an extension. Thus, some graduates do not take the test until they approach the two-year deadline. By only looking at data from those who graduated at least two years ago, we hope to eliminate possible differences resulting from graduates who waited to take the test because they had this flexibility compared with those who waited (or failed) due to skill deficiencies.
At both sites, there were a few students who had certification (usually the National Association of the Deaf-3 [NAD-3] or NAD-4) 3 prior to taking any ASL or interpreting classes offered through the IEP. A few of these students have not received any additional certification. These few students are not included in the data presented in this section since their certification was not influenced by the program’s ASL or interpreting classes. On the other hand, there are a few other students who entered with certification and have since achieved additional certification; these students are included in the data because it can be argued that the program influenced their receiving additional certification.
The method used to report certification rates can affect how the results look, and our field does not have a standard way of reporting them. That is, do we simply count how many graduates are now certified, do we consider all students who graduated and calculate what percentage are now certified, or do we count all students who were accepted into the program and calculate what percentage are now certified? Because there is no agreed-upon reporting method, the data are reported here in all three ways: straight counts, the percentage of graduates who are certified, and the percentage of accepted students who are certified.
As a last caveat to the certification data, certification rates are constantly increasing as graduates continue to test and receive results. Thus, the data reported here reflect a snapshot of certification rates at each site. This snapshot is useful in gaining a broad perspective on certification trends, even though the specific numbers will most likely increase before this article is published.
Using a simple straight count of the certification data, if we look at those who graduated two or more years ago, the numbers for each of the sites are very similar. Thirty-three graduates from the main campus and 31 graduates from the satellite campus have achieved national certification. However, this count does not provide a full picture due to differences in the acceptance and graduation numbers at the two sites.
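The three reporting methods reduce to simple ratios. A sketch, illustrated with the main-campus totals reported in Tables 5 and 6 below (33 certified, 57 graduates through 2007, 59 students accepted through 2005):

```python
# The three ways of reporting certification rates described in the text.

def straight_count(certified):
    """Method 1: a simple count of certified graduates."""
    return certified

def pct_of_graduates(certified, graduated):
    """Method 2: certified graduates as a percentage of all graduates."""
    return 100 * certified / graduated

def pct_of_accepted(certified, accepted):
    """Method 3: certified graduates as a percentage of all accepted students."""
    return 100 * certified / accepted

# Main-campus totals from Tables 5 and 6.
certified, graduated, accepted = 33, 57, 59

print(straight_count(certified))                      # 33
print(round(pct_of_graduates(certified, graduated)))  # 58
print(round(pct_of_accepted(certified, accepted)))    # 56
```

As the output shows, the same underlying data yield noticeably different headline figures depending on the denominator chosen, which is why all three are reported here.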

| Year graduated | Main campus: number who graduated | Main campus: number certified | Satellite site: number who graduated | Satellite site: number certified |
|---|---|---|---|---|
| 2001 | 0 | 0 | 3 | 2 |
| 2002 | 13 | 9 | 7 | 4 |
| 2003 | 5 | 3 | 6 | 3 |
| 2004 | 18 | 10 | 11 | 6 |
| 2005 | 1 | 1 | 8 | 4 |
| 2006 | 19 | 10 | 23 | 9 |
| 2007 | 1 | 0 | 14 | 3 |
| Total | n = 57 | n = 33 | n = 72 | n = 31 |
| Percent | | 58% | | 43% |

Table 5: Number of students who graduated by 2007 who are nationally certified

The second way to view the data is to use ratios to compare the percentage of certified graduates between the sites. Table 5 displays the number of students who graduated each year and, of those, the number who are now nationally certified. Note that even though students were accepted only in even years at the main campus and most graduated two years later, some students had to complete general education classes and took three years to graduate; in these cases, they graduated in an odd-numbered year. (Table 5 includes only those who graduated by 2007, following our criterion of graduation at least two years ago.)
As shown in Table 5, when looking at the percentage of graduates who attained national certification, there are differences between the two sites. At the main campus, 58% of the graduates presently are nationally certified compared with 43% of the graduates from the satellite site. Recall that at the satellite site, students could attend either full or part time. Of the 31 certified graduates from the satellite site, 17 (54%) attended full time and finished the program in two years, and 11 (35%) completed the program in three years.
The last way to look at certification rates is to take the total number of students accepted into the program and calculate the percentage of those who are now certified. Table 6 shows the total number of students accepted each year and, of those, how many are now certified. (Only those who were accepted by 2005 are included in this table—those students who could have graduated by 2007, at least two years ago.)

              Main campus                            Satellite site
Year          Number       Of those, number         Number       Of those, number
accepted      accepted     nationally certified     accepted     nationally certified
1999               0               0                    10               5
2000              18              12                    11               3
2001               0               0                    18               6
2002              20              11                    11               4
2003               0               0                    16               6
2004              21              10                    23               5
2005               0               0                    16               2
Total           n = 59          n = 33               n = 105          n = 31
Percent                          56%                                   30%

Table 6: Number of students who were accepted between 1999 and 2005 and who are now certified

Examining the data in terms of the percentage of accepted students who are now certified produces the largest difference between the two sites. For the period examined, 56% of the students who were accepted at the main campus are now nationally certified, compared with 30% of the students who were accepted at the satellite site.
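The percentages reported in Tables 5 and 6 can be reproduced from the raw counts. The short Python sketch below is purely illustrative (the `rate` helper is ours, not part of the study); it rounds each certified-to-total ratio to the nearest whole percent:

```python
# Illustrative check of the percentages reported in Tables 5 and 6.
# Counts are taken directly from the tables; the rate() helper is ours.

def rate(certified: int, total: int) -> int:
    """Percentage of certified students, rounded to the nearest whole percent."""
    return round(100 * certified / total)

# Table 5: certified graduates as a share of all graduates (2001-2007)
main_by_graduates = rate(33, 57)        # main campus: 58%
satellite_by_graduates = rate(31, 72)   # satellite site: 43%

# Table 6: certified graduates as a share of all accepted students (1999-2005)
main_by_accepted = rate(33, 59)         # main campus: 56%
satellite_by_accepted = rate(31, 105)   # satellite site: 30%

print(main_by_graduates, satellite_by_graduates,
      main_by_accepted, satellite_by_accepted)  # 58 43 56 30
```

Note that the gap between the sites widens from 15 percentage points (among graduates) to 26 points (among accepted students), reflecting the satellite site's larger share of accepted students who did not graduate.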

Types of certifications

In addition to the scarcity of published information on graduation and certification rates in our field, we also lack information about the types of certification that graduates have attained. For the reasons explained previously, the data presented in section 5.3 included only students who graduated two or more years ago. In this section, by contrast, we also include students who graduated last year and who are now certified. This increases the number of certified interpreters from 33 to 40 at the main campus and from 31 to 32 at the satellite location. (From the main campus, five students who graduated in 2008 have received their National Interpreter Certification [NIC], one student has her NIC-Advanced, and one student has the Certificate of Interpretation [CI] and Certificate of Transliteration [CT] from RID; 1 from the satellite site, one 2008 graduate received her CI.) Table 7 shows the types of certifications held by graduates: 42% of the certified graduates from the main campus and 41% from the satellite site have multiple certifications. These are included in the following table. 2

                  Total certified                                              Total number of
                  graduates          NAD-3  NAD-4  NAD-5  CI   CT   NIC  NIC-adv   certificates
Main campus       n = 40               8      1      1    15   13   22      2           62
Satellite site    n = 32               6      4      0    13   15   12      0           50

Table 7: Types of certifications held by graduates from 2001 to 2008

Table 7 contains counts from students who hold multiple certifications as well as those who hold a single certification. Twenty-three of the 40 certified graduates from the main campus and 19 of the 32 from the satellite site held a single certification. The single certifications held by these graduates are shown in Table 8.

Single certifications    NAD-3  NAD-4  NAD-5  CI   CT   NIC  NIC-adv
Main campus     n = 23     1      1      1     2    0    16      2
Satellite site  n = 19     3      1      0     1    3    11      0

Table 8: Types of certification held by graduates with a single certificate

The various types of certification shown in Tables 7 and 8 reflect changes within the field of interpreting over the ten-year period of this study. The NAD interpreter certification tests stopped being administered around 2003, and RID stopped offering the CI and CT in 2008. The NIC, which resulted from a collaboration between NAD and RID, was first offered in 2005. Thus, apart from interpreters who hold prior certification, the older NAD and RID certifications are no longer options for future graduates.
In summary, section 5 has examined graduation rates, sign language assessment scores, certification rates, and the types of certification achieved by students from the two sites. The data show that graduates from the main campus had higher percentage rates in all four of these areas.

Discussion

The measurable data reported previously clearly indicate that the main campus consistently had higher outcomes than the satellite site. Because the prerequisites, curriculum, and faculty qualifications were equivalent, other factors must account for the differences. Although any single factor could have influenced the higher outcomes, we believe it was the combination of factors that, together, created an intense, immersion-like learning environment and led to greater student success at the main campus. Each of these factors, and their combination, is discussed in the paragraphs that follow.
The student screening and selection process at the main campus site is the beginning of a very intense learning experience. Students know that the program is selective and that completing the required prerequisites does not guarantee admission into the interpreting program. Even before they are accepted, students work hard to increase their chances of admission. A competitive application process with a pool of applicants allowed the faculty and staff to select students who they felt had the qualities and disposition to successfully complete the program and enter the field. Once selected, students and faculty were together for the next two years; that is, the students and faculty were part of a closed cohort (Maher, 2004, 2005; Reynolds & Hebert, 1998). Because of the two-year cycle, faculty members were easily able to focus their attention on the needs of the current group of interpreting students. Other studies have found increased academic performance and program completion rates for cohort groups (Dabney, Green & Topalli, 2006; Jamelske, 2009; Lawrence, 2002; Reynolds & Hebert, 1998). At the main campus, we believe that the two-year, closed cohort system helped create an intense, immersion-like atmosphere that, we would argue, led to the more positive outcomes. At the satellite site, 86% of those who graduated did so in two or three years, and 89% of the certified graduates completed the program in two or three years. It is possible that a modified cohort experience, albeit unplanned, occurred at the satellite site.
From the data collected, we are not able to determine which of the three-year-completion graduates of the satellite site actually completed the interpreting program in two years and took additional general education requirements in the third year; thus we were not able to compare certification rates for those who completed the IEP sequence in two years, even if it took three years to complete the degree. This would be important data to collect and consider in future studies.
Given the discussion in the signed language interpreting field regarding the need for longer interpreter education programs, it seems counterintuitive that students who completed the program in shorter amounts of time may fare better in terms of graduation and certification; however, the nature of the group support that occurs in cohort groups appears to have played an important role in the learning and increased retention rate at the main campus (Dabney et al., 2006; Jamelske, 2009; Lawrence, 2002). In addition, we believe that other factors further reinforced the positives already occurring due to the cohort structure. The proximity of facilities and the scheduling of classes to accommodate student work schedules differed between the two sites. As noted previously, all of the main campus IEP facilities were close enough to allow easy and frequent access to ASL users. At each site, both traditional and non-traditional students were enrolled, and a majority of students worked while attending classes. Although we do not have specific data, we estimate that at least 85% of students at each site worked. The two sites used different approaches to accommodate students who had to work. At the main campus, where students had to attend full time, all of the ITP and upper-level ASL classes were offered on Tuesdays and Thursdays, with an occasional evening class. This allowed students more flexibility in scheduling work and also in scheduling time to complete their assignments and attend outside events and workshops. We believe that the two-day-per-week full-time experience added to the language immersion and intensity of the learning environment. For at least two days each week, they were fully immersed in the content and language of interpreting and interacting with deaf faculty and staff. 3  In contrast, the satellite site accommodated the needs of working students by providing a part-time option. 
The data indicate an inverse relationship between time in the program and graduation rates: as time in the program increased, graduation rates decreased. Thus, part-time students appear to have been disadvantaged in terms of program completion. Additional support systems and changes in program design seem to be needed to better meet the needs of part-time IEP students and help them achieve higher graduation rates.

Conclusions

In this article, we have compared a four-year IEP that was offered at two sites over a ten-year period and presented data showing that one site consistently has had higher student outcomes. Factors that differed between the two sites included the student selection process, how students progressed through the program, the proximity of facilities, and the opportunity for interaction with a variety of faculty and staff as well as other students. Although each of these factors, individually, can have a positive impact on students, we believe that it was the combination of these factors that gave students at the main campus a very intense and engaging experience, and that it was this intensity and engagement that led to higher outcomes. In other words, educating interpreting students appears to be another example of "the whole being greater than the sum of the parts." We described the combination of factors that we believe engendered higher student outcomes on the main campus, including students entering and going through the program full time as a cohort group. 4
Our study was limited in several ways. First, we were limited by the type of data that we had available to us. For example, we did not have consistent and complete data on the SCPI scores, nor did we have complete demographic information on students. Although we used graduation rates, language assessment scores, and certification rates as measurable outcomes, there are other important considerations when looking at student and program success, such as graduates’ employment rates and job satisfaction, time taken to obtain certification, and the number of graduates who remain in the field. This type of data is critical to determining long-range success. We are currently in the process of collecting this data from our graduates. We suggest that future research can help the field standardize student outcome reporting and tease apart the multitude of factors that are essential for promoting increased student outcomes.
One goal of this article is to provide a springboard for more research and discussion about IEPs:  what practices appear to be effective, ineffective, or benign. We hope that by sharing this comparison of factors and outcomes from two sites for an IEP, other signed and spoken language interpreter education programs will begin investigating their own programs more closely and will publish those findings so that the information can be shared and the scholarship of interpreter education can continue to develop.

Acknowledgements

The authors would like to thank several colleagues, particularly Audrey Ruiz Lambert, for their helpful comments and insightful feedback on earlier drafts of this article. We accept responsibility for any and all errors.

References

Ball, C. (2007). The History of American Sign Language interpreter education. Unpublished doctoral dissertation, Capella University, Minneapolis, MN.
Bontempo, K., & Napier, J. (2007). Mind the gap! A skills analysis of sign language interpreters. The Sign Language Translator & Interpreter, 1(2), 275-299.
Brien, D., Brown, R., & Collins, J. (2002). The organisation and provision of British Sign Language/ English Interpreters in England, Scotland and Wales. London: Department for Work and Pensions, British Government.
Cokely, D. (1981). Sign language interpreters: A demographic survey. Sign Language Studies, 32, 261-286.
Cokely, D. (2005). Shifting positionality: A critical examination of the turning point in the relationship of interpreters and the deaf community. In M. Marschark, R. Peterson, & E. A. Winston (Eds.), Sign language interpreting and interpreter education: Directions for research and practice (pp. 3-28). New York, NY: Oxford University Press.
Dabney, D. A., Green, L., & Topalli, V. (2006). Freshman learning communities in criminology and criminal justice: An effective tool for enhancing student recruitment and learning outcomes. Journal of Criminal Justice Education, 17(1), 44-68.
Dean, R., & Pollard, R. (2001). Application of demand-control theory to sign language interpreting: Implications for stress and interpreter training. Journal of Deaf Studies and Deaf Education, 6(1), 1-14.
Frishberg, N. (1990). Interpreting: An introduction. Silver Spring, MD: RID Publications.
Humphrey, J., & Alcorn, B. (1995). So you want to be an interpreter. Amarillo, TX: H & H Publications.
Jamelske, E. (2009). Measuring the impact of a university first-year experience program on student GPA and retention. Higher Education, 57(3), 373-391. doi: 10.1007/s10734-008-9161-1
Lawrence, R. L. (2002). A small circle of friends: Cohort groups as learning communities. New Directions for Adult & Continuing Education, 95, 83.
Maher, M. A. (2004). What really happens in cohorts. About Campus, 9(3), 18-23.
Maher, M. A. (2005). The evolving meaning and influence of cohort membership. Innovative Higher Education, 30(3), 195-211. doi: 10.1007/s10755-005-6304-5
McIntire, M. (1990). The work and education of sign language interpreters. In S. Prillwitz & T. Vollhaber (Eds.), Sign language research and application (pp. 263-273). Hamburg: Signum Press.
Napier, J., & Barker, R. (2003). A demographic survey of Australian Sign Language interpreters. Australian Journal of Education of the Deaf, 9, 19-32.
Peterson, R. (2006). National Consortium of Interpreter Education Centers' two- to four-year program transition project. In E. Maroney (Ed.), Proceedings of the 16th national convention of the Conference of Interpreter Trainers (pp. 133-147). San Diego, CA: CIT Publications.
Radencich, M. C., Thompson, T., Anderson, N. A., Oropallo, K., Fleege, P., Harrison, M., Hanley, P., & Gomez, S. (1998). The culture of cohorts: Preservice teacher education teams. Journal of Education for Teaching, 24(2), 109-127.
Reynolds, K. C., & Hebert, F. T. (1998). Learning achievements of students in cohort groups. Journal of Continuing Higher Education, 46, 34-42.
Shaw, S. (2007). Review of Marschark, Peterson, and Winston's Sign language interpreting and interpreter education. Interpreting: International Journal of Research and Practice in Interpreting, 9(2), 267-274.
Witter-Merithew, A., & Johnson, L. (2005). Toward competent practice: Conversations with stakeholders. Silver Spring, MD: RID Publications.

Appendix: Scoring rubric used during applicants’ screening interview

Answers to the Content Knowledge Questions:                                                  1 - 2 - 3 - 4 - 5

1 point – Very minimal answer.  Did not show a good understanding of the issue(s).

3 points – Adequate answer. Showed a basic understanding; however, may not have a deep understanding of the issue(s).

5 points – Very clear, detailed answer. Touched on the important points. Displayed a deep understanding of the issue(s).

Signing Skills:                                                                      1 - 2 - 3 - 4 - 5

1 point – Definite NO. Does not have the signing skills or the potential to acquire the necessary skills to successfully complete our program.

2 points – Mostly NO. Does not have the skills or the potential to acquire the skills to successfully complete our program at this time.

3 points – To some degree. Uncertain regarding how his/her signing skills, or potential for acquiring the skills, will allow applicant to successfully complete our program and have success in the field.

4 points – Mostly YES. Has the skills, or the potential to acquire the skills, that will allow him/her to successfully complete our program and be an asset to the field.

5 points – Definite YES. Has the skills or the potential to acquire the skills that will allow him/her to successfully complete the program, be a positive representative of our program upon completion of his/her degree, and an asset to the field.

Goodness of Fit:                                                                   1 - 2 - 3 - 4 - 5

1 point – Definite NO. Would not be a positive asset to our program.

2 points – Mostly NO. Would not be a positive asset to our program at this time.

3 points – To some degree. Uncertain regarding how well he/she would complement our program, have some concerns about commitment to program and/or potential success in the field.

4 points – Mostly YES. Would be a positive asset to our program and would benefit from our program. Has the potential to succeed and be an asset to the field.

5 points – Definite YES, would be a strong asset to our program and would benefit from our program. Has the potential to succeed, would be a positive representative of our program upon completion of his/her degree, and an asset to the field.

Interview Scoring Rubric developed by Karen Petronio, Department of ASL and Interpreter Education, Eastern Kentucky University.

Endnotes

1. http://www.nciec.org/resource/iep.html, retrieved June 10, 2009.

2. For a detailed review of the history of interpreter education in the United States, see Ball (2007).

3. On June 10, 2009, 72 associate degree programs and 34 bachelor degree programs were listed on the RID website, www.rid.org, and 97 associate programs and 43 bachelor programs were listed on the NCIEC website, www.nciec.org/resource/iep.html.

4. The university is a major research university in a metropolitan area.

5. This is now called the Sign Language Proficiency Interview (SLPI). For more information, see http://www.ntid.rit.edu/slpi/.

6. If a student previously satisfied all the general education requirements, they did not have to attend full time for the first year after acceptance into the program. Unless special circumstances allowed a student to be waived from a class, all students at the main campus attended full time during the final year of the program.

1. A few students at both sites were accepted in the spring but, due to financial, health, or family circumstances, never started the program in the fall. These students are not included in the acceptance counts.

2. Although the scores were higher at the main campus, even these seem low for students in their senior year. This issue is of concern and is currently being addressed by faculty at the main campus.

3. The certification system of the NAD, which was used across the United States, awarded certification using a leveled system. The scale ran from 1 to 5; however, certified interpreters were those receiving a score of 3, 4, or 5, with 5 representing top performance and the ability to interpret in a wide range of settings.

1. See http://www.rid.org/education/edu_certification/index.cfm for RID's description of current (NIC) and former (CI/CT) certification options.

2. As mentioned earlier, some students had their NAD-3 and NAD-4 certification before they entered the program. If they achieved additional certification while in our IEP, they are included in Table 7. Because we do not have the dates when students received their first certification prior to entering the IEP, Table 7 includes some NAD-3 and NAD-4 certifications that were received before the student entered the IEP.

3. The Tuesday/Thursday schedule did not apply during the students' final-semester practicum at the main campus. The practicum was full time and could take place either in state or out of state. Because of the practicum requirements, it would have been very difficult for students to work during this time.

4. The cohort literature points to both positive outcomes (Dabney et al., 2006; Lawrence, 2002; Radencich et al., 1998; Reynolds & Hebert, 1998) and limited or negative outcomes (Jamelske, 2009). Thus, it is important to review the literature to ascertain whether any of the cohort models fit the student population served by individual programs.