
Utility of the Lindamood Phoneme Sequencing Program (LiPS) for Classroom-Based Reading Instruction



UTILITY OF THE LINDAMOOD PHONEME SEQUENCING PROGRAM (LiPS) FOR CLASSROOM-BASED READING INSTRUCTION

By

ELAYNE PROESEL COLN

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2005


Copyright 2005 by Elayne Proesel Coln


ACKNOWLEDGMENTS

This research, from start to finish, spanned approximately three years. There are several people I would like to acknowledge, as they have contributed a great deal of love, support, and encouragement throughout the duration of this project. First and foremost, I would like to thank my forever loving husband, Jorge, for his continuous words of wisdom and reassurance. My parents, Glenn and Carol Proesel, also contributed so much in this way. In addition, I would like to thank my chair, Nancy Waldron, for the countless hours of brainstorming, editing, and advising she offered so patiently. Special thanks also to the members of my committee: Tina Smith-Bonahue, Holly Lane, Lynda Hayes, and John Kranzler. Last, but certainly not least, I would like to acknowledge my wise and wonderful son, Avery. He was more a part of this research than he realizes, and this is also for him.


TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION AND REVIEW OF THE LITERATURE
    Phonological Awareness
        Definition
        Phonological Awareness v. Phonemic Awareness
    Phonological Awareness as a Predictor of Reading Achievement
        Assessment of Phonological Awareness
    Intervening with Children Struggling to Learn to Read
        Critical Elements of Instruction
        Measuring Progress and Reading Achievement Outcomes
    Selecting Reading Curricula and Delivering Instruction
    The Lindamood Phoneme Sequencing Program
        Program Purpose
        Program Sequence
            Setting the climate for learning
            Identifying and classifying speech sounds
            Tracking speech sounds
            Associating sounds and symbols
            Spelling (encoding) and reading (decoding)
        Program Paths
        Key Program Components
        Training of Instructors
        Previous LiPS Research
            Individual implementation
            Small group implementation
    Purpose of this Study

2 METHOD
    Participants
    Settings
    Instructors
    Procedure
        Treatment Integrity
        Student Progress/Outcomes
    Measures
        Woodcock Johnson Tests of Achievement (WJ-III)
        Comprehensive Test of Phonological Processing (CTOPP)
        Dynamic Indicators of Basic Early Literacy Skills (DIBELS)
        Lindamood Auditory Conceptualization Test (LAC)
    Analysis of Data

3 RESULTS
    Descriptive Data
        Whole Class Instruction
        Treatment Integrity
            Inclusion of key program elements
            Program paths
            Instructor 1
            Instructor 2
        Delivery of Instruction
            Decisions based on needs of classroom teacher and school
            Decisions based on needs of students
            Decisions based on training and experience of instructors
        One-On-One Implementation
        Summary of Descriptive Results
    Student Outcome Data
        Gains Demonstrated After LiPS Intervention for All Students
        Student Outcomes: A Comparison of Measures
        Student Outcomes: Differences Between Instructors
        Analyses of Covariance
        Effect Sizes
    Student Progress: A Closer Look
        Benchmark Comparisons

4 DISCUSSION
    Reflecting on the Results
        Treatment Integrity
            Use of mirrors
            Tracking with colored blocks
            Assessment of student progress
            Support of school and classroom teachers

            Management of student behavior during instruction
        Student Outcomes
            Statistical analyses
            Benchmarks
    Implications for Practice
    Limitations of the Current Study
        Internal Validity
        External Validity
    Directions for Future Research

APPENDIX

A RECORD OF PROGRAM DELIVERY
B CLASSROOM OBSERVATION-ERROR HANDLING
C CLASSROOM OBSERVATION-STUDENT OPPORTUNITY TO RESPOND
D STUDENT ENGAGEMENT/ON-TASK BEHAVIOR
E INITIAL INSTRUCTOR INTERVIEW
F INSTRUCTOR INTERVIEW-PERSPECTIVES

LIST OF REFERENCES

BIOGRAPHICAL SKETCH


LIST OF TABLES

Table
1 Number of Observations by Instructor for Whole Group Intervention
2 Record of Program Delivery, Percentages of Observations by Instructors Across Intervention Period
3 Description of Instruction: Sessions, Time, and Delivery
4 Percentage of Student Engagement by Instructor
5 Number of Observations
6 Record of Program Delivery, Percentages by Instructors for One-on-One Treatment
7 Record of Program Delivery, Percentages for Whole Group versus One-on-One
8 Summary of Level of Treatment Integrity for Key Program Components Across Settings
9 Raw Score Means, Standard Deviations for Pretests/Posttests Across All Participants
10 Estimated Marginal Means
11 Bonferroni Pairwise Comparisons
12 Means and Standard Deviations by Instructor and Classroom
13 Student Differences at Posttest by Instructor
14 Percentage of Students at Benchmarks at Pretest/Posttest on DIBELS
15 Percentage of Students Below Recommended Minimum at Pretest/Posttest on LAC


LIST OF FIGURES

Figure
1 Vertical Program Paths (Recommended)
2 Horizontal Program Paths


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

UTILITY OF THE LINDAMOOD PHONEME SEQUENCING PROGRAM (LiPS) FOR CLASSROOM-BASED READING INSTRUCTION

By

Elayne Proesel Coln

December 2005

Chair: Nancy Waldron
Major Department: Educational Psychology

Phonological awareness training has been found to be a crucial component of beginning reading instruction. One reading program that is often used in the schools and offers phonological awareness training is the Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech (LiPS). The purpose of this study was to investigate how the LiPS program, a program initially designed for one-on-one use, was adapted and employed in schools with large groups of kindergarten students. Descriptive information was collected to compare the treatment integrity of the LiPS program in the school setting with a clinical setting where the program was employed one-on-one. Additionally, pretest and posttest data were collected on the students in the kindergarten classrooms to assess student outcomes. The assessment of student outcomes involved four assessment instruments: the Lindamood Auditory Conceptualization Test (LAC); the Woodcock-Johnson Tests of Achievement (WJ-III) Word Attack and Word Identification tasks; the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Letter Naming and Phoneme Segmentation tasks; and the Comprehensive Test of Phonological Processing (CTOPP) Elision, Blending Words, and Sound Matching tasks.

Results indicated that instructors in the school setting demonstrated low levels of treatment integrity, whereas instructors in the clinical setting maintained a high degree of treatment integrity. Important LiPS program components that were omitted in the school setting included Tracking following a prescribed sequence, formal assessment of student progress or mastery, and key instructional materials. When considering student outcomes for participants across the two school sites, statistical analyses yielded positive mean gains across all students for each assessment measure. Furthermore, mean gains achieved on the LAC were statistically significantly greater than gains on the three tasks of the CTOPP (Elision, Blending Words, and Sound Matching) and the two tasks of the WJ-III (Word Attack and Word Identification). No statistically significant differences were noted between mean gains on the LAC versus the DIBELS tasks. When considering benchmark progress for the LAC and DIBELS measures, gains across all students were not as great on the LAC as on the DIBELS tasks. Implications for use of the LiPS program in school settings are discussed.


CHAPTER 1
INTRODUCTION AND REVIEW OF THE LITERATURE

From educators to politicians to parents, there is widespread concern that reading instruction in our public schools is not as effective as it should be, resulting in a sense of urgency to improve literacy outcomes for our children (Torgesen, 2002). After all, the ability to read is an essential skill in today's world. "Reading is a foundation skill for school learning and life learning; the ability to read is critical for success in modern society" (Lane, Pullen, Eisele, & Jordan, 2002, p. 101). Data from the 2005 National Assessment of Educational Progress (NAEP) report indicated that 36 percent of fourth graders and 27 percent of eighth graders were reading below a basic reading level (NAEP, 2005). While learning to read has consistently been an educational priority for young schoolchildren for many decades, the focus placed on literacy outcomes today far exceeds the pressures placed on students, educators, and parents in the past. According to Snow, Burns, and Griffin (1998), "The demands are far greater than those placed on the vast majority of schooled literate individuals a quarter-century ago" (p. 20).

This urgency for educators to address students' literacy needs is fueled by recent empirical findings related to outcomes for struggling readers. For example, Snow et al. (1998) reported that a student who is not a reasonably proficient reader by the end of third grade is very unlikely to graduate from high school. Therefore, it is not just that the teaching of reading is more important than ever before, but that "it must be taught better and more broadly than ever before" (Adams, 1990, p. 26).


Issues of quality instruction and early intervention to address those students at risk for reading failure pervade the current reading research literature (Adams, 1990; Snow et al., 1998; Torgesen, 2002). Fortunately, knowledge and understanding of how children learn to read and why many struggle have increased exponentially over the last three decades (Denton, Vaughn, & Fletcher, 2003). For example, dozens of professional organizations exist, such as the International Reading Association, the National Reading Conference, and the International Dyslexia Association, with members dedicated to understanding, remediating, and preventing reading failure. While reading research continues to evolve and educators learn increasingly more about what it takes to be a skilled reader, significant monetary and intellectual resources within the last decade have been devoted to learning more about and improving student reading achievement at the local, state, and national levels.

Several groups have worked diligently in recent years to synthesize the extant literature related to reading achievement in some meaningful way and to offer recommendations and guidelines to focus future efforts. For example, the U.S. Department of Education and the U.S. Department of Health and Human Services requested that the National Academy of Sciences establish a committee to examine issues surrounding the prevention of reading difficulties in young children (Snow et al., 1998). In a report exceeding four hundred pages in length and entitled Preventing Reading Difficulties in Young Children, the committee focused on summarizing the extant literature related to the effectiveness of interventions for children struggling to learn to read and on providing recommendations based on empirical evidence to assist educators and parents in their work with struggling readers.


In another effort, the National Reading Panel (2000) reviewed more than 100,000 empirical studies related to reading instruction and created an influential document, entitled Teaching Children to Read, to assist parents and teachers, among others, in identifying key skills and methods consistently related to reading success. In its review of the reading research literature, the National Reading Panel identified effective instructional practices related to various aspects of reading, including phonemic awareness, phonics, fluency, and comprehension.

Politicians and government officials have also played a role in the movement toward increasing academic success and the quality of instruction for children in the United States by introducing major legislation in recent years. In January of 2002, President Bush signed into law the No Child Left Behind Act of 2001, which served to revise and reauthorize the Elementary and Secondary Education Act. In addition to redefining the federal role in K-12 education, this act focuses on four primary issues: increasing accountability within the schools, providing increased flexibility at the local level, expanding options for parents who are dissatisfied with their child's current educational situation, and understanding and infusing research-based practices into educational curricula (No Child Left, 2002).

While a sound knowledge base exists regarding effective reading practices that produce positive outcomes for students, these instructional methods are not widely included in typical classroom instruction (Denton et al., 2003). Thus, there is an increasing focus on bridging the research-to-practice gap and improving our understanding of the process of transferring empirically supported instructional methods related to reading into the classroom and sustaining these practices.


Problems persist in translating research into classroom practices (e.g., from the clinic to the classroom) and in scaling up these research-based practices to affect large numbers of students in the schools (Denton et al., 2003; Gersten & Dimino, 2001; Klingner, Ahwee, Pilonieta, & Menendez, 2003). Factors that have been cited to affect the scaling up and sustaining of educational innovations include the link between researchers and teachers, teacher access to research-based information (i.e., professional development and support), and the feasibility of knowledge and practices (i.e., practical and applicable in classrooms) (Boardman et al., 2005; Denton et al., 2003).

Fortunately, research indicates that many of the instructional practices that are effective for special education students are at least equally beneficial for general education students as well (Vaughn, Gersten, & Chard, 2000, in Boardman et al., 2005). Therefore, especially for prevention and early intervention services for younger students, research-based practices demonstrated to assist students at risk for later reading failure may be delivered in general education classrooms to serve students with a range of abilities. Meeting the needs of more students simultaneously may contribute to greater acceptance of certain practices among educators and foster maintenance of these practices in the schools.

Educational researchers continue to work to determine the specific factors that play a role in children's reading development and success. "Although we do not yet understand the conditions that must be in place to prevent reading difficulties in all children, we do know what must be done to very substantially reduce the number of children who fail to acquire adequate reading skills during the primary grades of elementary school" (Torgesen, 2002, p. 22).


Unfortunately, challenges persist in ensuring that practitioners are equipped and prepared to implement research-validated reading practices in classrooms and with groups of students. Regarding the prevention of later reading difficulties, especially for younger students (i.e., kindergarteners) and beginning readers, one foundational reading skill that has received significant attention is phonological awareness and its instruction (Snow et al., 1998; Torgesen, 2002).

Phonological Awareness

Definition

Phonological awareness has been described as "the conscious sensitivity to the sound structure of language" (Lane et al., 2002, p. 101). In other words, it is the ability to analyze spoken language and recognize that it consists of smaller units. Phonological awareness is an umbrella term used to describe awareness of spoken language at the word, syllable, onset-rime, and phoneme level (Lane et al., 2002). Individuals with strong phonological awareness skills can detect, match, blend, segment, and manipulate speech sounds, and oftentimes the ability to rhyme is the first phonological skill that children master (Lane et al., 2002). In fact, sometimes children have trouble learning to decode because they are completely unaware of the fact that spoken language is segmented into sentences, into syllables, and into phonemes (Williams, 1987, in Blachman, 2000, p. 484). The development of phonological awareness typically begins by age 3 and improves over many years as the child develops academically (Snow et al., 1998).

As indicated previously, there is a range of phonological awareness skills that children develop over time.


Adams (1990) identifies at least five different levels of awareness: (1) knowledge of nursery rhymes, an "ear" for the sound of words; (2) the ability to compare and contrast the sounds of words for rhyme and alliteration; (3) the ability to blend and segment at the syllable level; (4) the ability to blend and segment at the phoneme level; and (5) the ability to manipulate phonemes by adding, deleting, or moving phonemes to create new words. This may be interpreted as a developmental sequence. However, the issue of phonological awareness developing in a stage-like manner is scantily addressed in the reading research literature.

Phonological Awareness v. Phonemic Awareness

The terms phonological awareness and phonemic awareness are often inaccurately used interchangeably. Whereas phonological awareness refers to a general awareness of the sound structure of language, including the ability to rhyme and to blend or segment larger word parts, phonemic awareness specifically refers to an individual's ability to attend to the individual sounds in spoken language. Those with strong phonemic awareness skills are able to manipulate individual phonemes, or sounds (Lane et al., 2002). Those with sound phonemic awareness skills have an appreciation for rhyme and alliteration, as well as the understanding that every word consists of, or is created from, a sequence of phonemes (Snow et al., 1998). Thus, phonemic awareness is believed to contribute to later reading development and achievement. "An awareness of phonemes is key to understanding the logic of the alphabetic principle and thus to the learnability of phonics and spelling" (Snow et al., 1998, p. 52). Moreover, while some sense of phonemic awareness is generally evident in the typically developing child beginning at a young age, this skill often must be specifically taught or honed. "Because of the physical and psychological nature of phonemes as well as the nature of human attention, few children acquire phonemic awareness spontaneously" (Adams, Treiman, & Pressley, 1998, in Snow et al., 1998, p. 54).

Phonological Awareness as a Predictor of Reading Achievement

Skills in phonological awareness have been demonstrated to be reliable predictors of reading achievement. Moreover, phonological awareness is cited as a key to beginning reading acquisition (Smith, Simmons, & Kameenui, 1995).


Specifically, tasks such as identifying the first sound in a word, blending phonemes into a word, and analyzing sounds within words have been cited as effective predictors of reading development (Olofsson & Niedersoe, 1999). It is believed that instruction in these and similar phonological awareness skills assists in preparing children to learn and benefit from phonics (Lane et al., 2002). Therefore, children with poor phonological awareness skills may be at risk for having difficulties in learning to read in the primary grades. In fact, it has been noted that "children who enter first grade low in knowledge about the phonological features of words or who have difficulties processing the phonological features of words are at high risk for difficulties responding to early reading instruction" (Torgesen, 2002, p. 12). Yet, phonological awareness skills are not necessarily fully developed or intact prior to beginning reading instruction. Phonological awareness skills may strengthen as the child develops into a mature reader. "The correlation between reading and phonological awareness, which is already substantial by the start of school, becomes stronger during the early grades" (Snow et al., 1998, p. 56). However, phonological awareness abilities remain a robust predictor of early reading achievement even when assessed in very young preschool children (Blachman, 2000). In fact, even when individual differences in intelligence are considered, phonological awareness abilities assessed in preschool children continue to be significant predictors of later word recognition and spelling skills (Kennedy & Backman, 1993).

Assessment of Phonological Awareness

It is important to identify early those children who are at risk for reading failure. Given that phonological awareness is an accurate and reliable predictor of reading achievement, assessing a child's phonological awareness skills is a logical first step in helping these children.


"Educators face the formidable challenge of determining which children have weaknesses in phonological awareness and, therefore, which children are likely to develop reading problems" (Lane et al., 2002, p. 103). Since phonological awareness skills are important to reading development and later achievement, what types of tasks are being used to determine whether a child has the necessary precursors for reading success?

The assessment of an individual's phonological awareness typically involves one or more of the following tasks: isolating or segmenting one or more phonemes in a spoken word, blending or combining a sequence of separate phonemes into a word, and manipulating (adding, subtracting, or rearranging) the phonemes within a word (Snow et al., 1998). Assessment tools evaluating an individual's phonological or phonemic awareness skills do not involve letters (Torgesen, 2002). It is phonological awareness tasks involving the manipulation of spoken language that assist in identifying children who are at risk for reading difficulties. Researchers have found that children who are successful on phonological awareness tasks such as deletion (e.g., say "hit" without saying the /h/ sound) and categorization (e.g., "bat" and "big" go together because they both start with /b/) learn to read and spell with greater ease than those children who perform poorly on such tasks (Blachman, 2000).

Although informal methods may be used, many formal measures of phonological awareness have been developed and are available for widespread use (see Lane et al., 2002). Furthermore, the assessment of phonological awareness can be accomplished individually or in a group setting. Ultimately, it is important to assess a broad range of skills in order to have the best estimate of future reading performance.


"Both conscious awareness of the phonemes in words and ability to accurately identify them within words is necessary in learning to phonemically decode words in print" (Torgesen, 2002, p. 12).

Research has been conducted in recent years regarding the relationship between phonological awareness and intelligence. It is believed that strengths or weaknesses in phonemic awareness do not necessarily depend on an individual's intellectual ability or general verbal skills (Pugh et al., 2001; Shaywitz, 1996; Torgesen, 2002). "Weaknesses in phonemic awareness characterize children with reading problems across a broad span of general verbal ability" (Torgesen, 2002, p. 12). It has been found that phonological awareness skills predict future reading achievement even when intelligence is controlled. Tests of phonological awareness are among the best predictors of children's progress in learning to read and typically account for large amounts of variance in reading skill even after the effects of age and IQ have been controlled for (McDougall, Hulme, Ellis, & Monk, 1994).
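To make the logic of "controlling for" age and IQ concrete, the sketch below illustrates one conventional approach, hierarchical regression: a baseline model predicting reading skill from age and IQ is compared with a model that also includes a phonological awareness score, and the increase in explained variance (R-squared) is attributed to phonological awareness. This example is not taken from the studies cited above; the data are simulated and the variable names are hypothetical.

# Hedged illustration of hierarchical (incremental) regression on simulated
# data; not drawn from McDougall et al. (1994). Variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
age = rng.normal(6.0, 0.5, n)    # age in years
iq = rng.normal(100, 15, n)      # IQ score
pa = rng.normal(50, 10, n)       # phonological awareness score
# Simulated reading outcome in which PA contributes beyond age and IQ.
reading = 0.8 * age + 0.1 * iq + 0.5 * pa + rng.normal(0, 5, n)

# Step 1: control variables only; Step 2: add phonological awareness.
baseline = sm.OLS(reading, sm.add_constant(np.column_stack([age, iq]))).fit()
full = sm.OLS(reading, sm.add_constant(np.column_stack([age, iq, pa]))).fit()

# The increment in R-squared is the variance in reading skill explained by
# phonological awareness after age and IQ are accounted for.
print(f"R^2, age + IQ only:        {baseline.rsquared:.3f}")
print(f"R^2, adding PA:            {full.rsquared:.3f}")
print(f"Incremental R^2 from PA:   {full.rsquared - baseline.rsquared:.3f}")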


Intervening with Children Struggling to Learn to Read

For those individuals identified as having weaknesses in phonological or phonemic awareness, it is important to intervene as early as possible in order to prevent further reading difficulties. "Children who are delayed in the development of phonemic awareness have a very difficult time making sense out of phonics instruction" (Torgesen, 2002, p. 12). Therefore, these students must obtain the necessary instruction to strengthen their phonological awareness skills and prepare them for future reading instruction. Early preventative or remedial efforts will prevent academic frustrations from consuming these children. Therefore, the total number of negative side effects from experiencing reading failure can be reduced (Olofsson & Niedersoe, 1999).

To foster phonological awareness, children must be exposed to print at an early age. Among other things, this can be accomplished by reading to children, talking about literature and storybook characters, and pointing out signs along the roadside. "Such global awareness of the forms, functions, and uses of print provides not just the motivation but the basic conceptual backdrop against which reading and writing may best be learned" (Adams, 1990, p. 337). The current reading research literature explicitly indicates that incorporating phonological awareness components into early reading instruction is essential. Those children who have strong phonological awareness skills, whether due to explicit instruction or developed through early family and preschool literacy experiences, appear to have an early reading and spelling advantage (Blachman, 2000).

Critical Elements of Instruction

While early literacy experiences, including exposure to printed materials, provide an important foundation for later reading success, there are several critical elements of reading instruction that should be present in early elementary classrooms. Reading research has demonstrated that there is "strong evidence of a positive effect on reading with intervention that combines phonological awareness instruction and explicit, systematic instruction in reading for children in kindergarten, first, and second grades" (Blachman, 2000, p. 486). Although it is beneficial for all children in the early elementary grades to receive instruction in phonological awareness that is direct, systematic, and explicit, there is a heightened necessity for this to occur for the struggling reader or those students at risk for later reading failure. Specifically, instruction for children who have difficulties learning to read must be "more explicit and comprehensive, more intensive, and more supportive than the instruction required by the majority of children" (Foorman & Torgesen, 2001, p. 206).


Due to deficiencies in phonological awareness abilities, some children will not discover connections between spoken and written language independently, despite having had quality preschool literacy experiences and opportunities to interact with language (Blachman, 2000). By ensuring that these critical elements of classroom instruction are present for all students, especially those most in need of reading support, the percentage of children remaining poor readers can be significantly reduced (see Torgesen, 2002). For children with the most severe reading difficulties, phonological awareness interventions that are longer, more intense and explicit, and structured to move beyond accuracy in decoding are necessary to facilitate fluent word recognition (Blachman, 2000) and, therefore, later academic success.

To deliver reading instruction in intensive, meaningful, and efficient ways, teachers employ various grouping practices in their classrooms. These include organizing and delivering instruction to a whole classroom of students simultaneously, in small groups, or individually. The efficacy of these different instructional arrangements for teaching reading to average and struggling readers has been detailed in the reading research literature. Generally, research has demonstrated that small group and one-on-one instructional arrangements represent the most effective grouping practices for reading instruction (Elbaum, Vaughn, Hughes, & Moody, 2000). However, many teachers consider whole class instruction to be the preferred approach to reading instruction (Elbaum et al., 1999) for general and special education students, and this continues to be the most common practice (Elbaum et al., 1999; Logan, Bakeman, & Keefe, 1997). Unfortunately, the ways in which teachers group students for reading instruction affect student outcomes, and small group and individual instruction have been demonstrated to be more effective than whole class instruction (Ehri, 2004; Elbaum et al., 2000; Vaughn et al., 2003).


Measuring Progress and Reading Achievement Outcomes

There is empirical research demonstrating that phonological awareness plays a significant role in reading ability and disability (Lane et al., 2002; Olofsson & Niedersoe, 1999; Smith et al., 1995; Stone & Doane, 2001; Torgesen, 2002). Regardless of the grouping arrangements or the specific instructional content, it is important to consider methods that can be used to monitor progress and evaluate outcomes for those students who are at risk for reading failure or are involved in some sort of reading intervention. In selecting measures to document student progress or evaluate the efficacy of an intervention, it is important to consider the psychometric adequacy and the degree of specificity of each outcome measure (Stone & Doane, 2001).

Norm-referenced, standardized assessment instruments provide information about a student's current level of functioning compared to a large, often nationally representative group of same-aged peers. This information can be especially important when evaluating student achievement gains or in making eligibility determinations. Norm-referenced tests can be used district-wide, statewide, or even nationally, to provide a unified method for determining eligibility for special education programs (Shinn & McConnell, 1994). Norm-referenced, standardized assessment tools are often valued for their ease of interpretation, presumed technical adequacy, and provision of norms with which to compare student performance (Sofie & Riccio, 2002). An example of a norm-referenced, standardized measure that can be implemented to assess skills such as word identification and reading fluency is the Woodcock Johnson Tests of Achievement (WJ-III; Woodcock, McGrew, & Mather, 2001). The Comprehensive Test of Phonological Processing (CTOPP; Wagner, Torgesen, & Rashotte, 1999) is one common standardized measure used to evaluate an individual's phonological awareness abilities.
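As an illustration not drawn from the sources above, the comparison to same-aged peers that such instruments provide is typically expressed as a deviation-based standard score. For tests such as the WJ-III, which report scores with a mean of 100 and a standard deviation of 15, a raw score X is located relative to the age-norm mean and standard deviation:

\[ z = \frac{X - \mu}{\sigma}, \qquad \text{SS} = 100 + 15z \]

Under this metric, a child scoring one standard deviation above the age-norm mean receives a standard score of 115, placing the child at roughly the 84th percentile.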


13 Torgesen, & Rashotte, 1999) is one common st andardized measure used to evaluate an individuals phonological awareness abilities. While there is often an emphasis on selec ting norm-referenced, standardized tests to make special education decisions (Sofie & Riccio, 2002), other eval uative tools can be employed to monitor student progress or doc ument achievement gains. Curriculum-based measurement (CBM) is a dynamic assessment tool that can be employed to monitor student progress and assist in evaluating achievement outcome s relative to a particular intervention. CBM relies on a traditional psychometric framework by incorporating conventional notions of reliability and validity so that the standardized test administration and scoring methods have been designed to yield accurate and meaningful information (Deno, Fuchs, Marston, & Shin, 2001, p.508). Therefore, using CBM, student performance can be closely monitored throughou t instruction, and decisions can be made immediately as to whether academic progress is satisfactory as this progress relates to the curriculum. One example of curriculum-base d reading materials that have gained enormous popularity in recent years as a m eans of identifying at-risk students and monitoring student progress is the Dynamic Indi cators of Basic Early Literacy Skills, or DIBELS (Good, Kaminski, Laimon, & Johns on, 1992; Kaminski & Good, 1996). One type of DIBELS task that has been em ployed with young children is letter naming fluency, which measures the accuracy and sp eed with which a child can provide the names of upper and lower case letters of the alphabet. Lett er naming fluency, or letter identification, is considered to be one of the strongest predic tors of school readiness and later reading achievement (Elliott, Lee, & Tollefson, 2001; Snow et al., 1998; Speece, Mills, & Ritchey, 2003)
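To show concretely how such a fluency-based score is computed, the sketch below scores a hypothetical letter naming probe as the number of letters named correctly within a one-minute timing, prorated to a per-minute rate if the child finishes early. The function name, sample data, and at-risk cutoff are illustrative placeholders, not official DIBELS materials or benchmark values.

# Hedged illustration of scoring a letter-naming-fluency probe.
# The cutoff below is a placeholder, not an official DIBELS benchmark.
def letter_naming_fluency(presented, responses, seconds_used=60):
    """Count letters named correctly; prorate to a per-minute rate if the
    child finishes the probe before the one-minute timing expires."""
    correct = sum(1 for target, said in zip(presented, responses)
                  if said is not None and said.lower() == target.lower())
    return correct * 60 / seconds_used

probe = ["m", "A", "t", "S", "b", "E", "r"]     # letters shown, in order
answers = ["m", "a", "f", "s", None, "e", "r"]  # child's responses (None = no response)
score = letter_naming_fluency(probe, answers, seconds_used=60)

AT_RISK_CUTOFF = 25  # hypothetical letters-per-minute threshold
print(f"LNF score: {score:.0f} letters per minute")
print("flag for follow-up" if score < AT_RISK_CUTOFF else "at or above benchmark")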


Selecting Reading Curricula and Delivering Instruction

Through comprehensive meta-analyses of the reading research literature and influential documents such as Preventing Reading Difficulties in Young Children and Teaching Children to Read, findings regarding effective prevention and intervention strategies for teaching children to read are becoming apparent. For example, through its review of the reading research literature, the National Reading Panel (2000) identified several instructional components that should be present in order for children to succeed in learning to read. Among these are the incorporation of phonemic awareness activities into early reading instruction and the importance of direct instruction (National Reading Panel, 2000). In order for beginning reading instruction or interventions to be effective, phonological awareness training must involve explicitly and systematically teaching children in small groups to manipulate phonemes with letters (National Reading Panel, 2000).

However, while prominent researchers and major legislation appear to resoundingly support certain instructional practices and intervention methods as being effective, a significant number of children continue to struggle in learning to read. "A large number of students who should be capable of reading ably given adequate instruction are not doing so, suggesting that the instruction available to them is not appropriate" (Snow et al., 1998, p. 25). A host of instructional conditions remain in a significant number of schools today that contribute to the failure of many students in learning to read. These include the lack of an appropriate curriculum, low expectations for student success, teachers poorly trained in effective methods for teaching children to learn to read, unavailability of appropriate curricular materials such as books, and noisy or crowded classrooms (Snow et al., 1998).


Oftentimes, phonological awareness instruction, which has been documented to be a necessary component of early reading instruction or intervention, is not adequately addressed in general classroom instruction. Unfortunately, while the ability to manipulate and segment phonemes correlates strongly with later reading success, these skills are generally unattainable unless children receive formal reading instruction in these areas (Adams, 1990).

Just as children must acquire knowledge in a variety of academic subjects (social studies, science, and mathematics, for example), quality reading instruction should address various facets of reading in addition to phonological awareness. These skills include phonics, fluency, vocabulary instruction, and text reading comprehension (National Reading Panel, 2000). Phonological awareness training offers the necessary foundational knowledge in the alphabetic principle and serves as one component in a comprehensive instructional program, but other competencies must be acquired as well to ensure student success in reading and writing (National Reading Panel, 2000). However, for the young student, phonological awareness instruction provides the necessary foundation for later instruction in other reading principles.

With the accountability pressures placed on schools and the financial woes of local and state educational systems, educators must continue to find ways to deliver effective reading instruction to students and provide meaningful interventions to those students struggling to learn to read for whatever reason. One such program that focuses on phonological awareness training and has received attention in the reading research literature is the Lindamood Phoneme Sequencing Program.


While originally devised for clinical use with students in a one-on-one setting, this program has been scaled up for use in the schools, in both prevention and intervention efforts, and has been delivered in various grouping arrangements.

The Lindamood Phoneme Sequencing Program

Program Purpose

The Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech (LiPS) is a multisensory program that incorporates auditory, visual, and tactile-kinesthetic strategies to teach phonemic awareness, and eventually reading and spelling skills, through direct instruction. "The contribution of the LiPS Program is the development of an oral-motor, visual, and auditory feedback system that enables all students to prove the identity, number, and order of phonemes in syllables and words" (Lindamood & Lindamood, 1998, p. xiv). The LiPS program can be employed as the primary language arts component of an educational curriculum, or it can be used in conjunction with existing reading materials used within the schools (Lindamood & Lindamood, 1998).

Program Sequence

According to the LiPS manual, the progression of the program is generally organized into five levels. These components include Setting the Climate for Learning, Identifying and Classifying Speech Sounds, Tracking Speech Sounds, Associating Sounds and Symbols, and Spelling (Encoding) and Reading (Decoding).

Setting the climate for learning

The purpose of the first level, Setting the Climate, is to engage the student actively in the learning process by helping him to know what he will be doing and why (Lindamood & Lindamood, 1998).


In this portion of the program, the student learns more about how to see, hear, and feel the sounds in words in order to make learning to read and spell easier (Lindamood & Lindamood, 1998).

Identifying and classifying speech sounds

In Identifying and Classifying Speech Sounds, the student is introduced to the process of categorizing speech sounds based on similarities and differences between them. The student begins the multisensory experience, hearing, feeling, and seeing sounds as they are produced in order to identify, classify, and label each of the consonant and vowel sounds (Lindamood & Lindamood, 1998). As new sounds are introduced and labeled, letter symbols may be presented concurrently, or this component may be postponed to a later level.

Tracking speech sounds

Tracking involves the manipulation of concrete objects (mouth pictures, colored blocks, and/or colored felts) in order to learn the identity, order, and sameness/difference of speech sounds in syllables and words. "The ability to track sounds in sequences and conceptualize them visually is a critical factor in reading and spelling" (Lindamood & Lindamood, 1998, p. 11). During this component, the student hones his skills in tracking sounds in sequences and associating sounds and symbols with these sequences (Lindamood & Lindamood, 1998, p. 11). Beginning at the syllable level with two to three sounds, tracking continues throughout the sequence of the program to the multisyllable level. During the tracking activities, the student learns to track five types of changes in syllables as one sound at a time is substituted, taken away, added, repeated, or switched (Lindamood & Lindamood, 1998).


Associating sounds and symbols

If letter symbols were not previously introduced, they can be introduced at this stage of the program. While letter symbols can be used for reading and spelling activities (oftentimes mouth pictures are used initially for younger or more severe students), tracking activities never involve the use of letter symbols. This is because tracking activities involve the manipulation of phonemes, and letters oftentimes do not match to sounds with a one-to-one correspondence (e.g., /th/ is one sound, but it is represented by two letters). "Sound-symbol association activities in Spelling and Reading should be overlapped with the Tracking activities as a separate but concurrent task" (Lindamood & Lindamood, 1998, p. 13).

Spelling (encoding) and reading (decoding)

Through reading and spelling activities, first with mouth pictures, then letter symbols, the student has the opportunity to integrate the auditory tracking skill with the sound-symbol associations developed in previous levels of the program (Lindamood & Lindamood, 1998). Spelling and reading tasks extend from the simple syllable level to the complex and multisyllable levels, depending on the age and developmental level of the student.

Program Paths

There are two paths through the LiPS program: the Horizontal Path and the Vertical Path (Lindamood & Lindamood, 1998). The order of progression through the program materials depends on the age and developmental level of the student, as well as instructor preference. In the Horizontal Path, all consonant sounds are presented first, followed by all vowel sounds. Then tracking, reading, and spelling of syllables and words are introduced, from simple, to complex, to multisyllable words.


The Vertical Path, however, presents three consonant pairs and three vowel sounds, and uses these to track, read, and spell simple syllables and words. Next, the remaining consonants and vowels are introduced slowly as tracking, reading, and spelling continue to the multisyllable level. The Vertical Path is deemed most appropriate for younger children, children with developmental delays, or those students who have experienced limited academic success (Lindamood & Lindamood, 1998).

According to the program manual, the length of LiPS treatment will vary depending on the type of instructional setting (Lindamood & Lindamood, 1998). In a classroom, it is suggested that instruction be provided daily for approximately 40 to 50 minutes in order to reach the complex syllable level within two to three months. For a clinical setting with one-on-one or small group instruction, it is suggested that intensive treatment be administered for three to four hours each day. Length of treatment will depend on the age and skill level of the student.

Key Program Components

A key element of the LiPS program is the quality of exchanges between the instructor and the student or students. The program developers describe a Socratic questioning interaction that they term "responding-to-the-response" (Lindamood & Lindamood, 1998). In responding-to-the-response, the instructor incorporates simple and direct questioning in such a way as to allow the student to discover new concepts, monitor his or her own progress, and identify and self-correct errors. "This questioning elicits the sensory-cognitive connections that are the goal of the LiPS Program" (Lindamood & Lindamood, 1998, p. xiii). For example, instead of correcting a student's incorrect response by providing the correct answer, the instructor uses a series of questions to lead the student to the desired response.


This is believed to be the most critical element of the instructor-student interactions throughout the entire program (Lindamood & Lindamood, 1998).

Training of Instructors

While training in the LiPS program has varied since the program's inception, persons affiliated with the Lindamood-Bell Learning Processes company train interested individuals throughout the country on a regular basis. Currently, program developers and affiliated trainers offer a three-day workshop to prepare persons to teach the LiPS program to individuals or groups of students. A minimum of 70 to 80 trainings is offered nationally each year for professionals interested in learning the LiPS program (P. Worthington, personal communication, September 28, 2005). Presently, Lindamood-Bell has contracts with over 100 schools and districts nationwide, infusing trainers into these systems to teach teachers how to instruct students in the LiPS program and offering consultative services to these schools for at least one school year (P. Worthington, personal communication, September 28, 2005).

Previous LiPS Research

The LiPS program has been implemented in dozens, if not hundreds, of educational and clinical sites throughout the nation. However, substantial empirical evidence regarding student outcomes as a result of this reading intervention remains limited. There are only a small number of studies that have examined issues surrounding one-on-one implementation of the LiPS program, and fewer that address LiPS program implementation with small groups or classes of students. The following describes some of the LiPS program research that has been documented in recent years.


Individual implementation

A handful of studies have been published in recent years evaluating the efficacy of the LiPS program with one-on-one implementation. Research involving the Lindamood program has been conducted with samples of various sizes, with participants spanning a wide range of ages, and in both school and clinical settings. In one of the first studies to examine issues of program effectiveness with students with learning disabilities in the schools, Kennedy and Backman (1993) compared student reading achievement scores for nine students who received the Lindamood program in addition to the school's traditional curriculum with nine students in a control group who received only the traditional curriculum. Participants were between the ages of 11 and 17 and attended a nonprofit residential school for high school students with severe learning disabilities. An educational consultant and a teacher who had previously been trained in the Lindamood program, along with a speech pathologist, trained the ten teachers implementing the intervention during a series of in-service trainings and regular bi-weekly meetings. Assessment measures were administered at the beginning of the school year, at mid-year, and at the end of the school year. Treatment for those receiving the Lindamood program began after pretesting in September, was administered individually, and consisted of three 50-minute class periods per day for six weeks, totaling 75 hours (Kennedy & Backman, 1993). While all of the participants in this study were reported to have made significant gains on standardized reading and spelling measures, there was no evidence that the students in the experimental condition made significantly greater gains than the control group on these standardized reading and spelling measures (Kennedy & Backman, 1993).


However, there was evidence of significantly greater gains made by those students receiving the Lindamood program on measures of phonological awareness and use of phonetic strategies in spelling real and nonwords (Kennedy & Backman, 1993). Overall, the authors concluded that the Lindamood program was "a successful addition to a comprehensive remedial program in terms of improved ability to sequence speech sounds and phonetic accuracy in spelling real and nonwords" within this sample of students with severe LDs (Kennedy & Backman, 1993, p. 258).

While Kennedy and Backman (1993) evaluated the Lindamood program's effectiveness with a high school sample, Torgesen et al. (1999) addressed the program's success with elementary school students. Torgesen et al. (1999) evaluated the relative effectiveness of three methods, including a variation of the LiPS program, for preventing reading disabilities in children with weak phonological skills (n = 138). Students were recruited to participate in the two-and-a-half-year study midway through their kindergarten year. The research design consisted of four conditions: a phonological awareness plus synthetic phonics (PASP) condition, an embedded phonics (EP) condition, a regular classroom support (RCS) condition, and a control group that did not receive treatment (NTC). Only two of these conditions, PASP and EP, were considered to be truly experimental in nature; the third intervention (RCS) was designed to be most closely aligned with the children's present reading curriculum. In the PASP condition, students received the LiPS program (referred to in this study by its former name, the Auditory Discrimination in Depth, or ADD, program) with a focus on explicit instruction in phonemic awareness in conjunction with some instruction in reading decodable text.

PAGE 33

While both the PASP and EP programs consisted of direct instruction in phonemic decoding strategies, the most important instructional contrast involved the degree of explicitness of instruction in phonological awareness and phonemic reading skills, as well as the extent of decontextualized, focused practice on these skills (Torgesen et al., 1999). Participants in each of the three treatment conditions received four 20-minute sessions of one-on-one instruction per week over the two-and-a-half-year period. Certified teachers led two of the weekly sessions, and the two additional weekly sessions were led by aides who followed the teachers' written lesson plans. The certified teachers, referred to as "tutors," were recruited for this study, randomly assigned to either the EP or PASP condition, and received eighteen hours of initial training from members of the research team in the program to which they were assigned. In fact, Patricia Lindamood, one of the developers of the program, trained the tutors involved with the PASP program. This initial training was followed by biweekly treatment integrity evaluations, conducted by research project members via videotaped sessions, and by in-service trainings throughout the treatment period. In sum, total treatment time consisted of 88 hours of one-on-one instruction beginning in the middle of kindergarten and extending through the second grade (Torgesen et al., 1999).

According to Torgesen et al. (1999), the most phonemically explicit condition, the PASP condition, produced the strongest growth in word-level reading skills. Participants in the PASP condition demonstrated significantly stronger phonological awareness, phonemic decoding, and untimed context-free word reading skills than those in the EP group. Moreover, children in the PASP group also demonstrated greater gains on word-level reading skills than participants in either the RCS or NTC groups. No significant differences were noted between the groups in the area of reading comprehension (Torgesen et al., 1999).
During the study, 26% of the sample was retained in either kindergarten or first grade, and there was a significant difference in retention rates across conditions. It is interesting to note that only 9% of the PASP participants were retained, whereas the percentages for the NTC, RCS, and EP conditions were 41, 30, and 25, respectively (Torgesen et al., 1999). In addition, the percentages of children referred for special services during the research period also differed, with the NTC, RCS, EP, and PASP conditions at 22, 24, 42, and 18, respectively (Torgesen et al., 1999).

Torgesen and his colleagues again contrasted the relative effectiveness of the LiPS program, referred to by the authors as the ADD program, with the Embedded Phonics (EP) program, this time with participants between the ages of eight and ten who had previously been diagnosed with a learning disability (Torgesen et al., 2001). While sixty children participated in the treatment phase of the study, only fifty participants are included in the results due to attrition. Participants were randomly assigned to one of two conditions, or instructional approaches. The authors distinguished the two instructional approaches by their relative focus on word-level decoding versus application to meaningful text: "The EP program provided much more practice than the ADD program in reading and comprehending meaningful text, while the ADD program provided more explicit and extended practice on phonemic awareness and phonemic decoding skills than the EP program" (Torgesen et al., 2001, p. 35). Treatment was provided to each participant one-on-one in two 50-minute sessions each day of the week. Total treatment time for each participant was 67.5 hours, extended over an eight- to nine-week period (Torgesen et al., 2001).
Additionally, upon the conclusion of the intensive treatment sessions, clinicians went into the classrooms of participants once per week for the next eight weeks to assist in generalizing the materials from treatment to classroom tasks. Each clinician involved in administering the treatment (either ADD or EP) in this study had at least one year of previous experience teaching the respective method. Five different teachers taught the ADD program, and five teachers instructed participants in the EP program (Torgesen et al., 2001).

Torgesen et al. (2001) concluded that both the ADD and EP programs provided equally effective instruction for the sample of children participating in this study. According to the authors, at the end of the two-year follow-up period, no differences existed between the groups on any of the important reading outcomes (Torgesen et al., 2001). While children receiving the ADD program demonstrated significantly stronger growth in the accuracy of phonemic decoding skills and in the accuracy and fluency of word reading in text at the end of the treatment phase, these gains were not maintained during the follow-up period (Torgesen et al., 2001). The outcomes in this study differed somewhat from Torgesen et al. (1999), where children receiving the ADD program obtained consistently higher scores on measures of phonemic decoding and word identification that were maintained at follow-up; the authors of the present study cited teacher experience as one possible explanation. They hypothesized that the experienced teachers in the present study may have been able to refine components of the EP program to account for the children's phonemic awareness abilities while reading meaningful text (Torgesen et al., 2001).

Another study documenting the Lindamood program's effectiveness is a case study of an adult who received greater than 100 hours of intensive intervention in a clinical setting.
While this case study of an adult has limited generalizability to children and to use of the LiPS program in the schools, and the findings are less than remarkable, this study offers a much more detailed description of treatment implementation than any other study reviewed examining the efficacy of the LiPS program. Conway et al. (1998) examined the effects of the LiPS program, formerly known as the Auditory Discrimination in Depth (ADD) program, in a case study with a 50-year-old male who had previously suffered a stroke that affected the left hemisphere of his brain. At fifteen months post-onset, the patient was administered a series of pretest assessment measures, and treatment began. Treatment was performed one-on-one, for 2 to 4 hours per day, 5 days per week, totaling 101.1 hours over a two-month period (Conway et al., 1998). Six different clinicians, each with extensive training in the program and between 5 and 10 years of clinical experience, administered the treatment to the patient. The program was implemented according to the sequence outlined in the program manual (Conway et al., 1998).

Conway et al. (1998) describe in some detail the four major components of treatment (oral awareness training, simple nonword training, complex nonword-word training, and multisyllable nonword-word training) implemented with this participant. For example, the authors explained that, during the simple nonword training component, one to two chains of ten nonsense segments (e.g., /ip/) were administered to the participant using mouth pictures during each treatment session, totaling two to eight chains per day. Once this task, which progressed from one to three phonemes, was completed with 90 to 100 percent accuracy, the mouth pictures were replaced with colored wooden blocks in order to create a less concrete representation of the phonemes (Conway et al., 1998).
Descriptions of each of the four major components of treatment, along with benchmarks for advancement, are included in this study in greater detail than in many other research studies detailing one-on-one implementation of the LiPS program.

Using a multiple probe design to monitor the progress of the individual and evaluate reading and spelling achievement outcomes, large gains were cited in phonological awareness, reading and spelling nonwords, and reading and spelling real words (Conway et al., 1998). Specifically, Conway et al. (1998) reported improved phonological awareness that was associated with improved reading and spelling for words that were phonologically regular. On pre- and posttest measures, the authors reported standard scores on the Woodcock Reading Mastery Test Word Attack subtest of 99 at pretreatment and 112 at posttreatment, Word Identification subtest scores of 99 at pretreatment and 103 at posttreatment, and Passage Comprehension subtest scores of 117 at pretreatment and 124 at posttreatment (Conway et al., 1998). The patient was reported to have maintained treatment gains in phonological awareness and reading at two months posttreatment (Conway et al., 1998).

A few other studies have evaluated the efficacy of the LiPS program with specific populations using various research design methodologies. For example, Alexander, Anderson, Heilman, Voeller, and Torgesen (1991) evaluated the effectiveness of the Lindamood program with ten students with severe dyslexia. In this study, treatment was implemented one-on-one in a clinical setting, and participants received an average of 65 hours of LiPS training (treatment hours ranged from 38 to 124). From pretesting to posttesting, phonological awareness and decoding skills improved significantly (as measured by the Woodcock Reading Mastery Test and the Lindamood Auditory Conceptualization Test).
In another study, O'Dea (1998) presented a description of the LiPS program used with suburban high school students with documented learning disabilities in the Midwest. The students in this study received instruction in the Lindamood program for 18 weeks, five days each week, for 55 minutes per day. Gains from pretest to posttest were assessed using the Kaufman Test of Educational Achievement. Results indicated that students made average growth in reading comprehension of one year, and growth in decoding of approximately 6.5 months, over the 18 weeks of Lindamood instruction. Improved attitudes toward reading were also noted. Based on the information available, it did not appear that a control group was employed in either of these studies.

Small group implementation

While there is limited empirical evidence supporting the use of the LiPS program in both school and clinical settings when administered individually, even less research has been conducted to evaluate the efficacy of this intervention with small groups or classes of students. One study that did address issues of small group implementation of the Lindamood program was McGuinness, McGuinness, and Donohue (1995). This study compared three groups of first-grade children: one class at a Montessori school receiving the Lindamood program in addition to traditional instruction (n = 15), one class at a private school receiving the Lindamood program in addition to the traditional curriculum (n = 15), and a control group at the private school receiving only the traditional curriculum (n = 12). Teachers implementing the Lindamood program, referred to by the authors as the ADD program, were trained by the second author for 32 hours in the summer prior to treatment implementation, followed by a one-day practicum just prior to the start of school and a one-day refresher practicum prior to the second semester of the project (McGuinness et al., 1995).
The intervention was implemented in small groups of five to seven children, for 20-30 minutes each day over an eight-month period. Pre- and posttesting was completed on all children participating in the study.

According to the researchers, both treatment groups significantly outperformed the control group on word attack and word identification measures. However, results indicated that the Lindamood program had a greater impact on decoding than on word recognition, possibly due to the treatment program's heavy emphasis on phonologically regular and nonsense words (McGuinness et al., 1995). All three groups in this study increased noticeably on a measure of phonological awareness, and no significant differences were noted between the two experimental groups on any measure (McGuinness et al., 1995). Overall, the authors considered this small group implementation of the Lindamood program in these school settings to have been effective: "The adaptation of the ADD program to the classroom was effective to the extent that children who were taught by this method significantly increased their reading standard scores compared to their own initial performance, beyond what is normally expected" (McGuinness et al., 1995, p. 849). However, it should be noted that all three groups increased substantially on the Lindamood Auditory Conceptualization Test, a phonological awareness measure that reproduces some of the specific skills introduced in the Lindamood program. Additionally, the authors of this study reported equivalent success in successive experiments in which they eliminated some specific components of the Lindamood program (McGuinness et al., 1995).

One other study evaluating the Lindamood program as a reading intervention with groups of students was identified, with outcomes that were less favorable.
This study was conducted with both typically achieving students and students receiving exceptional education services in the schools (Roberts, 1975). The treatment group consisted of 39 students with either average abilities or learning difficulties. The control group consisted of 29 students with similar academic characteristics. While both the treatment and control groups continued with their traditional reading instruction, the treatment group also received instruction in the Lindamood program throughout the duration of the study. Phonological awareness (as measured by the Lindamood Auditory Conceptualization Test) and general reading achievement (as measured by the Metropolitan Achievement Test) were assessed prior to the intervention, subsequent to the intervention, and eight weeks after the intervention's termination. No statistically significant differences were noted between the treatment and control groups on the measured reading skills.

Overall, some limitations exist when drawing conclusions about the efficacy of the LiPS program for individual and group use. Despite some consistently identified student gains, descriptions of the methodologies or the actual treatment delivered were often limited in the studies described above. Therefore, it was unclear how closely treatment adhered to the Lindamood program as it was set forth in the training manual. Some studies stated that the treatment or intervention was based on the Lindamood program, but no detailed descriptions of the treatment were included in the articles. Moreover, it was unclear whether some studies included control groups or some form of alternate treatment, and many did not. Also, instructor training and previous experience with the LiPS program were not described in any detail in most of the studies, and sample sizes were often quite small.
Regardless, certain conclusions can be drawn from the extant literature examining the efficacy of the LiPS program. For those studies implementing the LiPS program with individual students, there is evidence to suggest that students made specific word-level reading gains in the most methodologically sound and empirically controlled studies. For example, in Torgesen et al. (1999), elementary students were randomly assigned to one of four conditions (three experimental conditions or a control group). Certified teachers underwent extensive training before delivering the instruction to study participants, and the children's progress was documented over a two-and-a-half-year period. Results of this research indicated that students receiving the Lindamood program made significant gains in phonological awareness and phonemic decoding. While the research evaluating the LiPS program with group implementation is more scant, one empirically sound study (McGuinness, McGuinness, & Donohue, 1995) demonstrated gains similar to those identified by Torgesen and colleagues. In fact, many of the studies described above demonstrated student gains in phonological awareness and increased word attack skills. For those studies using the Lindamood Auditory Conceptualization Test as a measure of student outcomes, gains were consistently noted in students receiving treatment in the Lindamood program. Based on these studies, however, evidence does not suggest that gains in reading comprehension, word identification, and vocabulary skills typically result from instruction in the Lindamood program. Nevertheless, some consistent gains have been noted across the LiPS research literature in specific word-level reading skills. Participants in these studies varied greatly in age (from five years old to adult) and included a range of academic ability levels.
Unfortunately, studies evaluating the efficacy of the Lindamood program with individuals or groups of students consistently fail to include detailed descriptions of the treatment. Specific details of Lindamood program implementation are not offered in sufficient detail to assess treatment integrity. Therefore, in order to replicate these findings, more information is needed about treatment integrity, or adherence to the Lindamood program as described in the program manual, and how this affects student outcomes.

Purpose of this Study

Over the last three decades, reading researchers have learned a great deal about how children learn to read and why some students continue to struggle (Denton et al., 2003). One key foundational reading skill that has received significant attention is phonological awareness and its instruction (Snow et al., 1998; Torgesen, 2002). Phonological awareness involves an individual's ability to understand that spoken language is made up of smaller parts. Phonological awareness training has been found to be a crucial component of beginning reading instruction (Olofsson & Niedersoe, 1999; Smith et al., 1995; Torgesen, 2002). One reading program that offers phonological awareness training is the LiPS program.

While the LiPS program was originally designed for one-on-one implementation, this program is currently being employed in schools with individuals and groups of students. Empirical evidence exists to support the use of this program with individuals and small groups (e.g., McGuinness et al., 1995; Torgesen et al., 1999). Unfortunately, little documentation exists detailing the specific procedures that were followed in treatment implementation or how closely instructors adhered to the program as it was designed (i.e., treatment integrity).
The present study was designed to address some of the gaps in the literature relative to the Lindamood program. The purpose of the present study is two-fold. First, the LiPS program was initially designed for individual treatment in the clinical setting, and much of the research addressing the efficacy of this program pertains to one-on-one implementation. However, instructors are presently being trained to implement this program in school settings, and many teachers have adapted this program to address the needs of students in small groups and whole classrooms. Therefore, the first purpose of this research is to examine issues surrounding the implementation of the LiPS program in the school setting with classes of students. Specific research questions related to treatment integrity include:

1. When implementing LiPS in kindergarten classrooms with large groups of students, how closely do the instructors adhere to the program as described in the training manual?
2. What decisions do instructors make about the program sequence in relation to student needs?
3. How does program implementation vary across instructors when considering the training and experience of the two instructors?
4. How does LiPS instruction differ from the classroom to the clinical setting?

A second purpose of this study is to evaluate student outcomes in classrooms where the LiPS program is used as a regular part of the reading curriculum. Specific research questions related to student outcomes include the following:

1. What gains do students demonstrate in reading after receiving instruction in the LiPS program?
2. Do student academic gains differ on a measure more closely aligned with the LiPS program (i.e., the LAC) as compared to other standardized, norm-referenced measures?
3. Does student reading achievement differ significantly from instructor to instructor?
Earlier research has examined some issues related to individual implementation of the LiPS program, with samples in these studies varying in age and severity of reading difficulty. However, these studies offer little insight into exactly how the treatment was implemented or the specific program sequence that was followed. Moreover, even less work has been done to empirically examine group implementation of this program in the school setting. This study seeks to examine the use of the LiPS program as an early intervention method and its application to a group or classroom setting.
CHAPTER 2
METHOD

Previous research has documented the efficacy of the LiPS program with individual children (e.g., Torgesen et al., 1999) and with small groups of students (e.g., McGuinness et al., 1995). Academic gains have been noted across studies in phonological awareness and phonemic decoding skills. However, despite the empirical evidence to support gains in word-level reading skills subsequent to instruction in the LiPS program, little information is available regarding how the program was implemented in these studies. Therefore, in order to replicate some of the findings related to LiPS efficacy, more information is needed regarding treatment integrity. The purpose of this study was to examine the treatment integrity of the LiPS program when it was incorporated into kindergarten classroom reading instruction, as well as the student progress and outcomes achieved over the treatment period.

Participants

Participants included kindergarten students attending two local elementary schools in North Central Florida. Two kindergarten classes from each school, with approximately 20 students per classroom (n = 75), were involved in this study. Students were assigned to each classroom by the school administration prior to the commencement of this research. It was assumed at the outset of the study that each group was relatively commensurate across academic performance levels, with higher and lower achieving students present in each of the four classrooms.
This was confirmed by the pretest assessment data collected. Informed consent was obtained from each participant's parent or guardian prior to the student's data being used for the study.

Settings

Two school sites participated in this research. School 1, the site of Instructor 1, was a laboratory school affiliated with the local state university. This school was considered a public school and served as its own school district within the state. The population of the school was diverse with respect to race and ethnicity and was selected to match the state in terms of Florida's socioeconomic and racial-ethnic composition. The school serves students from kindergarten through twelfth grade. School 2, the site of Instructor 2, was a parochial school serving students in kindergarten through eighth grade. Families of children attending both schools underwent admission procedures and chose to have their children attend these particular schools.

In addition to the school settings where data were collected on whole classroom LiPS instruction, additional data were collected in a clinical setting where LiPS was used with children one-on-one. This clinical setting was a private facility in Central Florida offering remedial services to children and adults with learning difficulties. Individuals seeking assistance at this private center undergo a comprehensive evaluation, and interventions are designed to address the particular academic weaknesses of each person. The LiPS program is one of a number of remedial programs and interventions employed at this facility.

Instructors

Two instructors participated in this research. One instructor taught at each school, administering the Lindamood Phoneme Sequencing Program (LiPS) to students in her respective two classrooms.
Each was a licensed speech pathologist and had been previously trained in the LiPS program. The two instructors varied in their amount of overall clinical experience related to speech pathology, training received in the LiPS program, and specific experience administering the LiPS program to individuals and groups of students. An initial interview with each instructor was conducted early in the semester, prior to program implementation, to determine the level of training (where, when, number of hours) and experience (amount and type of experience; individual/group; clinical/school) each had attained with the LiPS program. The two instructors had worked collaboratively to offer speech/language services in the past; however, each individual designed and implemented the LiPS program independently at her school.

The LiPS instruction of two instructors was also observed in the clinical setting. These two instructors had participated in extensive training and supervision in the LiPS program prior to their work with students at this facility. The two instructors at this clinical site had a combined total of approximately ten years of experience working with students using the Lindamood programs.

Procedure

Two variables, treatment integrity and student outcomes, were assessed throughout this research. Each variable will be discussed in turn.

Treatment Integrity

Many of the previous research studies examining the efficacy of the LiPS program employed experienced clinicians or classroom/intervention teachers trained directly by the program developers (Conway et al., 1998; Torgesen et al., 1999; Torgesen et al., 2001).
Thus, while it may be assumed that these instructors strictly adhered to the program as outlined in the LiPS manual, little has been done to document the specific program sequence that was followed during treatment by these or other less experienced instructors in the studies reviewed. In sections such as "Classroom and Clinical Activities" (p. 24) and "Additional Ways to Practice Consonants in the Classroom and Clinic" (p. 82), the LiPS program manual (Lindamood & Lindamood, 1998) includes some information for classroom implementation, offering instructors ideas for practicing or reviewing previously introduced material with students. This study seeks to document and describe how closely each instructor adhered to the LiPS manual when implementing the entire program with classes of students, the decisions made by each instructor as the program was implemented in a classroom, and the types of modifications that were made to the program for group instructional purposes. From most of the previous research studies, it is unclear exactly how the treatment was implemented; therefore, it is difficult to interpret or make generalizations regarding the student outcome data presented in each study.

The daily and weekly lesson plans of each instructor were gathered in order to assess treatment integrity, or adherence to the program as described in the LiPS manual. Additionally, the number of treatment hours each participant received was recorded, as indicated by the instructor lesson plans. Periodic, direct classroom observations were conducted by the primary investigator to ensure that each instructor adhered to stated lesson plans and that lesson plans were revised when necessary to accurately reflect introduced material. Additionally, several forms were created by the primary investigator to collect data during the classroom observations.
These forms included the Record of Program Delivery form, the Classroom Observation Error Handling form, the Student Opportunity to Respond form, and the Student Engagement/On-Task Behavior form (see Appendices A through D).

The Record of Program Delivery form was used by the primary investigator during observations in each classroom at a minimum of eight points throughout treatment implementation to document the occurrence or nonoccurrence of key program components incorporated by each instructor into the instruction. The elements included on the Record of Program Delivery form were selected based on the importance placed on these components in the LiPS training manual, as well as on the primary investigator's own training and past experience in teaching the Lindamood program. For each program or session component listed on the Record of Program Delivery form, the page numbers from the LiPS manual are cited. During each classroom observation, a Record of Program Delivery form was completed, and the presence or absence of each component was recorded. For example, one point on the form addresses whether the instructor avoided the use of the word "no" when a student's answer was not the expected one. If a specific item was not applicable to a particular session, then this was indicated on the form as well. For example, if the class lesson did not include reading or spelling practice, then the assessment of student mastery was not applicable.

The frequency or degree to which some of these key program components were incorporated by each instructor into instruction was also assessed, on at least eight occasions throughout the intervention, using the Classroom Observation Error Handling form. From the Record of Program Delivery form, the frequency of occurrence of two specific items was recorded.
First, each time the instructor used a line of questioning to lead the class or a particular student to a desired response, a tally mark was made; this is referred to as "Socratic questioning" in the LiPS training manual (Lindamood & Lindamood, 1998, p. 419). Also, each time the instructor questioned a student even though a correct response had been made, a tally mark was recorded.

Student engagement was evaluated at least seven times per classroom throughout treatment using the Student Opportunity to Respond and Student Engagement/On-Task Behavior forms. Using the Student Opportunity to Respond form, the frequency with which each student orally responded during each classroom observation was recorded. Class lists were maintained, and a tally mark was made for each instance in which an individual student responded directly to the instructor's question and the instructor acknowledged that response. On the Student Engagement/On-Task Behavior form, the number of students looking at the instructor at the end of each five-minute period was recorded. Again, this form was completed on at least eight occasions in each of the classrooms.

Furthermore, focused interviews were conducted with each instructor near the beginning, middle, and end of treatment (see Appendix F). This was done to obtain each instructor's perspective regarding what she perceived to be effective during her LiPS instruction, the adaptations or accommodations that were made to the curriculum, why specific curricular choices were made, and how she perceived implementation would differ if she were instructing in a one-on-one setting.

In order to compare how LiPS treatment implementation in a large group setting differed from a one-on-one instructional setting, additional observations were conducted in a clinical setting where the program was employed with students one-on-one.
Using the Record of Program Delivery and Error Handling observation forms, observations were conducted of two instructors working individually with two different students, across approximately two to four sessions per student (totaling twelve observations). The goal was to observe program presentation in a one-on-one instructional setting until stability across observations was achieved. Comparisons were then made of the similarities and differences in program implementation when conducted one-on-one versus in larger group settings.

Student Progress/Outcomes

At the outset of the school year, in September, pretest measures were administered to the participants individually over a two-week period prior to the initiation of the LiPS program in their classrooms. Posttesting was conducted at the culmination of the treatment period, during the month of February, again over a two-week period. The pre- and posttesting was conducted by the primary investigator and two other recruited volunteers trained in administering these measures. Total testing time was approximately 30 to 45 minutes each for pre- and posttesting. The order of the assessment measures was counterbalanced in order to account for order effects. Once introduced, the LiPS program was implemented in each classroom in addition to the traditional curriculum. Termination of the treatment at each school was at the discretion of each instructor and was similar across both sites. The program was employed in each classroom at both school sites from approximately September to February.

Measures

Pretesting and posttesting to assess student achievement were conducted with the following measures:
Woodcock-Johnson Tests of Achievement (WJ-III)

The WJ-III (Woodcock, McGrew, & Mather, 2001) is an individually administered, standardized, norm-referenced achievement measure. Two reading subtests of the WJ-III were administered to participants in this study: Letter-Word Identification and Word Attack. The Letter-Word Identification task required the individual to decode real words in isolation. The Word Attack task required the student to identify individual sounds for some letters and to decode nonsense words, assessing phonemic awareness skills in reading individual sounds and novel words. From this measure, both raw scores and standard scores were obtained.

This assessment tool is a widely used measure of reading achievement and has demonstrated adequate reliability and validity. For example, for children ages 5 to 19, the Broad Reading cluster, which includes Letter-Word Identification and measures of reading fluency and comprehension, has a median reliability of .93 (Woodcock et al., 2001). Test-retest correlations on the Letter-Word Identification and Word Attack subtests were .92 (n = 106) and .79 (n = 104), respectively, with one year between administrations for children ages four to seven at first testing (Woodcock et al., 2001). In addition, concurrent validity for Broad Reading has been documented with validity coefficients ranging from .633 to .857 against various measures of intelligence and achievement (Hintze et al., 2001).

Comprehensive Test of Phonological Processing (CTOPP)

The CTOPP (Wagner, Torgesen, & Rashotte, 1999) is an individually administered, norm-referenced measure used to evaluate various facets of an individual's phonological awareness and processing. The following subtests of the CTOPP were administered to participants in this study: Elision, Blending Words, and Sound Matching.
The Elision task measured how well the student could identify and manipulate word chunks or individual phonemes within orally presented words. For example, in this task, the student was asked to complete items such as saying the word "hotdog" without saying "hot," or saying the word "goat" without saying /g/. The Blending Words task assessed the individual's ability to combine, or blend, orally presented syllables, onset-rimes, or phonemes. Finally, the Sound Matching task evaluated the student's ability to identify objects that contained the same initial or final sound as a presented word (for example, which word starts with the same sound as "cat": "hat," "car," or "dog"?). The Elision, Blending Words, and Sound Matching subtests comprised the Phonological Awareness Composite, and both raw and standard scores were obtained for the three tasks and the composite.

Internal consistency reliability estimates have been reported at .96 for the Phonological Awareness Composite for children aged five to six years (Hintze, Ryan, & Stoner, 2003). Moreover, internal consistency reliability for specific tasks (Elision, Blending Words, and Segmenting Words) ranges from .84 to .89 (Rashotte, MacPhee, & Torgesen, 2001). Regarding criterion-related validity, the correlation between the Phonological Awareness Composite of the CTOPP and the Letter-Word Identification task of the Woodcock Reading Diagnostic Battery was found to be .65 (Havey, Story, & Buker, 2002).

Dynamic Indicators of Basic Early Literacy Skills (DIBELS)

DIBELS (Good, Kaminski, Laimon, & Johnson, 1992) involves brief curriculum-based assessment probes that can be used to monitor student progress and to identify children with reading difficulties. National normative data corresponding to benchmarks are available for the various DIBELS tasks.
The Letter Naming Fluency (LNF) and Phoneme Segmentation Fluency (PSF) tasks were administered to the participants in this study. The LNF task required the student to rapidly name as many lower- and upper-case letters as possible from a provided page in one minute. The raw score is the total number of letters correctly identified in one minute. The second task, PSF, required the student to segment orally presented real words in a one-minute period. The raw score achieved is the total number of phonemes correctly identified in one minute.

Reliability and validity data exist to support the use of these curriculum-based probes. Alternate-forms reliability coefficients for the LNF task have been documented to range from .86 (Speece, Mills, & Ritchey, 2003) to .93 (Kaminski & Good, 1996). Regarding concurrent validity, correlations were .77 for the LNF task with the Letter-Word Identification subtest of the WJ-III (Speece et al., 2003), .75 for LNF with the Woodcock-Johnson Skills cluster that included the Letter-Word Identification task, and .60 for PSF with the same Woodcock-Johnson Skills cluster (Elliott, Lee, & Tollefson, 2001). Additionally, Hintze et al. (2003) examined the concurrent validity of the DIBELS measures with the CTOPP using data from 86 kindergarten students. These data revealed that the DIBELS kindergarten readiness tasks correlated strongly with most subtests and composite scores of the CTOPP; specifically, both LNF and PSF correlated with the CTOPP Phonological Awareness Composite at .53.

Lindamood Auditory Conceptualization Test (LAC)

The LAC (Lindamood & Lindamood, 1971) is an individually administered assessment tool that measures phonological awareness abilities through a series of tasks involving the manipulation of colored blocks.
After an elaborate training process wherein the examiner teaches the assessee how the colored blocks can be used to represent individual sounds sequenced from left to right, the individual's ability to identify and represent phonemes and nonsense words with the blocks is measured. For example, the examiner might ask the student to use the colored blocks to represent the following: /p/ /b/ /t/. The student must recognize that three sounds were presented and that each sound was different; therefore, the student would present three different colored blocks to represent the prompt. As the items become increasingly complex, the student must use the colored blocks to represent phonemes within words and manipulate these blocks to reflect changes made to the words (e.g., from /ap/ to /op/, or from /sik/ to /siks/).

On the LAC, raw scores are entered into a formula to produce a Total Converted Score. This formula allows items of greater complexity to be given greater weight. The maximum score allowable is 99, and benchmarks, or Recommended Minimum Scores, are offered for each grade level from kindergarten through adult. As stated on the test protocol, by the end of the first half of kindergarten a child should achieve a minimum score of 31, and this score should be at least 40 by the end of the second half of the kindergarten year.

Of the pre- and posttest measures used in this study, the LAC was most closely aligned with the LiPS program. The same individuals who devised the LiPS program developed this assessment tool. Additionally, the tasks completed during the LAC assessment are included in the instruction of the LiPS program as outlined in the training manual.
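The sameness-and-order logic behind these block items can be made concrete with a short sketch. This is purely illustrative and is not drawn from the LAC materials; the phoneme and color labels are hypothetical stand-ins for the actual test stimuli, and the published Total Converted Score formula is not reproduced here.

    # Illustrative sketch only (not from the LAC materials): checks whether a
    # row of colored blocks is a valid representation of a spoken phoneme
    # sequence, i.e., one block per sound, in left-to-right order, with
    # same-colored blocks if and only if the sounds are the same.

    def valid_block_response(phonemes, blocks):
        if len(phonemes) != len(blocks):
            return False  # must show exactly one block per sound presented
        for i in range(len(phonemes)):
            for j in range(i + 1, len(phonemes)):
                # sameness of sounds must match sameness of colors, pairwise
                if (phonemes[i] == phonemes[j]) != (blocks[i] == blocks[j]):
                    return False
        return True

    # /p/ /b/ /t/: three different sounds call for three different colors
    print(valid_block_response(["p", "b", "t"], ["red", "blue", "green"]))  # True
    # /p/ /b/ /p/: the first and third sounds match, so their blocks must match
    print(valid_block_response(["p", "b", "p"], ["red", "blue", "green"]))  # False
    print(valid_block_response(["p", "b", "p"], ["red", "blue", "red"]))    # True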
Published reliability and validity research on the LAC test is significantly more scant than for the other assessment measures employed in this study. In one study of 660 students ranging from kindergarten through grade 12, correlations between students' LAC performance and their Woodcock Reading Mastery Test (WRMT) reading and spelling performance ranged from .66 to .81, with an average of .73 (Lindamood, 1972). In addition, test-retest reliability using alternate forms of the LAC administered at least four weeks apart, with a sample of 52 students in kindergarten through grade 12, was reported at .96 (Lindamood & Lindamood, 1971).

Analysis of Data

One goal of this study was to document and describe the LiPS program and how it was translated from a one-on-one to a large group instructional setting. The lesson plans and interviews were employed to create a description of how the two instructors adapted the LiPS program to a classroom setting. Using data collected from the Record of Program Delivery and the Classroom Observation Error Handling forms, a description was developed detailing how closely the instructors adhered to the LiPS program as described in the program manual and how rigorously the instructors incorporated key components of the program into their classroom instruction. In addition, decisions the instructors made during treatment were described (from observational and interview data), and variance between instructors/classes was detailed.

A second goal of this study was to determine whether the students made progress or demonstrated academic gains in a program that was adapted to a large group setting. Pre- and posttest data were expressed descriptively (e.g., means across classes for each measure) and analyzed statistically. Raw score differences from the pre- and posttest measures (WJ-III, CTOPP, DIBELS, and LAC) were analyzed using ANCOVA procedures to control for pretest scores and to examine posttest differences between instructors and between schools. Statistical analyses were conducted to determine whether the students of one instructor made significantly greater gains from pre- to posttest than those of the other instructor.
Additionally, a repeated measures ANOVA (2 x 4) was conducted using all four assessment measures to determine whether student academic gains differed on a measure more closely aligned with the LiPS program (i.e., the LAC) as compared to other standardized, norm-referenced measures. Lastly, benchmark data were analyzed for the measures best suited to monitoring student progress (i.e., DIBELS, LAC) to examine the percentages of students at each school meeting certain criteria at pretesting and posttesting.
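As a rough sketch of this analytic plan (not the study's actual code or data), the ANCOVA and benchmark steps might look as follows in Python with pandas and statsmodels. The file and column names (kindergarten_scores.csv, pre_lac, post_lac, instructor, school) are hypothetical placeholders for the dataset, which is not reproduced here.

    # Minimal sketch, assuming a hypothetical one-row-per-student dataset with
    # pretest/posttest raw scores plus instructor and school labels.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.read_csv("kindergarten_scores.csv")  # hypothetical file name

    # ANCOVA for one outcome: posttest score modeled with instructor as the
    # factor and the pretest score as the covariate. The same model would be
    # refit for each of the WJ-III, CTOPP, DIBELS, and LAC outcomes.
    model = ols("post_lac ~ pre_lac + C(instructor)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Benchmark analysis: percentage of students at each school meeting the
    # kindergarten LAC criterion of 31 at posttest.
    met_benchmark = df["post_lac"] >= 31
    print(df.assign(met=met_benchmark).groupby("school")["met"].mean() * 100)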
CHAPTER 3
RESULTS

The primary purpose of this study was to examine how the Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech (LiPS) was adapted and implemented with large groups of kindergarten students in a classroom setting. Over the course of a six-month period, two instructors at two different school sites offered supplemental reading instruction to two classrooms each (four classrooms total, 75 kindergarten students) using the LiPS program. Descriptive information, including classroom observations, instructor interviews, and lesson plans, was gathered to better understand what this program, initially designed for one-on-one clinical use, looked like when it was modified to meet the needs of a group of students in a classroom setting. Three specific questions guided this research to ascertain treatment integrity and the delivery of instruction:

1. When implementing LiPS in kindergarten classrooms with large groups of students, how closely do the instructors adhere to the program as described in the training manual?
2. What decisions do instructors make about the program sequence in relation to student needs?
3. How does program implementation vary across instructors when considering the training and experience of the two instructors?

To answer these questions, information was gathered to monitor how this program was implemented and the students' response to the intervention.
In other words, which program path or sequence did instructors choose, which program components were included or omitted, what decisions did instructors make in response to student progress, and how engaged were the students during this intervention time? Regarding treatment integrity, two global factors were considered during the data collection as LiPS was implemented with whole classrooms of kindergarten students: instructor decision-making during intervention delivery (i.e., adherence to program design) and student engagement, or responsiveness, during the LiPS instruction.

In addition, the results of observations conducted in a clinical setting where LiPS was employed during one-on-one instruction are presented for comparison. Issues of treatment integrity and adherence to the program manual can exist regardless of group size. However, this program was initially designed for use with individual students in a clinical setting, and it was important to consider whether program implementation varied in these two different environments. Specifically, data were gathered to ascertain whether differences existed in the inclusion of key program components between the one-on-one and classroom-based LiPS instruction.

Lastly, pretest and posttest data were collected on the students in the kindergarten classrooms to assess student outcomes. The pretest and posttest data provided a means to quantify student reading gains across the treatment period. Various reading assessment measures were used to evaluate skills such as phonemic awareness, decoding, and letter naming fluency in the kindergarten participants. One measure of particular interest was the Lindamood Auditory Conceptualization Test (LAC), a measure closely aligned with the LiPS program and designed to assess a student's ability to detect sameness and difference in sounds.
Descriptive Data

Whole Class Instruction

To ascertain how this program was implemented by the two instructors and the students' response to the LiPS instruction, observations were conducted in the four participating kindergarten classrooms. Over the six months of classroom intervention, weekly observations of the intervention implementation were performed, and the instructors' daily lesson plans were collected. In addition, interviews with the instructors were conducted at the beginning, middle, and end of treatment. Four specific forms were generated for this research to capture as much information as possible about treatment integrity, student response to the intervention, and the decisions instructors made throughout program implementation. The forms included the Record of Program Delivery, Error Handling, Student Opportunity to Respond, and Student Engagement/On-Task Behavior forms; refer to Appendices A through D for each of these four observation instruments.

The Record of Program Delivery form was completed during each classroom observation. Using this form, the presence or absence of specific program components was recorded. Included on this form were critical aspects of the program that should be present during each LiPS session. The Error Handling form was used to record the frequency with which the instructors employed questioning in their instruction, particularly when errors were made, to lead the student to the desired response. The Student Opportunity to Respond form was employed to track the number of times each student engaged in dialogue with the instructor during instruction. Lastly, the Student Engagement/On-Task Behavior form employed a time sampling method to record the number of students engaged in instruction at certain intervals.
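To make concrete how records from these forms could be reduced to the summary statistics reported below, a brief sketch follows. The component names and counts are hypothetical stand-ins for the paper forms in Appendices A through D, not the study's actual data.

    # Hypothetical stand-ins for completed Record of Program Delivery forms:
    # one dict per observation, with True/False for each observed component.
    delivery_records = [
        {"reviews_prior_material": True,  "mirrors_used": False, "all_engaged": False},
        {"reviews_prior_material": True,  "mirrors_used": False, "all_engaged": True},
        {"reviews_prior_material": False, "mirrors_used": False, "all_engaged": False},
    ]

    def component_percentages(records):
        """Percentage of observations in which each component was present,
        the statistic reported per instructor later in Table 2."""
        return {key: round(100 * sum(rec[key] for rec in records) / len(records))
                for key in records[0]}

    print(component_percentages(delivery_records))
    # -> {'reviews_prior_material': 67, 'mirrors_used': 0, 'all_engaged': 33}

    # Student Engagement/On-Task Behavior form (time sampling): the number of
    # students looking at the instructor at the end of each five-minute period.
    on_task_counts = [14, 17, 12]        # hypothetical counts, one observation
    class_size = 20
    print(sum(on_task_counts) / (len(on_task_counts) * class_size))  # ~0.72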
Table 1 displays the number of observations that were conducted, by school and classroom, over the intervention period. Each classroom was observed a minimum of seven times, with a range from seven to sixteen distinct observations per classroom. Some classes were observed more often due to the frequency with which the instruction was delivered (e.g., Instructor 1 was in her classrooms two to three times per week; Instructor 2 delivered new instruction in her classrooms one time per week).

Table 1. Number of Observations by Instructor for Whole Group Intervention

                                    Instructor 1                Instructor 2
Form                          Classroom 1  Classroom 2    Classroom 3  Classroom 4
Record of Program Delivery        11           15              8            8
Error Handling                    10           13             10            8
Opportunity to Respond            10           11              7            7
On Task                            8           16             10            9

In addition to the classroom observations, interviews were conducted with each instructor prior to and throughout the intervention period. The Initial Instructor Interview was conducted with both instructors prior to LiPS implementation to ascertain their respective levels of training and prior experience with the program. Additionally, the researcher met with each instructor near the beginning, midway through, and at the conclusion of the intervention period to discuss their thoughts regarding student progress and decisions about program implementation. Therefore, a total of four interviews were completed with each instructor at various points throughout the study.

The presentation of the descriptive data for whole class LiPS instruction will be organized under two concepts: treatment integrity and delivery of instruction. Treatment integrity consists of the inclusion of key program elements and the program paths that each instructor selected.
Delivery of instruction includes a discussion of the decisions the instructors made based on the needs of the classroom teachers, the students, and the instructors' respective levels of training and experience with the LiPS program. Subsequent to the presentation of the descriptive data for whole class instruction, data will be presented regarding the observations conducted in a one-on-one setting. Lastly, student outcomes data related to whole class LiPS instruction will be presented.

Treatment Integrity

The purpose of conducting classroom observations and collecting daily lesson plans was to assess the degree to which the instructors followed the program as it was designed to be implemented. In other words, data were collected to determine how closely the instructors adhered to the program scope and sequence as described in the LiPS Trainer's Manual (Lindamood & Lindamood, 1998). This was assessed by comparing the instructors' lesson plans with the sequence of skills to be introduced as delineated in the program manual, and by using the devised observation forms to assess the presence or absence of particular program components and delivery techniques.

Inclusion of key program elements

At the outset of the program, LiPS offers instruction in phonemic awareness at the oral level. Students hear, see, and feel the physical characteristics of sound units and work to compare and contrast them. The major premise of the LiPS program is that "the auditory element of speech sounds should not be separated from the more basic oral-motor activity that produces the sounds" (Lindamood & Lindamood, 1998, p. 7). A signature component of this program is the mouth pictures. As soon as the student is introduced to the first sounds, mouth pictures are paired with those sounds so as to offer a visual representation of what the mouth should look like when specific sounds are produced.
A focus on oral awareness and individual sound units was precisely what occurred in the classrooms of both instructors. Students were introduced to individual sounds, and mouth pictures were paired with each set of consonant sounds that was introduced. For example, the first consonant sounds introduced by both instructors were /p/ and /b/. These sounds are identified with the label "Lip Popper" because, in order to produce these two sounds, one's lips are pushed together and then pop open. A picture card, or mouth picture, was paired with the discussion of these sounds. The basic dialogue offered in the LiPS manual to introduce this consonant pair, and all others, was employed by both instructors.

Table 2. Record of Program Delivery: Percentages of Observations by Instructor Across the Intervention Period

GENERAL                                                      Instructor 1  Instructor 2
T. reviews previously introduced material at
  beginning of session                                            92%          100%
S. provided with/encouraged to use mirror when
  introduced to or practicing new sounds                           0%            0%
All Ss. observed to be actively engaged in
  learning process                                                 8%           13%

TRACKING, READING, SPELLING                                  Instructor 1  Instructor 2
S. instructed to follow 3 steps in Tracking (repeat
  words, touch & say, make change)                                 0%            0%
T. questions S. about label of sounds during Tracking            100%          100%
Real and nonsense words used in
  Tracking/Reading/Spelling                                       69%           50%
T. assesses S. mastery on T/R/S chains before new
  material introduced                                              0%            0%

ERROR HANDLING                                               Instructor 1  Instructor 2
T. incorporates responding-to-response (allows
  student to self-correct)                                       100%           94%
T. uses line of questioning to lead S. to desired
  response (Socratic)                                             96%           87%
T. avoids use of word "no" when student's answer
  is not expected one                                             85%           75%
T. questions S. even when correct response provided                8%           31%
T. avoids providing correct answer for S. having
  difficulty                                                      27%           88%
Differences were noted, however, in the inclusion of some key program elements, as measured by the Record of Program Delivery form. These data are displayed in Table 2; the numbers in the table indicate the percentage of observations in which each program component was present. One important element that can be employed to help students see what their mouths are doing during sound introduction was omitted by both instructors. Mirrors are suggested as a way to support the student "with more sensory input until the mouth action can be felt strongly" (Lindamood & Lindamood, 1998, p. 47). As new sounds were introduced, both instructors discussed with the students what their mouths looked like and paired the new sounds with mouth pictures to visually represent those sounds, but neither instructor included mirrors in her instruction (0% of observed sessions for both Instructor 1 and Instructor 2, as measured by the Record of Program Delivery form).

Student engagement, as measured by the Record of Program Delivery form, was another key program element calculated for each instructor. During each observation, it was recorded whether all students were observed to be actively engaged in the learning process; the presence of this component was recorded only if all students appeared to be engaged in the instruction. For Instructor 1, this occurred during 8% of the observations; for Instructor 2, during 13% of the observations. More detailed information regarding student engagement was collected using the Student Engagement/On-Task Behavior form, and those results will be presented later in the Delivery of Instruction section.
Instructor differences were present in the incorporation of another key program element: Tracking. Tracking refers to the process of sequencing mouth pictures or colored blocks to represent the number, order, and sameness of sounds heard auditorily. This task can be accomplished with isolated sounds, single-syllable words, and multisyllable words. Tracking "develops the student's ability to compare and contrast sequences of speech sounds and represent them visually" (Lindamood & Lindamood, 1998, p. 93). While Tracking in the LiPS program is traditionally completed with colored blocks and felts to represent individual sounds and syllables, it can be accomplished with the mouth pictures for younger or severely impaired students. For example, with mouth pictures, the examiner would say a series of sounds (e.g., /p/ /b/ /d/), and the student(s) would identify the mouth pictures that represented that sequence of sounds (i.e., lay out the mouth pictures to represent the lip popper, lip popper, tongue tapper).

It should be noted that Tracking with mouth pictures is essentially Spelling prior to the introduction of letter symbols. In other words, mouth pictures are used to represent the order and sameness of sounds before symbols are introduced. Once letter symbols are introduced, these two tasks become different. Tracking continues to hone phonemic awareness skills, focusing the student's attention on the sounds heard and on making changes in the mouth pictures (or colored blocks) only where changes are heard in the presented sequence (e.g., from /p/ /b/ /d/ to /p/ /p/ /d/, only the second sound changes). Once letter symbols are introduced, Spelling taps into a different skill set, assessing the student's ability to represent sounds with letters. For both instructors, Tracking was completed only with mouth pictures; colored blocks were never employed for Tracking by either instructor.
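The change-only-what-you-hear logic of a Tracking chain can be sketched briefly. This is an illustration, not LiPS material, and for simplicity it assumes chain steps of equal length.

    # Illustrative sketch: one step of a Tracking chain. The student should
    # alter the mouth-picture (or block) display only at the positions where
    # the sounds changed between the old and the new sequence.

    def positions_to_change(old_sounds, new_sounds):
        # Assumes equal-length sequences; chains that add or drop a sound
        # (e.g., /sik/ to /siks/) would need slightly more bookkeeping.
        return [i for i, (old, new) in enumerate(zip(old_sounds, new_sounds))
                if old != new]

    # From /p/ /b/ /d/ to /p/ /p/ /d/, only the second position changes, so
    # only the second picture in the display is swapped.
    print(positions_to_change(["p", "b", "d"], ["p", "p", "d"]))  # [1]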


Certain instructional elements of the Tracking process were specifically noted during observations using the Record of Program Delivery form. First, when appropriate, both instructors questioned the students about the labels for the mouth pictures. This occurred during 100% of the observations during Tracking. Another observation worth noting is the instructor variability in the selection of real and nonsense words employed for Tracking, Reading, and Spelling. The necessity of incorporating nonsense, or pseudo, words into LiPS instruction is explicitly stated in the program manual. Especially in the beginning stages of the program, the inclusion of nonsense words allows the student to experience more extensive Tracking, Reading, and Spelling practice with two-sound, consonant-vowel and vowel-consonant combinations than if the student were limited to real words alone (Lindamood & Lindamood, 1998). Instructor 1 incorporated both real and nonsense words into her instruction all of the time for one classroom and half of the time for the other classroom (for the other half of this instructional time, she sometimes used only real words and sometimes used only nonsense words). Instructor 2 used this second approach and presented both real and nonsense words to the students for Tracking, Reading, and Spelling half of the time in her two classrooms. Lastly, regarding Tracking, Reading, and Spelling, neither instructor employed any means of assessing student mastery prior to introducing new material.

Even the process of Tracking was observed to look different in the respective classrooms of the two instructors. The LiPS manual suggests that, while Tracking can be introduced to a whole class of students, intensive practice should be conducted in small groups or individually to assure attention to individual differences in perceptual difficulty (Lindamood & Lindamood, 1998, p. 94). The way this task was implemented differed by instructor. Instructor 1 employed two different means to accomplish Tracking during her lessons.


At times, she would complete Tracking classwide. Using mouth pictures printed on 8 by 11 sheets of paper, she would provide a series of sounds or a given word and have students stand in front of the larger group holding the mouth pictures that represented their sounds. For example, if the sounds were /p/ /f/ /k/, three students would stand holding their respective mouth pictures. If the sequence of sounds changed from /p/ /f/ /k/ to /t/ /f/ /k/, the first student would be seated and another student would take her place with the new mouth picture to represent the sound that changed. At other times, Instructor 1 would have the students complete Tracking with their own individual sets of mouth pictures. Instructor 2 only completed Tracking classwide. For Instructor 2, no student completed his or her own individual Tracking chains.

A section of the Record of Program Delivery form was devoted to monitoring Error Handling, or the ways in which the instructors offered verbal feedback to the students during instruction. An important instructional element included under Error Handling that was considered during observations of instruction was the use of "responding to the response" (Lindamood & Lindamood, 1998, p. 14). The overarching goal of instruction is to foster independence in reading and spelling, and this technique allows students to self-monitor and self-correct their work. "You cannot tell this information to your students; you need to ask them questions and ask them to do things so that they use their own sensory-cognitive systems to discover information and arrive at concepts" (Lindamood & Lindamood, 1998, p. 47). According to Lindamood and Lindamood (1998), this is the most critical element in the interactions between the instructor and the students at every level of the program. As reported in Table 2, both instructors were observed to consistently incorporate the responding-to-the-response technique in their instruction (Instructor 1 = 100% of observations, Instructor 2 = 94%).


For example, on one occasion, when a student responded incorrectly to a question, Instructor 2 redirected the student back to the mouth pictures hanging at the front of the room and used a series of questions to guide the student to the desired response.

Another program element related to Error Handling that was noted to occur infrequently during instruction involved the level of questioning instructors included when the students' responses were correct. It is deemed important to question students about their responses and decisions regardless of their accuracy in order to promote self-monitoring and self-correcting. As stated in the LiPS manual (Lindamood & Lindamood, 1998), "Questioning students only when they are wrong gives them a set toward self-doubt and impulsive changing of answers when questioned" (p. 419). As measured by the Record of Program Delivery form and noted in Table 2, both instructors included this component inconsistently (during 8% of the observations for Instructor 1 and 31% of the observations for Instructor 2).

Program paths

Outlined in the LiPS Trainer's Manual are two distinct program paths, Vertical and Horizontal, which can be followed in introducing new concepts to students as they move through the program (Lindamood & Lindamood, 1998, p. 16). The content of each path remains the same; variation occurs only in the sequence of concepts introduced. The Vertical Path allows for the presentation of three consonant pairs and three vowels, then moves the student quickly into Tracking, Reading, and Spelling with these sounds. The Horizontal Path presents all of the consonant sounds first, then the vowel sounds, and then introduces Tracking, Reading, and Spelling with all of the sounds. Figures 1 and 2 offer visual depictions of the first few program elements as they would be introduced for each path.


The manual suggests the Vertical Path for young students (Lindamood & Lindamood, 1998), such as those in this study.

Figure 1. Vertical Program Path (recommended). [Flowchart: Setting the Climate for Learning; Discover/Label 1st 3 Consonant Pairs; Track Isolated Consonants; Discover Vowel Circle/Label 3 Vowels; Track/Read/Spell Simple Syllables; Discover/Label Remaining Consonant Pairs]

Figure 2. Horizontal Program Path. [Flowchart: Setting the Climate for Learning; Discover/Label 1st 3 Consonant Pairs; Discover/Label Remaining Consonant Pairs; Discover/Label Other Consonant Groups; Discover Vowel Circle/Label 3 Vowels]

Instructor 1

Instructor 1 chose the Vertical Path to introduce new concepts to the kindergarten students. She moved through the Vertical Path, introducing new concepts in the following manner:


After setting the climate, this instructor introduced four consonant pairs (Lip Poppers, Tongue Tappers, Tongue Scrapers, Lip Coolers) followed by three vowel sounds (/ee/, /o/, /oe/), then proceeded to Tracking/Spelling and Reading with mouth pictures. Next, two new consonant pairs were introduced (Skinny Air, Fat Steady Air), followed by the vowel sounds /ae/ and /oe/. One consonant pair, Fat Pushed Air, was omitted altogether. Subsequent to Tracking, Reading, and Spelling with the above-mentioned sounds, the Tongue Cooler and Tongue Lifter were introduced last, and Tracking, Reading, and Spelling resumed with all of these sounds. Instructor 1 chose to introduce letter symbols to the students at Lesson 6. As mentioned previously, Tracking was only ever completed with the mouth pictures; blocks for Tracking were never introduced. However, Instructor 1 did mention in her final instructor interview that she would have incorporated blocks for Tracking had she been implementing this program one-on-one. She expressed that small manipulatives were difficult to manage with the larger group of students.

Instructor 2

Instructor 2 selected the Horizontal Path to introduce new concepts to the students. Instructor 2 moved through the Horizontal Path, introducing new concepts in the following manner: First, all of the consonant pairs, or "brothers," were presented (i.e., Lip Poppers, Tongue Tappers, Tongue Scrapers, Lip and Tongue Coolers, Skinny Air, Fat Steady and Pushed Air sounds). Next, the three "cousins" were presented (i.e., Windy, Nose, Tongue Lifters). Then, the vowels /ee/, /o/, and /oo/ were introduced. Lastly, students completed Tracking (mouth pictures only), Reading, and Spelling with these sounds. Letter symbols were introduced to the students at Lesson 12.


Similar to Instructor 1, blocks for Tracking were never introduced during the course of program implementation, and, during the instructor interviews, she made no mention of a desire to include this component in her instruction.

Delivery of Instruction

Decisions based on needs of classroom teacher and school

The instructors negotiated with each of the four classroom teachers regarding how the LiPS intervention would be delivered to the students. Therefore, the days and times of instruction varied by school, as did the total instructional time across the intervention period. Table 3 presents how the LiPS intervention was delivered across instructors and classrooms. While both instructors spent similar amounts of time in the classrooms, with Instructor 1 averaging 15 total hours and Instructor 2 averaging 14 total hours per classroom across the intervention period, the way the instruction was delivered varied by school site. For example, Instructor 1 delivered the LiPS intervention in her respective classrooms three to four times per week in twenty-minute sessions. Instructor 2 spent forty-five minutes in each classroom one day per week. Furthermore, as reflected in Table 3, the classroom teachers at Instructor 2's school site reviewed recently introduced content with the students on days that Instructor 2 was not present. According to the LiPS Trainer's Manual, in a classroom situation, "a formal work period and follow-up reinforcement should be provided daily for a minimum of 40 to 50 minutes if competency in Tracking, Spelling, and Reading is desired into the complex syllable level within 2 to 3 months" (Lindamood & Lindamood, 1998, p. 18). Due to the grade level of the intervention students (i.e., kindergarten), the children were not expected to reach the complex syllable level. Regardless, the intervention at neither school site was intensive enough to meet the criterion of 40 to 50 minutes daily.


Table 3. Description of Instruction: Sessions, Time, and Delivery

Instructor 1, Class 1
- Whole class sessions: 20; concurrent small group sessions: 19
- Session length: 20 minutes
- Total instructional time: 6 hours 40 minutes whole class; 6 hours 20 minutes small group
- Delivery: 3 times per week; 2 days whole class and 1 day with a small group of 5

Instructor 1, Class 2
- Whole class sessions: 9; sessions after switch to small group: 42
- Session length: 20 minutes
- Total instructional time: 3 hours whole class; 14 hours small group
- Delivery: 4 times per week; whole class taught as 2 groups of 10, then a switch to a small group of 7

Instructor 2, Class 3
- Whole class sessions: 14; concurrent small group sessions: 28
- Session length: 30-minute sessions (15 minutes whole class, then 15 minutes of small group table activities); 15 minutes for small group
- Total instructional time: 7 hours whole class; 7 hours small group
- Delivery: 1 day per week, with the classroom teacher reviewing content on other days (~5 hours); on some occasions during observations, the instructor divided the class in half for whole class instruction

Instructor 2, Class 4
- Whole class sessions: 14; concurrent small group sessions: 28
- Session length: 30-minute sessions (15 minutes whole class, then 15 minutes of small group table activities); 15 minutes for small group
- Total instructional time: 7 hours whole class; 7 hours small group
- Delivery: 1 day per week, with the classroom teacher reviewing content on other days; on some occasions during observations, the instructor divided the class in half for whole class instruction

As reflected in Table 3, the two instructors also varied in the delivery of the intervention relative to group size (i.e., whole classroom versus small group). In fact, Instructor 1 delivered the LiPS intervention to her two respective classrooms differently based on the previously established curricular organizations of the classroom teachers.


For Instructor 1, in Classroom 1, she spent two of her days each week engaged in whole class instruction and one day per week with a small group of five students whom both she and the classroom teacher deemed most in need of additional instruction. In Classroom 2, Instructor 1 introduced the LiPS program to the whole class. Then, after nine sessions, she switched to small group instruction and continued to work only with the seven students deemed most in need of the intervention by the classroom teacher. These seven students continued with the intervention during their center time while the remainder of the students in the class attended other centers. For Instructor 2, the LiPS intervention was delivered similarly across her two respective classrooms. She divided each class in half. Then, half the students worked with Instructor 2 to learn new content while the remainder of the class completed table activities to review previously learned material. After 15 minutes, the groups switched (i.e., 15 minutes of instruction for those previously working on small group table activities, and 15 minutes of small group table activities for those previously engaged in instruction with Instructor 2).

The school sites also varied in their plans for the duration of the LiPS intervention, and this affected the decisions the instructors made regarding the delivery of instruction. At the school site of Instructor 1, the classroom teachers had no specific time frame for program implementation or duration of instruction. Instructor 1 had discretion to continue the intervention as long as she deemed necessary and appropriate. At the second school site, the teachers desired to complete the LiPS program by the winter of the school year (i.e., February) and introduce a different intervention program to the students at that time. Therefore, Instructor 2 knew at the outset that she would have a specific number of weeks in which to work with the students.


While both school sites agreed to have the instructors come into the classrooms to work with the kindergarten students, the school in which Instructor 2 was working was more enthusiastic about the process. Furthermore, the level of classroom teacher involvement varied by school, and by classroom to some extent. At Instructor 2's school site, both teachers desired to learn the program themselves as their students were introduced to it. These two teachers reviewed the LiPS program manual and closely followed the students' instruction. Additionally, both teachers at this site prominently displayed large mouth pictures in their classrooms and independently reviewed previously introduced material with the students on the days that Instructor 2 was not teaching. In contrast, the classroom teachers at the school site of Instructor 1 demonstrated less interest in learning the program themselves and were available during LiPS instruction primarily for classroom monitoring and management of student behavior. During the instructor interview conducted only a few days into the intervention period, Instructor 1 expressed some frustration with the limited amount of classroom teacher involvement and support in the process. Specifically, she noted that the teachers did not display the mouth pictures in the classroom or reinforce the LiPS content with students at times when Instructor 1 was not in the classroom. Additionally, Instructor 1 stated that it was difficult for her to bring her materials to the different classrooms each day and negotiate space in the rooms. For example, she noted that even finding markers and space on the board to write was difficult on some days. During the instructor interviews conducted throughout the intervention process, Instructor 2 did not mention any classroom or teacher factors that affected her choices in the delivery of the LiPS intervention.


Decisions based on needs of students

During the course of the intervention period, both instructors made decisions regarding the delivery of instruction based on student needs. First, decisions about the sizes of the groups receiving instruction in the various classrooms changed during the intervention period. For example, in Classroom 1, Instructor 1 previewed new material with a small group of five students (deemed by herself and the classroom teacher as most at risk or in need of additional instruction) the day before the content was introduced to the whole class. In another instance, Instructor 2 modified her LiPS instruction to incorporate small group table activities to reinforce newly introduced material. The decision of Instructor 2 to divide each classroom of students into two groups was also made after she had initially introduced new content to the entire classes; she expressed that behavior management issues with whole classrooms of students made it difficult to introduce new material effectively. Therefore, she modified the instructional arrangements for her two classrooms, and how she delivered LiPS instruction, based on student needs.

Second, regarding the pace of instruction, Instructor 1 had more discretion to introduce material slowly and based on her perceptions of student mastery. Instructor 2, however, was not able to consider student needs as much in her decisions regarding when to introduce new content. From the outset, and as mentioned previously, Instructor 2 was aware that she had a specific time period in which to deliver the LiPS intervention to the students at her school site. Therefore, Instructor 2 chose to introduce a new concept to her students at each session and based this decision more on the needs of the teachers. She did express, however, in the instructor interview at the outset of the intervention, that she desired a slower pace and recognized that it was not feasible in the classroom setting


in the time frame that was allotted for this intervention. Additionally, Instructor 2 mentioned that, if she were implementing this program one-on-one, she would have followed the pace of the students more closely in introducing new material. With the larger groups of students in her classrooms, Instructor 2 expressed that she attempted to aim the pace of her instruction at the middle students, while at the same time reviewing previously introduced material and introducing something new each session. Regarding the pace of instruction, Instructor 1 voiced similar comments during the interviews. Even from the outset of the intervention, Instructor 1 felt that she would have been further along in the program had she been working with a student one-on-one. Regardless of the pace of the students in each classroom, Instructor 1 stated that she attempted to keep both classrooms at the same instructional pace.

While student engagement data were collected throughout the intervention period for research purposes, neither instructor employed any specific behavior management system or written records of student progress in the intervention. However, during interviews, both instructors recognized from the outset that management of student behavior was one of the most difficult aspects of implementing the LiPS intervention with whole classes of students. In fact, as mentioned previously, this was one reason Instructor 2 modified her instructional arrangements only a few sessions into the intervention. Additionally, both instructors were able to elicit assistance from the classroom teachers to manage student behavior, at least to some extent or on some occasions.

While neither instructor collected specific data on student engagement, these data were collected throughout the duration of the intervention period by the primary investigator using the Record of Program Delivery and the Student Engagement/On-Task Behavior forms.


The Record of Program Delivery form offered information regarding whether all students were engaged in the LiPS instruction for a given observation period. As mentioned previously, for Instructor 1, this occurred during 8% of the observations; for Instructor 2, this occurred during 13% of the observations.

However, more detailed information was also collected using the Student Engagement/On-Task Behavior form. With this form, a time-sampling method was used to record the number of students engaged in instruction at designated time intervals. During each observation, the number of students looking at the instructor at the end of each five-minute time period was recorded. The number of students looking at the instructor was considered the best means of quantifying and recording student engagement in a concrete, observable way. From this information, percentages of engaged students were calculated based on the number of students in attendance during each observational period, and an average was calculated across classrooms at each school site. Similarities were noted across school sites. Table 4 displays the average percentages of students engaged in the instruction in each of the four classrooms during the intervention period.

Table 4. Percentage of Student Engagement by Instructor

               Classroom 1   Classroom 2
Instructor 1   77%           73%
Instructor 2   72%           83%
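To make the time-sampling computation concrete, the sketch below shows one way the per-observation engagement percentages and classroom averages described above could be derived. It is an illustration only: the function names and the sample interval counts are hypothetical, not data from the study.

    # Illustrative sketch: engagement percentages from five-minute time samples.
    # Each observation is a list of (students_looking_at_instructor, students_present)
    # pairs, one pair per five-minute interval.
    def observation_engagement(intervals):
        """Percent engaged for one observation, pooled across its intervals."""
        engaged = sum(looking for looking, _ in intervals)
        present = sum(present for _, present in intervals)
        return 100.0 * engaged / present

    def classroom_average(observations):
        """Average engagement percentage across a classroom's observations."""
        pcts = [observation_engagement(obs) for obs in observations]
        return sum(pcts) / len(pcts)

    # Hypothetical example: two observations of a classroom of 17-18 students.
    obs_a = [(14, 18), (12, 18), (15, 18)]
    obs_b = [(10, 17), (13, 17), (14, 17)]
    print(round(classroom_average([obs_a, obs_b]), 1))  # -> 74.2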


The percentages across instructors regarding student engagement were similar and generally consistent. For Instructor 1, percentages based on the Student Engagement/On-Task Behavior form ranged from 59 to 85 percent for Classroom 1 and from 53 to 87 percent for Classroom 2. For Instructor 2, percentages ranged from 50 to 87.5 percent for Classroom 1 and from 72 to 95 percent for Classroom 2.

Lastly, through the formal instructor interviews conducted throughout the intervention period, more information was gleaned regarding the instructors' views on how they were altering or tailoring their instruction to meet the needs of the students. Both instructors mentioned incorporating activities involving movement in order to involve more participants and maintain attention to the tasks. The instructors performed such activities as Spelling with large mouth pictures and Reading and Spelling on a large dry erase board. Additionally, both instructors discussed the usefulness of maintaining close proximity to struggling students during instruction. Regarding treatment integrity, it should be noted that Instructor 2 specifically stated, during the interview conducted midway through the intervention period, that she would have adhered more closely to the LiPS protocol, or manual, had she been implementing this program one-on-one.

Decisions based on training and experience of instructors

Because most of the instructors' previous experiences with the LiPS program were in a one-on-one setting, the instructor interviews conducted at the beginning, middle, and end of the intervention period allowed them to reflect on how their instruction in this classroom setting might differ from what it would be if they were working one-on-one with students. As gleaned from the Initial Instructor Interviews conducted prior to program implementation, the instructors had differing training and experiences with the LiPS program, although the amount of experience each had with the program was similar. Training for Instructor 1 in the LiPS program was included in her graduate coursework and involved a combination of live and videotaped instruction followed by


clinical work that was supervised by a professional trained in the program. Instructor 2 had no formal training in the LiPS program. She had purchased the program kit, read the manual, and reportedly taught herself the program. Subsequently, she attended trainings in other Lindamood-Bell programs, and those trainings involved a discussion of the LiPS program. Regarding their experiences with the program, both instructors were speech-language pathologists and had previous experience implementing the program one-on-one in both clinical and school settings. Clinically, Instructor 1 had worked with several clients, including children and adults, whom she had taken through the program. In the year prior to this study, Instructor 1 spent one semester teaching the LiPS program to small groups of third through fifth graders. In the Initial Instructor Interview, Instructor 2 reported that she had approximately twelve years of experience using elements of the program in schools with individual children ranging in age from five to twelve. Additionally, in the past, Instructor 2 had worked with small groups of kindergarten students, teaching components of the program. Instructor 2 stated that she had never completed the LiPS program from start to finish with a student. Both instructors reported limited experience teaching larger groups or whole classrooms of students in other reading and writing curricula prior to this study. Overall, Instructor 1 had more rigorous training and supervision in teaching the LiPS program, while Instructor 2 had more experience teaching the program to students in the schools.

As can be seen in Table 2, some differences were noted between the two instructors in the frequency of occurrence of some critical program elements. For example, the frequency of questioning the students' responses even when correct responses were


provided varied by instructor (Instructor 1 = 8%, Instructor 2 = 31%). While both instructors employed this teaching technique infrequently during instruction, Instructor 2 used this strategy to encourage student self-checking on occasion. For example, Instructor 2 had the students cover their ears to confirm whether a sound was quiet or noisy. In addition, the frequency with which the instructors avoided providing correct answers for students having difficulty varied (Instructor 1 = 27%, Instructor 2 = 88%). Often, Instructor 1 would state the correct answer if a particular student was having difficulty, or she would elicit the answer from another student. Instructor 2 tended to remain with the student having difficulty, leading him or her to the desired response, which is recommended in the program manual (Lindamood & Lindamood, 1998, p. 419).

One-On-One Implementation

As mentioned previously, observations were conducted in a clinical setting where clinicians worked one-on-one with individual students. The purpose of this activity was to offer a comparison of what LiPS should look like in a clinical setting, where the program does not need to be modified or adapted to meet the needs of a larger group of students and can be implemented as it was intended or designed based on the program manual.

The particular setting where the one-on-one observations were conducted was a private center offering remedial services to children and adults with learning difficulties. Individuals seeking assistance undergo a comprehensive assessment, and interventions are designed based on the particular needs of each person. The LiPS program is one of a number of academic interventions or programs offered at this private center. The two instructors observed at this facility underwent extensive training and supervision in


the LiPS program and had a combined total of approximately ten years of experience working with students using the Lindamood programs.

Table 5 displays the number of observations that were conducted in the one-on-one setting. Similar to the larger group observations, observations were conducted with two instructors in the clinical setting. Moreover, the goal was to observe the program presentation in a one-on-one instructional setting until stability across observations was achieved. In other words, it was important that the observational data accurately reflect typical behaviors or responses in this setting. Therefore, a total of twelve observations were conducted across instructors in this setting.

Table 5. Number of Observations

                             Instructor 1   Instructor 2
Record of Program Delivery   6              6
Error Handling               6              6

The intent of conducting the observations in a one-on-one setting was to compare the level of inclusion of key instructional components with the program as it was designed (i.e., based on the program manual). While this program was originally designed for clinical use with individual students, and it was expected that treatment integrity would be high in this setting, some variations or deviations from the LiPS manual were anticipated during one-on-one observations, as student differences exist and instruction has to be modified. In other words, as the instructors worked to adapt the instruction to their individual students, it was expected that the instructors would vary somewhat in their delivery of the LiPS program.

Table 6 displays the percentages by instructor for one-on-one instruction of the inclusion of key program components as measured by the Record of Program Delivery form. Overall, certain program elements were consistently included in the sessions of both


instructors in the clinical setting. Both instructors in the clinical setting offered high levels of the following key program components in their LiPS instruction: reviewing previously introduced material, use of mirrors, following three specific steps in Tracking, assessing student mastery, and incorporating various error handling techniques such as responding-to-the-response and Socratic questioning.

There were, however, areas where the instructors differed from the LiPS manual, or from each other, in their instruction as measured by the Record of Program Delivery form. First, there were two program components where the instructors significantly differed from the LiPS manual. The first is related to questioning the student about the label of the sounds during Tracking (e.g., "I took out a Lip Popper and replaced it with a Lip Cooler." or "The new sound is a Lip Cooler."). Neither instructor working one-on-one with students included this component with great frequency (Instructor 1 = 33%, Instructor 2 = 25%). While this program element may not be as critical as others, based on the specific needs of the students, it is nevertheless a component that is explicitly discussed in the LiPS manual (Lindamood & Lindamood, 1998, p. 34) and was employed infrequently in the clinical setting by both instructors.

A second program component that differed from the LiPS manual and was not included to a high degree involved questioning students even when their responses were correct. As mentioned previously, in order to promote self-monitoring and self-correcting behaviors, the LiPS manual emphasizes questioning students regardless of the accuracy of their responses (Lindamood & Lindamood, 1998). In this way, the students tend to become less dependent on the instructor and more reliant on their own skills and decision-making abilities. As measured by the Record of Program Delivery form and


noted in Table 6, both instructors included this component inconsistently (in 50% of the observations for Instructor 1 and 67% of the observations for Instructor 2).

Furthermore, while both instructors in the one-on-one setting tended to include most of the measured elements of the LiPS program with similar frequency, they differed from each other on two components. First, the instructors differed significantly in their inclusion of real and nonsense words for Tracking, Reading, and Spelling (Instructor 1 = 33%; Instructor 2 = 100%). However, the lower percentage for Instructor 1 can be attributed to her response to one of the students and her modification of the curriculum to meet his needs. This particular student was an older student who had developed a great deal of sight word knowledge (i.e., had memorized a great many real words). Therefore, Instructor 1 included only nonsense words at the beginning of his LiPS instruction to ensure that he had the opportunity to use the skills he was learning to sound out new or unfamiliar words. Otherwise, the percentages gathered on the Record of Program Delivery forms were similar across Instructors 1 and 2.

A second program element where the two instructors differed from each other in their delivery of instruction was related to the use of the word "no." Instructor 1 avoided the use of the word "no" during 100% of the observations, while Instructor 2 avoided this word during only 33% of the observations. This difference may be attributed to individual differences in the teaching styles of the two instructors. For example, in avoiding the use of the word "no" during instruction, Instructor 1 was noted to use such statements as, "Use your mirror. Do those sounds look the same?" and "That's not a bad guess."


Table 6. Record of Program Delivery, Percentages by Instructors for One-on-One Treatment

GENERAL                                                               Instructor 1   Instructor 2
T. reviews previously introduced material at beginning of session     100%           100%
S. provided with/encouraged to use mirror when introduced to
  or practicing new sounds                                            100%           100%
All Ss. observed to be actively engaged in learning process           67%            100%

TRACKING, READING, SPELLING                                           Instructor 1   Instructor 2
S. instructed to follow 3 steps in Tracking (repeat words,
  touch & say, make change)                                           100%           75%
T. questions S. about label of sounds during Tracking                 33%            25%
Real and nonsense words used in Tracking/Reading/Spelling             33%            100%
T. assesses S. mastery on T/R/S chains before new material
  introduced                                                          100%           100%

ERROR HANDLING                                                        Instructor 1   Instructor 2
T. incorporates responding-to-response (allows student to
  self-correct)                                                       100%           100%
T. uses line of questioning to lead S. to desired response
  (Socratic)                                                          100%           100%
T. avoids use of word "no" when student's answer is not the
  expected one                                                        100%           33%
T. questions S. even when correct response provided                   50%            67%
T. avoids providing correct answer for S. having difficulty           100%           67%

Summary of Descriptive Results

The purpose of this section was to describe how the LiPS program was delivered to larger groups of students in kindergarten classrooms. Additionally, for comparative purposes, data were presented regarding what the program looked like in a clinical setting where clinicians worked with students one-on-one. While some LiPS program elements were present across both settings, a number of differences existed in how this program was implemented in the school versus the clinical setting.

Certain program elements, as measured by the Record of Program Delivery form, were present significantly more often in the one-on-one setting than in the classroom


setting. Table 7 displays the percentages for whole group versus one-on-one instruction as measured by the Record of Program Delivery form. First, while mirrors were not employed by either instructor in the classroom setting, both instructors in the one-on-one setting consistently encouraged the use of mirrors for their students when introducing or practicing new sounds. Second, during a majority of the instructional time in the clinical setting, the students were instructed to use a specific process during Tracking (i.e., repeat the old and new word, touch the blocks while stating the individual sounds, and make the change with the blocks). In the clinical setting, Instructor 2 did not consistently have the student touch and say the individual sounds with each new word, but she did have the student state the change each time. In the classroom setting, the students were only encouraged to make the changes that they heard. They were not encouraged or required to complete the first two steps in the Tracking process.

Another important difference between the classroom and clinical settings was in the monitoring of student progress. Clinicians in the one-on-one setting recorded individual student performance on each task completed during each session. Students had to demonstrate 80% or higher mastery of the material in order to move on to the next level or receive new material. No specific records were kept regarding student progress or mastery of the curriculum content in the classroom setting.

Lastly, differences were noted between the clinical and classroom settings in the amount and type of questioning that was present. In the clinical setting, students were questioned more frequently by the instructors, even when their responses were accurate. Furthermore, instructors in the clinical setting were more inclined to avoid providing the


correct answers for the students and allowed the students to work toward the correct answers via instructor questioning.

Table 7. Record of Program Delivery, Percentages for Whole Group versus One-on-One

GENERAL                                                               Whole Group   One-on-One
T. reviews previously introduced material at beginning of session     95%           100%
S. provided with/encouraged to use mirror when introduced to
  or practicing new sounds                                            0%            100%
All Ss. observed to be actively engaged in learning process           10%           83%

TRACKING, READING, SPELLING                                           Whole Group   One-on-One
S. instructed to follow 3 steps in Tracking (repeat words,
  touch & say, make change)                                           0%            86%
T. questions S. about label of sounds during Tracking                 100%          29%
Real and nonsense words used in Tracking/Reading/Spelling             65%           71%
T. assesses S. mastery on T/R/S chains before new material
  introduced                                                          0%            100%

ERROR HANDLING                                                        Whole Group   One-on-One
T. incorporates responding-to-response (allows student to
  self-correct)                                                       98%           100%
T. uses line of questioning to lead S. to desired response
  (Socratic)                                                          93%           100%
T. avoids use of word "no" when student's answer is not the
  expected one                                                        81%           67%
T. questions S. even when correct response provided                   17%           58%
T. avoids providing correct answer for S. having difficulty           50%           83%

It should be noted that one specific difference existed between the classroom and clinical settings that was not reflected in the data collection forms but was noted by the primary investigator during observations in both settings. This difference was in the amount of work completed during each session. The amount of time devoted to LiPS instruction in each session was similar across the classroom and clinical settings (i.e., approximately 30 minutes per session). However, in the clinical setting, the instructors managed to complete Tracking, Reading, and Spelling (typically ten words


each) during each session. In the classroom setting, often only one of these tasks was completed. Therefore, students in the one-on-one setting had more practice with the tasks than the students in the classroom setting.

In an effort to summarize the data, the researcher looked across all data sources, including the four observation instruments (Record of Program Delivery, Error Handling, Opportunity to Respond, and Student Engagement/On-Task Behavior), instructor interviews, and anecdotal observational notes. Table 8 reflects conclusions made by the primary investigator regarding the treatment integrity maintained by the instructors across the classroom and clinical settings where LiPS was employed, as compared to the program as it was designed. An instructional element rated as low for treatment integrity for either the classroom or clinical setting indicates that the component was demonstrated infrequently or not at all; in other words, the instructional element appeared significantly different from how the program was designed. Instructional elements noted as high in treatment integrity were present on most or all occasions.

Table 8. Summary of Level of Treatment Integrity for Key Program Components Across Settings

Instructional Element                                                  Classroom: Instr. 1   Classroom: Instr. 2   Clinical: One-on-One
Presence of key instructional materials (use of mirrors,
  incorporation of small mouth pictures, use of colored blocks)        Lo                    Lo                    Hi
Student engagement in learning process                                 Lo                    Lo                    Hi
Choice of program path (Vertical Path is recommended for
  young students)                                                      Hi                    Lo                    Hi
Tracking following a prescribed process                                Lo                    Lo                    Hi
Formal assessment of student progress/mastery of concepts              Lo                    Lo                    Hi
Error handling techniques (e.g., incorporation of
  responding-to-the-response and Socratic questioning)                 Hi                    Hi                    Hi

Note: Hi = high treatment integrity; Lo = low treatment integrity.


On key instructional elements, or program components, a high degree of treatment integrity was consistently maintained by the instructors in the clinical setting. In the classroom setting, both instructors had similar amounts of previous experience with the LiPS program and implemented the program as it was designed to similar degrees. Both Instructor 1 and Instructor 2 in the classroom setting demonstrated low levels of treatment integrity when adapting this program to teach larger groups of students. However, Instructor 1 in the classroom setting did demonstrate higher levels of treatment integrity on certain key components, as can be seen in Table 8. For example, Instructor 1 received a high treatment integrity rating based on her selection of the vertical program path for LiPS implementation, as it was recommended in the program manual for younger students. Finally, it should also be noted that Instructor 1 attempted the use of small mouth pictures during instruction on one occasion. However, she expressed that managing student behavior during this activity was very difficult, as each student was engaged in the task independently. Therefore, after this one attempt, Instructor 1 discontinued the use of small mouth pictures and used larger mouth pictures with the whole group working together to complete the task. This was just one of a number of program elements that were modified or excluded as this program, which was originally created for one-on-one use, was adapted to a classroom setting.

Student Outcome Data

The purpose of this research was to document and evaluate how the Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech (LiPS) was implemented in the schools with kindergarten students. This study included pretesting, six months of intervention, and posttesting. The LiPS program was conducted in four classrooms of kindergarten students by two different instructors at two school sites (i.e.,


one instructor at each school site working with two classrooms of students; 35 students at School 1, 40 students at School 2). Pretest and posttest measures included the Lindamood Auditory Conceptualization (LAC) test, the Phonological Awareness Composite of the Comprehensive Test of Phonological Processing (CTOPP), the Word Identification and Word Attack tasks from the Woodcock-Johnson Tests of Achievement (WJ-III), and the Letter Naming and Phoneme Segmentation tasks from the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). The LiPS program served as a supplemental reading intervention in these four kindergarten classrooms, offered in addition to the traditional reading curriculum at each school.

Below are the results of student outcomes after six months of classroom intervention using the LiPS program. The answers to three distinct research questions were sought to determine the academic gains students made after exposure to this curriculum. These questions included:

1. What gains do students demonstrate in reading after receiving instruction in the LiPS program?

2. Do student academic gains differ on a measure more closely aligned with the LiPS program (i.e., the LAC) as compared to other standardized, norm-referenced measures?

3. Does student reading achievement differ significantly from instructor to instructor?

First, to analyze the results related to student outcomes, the academic gains of all student participants collectively are examined. Then, student outcomes by school site, and therefore by instructor, are considered. Lastly, an examination of student benchmarks for certain measures offers a closer look at the data from some of the assessment instruments that are more sensitive to small changes in student achievement over the intervention period.


Gains Demonstrated After LiPS Intervention for All Students

The raw score means and standard deviations at pretest and posttest for all students across each measure are presented in Table 9. At pretest, the means ranged from 3.44 (SD = 3.09) for the raw score of the Word Attack subtest of the WJ-III to 28.99 (SD = 17.13) for the raw score of the Letter Naming task on the DIBELS. At posttest, the means ranged from 5.56 (SD = 2.91) for the same Word Attack task to 47.47 (SD = 19.08) on the LAC. When considering all participants collectively, positive gains from pretest to posttest were achieved on all measures.

Table 9. Raw Score Means and Standard Deviations for Pretests/Posttests Across All Participants

Measure                      Pretest (n=75)   Posttest (n=72)
CTOPP Elision                4.05 (2.97)      6.72 (3.48)
CTOPP Blending Words         4.71 (2.89)      9.68 (3.19)
CTOPP Sound Matching         6.79 (5.08)      12.63 (5.52)
LAC                          23.97 (17.36)    47.47 (19.08)
DIBELS Letter Naming         28.99 (17.13)    45.64 (15.89)
DIBELS Phoneme Segmentation  15.73 (13.17)    38.39 (14.26)
WJ-III Word Identification   15.56 (7.17)     22.47 (6.95)
WJ-III Word Attack           3.44 (3.09)      5.56 (2.91)

A two-way within-subjects analysis of variance (ANOVA) was conducted to assess student gains across measures and over time (i.e., from pretest to posttest). The dependent variable was the mean number of items correct on each measure across all participants. The within-subjects factors were test, with eight levels (Elision, Blending Words, and Sound Matching from the CTOPP; the LAC; Letter Naming and Phoneme Segmentation from the DIBELS; and Word Identification and Word Attack from the WJ-III), and time, with two levels (pretest and posttest).
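For readers who want to reproduce this kind of analysis, the sketch below shows how a two-way within-subjects ANOVA of this design might be set up. It is not the study's actual analysis code, and the DataFrame layout and variable names are assumptions.

    # Illustrative sketch: two-way within-subjects (repeated measures) ANOVA
    # with factors test (8 levels) and time (2 levels), assuming a long-format
    # DataFrame `scores` with columns: student, test, time, score.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    def within_subjects_anova(scores: pd.DataFrame):
        # AnovaRM requires a balanced design, so keep only students with all
        # 8 x 2 = 16 observations (listwise deletion of posttest attrition).
        complete = scores.groupby("student").filter(lambda g: len(g) == 16)
        model = AnovaRM(complete, depvar="score", subject="student",
                        within=["test", "time"])
        return model.fit()  # F and p values for test, time, and test x time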


The main effect for time was statistically significant, F(1, 71) = 480.93, p < .01. The main effect for test was also statistically significant, F(3, 203) = 241.85, p < .01. Additionally, the interaction between time and test was statistically significant, F(3, 243) = 78.35, p < .01. Therefore, when looking at all students collectively, there was a statistically significant difference between mean test scores from pretest to posttest (i.e., time) and between the means of at least one pair of measures (i.e., test). Furthermore, the statistically significant interaction indicates a statistically significant difference between the mean pretest and posttest scores of at least one measure.

Student Outcomes: A Comparison of Measures

To determine which measure or measures yielded the greatest academic gains from pretest to posttest, follow-up procedures were conducted on the results of the ANOVA discussed previously. Of most interest were the gains achieved on the LAC, a measure closely aligned with the LiPS reading intervention, compared to the gains achieved on the other assessment measures. Table 10 presents the mean test differences from pretest to posttest for all students on each measure. Mean gains from pretest to posttest ranged from 2.44 for the Word Attack task of the WJ-III to 24.24 on the LAC. In other words, the mean increase from pretest to posttest for the WJ-III Word Attack task was 2.44 points, and the mean increase from pretest to posttest on the LAC was 24.24 points. Mean gains from pretest to posttest on all measures were positive.

Table 10. Estimated Marginal Means

Measure                      Mean Test Difference (Posttest - Pretest)   Standard Error
CTOPP Elision                2.85                                        .27
CTOPP Blending Words         5.04                                        .34
CTOPP Sound Matching         6.07                                        .62
LAC                          24.24                                       1.93
DIBELS Letter Naming         17.54                                       1.51
DIBELS Phoneme Segmentation  22.92                                       1.61
WJ-III Word Identification   7.50                                        .49
WJ-III Word Attack           2.44                                        .24


In order to adjust for multiple comparisons and control the familywise error rate, Bonferroni pairwise comparisons were calculated. Results of these comparisons are presented in Table 11. Mean test differences were employed to determine whether the gains from pretest to posttest on the LAC were statistically different from, and greater than, the mean gains achieved on the other measures administered to participants. For all participants, gains from pretest to posttest were statistically significantly greater on the LAC than on the subtests of the CTOPP (Elision, Blending Words, and Sound Matching) and the tasks from the WJ-III (Word Identification and Word Attack). The test differences, or gains from pretest to posttest, on the LAC were not statistically significantly different from those on the DIBELS Letter Naming and Phoneme Segmentation tasks. Student outcomes relative to the LAC and DIBELS tasks will be discussed in greater detail later in this chapter.

Table 11. Bonferroni Pairwise Comparisons

Measures                              Mean Difference   Standard Error
LAC vs. CTOPP Elision*                21.39             1.99
LAC vs. CTOPP Blending Words*         19.19             1.90
LAC vs. CTOPP Sound Matching*         18.17             1.99
LAC vs. DIBELS Letter Naming          6.69              2.20
LAC vs. DIBELS Phoneme Segmentation   1.32              2.36
LAC vs. WJ-III Word Identification*   16.74             1.87
LAC vs. WJ-III Word Attack*           21.79             1.93

*p < .01
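The study does not reproduce its comparison code, but one common way to realize Bonferroni-adjusted pairwise contrasts of this kind is sketched below, assuming per-student gain scores (posttest minus pretest) are available for each measure. All variable and function names are hypothetical, and the paired t-test on gain scores is one standard realization of such contrasts, not necessarily the exact procedure used here.

    # Illustrative sketch: compare LAC gains against each other measure's gains,
    # applying a Bonferroni-adjusted alpha across the seven comparisons.
    import numpy as np
    from scipy.stats import ttest_rel

    def lac_pairwise_bonferroni(gains: dict, alpha: float = 0.05):
        """gains maps measure name -> array of per-student gain scores."""
        others = [m for m in gains if m != "LAC"]
        adj_alpha = alpha / len(others)  # Bonferroni correction
        results = {}
        for m in others:
            stat, p = ttest_rel(gains["LAC"], gains[m])
            results[m] = {
                "mean_diff": float(np.mean(gains["LAC"]) - np.mean(gains[m])),
                "p": float(p),
                "significant": p < adj_alpha,
            }
        return results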


Student Outcomes: Differences Between Instructors

As stated previously, the LiPS program was employed at two different school sites with two different instructors, each of whom taught the program in two kindergarten classrooms. The means and standard deviations at pretest and posttest across each measure for students at each school site (i.e., Instructor 1, Instructor 2) are reported in Table 12. Additionally, the table includes means and standard deviations by classroom.

Table 12. Means (Standard Deviations) by Instructor and Classroom

Instructor #1
Measure                      Class 1 Pre (n=17)   Class 1 Post (n=17)   Class 2 Pre (n=18)   Class 2 Post (n=17)
CTOPP Elision                3.88 (2.57)          6.59 (4.42)           4.61 (3.91)          7.82 (3.34)
CTOPP Blending Words         5.29 (3.37)          9.53 (3.50)           5.89 (2.89)          10.00 (2.62)
CTOPP Sound Matching         7.41 (4.99)          11.94 (5.79)          6.72 (4.76)          13.06 (4.70)
LAC                          25.70 (17.58)        42.53 (24.80)         20.50 (18.18)        44.88 (13.61)
DIBELS Letter Naming         21.76 (16.73)        36.12 (17.77)         33.39 (21.02)        43.88 (14.72)
DIBELS Phoneme Segmentation  17.35 (13.00)        36.24 (12.54)         20.89 (13.01)        43.06 (5.88)
WJ-III Word Identification   13.18 (6.11)         19.00 (6.21)          16.78 (10.10)        21.82 (6.32)
WJ-III Word Attack           2.65 (2.06)          5.41 (2.98)           3.72 (4.93)          5.47 (3.24)

Instructor #2
Measure                      Class 3 Pre (n=20)   Class 3 Post (n=20)   Class 4 Pre (n=20)   Class 4 Post (n=18)
CTOPP Elision                4.25 (2.97)          6.90 (2.97)           3.50 (2.37)          5.61 (3.05)
CTOPP Blending Words         4.75 (2.38)          10.55 (3.35)          3.10 (2.34)          8.56 (3.09)
CTOPP Sound Matching         6.20 (5.19)          12.55 (5.48)          6.90 (5.61)          12.94 (6.38)
LAC                          24.65 (16.09)        50.55 (21.35)         24.35 (18.54)        51.17 (14.20)
DIBELS Letter Naming         28.95 (9.66)         49.75 (9.91)          31.20 (18.66)        51.72 (17.06)
DIBELS Phoneme Segmentation  16.85 (13.93)        40.10 (18.25)         8.60 (10.25)         34.11 (15.73)
WJ-III Word Identification   16.45 (6.14)         24.25 (6.50)          15.60 (5.70)         24.39 (7.77)
WJ-III Word Attack           3.75 (2.45)          5.70 (2.05)           3.55 (2.28)          5.61 (3.50)

Two separate analyses were conducted in order to assess differences between instructors on student outcomes: analyses of covariance (ANCOVA) and the calculation of effect sizes. The results of these analyses are presented in the following sections.

Analyses of Covariance

First, ANCOVA procedures were conducted on all academic variables, using the pretest score for each measure as the covariate and comparing the posttest scores of Instructor 1 and Instructor 2 students. The independent variable, instructor, included two levels: Instructor 1 and Instructor 2. The dependent variables were the mean test scores on each measure at posttest, and the covariates were the mean test scores at pretest.
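As an illustration of this design, the sketch below runs one such ANCOVA with the statsmodels formula API. The DataFrame and column names are assumptions, not artifacts of the study.

    # Illustrative sketch: ANCOVA for a single measure, modeling the posttest
    # score from the pretest covariate and the two-level instructor factor.
    # Assumes a DataFrame `df` with columns: pretest, posttest, instructor.
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def ancova_by_instructor(df):
        model = smf.ols("posttest ~ pretest + C(instructor)", data=df).fit()
        # Type II ANOVA table: the C(instructor) row gives the F test for
        # instructor differences with pretest held constant.
        return sm.stats.anova_lm(model, typ=2)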


The resulting ANCOVA F values appear in Table 13. Data are reported separately for each measure administered. Statistically significant differences between the students of Instructor 1 and Instructor 2 were noted for posttest scores on the DIBELS Letter Naming task and the WJ-III Word Identification task, where students for Instructor 2 performed better than students for Instructor 1 on both tasks. For all other measures, no statistically significant differences were noted.

Table 13. Student Differences at Posttest by Instructor

Measure                       Instructor 1 Mean (SD)   Instructor 2 Mean (SD)   F       Effect Size
CTOPP Elision                 7.21 (3.91)              6.29 (3.04)              2.81    .04
CTOPP Blending Words          9.76 (3.06)              9.61 (3.34)              1.75    .03
CTOPP Sound Matching          12.50 (5.22)             12.74 (5.85)             .19     .00
LAC                           43.71 (19.73)            50.84 (18.08)            2.59    .04
DIBELS Letter Naming*         40.00 (16.54)            50.68 (13.61)            10.66   .13
DIBELS Phoneme Segmentation   39.65 (10.25)            37.26 (17.14)            .17     .00
WJ-III Word Identification**  20.41 (6.33)             24.32 (7.03)             4.23    .06
WJ-III Word Attack            5.44 (3.07)              5.66 (2.79)              2.47    .04

*p < .01, **p < .05

Effect Sizes

Second, in addition to the ANCOVAs, effect sizes were calculated to assess student outcomes. Also included in Table 13 are the effect sizes examining instructor differences for each measure at posttest when pretest was held constant. Unlike the ANCOVAs, which are sensitive to sample size, effect sizes were calculated to determine whether differences existed on outcome measures between instructors when sample size was not considered. The calculated effect sizes indicated no meaningful differences between instructors on any posttest measure when pretest was held constant. In other words, when correcting for pretest variability, no meaningful differences between instructors on posttest measures were identified. Effect sizes ranged from .00 for the DIBELS Phoneme Segmentation task to .13 for the DIBELS Letter Naming task.
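The dissertation does not name the effect-size statistic it used; the reported values (.00 to .13) are on the scale of eta-squared-type indices, so the sketch below computes partial eta squared for the instructor effect from the ANCOVA table produced by the earlier sketch. Treat the choice of statistic as an assumption.

    # Illustrative sketch: partial eta squared for the instructor effect, taken
    # from the Type II ANOVA table returned by ancova_by_instructor() above.
    # Using partial eta squared is an assumption, not a detail stated in the study.
    def partial_eta_squared(anova_table, effect="C(instructor)"):
        ss_effect = anova_table.loc[effect, "sum_sq"]
        ss_error = anova_table.loc["Residual", "sum_sq"]
        return ss_effect / (ss_effect + ss_error)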


While the previously reported ANCOVA results identified statistically significant differences between instructors on some measures, specifically the DIBELS Letter Naming and WJ-III Word Identification tasks, no practically significant differences were noted because all of the calculated effect sizes were small.

Student Progress: A Closer Look

Benchmark Comparisons

In the analyses of statistically significant differences for student outcomes between measures, mean test differences were greater on the LAC than on the WJ-III or CTOPP. However, no statistically significant differences were noted between the LAC and the DIBELS tasks. Both of these assessment instruments can be used for progress monitoring and may be more sensitive to small changes in student performance relative to an intervention. Therefore, in order to further investigate any differences that may exist between these two measures, a comparison was made between the mean pretest and posttest scores for the students of each instructor and benchmarks for expected levels of reading achievement or progress.

Benchmarks are typically employed for screening or grouping students (Good & Kaminski, 2003) and can serve to demonstrate meaningful differences in progress monitoring. For this purpose, the benchmarks offered a sense of student reading growth for the kindergarten participants from the beginning of the school year until the conclusion of the LiPS intervention in February. Typically, the benchmarks represent minimal levels of satisfactory progress for the lowest achieving students (Good, Gruba, & Kaminski, 2001, in Good & Kaminski, 2003). A comparison of student changes in benchmark placement by instructor was considered for both the LAC and the DIBELS tasks.


Three distinct benchmark categories were considered for the DIBELS tasks: students at risk (at or below the 20th percentile), students considered to have some risk (21st to 38th percentile), and students considered at low risk (at or above the 39th percentile) for reading difficulties. The LAC offered a distinct and unique set of benchmarks, which were recommended minimum scores; this is discussed in more detail below.

Table 14 includes the percentages of students identified in each of the three benchmark categories at pretesting and posttesting, by instructor, on the DIBELS tasks. For both instructors, a majority, or over eighty percent, of students were considered low risk at pretesting and posttesting on the Letter Naming task. However, on this particular task, little change was noted from pretest to posttest. In other words, the number of students identified in each of the three benchmark categories remained relatively stable, and the percentages of students considered at risk or with some risk did not change from pretest to posttest. On the Phoneme Segmentation task of the DIBELS, great improvements were noted in the percentages of students considered low risk from pretest to posttest for both instructors. While half of the students for Instructor 1 were considered low risk at pretesting, over ninety percent were considered low risk at posttesting on the Phoneme Segmentation task. For Instructor 2, approximately one quarter of the students were considered low risk at pretesting, while over eighty percent were low risk at posttesting on this DIBELS task. Moreover, while almost half of the students working with Instructor 2 were at risk for reading difficulties at pretesting, as measured by the Phoneme Segmentation task, only eight percent of the students remained in this category at posttesting.
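A small sketch of the risk-category assignment described above, using the percentile cutoffs reported in the text; the function name is hypothetical, and the cutoffs are applied to whatever percentile ranks accompany the DIBELS scores.

    # Illustrative sketch: assign a DIBELS risk category from a percentile rank,
    # using the cutoffs above (<= 20 at risk, 21-38 some risk, >= 39 low risk).
    def dibels_risk_category(percentile: float) -> str:
        if percentile <= 20:
            return "at risk"
        if percentile <= 38:
            return "some risk"
        return "low risk"

    print(dibels_risk_category(15))  # -> at risk
    print(dibels_risk_category(45))  # -> low risk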


Table 14. Percentage of Students at Benchmarks at Pretest/Posttest on DIBELS

Instructor #1
Task                   At Risk (<= 20th pctile)   Some Risk (21st-38th pctile)   Low Risk (>= 39th pctile)
Letter Naming, Pre     9 (3)                      9 (3)                          83 (29)
Letter Naming, Post    9 (3)                      9 (3)                          82 (28)
Phoneme Seg., Pre      17 (6)                     34 (12)                        49 (17)
Phoneme Seg., Post     0 (0)                      6 (2)                          94 (32)

Instructor #2
Task                   At Risk (<= 20th pctile)   Some Risk (21st-38th pctile)   Low Risk (>= 39th pctile)
Letter Naming, Pre     0 (0)                      5 (2)                          95 (38)
Letter Naming, Post    0 (0)                      5 (2)                          95 (36)
Phoneme Seg., Pre      48 (19)                    25 (10)                        28 (11)
Phoneme Seg., Post     8 (3)                      11 (4)                         82 (31)

Note: Instructor 1: Pre n=35, Post n=34; Instructor 2: Pre n=40, Post n=38. Values in parentheses indicate the actual numbers of students in each category.

Table 15 includes benchmark data by instructor from pretest to posttest on the LAC. Unlike the percentile classifications for the DIBELS tasks, the LAC offers recommended minimum scores for students at each grade level from kindergarten through the seventh grade. According to Lindamood and Lindamood (1971), these recommended minimum scores were selected based on statistical data and clinical experience. As stated in the test manual, "The recommended scores represent a level of performance that correlates highly with adequate or better-than-adequate spelling and reading skills for particular grades in typical American classrooms" (Lindamood & Lindamood, 1971, p. 29). Unlike the DIBELS tasks, however, no specific percentile equivalents are offered. The recommended minimum score for the first half of kindergarten was used for the comparison of students working with the two instructors at pretest; the recommended minimum score for the second half of kindergarten was used for the comparison at posttest. On the LAC, improvements were seen across instructors, and the percentages of students below the recommended minimum decreased from pretesting to posttesting, as expected.


For the LAC, half of the students were below the recommended minimum score at pretesting, and less than a third were below this minimum score at posttesting. For the DIBELS tasks, however, typically less than twenty percent of the students were in the higher-risk categories (≤ 38th percentile) at posttesting. Furthermore, on the DIBELS, these percentages were as small as five percent in some instances at posttesting (e.g., Letter Naming for Instructor 2).

Table 15. Percentage of Students Below Recommended Minimum at Pretest/Posttest on LAC

Measure        Recommended Minimum             Instructor #1    Instructor #2
LAC   Pre      (First half of K = 31)          54 (19)          53 (21)
      Post     (Second half of K = 40)         32 (11)          24 (9)

Note: Instructor 1: Pre n=35; Post n=34. Instructor 2: Pre n=40; Post n=38.
( ) indicates actual numbers of students in each category.
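By contrast with the DIBELS percentile bands, the LAC benchmark is a single recommended minimum score per grade band (31 for the first half of kindergarten, 40 for the second half), with no percentile equivalents. A minimal sketch of that comparison follows; the dictionary keys and function name are hypothetical conveniences, while the minimum scores come from Table 15.

```python
# Recommended minimum LAC scores used in this study (from Table 15).
LAC_MINIMUMS = {"K, first half": 31, "K, second half": 40}

def below_lac_minimum(raw_score: int, grade_band: str) -> bool:
    """True if a raw LAC score falls below the recommended minimum."""
    return raw_score < LAC_MINIMUMS[grade_band]

# Example: a pretest score of 28 is below the first-half-of-K minimum of 31.
print(below_lac_minimum(28, "K, first half"))  # True
```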


CHAPTER 4
DISCUSSION

As schools move toward early identification and the prevention of reading difficulties, reading curricula are being carefully selected and instructional time is increasing. Especially for younger students or beginning readers, more instructional time devoted to phonological awareness activities is critical. The reading research literature suggests that phonological awareness, or the ability to recognize that spoken language consists of smaller units, is a strong predictor of later reading achievement (Bus & van Ijzendoorn, 1999). Therefore, curricula that emphasize, or at least include, direct instruction in phonological awareness should be incorporated into beginning reading instruction, as including this instruction early on may be more effective than waiting until students are older (Bus & van Ijzendoorn, 1999). One such program that offers phonological awareness training and is increasingly appearing in schools is the Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech (LiPS). While only a small base of empirical research exists to support this program, the LiPS program is employed nationally in both public schools and private facilities as beginning reading instruction and in remedial efforts for children and adults. Limited empirical research exists supporting the use of this program in one-on-one settings (e.g., Kennedy & Backman, 1993; Torgesen et al., 1999), and even less systematic research exists documenting the efficacy of the LiPS program for small groups or classrooms of students (e.g., McGuinness, McGuinness, & Donohue, 1995). No study to date has documented in detail how the LiPS program, which was originally designed for clinical use, is modified for use in the school setting or what LiPS instruction looks like when employed with large groups of students.


The primary purpose of this study was to investigate how the LiPS program was adapted and employed in a school or classroom setting with large groups of kindergarten students. Along with documenting the program implementation, student outcomes after approximately six months of LiPS instruction were examined. Second, in addition to examining program implementation in a classroom setting, another purpose of this research was to compare the LiPS instruction of the classroom settings with the more traditional implementation in a clinical setting to assess treatment fidelity across the two settings.

To examine LiPS program implementation in a classroom setting, data were collected at two school sites employing this program with kindergarten students. In the school setting, participants included 75 kindergarten students from four different kindergarten classrooms, two classrooms at each of the two school sites. Two LiPS instructors, both trained as speech pathologists, one at each school site, also participated in this study. One participating school was a laboratory school affiliated with the local public university. This school served students in kindergarten through the twelfth grade. The second school site was a parochial school serving students in kindergarten through eighth grades. For comparative purposes, data regarding treatment fidelity were also collected at a clinical site. Observations were conducted with two instructors at a private educational center offering remedial services to children and adults with learning difficulties.


Data collection included descriptive data of LiPS program implementation and quantitative data involving the assessment of student outcomes subsequent to LiPS instruction. To assess treatment fidelity, descriptive data included classroom observations, instructor interviews, and the collection of instructor lesson plans. Programmatic or instructional elements that were considered included the following: the inclusion of key instructional materials, student engagement, choice of program path, procedures employed for Tracking, instructor assessment of student progress/mastery of content, and use of error handling techniques. Assessment of student outcomes involved the use of four assessment instruments: the Lindamood Auditory Conceptualization test (LAC); the Woodcock-Johnson Tests of Achievement (WJ-III) Word Attack and Word Identification tasks; the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Letter Naming and Phoneme Segmentation tasks; and the Comprehensive Test of Phonological Processing (CTOPP) Elision, Blending Words, and Sound Matching tasks.

Data were collected in various ways to glean an accurate representation of the LiPS implementation, especially in the classroom setting. Therefore, observations were conducted on a number of occasions in each of the four classrooms, several interviews were conducted with the two instructors throughout the intervention, and daily lesson plans were gathered. In considering all of the descriptive data that were collected, certain program elements were deemed most important in assessing treatment integrity based on one or more of the following: the emphasis placed on particular program elements in the LiPS program manual (e.g., through repeated mention, such as error handling); the uniqueness of program elements (i.e., those making the LiPS program different from other reading programs), such as Tracking; and the previous clinical experiences of the primary investigator employing the LiPS program with children and adults with learning difficulties.


From the descriptive data collected at both the school sites and in the clinical setting, conclusions regarding treatment fidelity or integrity were ascertained. Overall, the two instructors in the classroom setting demonstrated low levels of treatment integrity: they rarely or never included such program components as Tracking following a prescribed sequence, formal assessment of student progress or mastery, and key instructional materials. In the clinical setting, a high degree of treatment integrity was maintained by the instructors.

In considering student outcomes for participants across the two school sites, statistical analyses yielded positive mean gains across all students for each assessment measure. Furthermore, across all students, mean gains achieved on the LAC were statistically significantly greater than gains on the three tasks of the CTOPP (Elision, Blending Words, and Sound Matching) and two tasks of the WJ-III (Word Attack and Word Identification). No statistically significant differences were noted between mean gains on the LAC and the DIBELS tasks (i.e., Letter Naming Fluency, Phoneme Segmentation). In analyzing student outcome data between instructors, or between the two school sites, the students of Instructor 2 achieved statistically significantly greater gains than the students of Instructor 1 on the DIBELS Letter Naming and WJ-III Word Identification tasks. No other statistically significant differences were noted between instructors.
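The dissertation does not specify the statistical procedures behind these gain comparisons, so the sketch below is only one plausible reading: paired pretest/posttest scores, a mean gain, and a paired t-test. All scores shown are invented placeholders, and the actual analyses in this study may have differed.

```python
# A minimal, assumption-laden sketch of a pre/post gain analysis.
from scipy import stats

pretest = [12, 18, 9, 22, 15, 11, 20, 14]    # hypothetical raw scores
posttest = [25, 30, 21, 33, 28, 22, 31, 27]  # same students at posttest

gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = sum(gains) / len(gains)

t_stat, p_value = stats.ttest_rel(posttest, pretest)  # paired comparison
print(f"mean gain = {mean_gain:.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```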


Further analyses were conducted regarding student gains on the LAC and DIBELS measures, as both were well suited to monitoring student progress and more sensitive to smaller gains in student reading skills. Using benchmarks for these assessment measures (Good & Kaminski, 2003), students' reading achievement gains were compared to expected performance at pretesting and posttesting. Overall, across all participants, improvements were seen from pretesting to posttesting on the LAC and DIBELS tasks. For the DIBELS tasks, the percentages of students in the lowest benchmark category (i.e., ≤ 20th percentile) decreased, and the percentages of students in the highest benchmark category (i.e., ≥ 39th percentile) increased from pretesting to posttesting. On the LAC, the percentages of students working with each instructor who fell below the minimum recommended score decreased from pretesting to posttesting. However, these benchmark analyses yielded slightly different outcomes than the statistical analyses when considering the amount of growth. In considering benchmarks, gains across all students were not as great for the LAC as for the DIBELS tasks. In other words, at posttesting, more students performed at the highest benchmark level (i.e., ≥ 39th percentile) on the DIBELS tasks than at the recommended minimum score on the LAC. Therefore, while students generally did not perform significantly differently on the LAC and DIBELS tasks when the outcomes were analyzed statistically, student outcomes for the LAC were not as great overall when benchmarks were considered.

Reflecting on the Results

Treatment Integrity

Based on the classroom data collected, including lesson plans, observations, and instructor interviews, the two instructors at the school sites demonstrated low treatment integrity for LiPS instruction. This study represents an example of the difficulty of scaling up interventions that were designed for one-on-one use to a classroom with larger groups of students. "High-quality implementation depends on factors such as the teachers' knowledge of the subject matter, beliefs about instructional priorities, and personal teaching philosophy and style" (Denton et al., 2003, p. 207).


While the LiPS instructors involved in this study did not have teacher education backgrounds, they did bring specific content knowledge of the instructional program. Unfortunately, some of their difficulties related to treatment integrity may have been related to their teaching styles and lack of facility in adapting their instruction to larger groups of students. In one study examining teachers' views of research-based practices, teachers reported that they selected instructional practices that were feasible (Boardman et al., 2005). While the LiPS program has empirical support to document its effectiveness with individual students, the intricacies of the program may make it less feasible to implement in the classroom without a great deal of instructional support.

Certain adaptations could have been made to the instruction in the kindergarten classrooms to improve the treatment integrity, or adherence to the program as it was designed. These involve the use of mirrors, Tracking with colored blocks, assessment of student progress, increased support from school and classroom teachers, and improved management of student behavior during LiPS instruction. Each of these components is described in greater detail below.

Use of mirrors

First, the instructors at both school sites omitted the use of mirrors in their LiPS instruction in the kindergarten classrooms. For students lacking sufficient oral awareness to feel new or unfamiliar sounds, mirrors can serve to support one sensory system through the use of another (Lindamood & Lindamood, 1998). Mirrors during instruction, especially during the introduction of new sounds, would have proved beneficial for all students, and particularly those students with poor oral-motor awareness or phonological awareness.


Mirrors would have been helpful to aid in the discovery of new sounds and to serve as a self-checking device. Understandably, it could be costly to provide mirrors to classrooms of students, and the use of mirrors may have served as a distraction for kindergarten students; adding individual manipulatives to classroom instruction would require even greater instructor resources devoted to managing student behavior. However, as improving oral awareness is such an important component in the early stages of the LiPS program, mirrors may have offered students individual feedback to aid in the discovery of new sounds.

Tracking with colored blocks

Second, Tracking with colored blocks is another program element that could have been incorporated into instruction to improve treatment integrity in the classroom settings. Tracking is a unique element that helps to set the LiPS program apart from other reading programs focusing on phonological awareness. By Tracking with mouth pictures and colored blocks, students can "gain experience in two levels of coding before they need to use the learned medium of specific letters to code speech sounds for Reading and Spelling" (Lindamood & Lindamood, 1998, p. 166). In other words, students have the opportunity to strengthen phonemic awareness skills without having to think about letter symbols.

While both instructors incorporated Tracking with the mouth pictures into their instruction to at least some extent, neither transitioned to completing the task with blocks. In her final instructor interview, Instructor 1 noted that she would have used colored blocks and small mouth pictures for Tracking had she been working with individual students. In a classroom, she perceived behavior management issues to interfere with completing this task in the same manner she had done previously in a one-on-one setting.


Instructor 2 most likely did not employ colored blocks for Tracking because she had no previous experience taking children through the program in its entirety; in the past, she had incorporated parts of the program into therapy in addition to other programs. Instructor 2 was probably most familiar with the oral awareness component of LiPS (e.g., mouth pictures) and not the Tracking task. Therefore, it would have been significantly more difficult for her to adapt this task to a larger group of students due to her limited experience with this task in a one-on-one setting.

In order to incorporate this key program component, instructors could have modified the task for use with a larger group of students, using colored construction paper instead of the traditional colored blocks. This idea is described in detail in the LiPS manual (Lindamood & Lindamood, 1998, p. 181). Interestingly, students achieved statistically significant mean gains that were greater on the LAC than on the tasks of the CTOPP or WJ-III, even though no direct instruction was offered by either instructor in the use of colored blocks for Tracking. Tracking with colored blocks is precisely what is required on the LAC assessment; the LAC assesses students' abilities to detect sameness, difference, or changes among sounds by representing these sounds with colored blocks.

Assessment of student progress

A third component that was absent in the classroom setting involved assessing student progress or mastery of concepts during instruction and prior to the introduction of new material. While this was demonstrated in the clinical setting, neither instructor in the school setting monitored student progress in concrete, measurable terms for the purposes of instructional decision making in the LiPS program. Instructor 1 met with a small group of identified struggling students in both of her kindergarten classrooms, so she seemed to have a sense, anecdotally, of how those students were grasping the new material.


Furthermore, Instructor 2 maintained close communication with the two classroom teachers at her site, meeting with them weekly to receive feedback. Unfortunately, however, no systematic or regular assessment was incorporated at either site for LiPS instructional purposes. In the LiPS manual, Lindamood and Lindamood (1998) offer a standard of 80% accuracy at the current level prior to introducing new material. This was the standard adhered to in the clinical setting.

Support of school and classroom teachers

A lack of classroom teacher support contributed to the low treatment integrity of the LiPS program at both school sites. At the first school site, the kindergarten teachers provided minimal support to the LiPS instruction or the efforts made by Instructor 1. It appeared that the instructor was perceived as a visitor to the classrooms, someone who offered additional reading instruction that the classroom teachers considered tangential to the rest of their curriculum. While one teacher at this school site did assist with behavior management of her students during a majority of the LiPS sessions, Instructor 1 expressed feeling a lack of teacher support during several of the interviews. The teachers at the school site of Instructor 2 were more supportive of the efforts to incorporate this program into their curriculum. Teachers in both classrooms of Instructor 2 were present throughout each LiPS session, assisting with classroom management. Additionally, one teacher at this school site read the relevant section in the LiPS manual independently prior to the introduction of new material with the students in her classroom. She even expressed an interest in attending formal training in the LiPS program during the summer subsequent to the intervention. Students may have benefited from the LiPS instruction to an even greater degree had the program been infused more into their curriculum.


Management of student behavior during instruction

To offer a complex and intensive reading program such as LiPS to large groups of students requires an instructor proficient not only in the program itself, but also in managing student behavior during instruction. Throughout this study, each instructor made varied attempts to create optimal learning environments for the students. In one classroom, Instructor 1 previewed new material with a small group of students the day before introducing this material to the rest of the class. She also incorporated games into her instruction in both classrooms to allow for movement. Within the first few sessions, Instructor 2 modified her instruction to include table activities for review so that she could work with smaller groups of students to introduce new material. Regardless of the efforts made by both instructors to manage the various behaviors and personalities of the students in their respective classrooms, both instructors expressed in their interviews at least some level of frustration with this aspect of the intervention. When designing reading instruction or interventions for students in a school setting, one must consider the level of training and content expertise necessary for a desired intervention and also the instructor's training and experience in managing student behavior.

Along these lines, the two instructors made various decisions throughout the LiPS instruction to modify the grouping arrangement to better meet the needs of the students. Prior to initiating LiPS at the schools, both instructors had the intention of delivering the instruction to whole classrooms of students. For various reasons that can be attributed to both the instructors and the classroom teachers, only one classroom remained engaged in whole-class LiPS instruction at the end of the intervention period. Modifications were made in the other three classrooms by the third week of LiPS instruction. Despite the intentions of the instructors at the outset of the intervention, the changes made in grouping arrangements suggest the difficulty of adapting the program to meet the needs of classrooms of students.


As mentioned previously, the instructors in the clinical setting demonstrated high levels of treatment integrity. Overall, differences in levels of treatment integrity between the instructors in the school setting and those in the clinical setting can be explained by two factors: grouping arrangements and instructor skill. First, regarding grouping arrangements, the LiPS program is an intricate program, and working with larger groups of students requires some modifications to the instruction. On the other hand, the lower levels of treatment integrity demonstrated by the instructors in the schools could be a result of instructors with limited training or experience in the program and limited facility in adapting it. Instructor 2 had never previously completed the LiPS program in its entirety with a student, even at the individual level, and this is probably not unlike many school settings in which the program is employed.

Ultimately, however, there is evidence to suggest that the arrangements of the groups (e.g., larger numbers of students) affected the treatment integrity. Previous research has demonstrated that, when highly qualified teachers implement a well-designed intervention, the academic benefit to students is the same for students taught individually or in small groups of two to six students (Elbaum, Vaughn, Hughes, & Moody, 2000). Unfortunately, for the instructors at the schools in this study, instructional groups were rarely (for Instructor 1) or never (for Instructor 2) composed of fewer than six students. In their interviews, both instructors recognized and acknowledged that they would have approached the LiPS program differently had they been working with students one-on-one (e.g., use of individual mouth pictures). Yet, from this research, what is unknown is whether these instructors possessed the skills to implement the program as it was designed (i.e., with higher treatment integrity) in a one-on-one setting, or whether they simply had the content knowledge but less practical skill. In other words, the instructors identified, through the interviews, specific aspects of the program that they would have executed differently in an individual setting. However, it is unknown whether these instructors possessed the skillfulness to accomplish these tasks in practice.


Student Outcomes

Statistical analyses

As stated previously, across all students, significantly greater gains were noted on the LAC than on the tasks of the CTOPP and WJ-III, and no significant difference was noted between LAC and DIBELS performance. The LAC is closely aligned with the LiPS program, and the skills necessary to complete the tasks on the LAC are virtually identical to the Tracking task using colored blocks within the LiPS program. Therefore, at the outset of this research, it was hypothesized that students would achieve greater gains on this measure, since direct instruction in this process would be offered to students during traditional LiPS instruction. Interestingly, however, neither instructor included Tracking with the colored blocks in their instruction. Regardless, students performed statistically better on the LAC measure than on the tasks of the CTOPP and WJ-III, even though similar phonemic awareness skills were required by some tasks of the CTOPP and WJ-III. This may be attributed to the level of difficulty of the items on these various measures. On the first two parts of the LAC, the items primarily assess phonological awareness at the phoneme level, and identifying the number and the sameness or difference of up to three sounds in isolation is all that is required. According to the scoring system on the LAC, performing well on these items puts a kindergarten student at approximately grade level.


However, on the CTOPP and WJ-III tasks, some of the skills required are slightly more complex. In fact, some phonological awareness demands at the word level are required on the tasks of these measures, such as blending several sounds together to form real or nonsense words. No statistically significant differences were noted in student performance between the LAC and DIBELS measures, and these tasks assessed reading abilities at more similar skill levels. For example, the DIBELS tasks measured such skills as identifying letters in isolation and segmenting simple words of two to three phonemes.

Benchmarks

To further investigate student outcomes related to the LAC and DIBELS, benchmarks were considered. When analyzing benchmarks of student progress, the results differed somewhat: students demonstrated greater growth as measured by the DIBELS. As stated previously, the skills assessed on the LAC (e.g., tracking sounds using colored blocks) were never taught directly by either instructor at the school sites. Therefore, while the two measures assessed similar reading skills, the incorporation of manipulatives on the LAC added a visual-motor component to the assessment task that may have made it more difficult and thus affected scores.

Furthermore, regarding student benchmark performance, phonemic awareness and letter knowledge are considered the two best school-entry predictors of children's reading acquisition during the first two years of instruction (Ehri, 2004). These two components of early literacy skills were assessed in participants at the schools at pretesting and posttesting through the DIBELS and LAC measures. At pretesting, the percentages of students considered at-risk varied greatly by assessment tool. No students were considered at-risk, or below the 20th percentile, for Instructor 2 on the DIBELS Letter Naming task, while as many as 48 percent of the same instructor's students fell in the at-risk range on the DIBELS Phoneme Segmentation measure.


Judging from DIBELS benchmark performance at posttesting, percentages of students considered "at-risk" were minimal, and most of these students were not predicted to have later reading difficulties based on these tasks alone. Actual percentages of students persisting with at least "some risk" at posttesting ranged from 5 to 18 percent. Based on LAC data, however, 24 to 32 percent of students remained below the recommended minimum score at posttesting. What remains unclear, unfortunately, are the specific percentiles or criteria established to determine the benchmark cut scores for the LAC. The LAC manual does not provide information regarding how the recommended minimum scores correspond with percentiles in the way the DIBELS measures do. Regardless, if one were to use these benchmarks as presented to determine which students should receive more intensive interventions, different decisions would be made based on the data from the two measures. Therefore, schools should consider the implications of choosing each measure to make intervention decisions for students. The LAC, yielding slightly more students considered "at risk," would serve to place more students in intensive interventions. If one were to consider DIBELS data alone, the results may produce false negatives, meaning students would not be considered most at risk for reading failure when they truly should be receiving more intensive interventions. In other words, if decisions are based on DIBELS data alone, schools may be missing students who should receive some type of intervention. The LAC may serve as a more sensitive measure in this way, however, potentially producing more false positives. Thus, if decisions were made using the LAC data alone, some students may be deemed in need of more intensive interventions when they do not necessarily require them.
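The decision problem described above can be made concrete with a small sketch: the same student can be flagged by one measure and not the other, so the intervention roster depends on which tool is consulted. The student records below are hypothetical; the thresholds restate the benchmarks discussed earlier.

```python
# Hypothetical screening records showing how the two measures can disagree.
students = [
    {"name": "A", "dibels_percentile": 45, "lac_score": 42},
    {"name": "B", "dibels_percentile": 25, "lac_score": 35},
    {"name": "C", "dibels_percentile": 15, "lac_score": 30},
]
LAC_MINIMUM = 40  # recommended minimum, second half of kindergarten

for s in students:
    flagged_by_dibels = s["dibels_percentile"] <= 20  # "at-risk" band
    flagged_by_lac = s["lac_score"] < LAC_MINIMUM     # below recommended minimum
    if flagged_by_dibels != flagged_by_lac:
        print(f"Student {s['name']}: DIBELS flags={flagged_by_dibels}, "
              f"LAC flags={flagged_by_lac}")
# Only student B prints: the LAC flags the student while DIBELS does not,
# a potential DIBELS false negative in the sense used above.
```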


Lastly, once data-based decisions have been made, according to Torgesen (2004), there are two ways to increase the intensity of reading instruction for struggling readers or those most at risk for developing later reading difficulties: increase instructional time or provide instruction individually or in small groups. For the kindergarten students in the schools participating in this study, instructional time was increased with the incorporation of the LiPS program in addition to the core reading curriculum. What remains unclear is whether the increase in instructional time (due to the addition of the LiPS program) affected benchmark performance at posttesting and potentially prevented later reading struggles.

Implications for Practice

Ultimately, to prevent future reading difficulties, some type of phonological awareness training should be included in the academic curricula of kindergarten students, as phonological awareness is the key to beginning reading acquisition (Smith, Simmons, & Kame'enui, 1995). The inclusion of the LiPS instruction at the two school sites increased the intensity of reading instruction and the amount of student engaged time, and this is especially important for struggling readers or those most at risk for later reading failure. Because the LiPS program is empirically supported in the reading research literature (Kennedy & Backman, 1993; Torgesen et al., 2001), schools choose to use this program. However, the LiPS program may not be the best choice of a phonological awareness training program for all students or for large groups of students. While this study may not be representative of all LiPS instruction in the schools, treatment integrity when using a program designed for one-on-one delivery with large groups of students was low.


Furthermore, neither instructor in the schools managed to move beyond the oral-motor awareness component of this program, and the narrow focus on the articulatory features of individual sounds, as compared to the orthographic level, may not have been the best use of time for all students.

Reading instruction must be taught better and more broadly than ever before (Adams, 1990). In considering phonological awareness instruction in the schools, the qualities and qualifications of the provider are as important as the type of program. The instructors participating in this study had graduate-level training in speech and language pathology and experience teaching a variety of reading and language intervention programs. However, they still delivered the LiPS program to classrooms of kindergarten students with low treatment integrity. The LiPS program is extremely intricate and complex, and schools should select a program that may be easier to implement with groups of students and that focuses on the same phonological awareness principles cited in the literature as necessary and relevant.

As mentioned previously, a significant portion of the LiPS program is devoted to developing oral-motor awareness and focuses on the articulatory features of individual sounds. This program should only be used with students who have severe reading disabilities or those students who have been unsuccessful with other broader-based phonological awareness programs beginning at the orthographic level. Furthermore, regarding grouping arrangements, the LiPS program is best suited for one-on-one or small-group (ideally four or fewer students) use. Instructors employing this program in the schools should undergo formal training in the program and receive adequate supervision by a trained professional prior to implementing the LiPS program independently with students.


Limitations of the Current Study

This research offered insights into what the LiPS program looked like when it was delivered to large groups of students in the schools. However, some limitations of the current study affect the interpretation and generalizability of findings. These include threats to internal and external validity.

Internal Validity

The critical question regarding internal validity relates to the extent to which the research design reduces uncertainty about the relationship of cause and effect (Fletcher & Francis, 2004). In other words, internal validity is related to the extent to which one can be certain that the variable of interest (e.g., treatment) is responsible for the measured outcomes. Educational research in an applied or naturalistic setting makes threats to internal validity problematic in interpreting cause-and-effect relationships. This study was no exception.

First, the participants involved in this research were unique. The sample of students most likely differed, at least to some extent, from students in typical public schools. One participating school was a parochial school with generally higher-achieving students, and the other was a developmental laboratory school affiliated with the local public university. Parents of students at both schools chose to have their children at these sites, sometimes involving lengthy waiting lists to do so. Additionally, both participating schools valued reading instruction so much as to include the LiPS program for all students as additional reading instruction above and beyond the core reading curricula. Furthermore, the history of these students and the kinds of experiences they had outside of school, above and beyond what was measured, could have accounted for some of the differences in outcomes. These factors make it difficult to attribute student outcomes directly to student exposure to the LiPS program.


Second, while employing LiPS as a supplemental reading program benefited the students by offering them more time engaged in reading instruction, this factor also served as a limitation to this study. The student outcomes achieved from pretesting to posttesting cannot be attributed solely to exposure to the LiPS program. Scores may have improved due to the reading curricula used concurrently with the LiPS instruction. Of course, maturation should also be considered, and a control group was not included in this study for comparative purposes.

External Validity

External validity relates to the generalizability of findings. As mentioned previously, the participating schools were unique in that students were probably achieving at higher levels academically, and overall parent participation was at higher levels, than in many public schools in this region. Therefore, student outcomes, or performance achieved at posttesting, may have been greater than for other schools employing similar reading methods.

Additionally, both instructors had unique training and experiences that may differ from those of other instructors utilizing the LiPS program in school settings. Due to their training in speech-language pathology and numerous years of clinical experience, the two instructors participating in this study likely had considerably more background knowledge and experience than others teaching this program in the schools. Furthermore, because these instructors agreed to participate in the study, they were aware that they were being observed and were willing to submit weekly lesson plans to the primary investigator. Therefore, their delivery of instruction may have been different than it otherwise would have been. Any of these above-mentioned factors offer threats to the external validity of this research.


Directions for Future Research

Results of this study suggest that the treatment integrity of the LiPS program can be compromised when it is adapted to serve larger groups of students in the schools. In this study, the instructors employing this program in the schools omitted several key program components, and LiPS implementation in the kindergarten classrooms differed greatly from observations conducted in a clinical setting. Regardless, reading achievement gains were noted across students and schools on several reading assessment measures. While previous research has compared student outcomes in the LiPS program with other reading curricula and methods, this study was the first of its kind to detail what the LiPS program looked like in a school setting with large groups of students. Furthermore, this study was the first of its kind to compare specific program elements that are included or excluded from LiPS program implementation in classroom and clinical settings.

Future research on the LiPS program should consider several things related to treatment integrity. First, treatment integrity of the LiPS program with larger groups of students should be considered in other, more traditional school settings. Findings may differ in public schools with children of more diverse learning potentials, in this region and elsewhere. Second, in this study, the instructors teaching LiPS in the schools had training and experience in speech and language pathology. Researchers should consider LiPS instructors with differing levels of training and experience, including those with general and special education backgrounds. Additionally, this study investigated program implementation for kindergarten students. Future research should explore how LiPS implementation differs in a school setting for instructors employing this program with older elementary students or beyond.


Lastly, reading researchers should consider the treatment fidelity of other reading curricula and the decisions instructors make when adapting various programs to meet the needs of larger groups of students at varying achievement levels in the schools.

Related to student outcomes, future experimental research should be conducted to make direct comparisons of various grouping arrangements. For example, studies should be conducted to assess student outcomes of distinct samples when LiPS is delivered one-on-one, in small groups, and in large groups. Furthermore, empirical research should be conducted to compare outcomes for students receiving LiPS small-group instruction versus other small-group phonological awareness instruction. Finally, it is important to better understand which specific components of the LiPS program contribute to positive student reading gains, and studies should be designed to experimentally manipulate the various aspects of the program to determine their relative effectiveness and contribution to student reading gains.


APPENDIX A
RECORD OF PROGRAM DELIVERY


Instructor: ___________   School/Class: __________   Date: _____________
Length of Observation: ___________

Each item below is scored YES / NO / N/A, with space for NOTES.

GENERAL
- T. reviews previously introduced material at beginning of session
- S. provided with/encouraged to use mirror when introduced to or practicing new sounds (pg. 47)
- All Ss. observed to be actively engaged in learning process (pg. 418)

TRACKING, READING, SPELLING
- S. instructed to follow 3 steps in Tracking: repeat words, touch & say, make change (pg. 169)
- T. questions S. about label of sounds during Tracking (pg. 434)
- Real and nonsense words used in Tracking/Reading/Spelling (pg. 161)
- T. assesses S. mastery on T/R/S chains before new material introduced (e.g., pg. 46)

ERROR HANDLING
- T. incorporates responding-to-response (allows student to self-correct; pg. 14)
- T. uses line of questioning to lead S. to desired response (Socratic; pg. 419)
- T. avoids use of word "no" when student's answer is not the expected one (pg. 418)
- T. questions S. even when correct response provided (pg. 419)
- T. avoids providing correct answer for S. having difficulty (pg. 14)
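As a rough illustration of how the checklist above might be summarized, the sketch below computes an adherence percentage over the YES/NO items, excluding N/A items. This scoring scheme is a hypothetical convenience; the study reported integrity descriptively rather than as a single score.

```python
# Hypothetical scoring of the Record of Program Delivery checklist:
# percentage of scored items marked YES, with N/A items excluded.
observations = {
    "reviews previously introduced material": "YES",
    "mirror provided/encouraged": "NO",
    "all students actively engaged": "YES",
    "3-step Tracking sequence followed": "NO",
    "mastery assessed before new material": "N/A",
}

scored = [v for v in observations.values() if v != "N/A"]
adherence = scored.count("YES") / len(scored)
print(f"Adherence for this session: {adherence:.0%}")  # 50% here
```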


APPENDIX B
CLASSROOM OBSERVATION-ERROR HANDLING

Instructor: _____________   School/Class: _____________   Date/Time: _____________

T. uses line of questioning to lead S. to desired response (from RPD)
*Tally for each series

T. questions S. even when correct response provided (from RPD)
*Tally for each instance

(The form above is repeated for a second observation.)


APPENDIX C
CLASSROOM OBSERVATION-STUDENT OPPORTUNITY TO RESPOND
(Measure of Students' Active Engagement in Learning Process from RPD)

Instructor: ____________   School/Class: ____________   Date/Time: _____________

Name                              Frequency

*One tally mark for each instance in which an individual student responds to the instructor
**Circle tallies to indicate back-and-forth dialogue between an individual student and the instructor


APPENDIX D
STUDENT ENGAGEMENT/ON-TASK BEHAVIOR
(Measure of Students' Active Engagement in Learning Process from RPD)

Instructor: _____________   School/Class: ____________   Date/Time: _____________

                      5 minutes   5 minutes   5 minutes   5 minutes
Students Engaged*

*Number of students looking at instructor at end of 5-minute period
**Number of students in attendance: _______________

(The form above is repeated for two additional observations.)
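The engagement form above uses momentary time sampling: at the end of each 5-minute interval, the observer counts the students looking at the instructor. A minimal sketch of how those counts could be summarized follows; the counts and attendance figure are invented for illustration.

```python
# Hypothetical momentary time-sampling summary for one observation:
# students counted as engaged at the end of each 5-minute interval.
counts_engaged = [14, 12, 15, 10]   # one count per 5-minute check
n_in_attendance = 18                # from the attendance blank above

rates = [count / n_in_attendance for count in counts_engaged]
mean_rate = sum(rates) / len(rates)
print(f"Mean engagement across intervals: {mean_rate:.0%}")  # about 71%
```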


APPENDIX E
INITIAL INSTRUCTOR INTERVIEW
(Prior to treatment implementation)

1. Describe the initial training you received in the LiPS program. When were you trained? Where and by whom? How many hours of initial training did you receive?
2. Did you receive any additional training subsequent to the initial training in the LiPS program?
3. Describe your experience with the LiPS program, both clinical and in the schools. Does this experience include work with individuals or groups? Describe the populations you have worked with using the LiPS program (child/adolescent/adult, LD, etc.).
4. Approximately how many hours have you logged teaching the LiPS program?
5. Describe the LiPS program.
6. Describe your other experiences teaching large groups of students.


APPENDIX F
INSTRUCTOR INTERVIEW-PERSPECTIVES
(beginning, middle, end)

1. What strategies have you found to be most effective during the LiPS instruction?
2. What specific adaptations have you made to the LiPS program and why?
3. What specific student accommodations have you made during your LiPS instruction and why?
4. At this point, would your LiPS instruction look different if you were implementing it in a one-on-one setting? If so, how?
5. At this point, is there anything you wish you could do differently?


LIST OF REFERENCES

Adams, M. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.

Alexander, A., Anderson, H., Heilman, P., Voeller, K., & Torgesen, J. (1991). Phonological awareness training and remediation of analytic decoding deficits in a group of severe dyslexics. Annals of Dyslexia, 41, 193-206.

Blachman, B. A. (2000). Phonological awareness. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (pp. 483-502). Mahwah, NJ: Lawrence Erlbaum Associates.

Boardman, A. G., Arguelles, M. E., Vaughn, S., Hughes, M. T., & Klingner, J. (2005). Special education teachers' views of research-based practices. Journal of Special Education, 39, 168-180.

Bus, A. G., & van Ijzendoorn, M. H. (1999). Phonological awareness and early reading: A meta-analysis of experimental training studies. Journal of Educational Psychology, 91, 403-414.

Conway, T. W., Heilman, P., Rothi, L. J., Alexander, A. W., Adair, J., Crosson, B. A., & Heilman, K. M. (1998). Treatment of a case of phonological alexia with agraphia using the Auditory Discrimination in Depth (ADD) program. Journal of the International Neuropsychological Society, 4, 608-620.

Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.

Denton, C. A., Vaughn, S., & Fletcher, J. M. (2003). Bringing research-based practice in reading intervention to scale. Learning Disabilities Research & Practice, 18, 201-211.

Ehri, L. C. (2004). Teaching phonemic awareness and phonics. In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 153-186). Baltimore, MD: Paul H. Brookes Publishing.

Elbaum, B., Vaughn, S., Hughes, M., & Moody, S. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605-619.


Elliott, J., Lee, S. W., & Tollefson, N. (2001). A reliability and validity study of the Dynamic Indicators of Basic Early Literacy Skills-Modified. School Psychology Review, 30, 33-49.

Fletcher, J. M., & Francis, D. J. (2004). Scientifically based educational research: Questions, designs, and methods. In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 59-80). Baltimore, MD: Paul H. Brookes Publishing.

Foorman, B. R., & Torgesen, J. (2001). Critical elements of classroom and small-group instruction promote reading success in all children. Learning Disabilities Research & Practice, 16, 203-212.

Gersten, R., & Dimino, J. (2001). The realities of translating research into classroom practice. Learning Disabilities Research & Practice, 16, 120-130.

Good, R., Kaminski, R., Laimon, D., & Johnson, D. (1992). Advances in assessment for the primary prevention of early academic problems (summary). The Oregon Conference Monograph. Eugene, OR: University of Oregon.

Good, R. H., & Kaminski, R. A. (2003). Dynamic Indicators of Basic Early Literacy Skills (DIBELS), 6th edition: Administration and scoring guide. University of Oregon: Sopris West Educational Services.

Havey, J. M., Story, N., & Buker, K. (2002). Convergent and concurrent validity of two measures of phonological processing. Psychology in the Schools, 39, 507-514.

Hintze, J. M., Callahan, J. E., Matthews, W. J., Williams, S. A., & Tobin, K. G. (2001). Oral reading fluency and prediction of reading comprehension in African American and Caucasian elementary school children. Manuscript submitted for publication.

Hintze, J. M., Ryan, A. L., & Stoner, G. (2003). Concurrent validity and diagnostic accuracy of the Dynamic Indicators of Basic Early Literacy Skills and the Comprehensive Test of Phonological Processing. School Psychology Review, 32, 541-556.

Kaminski, R. A., & Good, R. H. (1996). Towards a technology for assessing basic early literacy skills. School Psychology Review, 25, 215-227.

Kennedy, K. M., & Backman, J. (1993). Effectiveness of the Lindamood Auditory Discrimination in Depth program with students with learning disabilities. Learning Disabilities Research & Practice, 8, 253-259.

Klingner, J. K., Ahwee, S., Pilonieta, P., & Menendez, R. (2003). Barriers and facilitators in scaling up research-based practices. Exceptional Children, 69, 411-429.


Lane, H. B., Pullen, P. C., Eisele, M. R., & Jordan, L. (2002). Preventing reading failure: Phonological awareness assessment and instruction. Preventing School Failure, 46, 101-110.

Lindamood, C. (1972, May). The LAC test: A new look at auditory conceptualization and literacy development K-12. Paper presented at the annual meeting of the International Reading Association, Detroit, MI.

Lindamood, C., & Lindamood, P. (1971). Lindamood Auditory Conceptualization Test. Austin, TX: Pro-Ed.

Lindamood, P., & Lindamood, P. (1998). The Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech. Austin, TX: Pro-Ed.

Logan, K. R., Bakeman, R., & Keefe, E. B. (1997). Effects of instructional variables on engaged behavior of students with disabilities in general education classrooms. Exceptional Children, 63, 481-497.

McDougall, S., Hulme, C., Ellis, A., & Monk, A. (1994). Learning to read: The role of short-term memory and phonological skills. Journal of Experimental Child Psychology, 58, 112-133.

McGuinness, D., McGuinness, C., & Donohue, J. (1995). Phonological training and the alphabet principle: Evidence for reciprocal causality. Reading Research Quarterly, 30, 830-852.

National Assessment of Educational Progress (NAEP). (2005, July). National trends in reading by average scale scores. Retrieved October 6, 2005, from http://nces.ed.gov/nationsreportcard/ltt/results2004/nat-reading-scalescore.asp

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development.

No Child Left Behind Act of 2001: Reauthorization of the Elementary and Secondary Education Act legislation and policies website. (2002, July 11). Retrieved July 28, 2003, from http://www.ed.gov/offices/OESE/esea/

O'Dea, D. (1998). Improving reading and decoding skills through the use of multisensory teaching strategies. (ERIC Document Reproduction Service No. ED422685)

Olofsson, A., & Niedersoe, J. (1999). Early language development and kindergarten phonological awareness as predictors of reading problems: From 3 to 11 years of age. Journal of Learning Disabilities, 32, 464-472.


Pugh, K. R., Mencl, W. E., Jenner, A. R., Lee, J. R., Katz, L., Frost, S. J., Shaywitz, S. E., & Shaywitz, B. A. (2001). Neuroimaging studies of reading development and reading disability. Learning Disabilities Research & Practice, 16, 240-249.

Rashotte, C. A., MacPhee, K., & Torgesen, J. K. (2001). The effectiveness of a group reading instruction program with poor readers in multiple grades. Learning Disability Quarterly, 24, 119-134.

Roberts, R. G. (1975). Effects of the "Auditory Discrimination in Depth Program" on auditory conceptualization and reading achievement. (ERIC Document Reproduction Service No. ED117675)

Shaywitz, S. E. (1996). Dyslexia. Scientific American, 275, 98-104.

Shinn, M. R., & McConnell, S. (1994). Improving general education instruction: Relevance to school psychologists. School Psychology Review, 23, 351-371.

Smith, S. B., Simmons, D. C., & Kame'enui, E. J. (1995). Synthesis of research on phonological awareness: Principles and implications for reading acquisition (Technical Report No. 21). Eugene, OR: National Center to Improve the Tools of Educators, University of Oregon.

Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.

Sofie, C. A., & Riccio, C. A. (2002). A comparison of multiple methods for the identification of children with reading disabilities. Journal of Learning Disabilities, 35(3), 234-244.

Speece, D. L., Mills, C., & Ritchey, K. D. (2003). Initial evidence that letter fluency tasks are valid indicators of early reading skill. Journal of Special Education, 36, 223-233.

Stone, C. A., & Doane, J. A. (2001). The potential for empirically based estimates of expected progress for students with learning disabilities: Legal and conceptual issues. School Psychology Review, 30, 473-486.

Torgesen, J. K. (2002). The prevention of reading difficulties. Journal of School Psychology, 40, 7-26.

Torgesen, J. K. (2004). Lessons learned from research on interventions for students who have difficulty learning to read. In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 355-382). Baltimore, MD: Paul H. Brookes Publishing.


Torgesen, J. K., Alexander, A. W., Wagner, R. K., Rashotte, C. A., Voeller, K., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34, 33-58.

Torgesen, J., Wagner, R., Rashotte, C., Rose, E., Lindamood, P., Conway, T., & Garvin, C. (1999). Preventing reading failure in young children with phonological processing disabilities: Group and individual responses to instruction. Journal of Educational Psychology, 91, 579-593.

Vaughn, S., Linan-Thompson, S., Kouzekanani, K., Bryant, D. P., Dickson, S., & Blozis, S. A. (2003). Reading instruction grouping for students with reading difficulties. Remedial and Special Education, 24, 301-316.

Wagner, R., Torgesen, J., & Rashotte, C. (1999). Comprehensive Test of Phonological Processing (CTOPP). Austin, TX: Pro-Ed.

Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Achievement. Itasca, IL: Riverside Publishing.


BIOGRAPHICAL SKETCH

Elayne Proesel Colón was born and raised in Orlando, Florida. She was the only child of Carol and Glenn Proesel. She attended the University of Central Florida (UCF) for her undergraduate studies in psychology and graduated with a Bachelor of Arts (B.A.) degree from UCF in 1998. Upon graduation, she and her husband, Jorge Colón, moved to Gainesville, Florida, to continue their studies.

In the fall of 2000, Elayne entered the School Psychology Program (SPP) in the Department of Educational Psychology at the University of Florida. She received her Master of Arts in Education (M.A.E.) degree in December of 2002. During the 2004-2005 academic year, Elayne completed her internship at the Multidisciplinary Diagnostic and Training Program (MDTP) at the University of Florida. Upon completing her internship, Elayne was hired by MDTP and currently serves as an Educational Consultant. After five and one-half years in the SPP, she intends to graduate with her Doctor of Philosophy (Ph.D.) degree in school psychology in December 2005 from the University of Florida.

Elayne and Jorge have one beautiful son, Avery Ryan, who was born in December of 2003.


Permanent Link: http://ufdc.ufl.edu/UFE0013098/00001

Material Information

Title: Utility of the Lindamood Phoneme Sequencing Program (LiPS) for Classroom-Based Reading Instruction
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0013098:00001

Permanent Link: http://ufdc.ufl.edu/UFE0013098/00001

Material Information

Title: Utility of the Lindamood Phoneme Sequencing Program (LiPS) for Classroom-Based Reading Instruction
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0013098:00001


This item has the following downloads:


Full Text












UTILITY OF THE LINDAMOOD PHONEME SEQUENCING PROGRAM (LiPS)
FOR CLASSROOM-BASED READING INSTRUCTION















By

ELAYNE PROESEL COLON


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2005





























Copyright 2005

by

Elayne Proesel Col6n















ACKNOWLEDGMENTS

This research, from start to finish, spanned over approximately three years. There

are several people that I would like to acknowledge as they have contributed a great deal

of love, support, and encouragement throughout the duration of this project. First and

foremost, I would like to thank my forever loving husband, Jorge, for his continuous

words of wisdom and reassurance. My parents, Glenn and Carol Proesel, also contributed

so much in this way. In addition, I would like to thank my Chair, Nancy Waldron, for the

countless hours of brainstorming, editing, and advising she offered so patiently. Special

thanks also to the members of my committee: Tina Smith-Bonahue, Holly Lane, Lynda

Hayes, and John Kranzler. Last, but certainly not least, I would like to acknowledge my

wise and wonderful son, Avery. He was more a part of this research than he realizes, and

this is also for him.
















TABLE OF CONTENTS

                                                                        page

ACKNOWLEDGMENTS ..... iii

LIST OF TABLES ..... vii

LIST OF FIGURES ..... viii

ABSTRACT ..... ix

CHAPTER

1 INTRODUCTION AND REVIEW OF THE LITERATURE ..... 1

    Phonological Awareness ..... 5
        Definition ..... 5
        Phonological Awareness v. Phonemic Awareness ..... 6
    Phonological Awareness as a Predictor of Reading Achievement ..... 6
        Assessment of Phonological Awareness ..... 7
    Intervening with Children Struggling to Learn to Read ..... 9
        Critical Elements of Instruction ..... 10
        Measuring Progress and Reading Achievement Outcomes ..... 12
    Selecting Reading Curricula and Delivering Instruction ..... 14
    The Lindamood Phoneme Sequencing Program ..... 16
        Program Purpose ..... 16
        Program Sequence ..... 16
            Setting the climate for learning ..... 16
            Identifying and classifying speech sounds ..... 17
            Tracking speech sounds ..... 17
            Associating sounds and symbols ..... 18
            Spelling (encoding) and reading (decoding) ..... 18
        Program Paths ..... 18
        Key Program Components ..... 19
        Training of Instructors ..... 20
        Previous LiPS Research ..... 20
            Individual implementation ..... 21
            Small group implementation ..... 28
    Purpose of this Study ..... 32

2 METHOD ..... 35

    Participants ..... 35
    Settings ..... 36
    Instructors ..... 36
    Procedure ..... 37
        Treatment Integrity ..... 37
        Student Progress/Outcomes ..... 41
    Measures ..... 41
        Woodcock Johnson Tests of Achievement (WJ-III) ..... 42
        Comprehensive Test of Phonological Processing (CTOPP) ..... 42
        Dynamic Indicators of Basic Early Literacy Skills (DIBELS) ..... 43
        Lindamood Auditory Conceptualization Test (LAC) ..... 44
    Analysis of Data ..... 46

3 RESULTS ..... 48

    Descriptive Data ..... 50
        Whole Class Instruction ..... 50
        Treatment Integrity ..... 52
            Inclusion of key program elements ..... 52
            Program paths ..... 58
            Instructor 1 ..... 59
            Instructor 2 ..... 60
        Delivery of Instruction ..... 61
            Decisions based on needs of classroom teacher and school ..... 61
            Decisions based on needs of students ..... 65
            Decisions based on training and experience of instructors ..... 68
        One-on-One Implementation ..... 70
        Summary of Descriptive Results ..... 74
    Student Outcome Data ..... 78
        Gains Demonstrated After LiPS Intervention for All Students ..... 80
        Student Outcomes: A Comparison of Measures ..... 81
        Student Outcomes: Differences Between Instructors ..... 82
        Analyses of Covariance ..... 83
        Effect Sizes ..... 84
    Student Progress: A Closer Look ..... 85
        Benchmark Comparisons ..... 85

4 DISCUSSION ..... 89

    Reflecting on the Results ..... 93
        Treatment Integrity ..... 93
            Use of mirrors ..... 94
            Tracking with colored blocks ..... 95
            Assessment of student progress ..... 96
            Support of school and classroom teachers ..... 97
            Management of student behavior during instruction ..... 98
        Student Outcomes ..... 100
            Statistical analyses ..... 100
            Benchmarks ..... 101
    Implications for Practice ..... 103
    Limitations of the Current Study ..... 105
        Internal Validity ..... 105
        External Validity ..... 106
    Directions for Future Research ..... 107

APPENDIX

A RECORD OF PROGRAM DELIVERY ..... 109

B CLASSROOM OBSERVATION-ERROR HANDLING ..... 111

C CLASSROOM OBSERVATION-STUDENT OPPORTUNITY TO RESPOND ..... 112

D STUDENT ENGAGEMENT/ON-TASK BEHAVIOR ..... 113

E INITIAL INSTRUCTOR INTERVIEW ..... 114

F INSTRUCTOR INTERVIEW PERSPECTIVES ..... 115

LIST OF REFERENCES ..... 116

BIOGRAPHICAL SKETCH ..... 121

LIST OF TABLES

Table                                                                   page

1   Number of Observations by Instructor for Whole Group Intervention ..... 51

2   Record of Program Delivery, Percentages of Observations by Instructors Across
    Intervention Period ..... 53

3   Description of Instruction: Sessions, Time, and Delivery ..... 62

4   Percentage of Student Engagement by Instructor ..... 67

5   Number of Observations ..... 71

6   Record of Program Delivery, Percentages by Instructors for One-on-One
    Treatment ..... 74

7   Record of Program Delivery, Percentages for Whole Group versus One-on-One ..... 76

8   Summary of Level of Treatment Integrity for Key Program Components Across
    Settings ..... 77

9   Raw Score Means, Standard Deviations for Pretests/Posttests Across All
    Participants ..... 80

10  Estimated Marginal Means ..... 81

11  Bonferroni Pairwise Comparisons ..... 82

12  Means and Standard Deviations by Instructor and Classroom ..... 83

13  Student Differences at Posttest by Instructor ..... 84

14  Percentage of Students at Benchmarks at Pretest/Posttest on DIBELS ..... 87

15  Percentage of Students Below Recommended Minimum at Pretest/Posttest on
    LAC ..... 88

LIST OF FIGURES

Figure                                                                  page

1   Vertical Program Paths (Recommended) ..... 59

2   Horizontal Program Paths ..... 59

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

UTILITY OF THE LINDAMOOD PHONEME SEQUENCING PROGRAM (LiPS)
FOR CLASSROOM-BASED READING INSTRUCTION

By

Elayne Proesel Colón

December, 2005

Chair: Nancy Waldron
Major Department: Educational Psychology

Phonological awareness training has been found to be a crucial component of beginning reading instruction. One reading program that is often used in the schools and offers phonological awareness training is the Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech (LiPS). The purpose of this study was to investigate how the LiPS program, a program initially designed for one-on-one use, was adapted and employed in schools with large groups of kindergarten students. Descriptive information was collected to compare the treatment integrity of the LiPS program in the school setting with a clinical setting where the program was employed one-on-one. Additionally, pretest and posttest data were collected on the students in the kindergarten classrooms to assess student outcomes. The assessment of student outcomes involved four assessment instruments: the Lindamood Auditory Conceptualization Test (LAC); the Woodcock-Johnson Tests of Achievement (WJ-III) Word Attack and Word Identification tasks; the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Letter Naming and Phoneme Segmentation tasks; and the Comprehensive Test of Phonological Processing (CTOPP) Elision, Blending Words, and Sound Matching tasks.

Results indicated that, in the school setting, instructors demonstrated low levels of treatment integrity as compared to a high degree of treatment integrity that was maintained by the instructors in a clinical setting. Important LiPS program components that were omitted in the school setting included Tracking following a prescribed sequence, formal assessment of student progress or mastery, and key instructional materials. When considering student outcomes for participants across the two school sites, statistical analyses yielded positive mean gains across all students for each assessment measure. Furthermore, mean gains achieved on the LAC were statistically significantly greater than gains on the three tasks of the CTOPP (Elision, Blending Words, and Sound Matching) and two tasks of the WJ-III (Word Attack and Word Identification). No statistically significant differences were noted between mean gains on the LAC versus DIBELS tasks. When considering benchmark progress for the LAC and DIBELS measures, gains across all students were not as great for the LAC as for the DIBELS tasks. Implications for use of the LiPS program in school settings are discussed.

CHAPTER 1
INTRODUCTION AND REVIEW OF THE LITERATURE

From educators to politicians to parents, there is widespread concern that reading

instruction in our public schools is not as effective as it should be, resulting in a sense of

urgency to improve literacy outcomes for our children (Torgesen, 2002). After all, the

ability to read is an essential skill in today's world. "Reading is a foundation skill for

school learning and life learning; the ability to read is critical for success in modern

society" (Lane, Pullen, Eisele, & Jordan, 2002, p. 101). Data from the 2005 National

Assessment of Educational Progress (NAEP) report indicated that 36 percent of fourth

graders and 27 percent of eighth graders were reading below a basic reading level

(NAEP, 2005). While learning to read has consistently been an educational priority for

young schoolchildren for many decades, the focus placed on literacy outcomes today far

exceeds pressures placed on students, educators, and parents in the past. According to

Snow, Burns, and Griffin (1998), "The demands are far greater than those placed on the

vast majority of schooled literate individuals a quarter-century ago" (p. 20).

This urgency for educators to address students' literacy needs is fueled by recent

empirical findings related to outcomes for struggling readers. For example, Snow et al.

(1998) reported that a student who is not a reasonably proficient reader by the end of

third grade is very unlikely to graduate from high school. Therefore, "it is not just that the

teaching of reading is more important than ever before, but that it must be taught better

and more broadly than ever before" (Adams, 1990, p. 26).

Issues of quality instruction and early intervention to address those students at risk

for reading failure pervade the current reading research literature (Adams, 1990; Snow et

al., 1998; Torgesen, 2002). Fortunately, knowledge and understanding of how children

learn to read and why many struggle have increased exponentially over the last three

decades (Denton, Vaughn, & Fletcher, 2003). For example, dozens of professional

organizations exist, such as the International Reading Association, the National Reading

Conference, and the International Dyslexia Society, with members dedicated to

understanding, remediating, and preventing reading failure. While reading research

continues to evolve and educators learn increasingly more about what it takes to be a

skilled reader, significant monetary and intellectual resources within the last decade have

been devoted to learning more about and improving student reading achievement at the

local, state, and national level.

Several groups have worked diligently in recent years to synthesize the extant

literature related to reading achievement in some meaningful way and offer

recommendations and guidelines to focus future efforts. For example, the U.S.

Department of Education and the U.S. Department of Health and Human Services

requested that the National Academy of Sciences establish a committee to examine issues

surrounding the prevention of reading difficulties in young children (Snow et al., 1998).

In a report exceeding four hundred pages in length and entitled Preventing Reading

Difficulties in Young Children, the committee focused on summarizing the extant

literature related to the effectiveness of interventions for children struggling to learn to

read and providing recommendations based on empirical evidence to assist educators and

parents in their work with struggling readers. In another effort, the National Reading

Panel (2000) reviewed more than 100,000 empirical studies related to reading instruction

and created an influential document, entitled Teaching Children to Read, to assist parents

and teachers, among others, in identifying key skills and methods consistently related to

reading success. In their review of the reading research literature, the National Reading

Panel identified effective instructional practices related to various aspects of reading

including phonemic awareness, phonics, fluency, and comprehension.

Politicians and government officials have also played a role in the movement

toward increasing academic success and the quality of instruction for children in the

United States by introducing major legislation in recent years. In January of 2002,

President Bush signed into law the No Child Left Behind Act of 2001, which served to

revise and reauthorize the Elementary and Secondary Education Act. In addition to

redefining the federal role in K-12 education, this act focuses on four primary issues:

increasing accountability within the schools, providing increased flexibility at the local

level, expanding options for parents who are dissatisfied with their child's current

educational situation, and understanding and infusing research-based practices into

educational curricula (No Child Left, 2002).

While a sound knowledge base exists regarding effective reading practices that

produce positive outcomes for students, these instructional methods are not widely

included in typical classroom instruction (Denton et al., 2003). Thus, there is an

increasing focus on bridging the research-to-practice gap and improving our

understanding of the process of transferring empirically supported instructional methods

related to reading into the classroom and sustaining these practices. Problems persist in

translating research into classroom practices (e.g., from the clinic to the classroom) and

scaling up these research-based practices to affect large numbers of students in the

schools (Denton et al., 2003; Gersten & Dimino, 2001; Klingner, Ahwee, Pilonieta, &

Menendez, 2003). Factors that have been cited to affect the scaling up and sustaining of

educational innovations include the link between researchers and teachers, teacher access

to research-based information (i.e., professional development and support), and the

feasibility of knowledge and practices (i.e., practical and applicable in classrooms)

(Boardman et al., 2005; Denton et al., 2003).

Fortunately, research indicates that many of the instructional practices that are

effective for special education students are at least equally beneficial for general

education students as well (Vaughn, Gersten, & Chard, 2000, in Boardman et al., 2005).

Therefore, especially for prevention and early intervention services for younger students,

research-based practices demonstrated to assist the students at-risk for later reading

failure may be delivered in the general education classrooms to serve students with a

range of abilities. Meeting the needs of more students simultaneously may contribute to

greater acceptance of certain practices among educators and foster maintenance of these

practices in the schools.

Educational researchers continue to work to determine the specific factors that play

a role in children's reading development and success. "Although we do not yet

understand the conditions that must be in place to prevent reading difficulties in all

children, we do know what must be done to very substantially reduce the number of

children who fail to acquire adequate reading skills during the primary grades of

elementary school" (Torgesen, 2002, p. 22). Unfortunately, challenges persist regarding

ensuring that practitioners are equipped and prepared to implement research-validated

reading practices in classrooms and with groups of students. Regarding the prevention of

later reading difficulties, especially for younger students (i.e., kindergarteners) and

beginning readers, one foundational reading skill that has received significant attention is

phonological awareness and its instruction (Snow et al., 1998; Torgesen, 2002).

Phonological Awareness

Definition

Phonological awareness has been described as the "conscious sensitivity to the

sound structure of language" (Lane et al., 2002, p. 101). In other words, it is the ability to

analyze spoken language and recognize that it consists of smaller units. Phonological

awareness is an umbrella term used to describe awareness of spoken language at the

word, syllable, onset-rime, and phoneme level (Lane et al., 2002). Individuals with strong

phonological awareness skills can detect, match, blend, segment, and manipulate speech

sounds, and oftentimes the ability to rhyme is the first phonological skill that children

master (Lane et al., 2002). In fact, "sometimes children have trouble learning to decode

because they are completely unaware of the fact that spoken language is segmented into

sentences, into syllables, and into phonemes" (Williams, 1987, in Blachman, 2000, p.

484). The development of phonological awareness typically begins by age 3 and

improves over many years as the child develops academically (Snow et al., 1998).

As indicated previously, there is a range of phonological awareness skills that

children develop over time. Adams (1990) identifies at least five different levels of

awareness: (1) knowledge of nursery rhymes, an ear for the sound of words; (2) ability

to compare and contrast the sounds of words for rhyme and alliteration; (3) the ability to

blend and segment at the syllable level; (4) the ability to blend and segment at the

phoneme level; and (5) the ability to manipulate phonemes by adding, deleting, or

moving phonemes to create new words. This may be interpreted as a developmental

sequence. However, the issue of phonological awareness developing in a stage-like

manner is scantily addressed in the reading research literature.

Phonological Awareness v. Phonemic Awareness

The terms phonological awareness and phonemic awareness are often inaccurately

used interchangeably. Whereas phonological awareness refers to a general awareness of

the sound structure of language, including the ability to rhyme and blend or segment

larger word parts, phonemic awareness specifically refers to an individual's ability to

attend to the individual sounds in spoken language. Those with strong phonemic

awareness skills are able to manipulate individual phonemes, or sounds (Lane et al.,

2002). Those with sound phonemic awareness skills have an appreciation for rhyme and

alliteration, as well as the understanding that every word consists or is created from a

sequence of phonemes (Snow et al., 1998). Thus, phonemic awareness is believed to

contribute to later reading development and achievement. "An awareness of phonemes is

key to understanding the logic of the alphabetic principle and thus to the learnability of

phonics and spelling" (Snow et al., 1998, p. 52). Moreover, while some sense of

phonemic awareness is generally evident in the typically developing child beginning at a

young age, this skill often must be specifically taught or honed. "Because of the physical

and psychological nature of phonemes as well as the nature of human attention, few

children acquire phonemic awareness spontaneously" (Adams, Treiman, & Pressley,

1998, in Snow et al., 1998, p. 54).

Phonological Awareness as a Predictor of Reading Achievement

Skills in phonological awareness have been demonstrated to be reliable predictors

of reading achievement. Moreover, phonological awareness is cited as a key to beginning

reading acquisition (Smith, Simmons, & Kame'enui, 1995). Specifically, tasks such as

identifying the first sound in a word, blending phonemes into a word, and analyzing

sounds within words have been cited as effective predictors of reading development

(Olofsson & Niedersoe, 1999). It is believed that instruction in these and similar

phonological awareness skills assist in preparing children to learn and benefit from

phonics (Lane et al., 2002). Therefore, children with poor phonological awareness skills

may be at risk for having difficulties in learning to read in the primary grades. In fact, it

has been noted that "children who enter first grade low in knowledge about the

phonological features of words or who have difficulties processing the phonological

features of words are at high risk for difficulties responding to early reading instruction"

(Torgesen, 2002, p. 12). Yet, phonological awareness skills are not necessarily fully

developed or intact prior to beginning reading instruction. Phonological awareness skills

may strengthen as the child develops into a mature reader. "The correlation between

reading and phonological awareness, which is already substantial by the start of school,

becomes stronger during the early grades" (Snow et al., 1998, p. 56). However,

phonological awareness abilities remain a robust predictor of early reading achievement

even when assessed in very young preschool children (Blachman, 2000). In fact, even

when individual differences in intelligence are considered, phonological awareness

abilities assessed in preschool children continue to be significant predictors of later word

recognition and spelling skills (Kennedy & Backman, 1993).

Assessment of Phonological Awareness

It is important to identify early those children that are at risk for reading failure.

Given that phonological awareness is an accurate and reliable predictor of reading

achievement, assessing a child's phonological awareness skills is a logical first step in

helping these children. "Educators face the formidable challenge of determining which

children have weaknesses in phonological awareness and, therefore, which children are

likely to develop reading problems" (Lane et al., 2002, p. 103). Since phonological

awareness skills are important to reading development and later achievement, what types

of tasks are being used to determine whether a child has the necessary precursors for

reading success?

The assessment of an individual's phonological awareness typically involves one or

more of the following tasks: isolating or segmenting one or more phonemes in a spoken

word, blending or combining a sequence of separate phonemes into a word, or manipulating

(adding, subtracting, or rearranging) the phonemes within a word (Snow et al., 1998).

Assessment tools evaluating an individual's phonological or phonemic awareness skills

do not involve letters (Torgesen, 2002). It is phonological awareness tasks that involve

manipulating spoken language that assist in identifying children that are at risk for

reading difficulties. Researchers have found that children who are successful on

phonological awareness tasks such as deletion (e.g., say hit without saying the /h/ sound)

and categorization (e.g., bat and big go together because they both start with /b/) learn to

read and spell with greater ease than those children that perform poorly on such tasks

(Blachman, 2000).
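
To make the structure of these tasks concrete, the following minimal sketch (in Python) models a deletion item and a categorization item, representing words as sequences of phonemes rather than letters. The items shown are hypothetical illustrations, not items drawn from any published measure.

    # Illustrative sketch of two phonological awareness task types described
    # above. Words are modeled as lists of phonemes (not letters), since these
    # tasks operate on sounds: "hit" -> /h/ /i/ /t/.

    def delete_phoneme(phonemes, target):
        """Phoneme deletion: say the word without the target sound
        (e.g., 'say hit without the /h/ sound' yields /i/ /t/)."""
        return [p for p in phonemes if p != target]

    def same_first_sound(word_a, word_b):
        """Categorization: do two words begin with the same phoneme
        (e.g., 'bat' and 'big' both start with /b/)?"""
        return word_a[0] == word_b[0]

    # Hypothetical items for illustration only.
    hit = ["h", "i", "t"]
    print(delete_phoneme(hit, "h"))                            # ['i', 't']
    print(same_first_sound(["b", "a", "t"], ["b", "i", "g"]))  # True

Note that both operations act on the phoneme sequence, never on spelling, which is why such tasks can be administered entirely orally.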

Although informal methods may be used, many formal measures of phonological

awareness have been developed and are available for widespread use (see Lane et al.,

2002). Furthermore, the assessment of phonological awareness can be accomplished

individually or in a group setting. Ultimately, it is important to assess a broad range of

skills in order to have the best estimate of future reading performance. "Both conscious

awareness of the phonemes in words and ability to accurately identify them within words

is necessary in learning to phonemically decode words in print" (Torgesen, 2002, p. 12).

Research has been conducted in recent years regarding the relationship between

phonological awareness and intelligence. It is believed that strengths or weaknesses in

phonemic awareness do not necessarily depend on an individual's intellectual ability or

general verbal skills (Pugh et al., 2001; Shaywitz, 1996; Torgesen, 2002). "Weaknesses

in phonemic awareness characterize children with reading problems across a broad span

of general verbal ability" (Torgesen, 2002, p. 12). It has been found that phonological

awareness skills predict future reading achievement even when intelligence is controlled.

"Tests of phonological awareness are among the best predictors of children's progress in

learning to read and typically account for large amounts of variance in reading skill even

after the effects of age and IQ have been controlled for" (McDougall, Hulme, Ellis, &

Monk, 1994).

Intervening with Children Struggling to Learn to Read

For those individuals identified as having weaknesses in phonological or phonemic

awareness, it is important to intervene as early as possible in order to prevent further

reading difficulties. "Children who are delayed in the development of phonemic

awareness have a very difficult time making sense out of 'phonics' instruction"

(Torgesen, 2002, p. 12). Therefore, these students must obtain the necessary instruction

to strengthen their phonological awareness skills and prepare them for future reading

instruction. Early preventative or remedial efforts will prevent academic frustrations from

consuming these children. Therefore, the total number of negative side effects from

experiencing reading failure can be reduced (Olofsson & Niedersoe, 1999).

To foster phonological awareness, children must be exposed to print at an early

age. Among other things, this can be accomplished by reading to children, talking about

literature and storybook characters, and pointing out signs along the roadside. "Such

global awareness of the forms, functions, and uses of print provides not just the

motivation but the basic conceptual backdrop against which reading and writing may best

be learned" (Adams, 1990, p. 337). Current reading research literature explicitly indicates

that incorporating phonological awareness components into early reading instruction is

essential. Those children that have strong phonological awareness skills, either due to

explicit instruction or developed through early family and preschool literacy experiences,

appear to have an early reading and spelling advantage (Blachman, 2000).

Critical Elements of Instruction

While early literacy experiences including exposure to printed materials provide an

important foundation for later reading success, there are several critical elements of

reading instruction that should be present in early elementary classrooms. Reading

research has demonstrated that "there is strong evidence of a positive effect on reading

with intervention that combines phonological awareness instruction and explicit,

systematic instruction in reading" for children in kindergarten, first, and second grades

(Blachman, 2000, p. 486). Although it is beneficial for all children in the early

elementary grades to receive instruction in phonological awareness that is direct,

systematic, and explicit, there is a heightened necessity for this to occur for the struggling

reader or those students at risk for later reading failure. "Specifically, instruction for

children who have difficulties learning to read must be more explicit and comprehensive,

more intensive, and more supportive than the instruction required by the majority of

children" (Foorman & Torgesen, 2001, p. 206). Due to deficiencies in phonological

awareness abilities, some children will not discover connections between spoken and

written language independently, despite having had quality preschool literacy experiences

and opportunities to interact with language (Blachman, 2000).

By ensuring that these critical elements of classroom instruction are present for all

students, especially those most in need of reading support, the percentage of children

remaining poor readers can be significantly reduced (see Torgesen, 2002). For children

with the most severe reading difficulties, phonological awareness interventions that are

longer, more intense and explicit, and structured to move beyond accuracy in decoding

are necessary to facilitate fluent word recognition (Blachman, 2000), and therefore later

academic success.

To deliver reading instruction in intensive, meaningful, and efficient ways,

teachers employ various grouping practices in their classrooms. These include organizing

and delivering instruction to a whole classroom of students simultaneously, in small

groups, or individually. The efficacy of these different instructional arrangements on

teaching reading to average and struggling readers has been detailed in the reading

research literature. Generally, research has demonstrated that small group and one-on-one

instructional arrangements represent the most effective grouping practices for reading

instruction (Elbaum, Vaughn, Hughes, & Moody, 2000). However, many teachers

consider whole class instruction to be the preferred approach to reading instruction

(Elbaum et al., 1999) for general and special education students, and this continues to be

the most common practice (Elbaum et al., 1999; Logan, Bakeman, & Keefe, 1997).

Unfortunately, the ways in which teachers group students for reading instruction affect

student outcomes, and small group and individual instruction have been demonstrated to

be more effective than whole class instruction (Ehri, 2004; Elbaum et al., 2000; Vaughn

et al., 2003).

Measuring Progress and Reading Achievement Outcomes

There is empirical research to demonstrate that phonological awareness plays a

significant role in reading ability and disability (Lane et al., 2002; Olofsson & Niedersoe,

1999; Smith et al., 1995; Stone & Doane, 2001; Torgesen, 2002). Regardless of the

grouping arrangements or the specific instructional content, it is important to consider

methods that can be used to monitor progress and evaluate outcomes for those students

that are at risk for reading failure or are involved in some sort of reading intervention. In

selecting measures to document student progress or evaluate the efficacy of an

intervention, it is important to consider the psychometric adequacy and the degree of

specificity of each outcome measure (Stone & Doane, 2001).

Norm-referenced, standardized assessment instruments provide information about a

student's current level of functioning compared to a large, often nationally representative

group of same-aged peers. This information can be especially important when evaluating

student achievement gains or in making eligibility determinations. Norm-referenced tests

can be used district-wide, statewide, or even nationally, to provide a unified method for

determining eligibility for special education programs (Shinn & McConnell, 1994).

Norm-referenced, standardized assessment tools are often valued for their ease of

interpretation, presumed technical adequacy, and provision of norms to compare student

performance (Sofie & Riccio, 2002). An example of a norm-referenced, standardized

measure that can be implemented to assess skills such as word identification and reading

fluency is the Woodcock Johnson Tests of Achievement (WJ-III; Woodcock, McGrew, &

Mather, 2001). The Comprehensive Test of Phonological Processing (CTOPP; Wagner,

Torgesen, & Rashotte, 1999) is one common standardized measure used to evaluate an

individual's phonological awareness abilities.

While there is often an emphasis on selecting norm-referenced, standardized tests

to make special education decisions (Sofie & Riccio, 2002), other evaluative tools can be

employed to monitor student progress or document achievement gains. Curriculum-based

measurement (CBM) is a dynamic assessment tool that can be employed to monitor

student progress and assist in evaluating achievement outcomes relative to a particular

intervention. CBM "relies on a traditional psychometric framework by incorporating

conventional notions of reliability and validity so that the standardized test administration

and scoring methods have been designed to yield accurate and meaningful information"

(Deno, Fuchs, Marston, & Shin, 2001, p. 508). Therefore, using CBM, student

performance can be closely monitored throughout instruction, and decisions can be made

immediately as to whether academic progress is satisfactory as this progress relates to the

curriculum. One example of curriculum-based reading materials that have gained

enormous popularity in recent years as a means of identifying at-risk students and

monitoring student progress is the Dynamic Indicators of Basic Early Literacy Skills, or

DIBELS (Good, Kaminski, Laimon, & Johnson, 1992; Kaminski & Good, 1996). One

type of DIBELS task that has been employed with young children is letter naming

fluency, which measures the accuracy and speed with which a child can provide the

names of upper and lower case letters of the alphabet. Letter naming fluency, or letter

identification, is considered to be one of the strongest predictors of school readiness and

later reading achievement (Elliott, Lee, & Tollefson, 2001; Snow et al., 1998; Speece,

Mills, & Ritchey, 2003).
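
As a rough illustration of how a timed, fluency-based indicator of this kind can be scored and monitored over repeated probes, consider the Python sketch below. The probe durations, scores, and slope calculation are invented for illustration and do not reproduce DIBELS administration or scoring rules.

    # Rough sketch: scoring a timed letter-naming probe and tracking progress
    # across repeated CBM-style probes. All numbers are hypothetical.

    def letters_per_minute(correct, seconds):
        """Convert a timed probe (letters named correctly in `seconds`)
        to a per-minute fluency score."""
        return correct * 60.0 / seconds

    def weekly_slope(scores):
        """Average week-to-week change across an ordered series of probe
        scores, a crude stand-in for progress-monitoring trend lines."""
        gains = [b - a for a, b in zip(scores, scores[1:])]
        return sum(gains) / len(gains)

    # Hypothetical kindergartner: 27 letters correct in a 60-second probe.
    print(letters_per_minute(27, 60))   # 27.0

    # Hypothetical weekly probe scores over six weeks.
    probes = [14, 17, 19, 24, 26, 27]
    print(weekly_slope(probes))         # 2.6 letters per minute gained/week

Because each probe takes only a minute or two, scores of this kind can be collected frequently enough to inform instructional decisions during, rather than after, an intervention.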

Selecting Reading Curricula and Delivering Instruction

Through comprehensive meta-analyses of the reading research literature and

influential documents such as Preventing Reading Difficulties in Young Children and

Teaching Children to Read, findings regarding effective prevention and intervention

strategies for teaching children to read are becoming apparent. For example, through its

review of the reading research literature, the National Reading Panel (2000) identified

several instructional components that should be present in order for children to succeed in

learning to read. Among these are the incorporation of phonemic awareness activities into

early reading instruction and the importance of direct instruction (National Reading

Panel, 2000). In order for beginning reading instruction or interventions to be effective,

phonological awareness training must involve explicitly and systematically teaching

children in small groups to manipulate phonemes with letters (National Reading Panel,

2000).

However, while prominent researchers and major legislation appear to

resoundingly support certain instructional practices and intervention methods as being

effective, a significant number of children continue to struggle in learning to read. "A

large number of students who should be capable of reading ably given adequate

instruction are not doing so, suggesting that the instruction available to them is not

appropriate" (Snow et al., 1998, p. 25). A host of instructional conditions remain in a

significant number of schools today that contribute to the failure of many students in

learning to read. These include lack of an appropriate curriculum, low expectations for

student success, teachers poorly trained in effective methods for teaching children to

learn to read, unavailability of appropriate curricular materials such as books, and noisy

or crowded classrooms (Snow et al., 1998). Oftentimes, phonological awareness

instruction, which has been documented to be a necessary component of early reading

instruction or intervention, is not adequately addressed in general classroom instruction.

Unfortunately, while the ability to manipulate and segment phonemes correlates strongly

with later reading success, these skills are generally unattainable unless children receive

formal reading instruction in these areas (Adams, 1990).

Just as children must acquire knowledge in a variety of academic subjects (social

studies, science, and mathematics, for example), quality reading instruction should

address various facets of reading in addition to phonological awareness. These skills

include phonics, fluency, vocabulary instruction, and text reading comprehension

(National Reading Panel, 2000). Phonological awareness training offers the necessary

foundational knowledge in the alphabetic principle and serves as one component in a

comprehensive instructional program, but other competencies must be acquired as well to

ensure student success in reading and writing (National Reading Panel, 2000). However,

for the young student, phonological awareness instruction provides the necessary

foundation for later instruction in other reading principles.

With the accountability pressures placed on schools and the financial woes of local

and state educational systems, educators must continue to find ways to deliver effective

reading instruction to students and provide meaningful interventions to those students

struggling to learn to read for whatever reason. One such program that focuses on

phonological awareness training and has received attention in the reading research

literature is the Lindamood Phoneme Sequencing Program. While originally devised for

clinical use with students in a one-on-one setting, this program has been "scaled up" for

use in the schools, in both prevention and intervention efforts, and has been delivered in

various grouping arrangements.

The Lindamood Phoneme Sequencing Program

Program Purpose

The Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech

(LiPS) is a multisensory program that incorporates auditory, visual, and tactile-

kinesthetic strategies to teach phonemic awareness, and eventually reading and spelling

skills, through direct instruction. "The contribution of the LiPS Program is the

development of an oral-motor, visual, and auditory feedback system that enables all

students to prove the identity, number, and order of phonemes in syllables and words"

(Lindamood & Lindamood, 1998, p. xiv). The LiPS program can be employed as the

primary language arts component of an educational curriculum, or can be used in

conjunction with existing reading materials used within the schools (Lindamood &

Lindamood, 1998).

Program Sequence

According to the LiPS manual, the progression of the program is generally

organized into five levels. These components include Setting the Climate for Learning,

Identifying and Classifying Speech Sounds, Tracking Speech Sounds, Associating

Sounds and Symbols, and Spelling (Encoding) and Reading (Decoding).

Setting the climate for learning

The purpose of the first level, Setting the Climate, is to engage the student actively

in the learning process by helping him to know what he will be doing and why

(Lindamood & Lindamood, 1998). In this portion of the program, the student learns more

about how to see, hear, and feel the sounds in words in order to make learning to read and

spell easier (Lindamood & Lindamood, 1998).

Identifying and classifying speech sounds

In Identifying and Classifying Speech Sounds, the student is introduced to the

process of categorizing speech sounds based on similarities and differences between

them. The student begins the multisensory experience, hearing, feeling, and seeing

sounds as they are produced in order to identify, classify, and label each of the consonant

and vowel sounds (Lindamood & Lindamood, 1998). As new sounds are introduced and

labeled, letter symbols may be presented concurrently, or this component may be

postponed to a later level.
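
One way to picture this classification step is to group consonants into unvoiced/voiced pairs that share the same mouth action and differ only in voicing; the Vertical Path described later builds on a small set of such pairs. The pairings in the Python sketch below are standard articulatory groupings, but the labels and layout are illustrative assumptions rather than the program's own materials.

    # Illustration: consonant sounds grouped into unvoiced/voiced pairs that
    # share the same place and manner of articulation (the mouth "feels" the
    # same; only voicing differs). The grouping is standard phonetics; the
    # layout and descriptions are this sketch's own.
    CONSONANT_PAIRS = {
        "p": "b",   # lips popping open
        "t": "d",   # tongue tip tapping behind the teeth
        "k": "g",   # back of tongue against the soft palate
        "f": "v",   # top teeth on the lower lip
        "s": "z",   # hissing airflow over the tongue
    }

    def classify(sound):
        """Return (pair_partner, voicing) for a known consonant, else None."""
        if sound in CONSONANT_PAIRS:
            return CONSONANT_PAIRS[sound], "unvoiced"
        for unvoiced, voiced in CONSONANT_PAIRS.items():
            if sound == voiced:
                return unvoiced, "voiced"
        return None

    print(classify("p"))  # ('b', 'unvoiced')
    print(classify("g"))  # ('k', 'voiced')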

Tracking speech sounds

Tracking involves the manipulation of concrete objects (mouth pictures, colored

blocks, and/or colored felts) in order to learn the identity, order, and sameness/

difference of speech sounds in syllables and words. "The ability to track sounds in

sequences and conceptualize them visually is a critical factor in reading and spelling"

(Lindamood & Lindamood, 1998, p. 11). During this component, the student hones his

skills in "tracking sounds in sequences and associating sounds and symbols with these

sequences" (Lindamood & Lindamood, 1998, p. 11). Beginning at the syllable level with

two to three sounds, tracking continues throughout the sequence of the program to the

multisyllable level. During the tracking activities, the student learns to track five types of

changes in syllables as one sound at a time is substituted, taken away, added, repeated, or

switched (Lindamood & Lindamood, 1998).
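
The five change types can be modeled as operations on a sequence of blocks, one block per sound. The Python sketch below is a minimal model of that idea, assuming a simple list representation; it illustrates the five manipulations named above rather than the program's actual procedure, and the colors and syllable are arbitrary.

    # Minimal model of the five tracking changes, with a syllable represented
    # as a list of blocks (one color per distinct sound).

    def substitute(blocks, i, new):      # one sound is substituted
        return blocks[:i] + [new] + blocks[i + 1:]

    def take_away(blocks, i):            # one sound is taken away
        return blocks[:i] + blocks[i + 1:]

    def add(blocks, i, new):             # one sound is added
        return blocks[:i] + [new] + blocks[i:]

    def repeat(blocks, i):               # one sound is repeated
        return blocks[:i + 1] + [blocks[i]] + blocks[i + 1:]

    def switch(blocks, i, j):            # two sounds are switched
        out = list(blocks)
        out[i], out[j] = out[j], out[i]
        return out

    syllable = ["red", "blue", "green"]        # a three-sound syllable
    print(substitute(syllable, 0, "yellow"))   # ['yellow', 'blue', 'green']
    print(take_away(syllable, 1))              # ['red', 'green']
    print(switch(syllable, 0, 2))              # ['green', 'blue', 'red']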

Associating sounds and symbols

If letter symbols were not previously introduced, they can be introduced at this

stage of the program. While letter symbols can be used for reading and spelling activities

(oftentimes mouth pictures are used initially for younger or more severe students),

tracking activities never involve the use of letter symbols. This is because tracking

activities involve the manipulation of phonemes, and letters oftentimes do not match to

sounds with a one-to-one correspondence (e.g., /th/ is one sound, but is represented by

two letters). "Sound-symbol association activities in Spelling and Reading should be

overlapped with the Tracking activities as a separate but concurrent task" (Lindamood &

Lindamood, 1998, p. 13).

Spelling (encoding) and reading (decoding)

Through reading and spelling activities (first with mouth pictures, then letter

symbols), the student has the opportunity to integrate the auditory tracking skill with the

sound-symbol associations developed in previous levels of the program (Lindamood &

Lindamood, 1998). Spelling and reading tasks extend from the simple syllable level to

the complex and multisyllable levels, depending on the age and developmental level of

the student.

Program Paths

There are two paths through the LiPS program: the Horizontal Path and the Vertical

Path (Lindamood & Lindamood, 1998). The order of progression through the program

materials depends on the age and developmental level of the student, as well as instructor

preference. In the Horizontal Path, all consonant sounds are presented first, followed by

all vowel sounds. Then tracking, reading, and spelling of syllables and words are

introduced, from simple, to complex, to multisyllable words. The Vertical Path, however,

presents three consonant pairs and three vowel sounds, and uses these to track, read, and

spell simple syllables and words. Next, the remaining consonants and vowels are

introduced slowly as tracking, reading, and spelling continues to the multisyllable level.

The Vertical Path is deemed most appropriate for younger children, children with

developmental delays, or those students that have experienced limited academic success

(Lindamood & Lindamood, 1998).
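
Because the two paths differ chiefly in the order in which the same material is introduced, they can be summarized as two instructional sequences, as in the Python sketch below. The step labels compress the description above and are not lesson titles from the LiPS manual.

    # Sketch of the two orderings described above; step names summarize the
    # text rather than quoting program materials.

    HORIZONTAL_PATH = [
        "introduce all consonant sounds",
        "introduce all vowel sounds",
        "track/read/spell simple syllables and words",
        "track/read/spell complex syllables and words",
        "track/read/spell multisyllable words",
    ]

    VERTICAL_PATH = [
        "introduce three consonant pairs and three vowel sounds",
        "track/read/spell simple syllables and words with that small set",
        "gradually introduce the remaining consonants and vowels",
        "continue tracking/reading/spelling up to the multisyllable level",
    ]

    for name, path in [("Horizontal", HORIZONTAL_PATH),
                       ("Vertical", VERTICAL_PATH)]:
        print(name)
        for step, label in enumerate(path, 1):
            print(f"  {step}. {label}")

The practical difference is when students begin applying sounds: the Vertical Path puts a small starter set to work immediately, which is why it suits younger or less experienced learners.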

According to the program manual, the length of LiPS treatment will vary

depending on the type of instructional setting (Lindamood & Lindamood, 1998). In a

classroom, it is suggested that instruction should be provided daily for approximately 40

to 50 minutes in order to reach the complex syllable level within two to three months. For

a clinical setting with one-on-one or small group instruction, it is suggested that intensive

treatment be administered for three to four hours each day. Length of treatment will

depend on the age and skill level of the student.

Key Program Components

A key element of the LiPS program is the quality of exchanges between the

instructor and the student or students. The program developers describe a Socratic

questioning interaction that they term responding-to-the-response (Lindamood &

Lindamood, 1998). In responding-to-the-response, the instructor incorporates simple and

direct questioning in such a way as to allow the student to discover new concepts,

monitor his or her own progress, and identify and self-correct errors. "This questioning

elicits the sensory-cognitive connections that are the goal of the LiPS Program"

(Lindamood & Lindamood, 1998, p. xiii). For example, instead of correcting a student's

incorrect response by providing the correct answer, the instructor uses a series of

questions to lead the student to the desired response. This is believed to be the most

critical element of the instructor-student interactions throughout the entire program

(Lindamood & Lindamood, 1998).

Training of Instructors

While training in the LiPS program has varied since the program's inception,

persons affiliated with the Lindamood-Bell Learning Processes company train interested

individuals throughout the country on a regular basis. Currently, program developers and

affiliated trainers offer a three-day workshop to prepare persons to teach the LiPS

program to individuals or groups of students. A minimum of 70 to 80 trainings are

offered nationally each year for professionals interested in learning the LiPS program (P.

Worthington, personal communication, September 28, 2005). Presently, Lindamood-Bell

has contracts with over 100 schools and districts nationwide, infusing trainers into these

systems to teach teachers how to instruct students in the LiPS program and offering

consultative services to these schools for at least one school year (P. Worthington,

personal communication, September 28, 2005).

Previous LiPS Research

The LiPS program has been implemented in dozens, if not hundreds, of educational

and clinical sites throughout the nation. However, substantial empirical evidence

regarding student outcomes as a result of this reading intervention remains limited. There

are only a small number of studies that have examined issues surrounding one-on-one

implementation of the LiPS program, and fewer that address LiPS program

implementation with small groups or classes of students. The following describes some

of the LiPS program research that has been documented in recent years.

Individual implementation

A handful of studies have been published in recent years evaluating the efficacy of

the LiPS program with one-on-one implementation. Research involving the Lindamood

program has been conducted with samples of various sizes, with participants with a wide

range of ages, and in both school and clinical settings. In one of the first studies to

examine issues of program effectiveness with students with learning disabilities in the

schools, Kennedy and Backman (1993) compared student reading achievement scores for

nine students who received the Lindamood program in addition to the school's traditional

curriculum with nine students in a control group who received only the traditional

curriculum. Participants were between the ages of 11 and 17 and attended a nonprofit

residential school for high school students with severe learning disabilities. An

educational consultant and a teacher who had previously been trained in the Lindamood

program, along with a speech-pathologist, trained the ten teachers implementing the

intervention during a series of in-service training and regular bi-weekly meetings.

Assessment measures were administered at the beginning of the school year, at mid-year,

and at the end of the school year. Treatment for those receiving the Lindamood program

began after pretesting in September, was administered individually, and consisted of

three 50-minute class periods per day for six weeks, totaling 75 hours (Kennedy &

Backman, 1993). While all of the participants in this study were reported to have made

significant gains on standardized reading and spelling measures, there was no evidence

that those students in the experimental condition made significantly more gains than the

control group on these standardized reading and spelling measures (Kennedy &

Backman, 1993). However, there was evidence of significantly greater gains made by

those students receiving the Lindamood program on measures of phonological awareness

and use of phonetic strategies in spelling real and nonwords (Kennedy & Backman,

1993). Overall, the authors concluded that the Lindamood program "was a successful

addition to a comprehensive remedial program in terms of improved ability to sequence

speech sounds and phonetic accuracy in spelling real and nonwords within this sample of

students with severe LDs" (Kennedy & Backman, 1993, p. 258).

While Kennedy and Backman (1993) evaluated the Lindamood program's

effectiveness with a high school sample, Torgesen et al. (1999) addressed the program's

success with elementary school students. Torgesen et al. (1999) evaluated the relative

effectiveness of three methods, including a variation of the LiPS program, for preventing

reading disabilities in children with weak phonological skills (n = 138). Students were

recruited to participate in the two and a half year study midway through their

kindergarten year. The research design consisted of four conditions: a phonological

awareness plus synthetic phonics (PASP) condition, an embedded phonics (EP)

condition, a regular classroom support (RCS) condition, and a control group that did not

receive treatment (NTC). Only two of these conditions, PASP and EP, were considered to

be truly experimental in nature; the third intervention (RCS) was designed to be most

closely aligned with the children's present reading curriculum. In the PASP condition,

students received the LiPS program (referred to in this study and formerly known as the

Auditory Discrimination in Depth, or ADD, program) with a focus on explicit instruction

in phonemic awareness in conjunction with some instruction in reading decodable text.

While both the PASP and EP programs consisted of direct instruction in phonemic

decoding strategies, "the most important instructional contrast involved the degree of

explicitness of instruction in phonological awareness and phonemic reading skills as well

as the extent of decontextualized, focused practice on these skills" (Torgesen et al.,

1999).

Participants in each of the three treatment conditions received four 20-minute

sessions of one-on-one instruction per week over the two and one half year period.

Certified teachers led two of the weekly sessions, and the two additional weekly sessions

were led by aides who followed the teacher's written lesson plans. The certified teachers,

referred to as tutors, were recruited for this study, randomly assigned to either the EP or

PASP condition, and received eighteen hours of initial training by members of the

research team in the program to which they were assigned. In fact, Patricia Lindamood,

one of the developers of the program, trained the tutors involved with the PASP program.

This initial training was followed up by biweekly evaluations of treatment integrity from

research project members via videotaped sessions and inservice training throughout the

treatment period. In sum, total treatment time consisted of 88 hours of one-on-one

instruction beginning in the middle of kindergarten and extending through the 2nd grade

(Torgesen et al., 1999).
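
As a back-of-envelope check on this dosage, the short Python calculation below converts the weekly schedule into total hours. The implied figure of roughly 66 instructional weeks is an inference from the numbers reported, not a schedule given by the study; holidays, testing, and absences would spread those weeks across the two and one half years.

    # Back-of-envelope check: four 20-minute one-on-one sessions per week,
    # 88 total hours reported (Torgesen et al., 1999).
    sessions_per_week = 4
    minutes_per_session = 20
    total_hours = 88

    hours_per_week = sessions_per_week * minutes_per_session / 60  # ~1.33
    weeks_of_instruction = total_hours / hours_per_week            # 66.0

    print(f"{hours_per_week:.2f} hours/week -> "
          f"{weeks_of_instruction:.0f} weeks of instruction")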

According to Torgesen et al. (1999), the most phonemically explicit condition, the

PASP condition, produced the strongest growth in word level reading skills. Participants

in the PASP condition demonstrated significantly stronger phonological awareness,

phonemic decoding, and untimed context-free word reading skills than those in the EP

group. Moreover, children in the PASP group also demonstrated greater gains on word

level reading skills than participants in either the RCS or NTC groups. No significant

differences were noted between the groups in the area of reading comprehension

(Torgesen et al., 1999). During the study, 26% of the sample was retained in either

kindergarten or first grade, and there was a significant difference in retention rates across

conditions. It is interesting to note that only 9% of the PASP participants were retained,

whereas the percentages for the NTC, the RCS, and EP conditions were 41, 30, and 25

respectively (Torgesen et al., 1999). In addition, the percentages of children that were

referred for special services during the research period also differed with the NTC, RCS,

EP, and PASP conditions at 22, 24, 42, and 18, respectively (Torgesen et al., 1999).

Torgesen and his colleagues again contrasted the relative effectiveness of the LiPS

program, referred to by the authors as the ADD program, with the Embedded Phonics

(EP) program this time with participants between the ages of eight and ten who were

previously diagnosed as having a learning disability (Torgesen et al., 2001). While sixty

children participated in the treatment phase of the study, only fifty participants are

included in the results due to attrition. Participants were randomly assigned to one of two

conditions, or instructional approaches. The authors distinguished the two instructional

approaches by their relative focus on word level decoding versus application to

meaningful text. "The EP program provided much more practice than the ADD program

in reading and comprehending meaningful text, while the ADD program provided more

explicit and extended practice on phonemic awareness and phonemic decoding skills than

the EP program" (Torgesen et al., 2001, p. 35). Treatment was provided to each

participant one-on-one in two 50-minute sessions each day of the week. Total treatment

time for each participant was 67.5 hours, which extended over an eight to nine week

period (Torgesen et al., 2001). Additionally, upon the conclusion of the intensive

treatment sessions, clinicians went into the classroom of participants once per week for

the next eight weeks to assist in generalizing the materials from treatment to classroom









tasks. Each clinician involved in administering the treatment (either ADD or EP) in this

study had at least one year of previous experience teaching their respective method. Five

different teachers taught the ADD program, and five teachers instructed participants in

the EP program (Torgesen et al., 2001).

Torgesen et al. (2001) concluded that both the ADD and EP programs provided

equally effective instruction for the sample of children participating in this study.

According to the authors, at the end of the two year follow-up period, no differences

existed between the groups on any of the important reading outcomes (Torgesen et al.,

2001). While children receiving the ADD program demonstrated significantly stronger

growth in accuracy of phonemic decoding skills and in the accuracy and fluency of word

reading in text at the end of the treatment phase, these gains were not maintained during

the follow-up period (Torgesen et al., 2001). While the outcomes in this study differed somewhat from those of Torgesen et al. (1999), in which children receiving the ADD program obtained consistently higher scores on measures of phonemic decoding and word identification that were maintained at follow-up, the authors of the present study cited teacher experience as one possible explanation. The authors

hypothesized that the experienced teachers in the present study may have been able to

refine components of the EP program to account for the children's phonemic awareness

abilities while reading meaningful text (Torgesen et al., 2001).

Another study documenting the Lindamood program's effectiveness is a case study of an adult who received more than 100 hours of intensive intervention in a clinical setting. While this case study has limited generalizability to children and to school-based use of the LiPS program, and the findings are less than remarkable, it offers a much more detailed description of treatment implementation than any other study reviewed that examined the efficacy of the LiPS program. Conway et al. (1998) examined the effects of the LiPS program, formerly known as the Auditory Discrimination in Depth (ADD) program, in a case study with a 50-year-old male who had previously suffered a stroke that affected the left hemisphere of his brain. At fifteen months post-onset, the patient was administered a series of pretest assessment measures, and treatment began.

Treatment was performed one-on-one, for 2 to 4 hours per day, 5 days per week, totaling 101.1 hours over a two-month period (Conway et al., 1998). Six different

clinicians, each with extensive training in the program and between 5 and 10 years of

clinical experience, administered the treatment to the patient. The program was

implemented according to the sequence outlined in the program manual (Conway et al.,

1998). Conway et al. (1998) describe in some detail the four major components of

treatment (oral awareness training, simple nonword training, complex nonword-word

training, and multisyllable nonword-word training) implemented with this participant.

For example, the authors explained that, during the simple nonword training component,

one to two chains of ten nonsense segments (e.g., /ip/) were administered to the

participant using mouth pictures during each treatment session, totaling two to eight

chains per day. Once this task, which progressed from one to three phonemes, was

completed with 90 to 100 percent accuracy, the mouth pictures were replaced with

colored wooden blocks in order to create a less concrete representation of the phonemes

(Conway et al., 1998). Descriptions of each of the four major components of treatment and benchmarks for advancement are included in this study, in greater detail than in most other research studies detailing one-on-one implementation of the LiPS program.

Using a multiple probe design to monitor the individual's progress and evaluate reading and spelling achievement outcomes, the authors cited large gains in phonological awareness, reading and spelling nonwords, and reading and spelling real words (Conway et al., 1998). Specifically, Conway et al. (1998) reported improved

phonological awareness that was associated with improved reading and spelling for

words that were phonologically regular. On pre- and posttest measures, the authors

reported standard scores on the Woodcock Reading Mastery Test Word Attack subtest to

be 99 at pretreatment and 112 at posttreatment, Word Identification subtest scores to be

99 at pretreatment and 103 at posttreatment, and Passage Comprehension subtest scores

to be 117 at pretreatment and 124 at posttreatment (Conway et al., 1998). The patient was

reported to have maintained treatment gains in phonological awareness and reading at

two months posttreatment (Conway et al., 1998).

A few other studies have evaluated the efficacy of the LiPS program with specific

populations using various research design methodologies. For example, Alexander,

Anderson, Heilman, Voeller, and Torgesen (1991) evaluated the effectiveness of the

Lindamood program with ten students with severe dyslexia. In this study, treatment was

implemented one-on-one in a clinical setting, and participants received an average of 65

hours of LiPS training (treatment time ranged from 38 to 124 hours). From pretesting to posttesting, phonological awareness and decoding skills improved

significantly (as measured by the Woodcock Reading Mastery Test and Lindamood

Auditory Conceptualization Test). Another study (O'Dea, 1998) presented a description of the LiPS program with suburban Midwestern high school students with documented learning disabilities. The students in this study received instruction in the

Lindamood program for 18 weeks, five days each week, for 55 minutes per day. Gains

from pretest to posttest were assessed using the Kaufman Test of Educational

Achievement. Results indicated that students made average gains of one year in reading comprehension and approximately 6.5 months in decoding over the 18 weeks of Lindamood instruction. Improved attitudes toward reading were also noted.

Based on the information available, it did not appear that a control group was employed

in either of these studies.

Small group implementation

While there is limited empirical evidence supporting the use of the LiPS program

in both school and clinical settings when administered individually, even less research

has been conducted to evaluate the efficacy of this intervention with small groups or

classes of students. One study that did address issues of small group implementation of the Lindamood program was conducted by McGuinness, McGuinness, and Donohue (1995). This study

compared three groups of first-grade children: one class at a Montessori school receiving

the Lindamood program in addition to traditional instruction (n = 15), one class at a

private school receiving the Lindamood program in addition to the traditional curriculum

(n = 15), and a control group at the private school receiving only the traditional

curriculum (n = 12). Teachers implementing the Lindamood program, referred to by the

authors as the ADD program, were trained by the second author for 32 hours in the

summer prior to treatment implementation, followed by a one-day practicum just prior to

the start of school, and a one-day refresher practicum prior to the second semester of the

project (McGuinness et al., 1995). The intervention was implemented in small groups of









five to seven children, for 20-30 minutes each day over an eight-month period. Pre- and

post-testing was completed on all children participating in the study.

According to the researchers, both treatment groups significantly outperformed the

control group on word attack and word identification measures. However, results

indicated that the Lindamood program had a greater impact on decoding than word

recognition, possibly due to the treatment program's heavy emphasis on phonologically

regular and nonsense words (McGuinness et al., 1995). All three groups in this study

increased noticeably on a measure of phonological awareness, and no significant

differences were noted between the two experimental groups on any measure

(McGuinness et al., 1995). Overall, the authors considered this small group

implementation of the Lindamood program in these school settings to have been

effective. "The adaptation of the ADD program to the classroom was effective to the

extent that children who were taught by this method significantly increased their reading

standard scores compared to their own initial performance, beyond what is normally

expected" (McGuinness et al., 1995, p. 849). However, it should be noted that all three

groups increased substantially on the Lindamood Auditory Conceptualization Test, a

phonological awareness measure that reproduces some of the specific skills introduced in

the Lindamood program. Additionally, the authors of this study reported equivalent success in subsequent experiments in which they eliminated some specific components of the

Lindamood program (McGuinness et al., 1995).

One other study evaluating the Lindamood program as a reading intervention with groups of students was identified, and its outcomes were less favorable. This study was conducted with both typically achieving students and students









receiving exceptional education services in the schools (Roberts, 1975). The treatment

group consisted of 39 students with either average abilities or learning difficulties. The

control group consisted of 29 students with similar academic characteristics. While both

the treatment and control groups continued with their traditional reading instruction,

the treatment group also received instruction in the Lindamood program throughout the

duration of the study. Phonological awareness (as measured by the Lindamood Auditory

Conceptualization Test) and general reading achievement (as measured by the

Metropolitan Achievement Test) were assessed prior to the intervention, subsequent to

the intervention, and eight weeks after the intervention's termination. No statistically

significant differences were noted between the treatment and control groups on the

measured reading skills.

Overall, some limitations exist when drawing conclusions about the efficacy of the

LiPS program for individual and group use. Despite some consistently identified student

gains, descriptions of the methodologies or the actual treatment delivered were often

limited in the studies described above. Therefore, it was unclear how closely treatment

adhered to the Lindamood program as it was set forth in the training manual. Some

studies stated that the treatment or intervention was based on the Lindamood program,

but no detailed descriptions of the treatment were included in the articles. Moreover, it

was unclear whether some studies included control groups or some form of alternate

treatment, and many did not. Also, instructor training or previous experience with the

LiPS program was not described in any detail in most of the studies, and sample sizes

were often quite small.









Even so, certain conclusions can be drawn from the extant literature

examining the efficacy of the LiPS program. For those studies implementing the LiPS

program with individual students, there is evidence to suggest that students made specific

word-level reading gains in the most methodologically sound and empirically controlled

studies. For example, in Torgesen et al. (1999), elementary students were randomly

assigned to one of four conditions (three experimental conditions or a control group).

Certified teachers underwent extensive training before delivering the instruction to study

participants, and the children's progress was documented over a two and one-half year

period. Results of this research indicated that students receiving the Lindamood program

made significant gains in phonological awareness and phonemic decoding. While the

research evaluating the LiPS program with group implementation is more scant, one

empirically sound study (McGuinness, McGuinness, & Donohue, 1995) demonstrated

similar gains to those identified by Torgesen and colleagues. In fact, many of the studies

described above demonstrated student gains in phonological awareness and increased

word attack skills. For those studies using the Lindamood Auditory Conceptualization

Test as a measure of student outcomes, gains were consistently noted in students

receiving treatment in the Lindamood program. Based on these studies, evidence does not

suggest, however, that gains in reading comprehension, word identification, and

vocabulary skills are typically a result of instruction in the Lindamood program.

Nevertheless, some consistent gains have been noted across the LiPS research literature

in specific word-level reading skills. Participants in these studies varied greatly in age

(from five-years-old to adult) and included a range of academic ability levels.









Unfortunately, studies evaluating the efficacy of the Lindamood program with

individuals or groups of students consistently fail to include detailed descriptions of the

treatment. Specific details of the Lindamood program implementation are not offered in

sufficient detail to assess treatment integrity. Therefore, in order to replicate these

findings, more information is needed about treatment integrity or adherence to the

Lindamood program as it was described in the program manual and how this affects

student outcomes.

Purpose of this Study

Over the last three decades, reading researchers have learned a great deal about

how children learn to read and why some students continue to struggle (Denton et al.,

2003). One key foundational reading skill that has received significant attention is

phonological awareness and its instruction (Snow et al., 1998; Torgesen, 2002).

Phonological awareness involves an individual's ability to understand that spoken

language is made up of smaller parts. Phonological awareness training has been found to

be a crucial component of beginning reading instruction (Olofsson & Niedersoe, 1999;

Smith et al., 1995; Torgesen, 2002). One reading program that offers phonological

awareness training is the LiPS program.

While the LiPS program was originally designed for one-on-one implementation,

this program is currently being employed in schools with individuals and groups of

students. Empirical evidence exists to support the use of this program with individuals

and small groups (e.g., McGuinness et al., 1995; Torgesen et al., 1999). Unfortunately,

little documentation exists detailing the specific procedures that were followed in

treatment implementation or how closely instructors adhered to the program as it was

designed (i.e., treatment integrity).









The present study was designed to address some of the gaps in the literature

relative to the Lindamood program. The purpose of the present study is two-fold. First, the

LiPS program was initially designed for individual treatment in the clinical setting, and

much of the research addressing the efficacy of this program pertains to one-on-one

implementation. However, instructors are presently being trained to implement this

program in school settings, and many teachers have adapted this program to address the

needs of students in small groups and whole classrooms. Therefore, the first purpose of

this research is to examine issues surrounding the implementation of the LiPS program in

the school setting with classes of students. Specific research questions related to

treatment integrity include:

1. When implementing LiPS in kindergarten classrooms with large groups of students,
how closely do the instructors adhere to the program as described in the training
manual?

2. What decisions do instructors make about the program sequence in relation to
student needs?

3. How does program implementation vary across instructors when considering the
training and experience of the two instructors?

4. How does LiPS instruction differ between the classroom and the clinical setting?

A second purpose of this study is to evaluate student outcomes in classrooms where

the LiPS program is used as a regular part of the reading curriculum. Specific research

questions related to student outcomes include the following:

1. What gains do students demonstrate in reading after receiving instruction in the
LiPS program?

2. Do student academic gains differ on a measure more closely aligned with the LiPS
program (i.e., the LAC) as compared to other standardized, norm-referenced
measures?

3. Does student reading achievement differ significantly from instructor to instructor?








Earlier research has examined some issues related to individual implementation of

the LiPS program, with samples in these studies varying in age and severity of reading

difficulty. However, these studies offer little insight into exactly how the treatment was

implemented or the specific program sequence that was followed. Moreover, even less

work has been done to empirically examine group implementation of this program in the

school setting. This study seeks to examine the use of the LiPS program as an early

intervention method and its application to a group or classroom setting.














CHAPTER 2
METHOD

Previous research has documented the efficacy of the LiPS program with individual

children (e.g., Torgesen et al., 1999) and with small groups of students (e.g., McGuinness

et al., 1995). Academic gains have been noted across studies in phonological awareness

and phonemic decoding skills. However, despite the empirical evidence to support gains

in word-level reading skills subsequent to instruction in the LiPS program, little

information is available regarding how the program was implemented in these studies.

Therefore, in order to replicate some of the findings related to LiPS efficacy, more

information is needed regarding treatment integrity. The purpose of this study was to

examine the treatment integrity of the LiPS program when it was incorporated into

kindergarten classroom reading instruction and the student progress and outcomes that

were achieved over the treatment period.

Participants

Participants included kindergarten students attending two local elementary schools

in North Central Florida. Two kindergarten classes from each school, with approximately

20 students per classroom (n = 75), were involved in this study. Students were assigned to

each classroom by the school administration prior to the commencement of this research.

It was assumed at the outset of the study that each group was relatively comparable

across academic performance levels, with higher and lower achieving students present in

each of the four classrooms. This was confirmed based on the pretest assessment data









collected. Informed consent was obtained from each participant's parent or guardian prior

to the student's data being used for the study.

Settings

Two school sites participated in this research. School 1, the site of Instructor 1, was

a laboratory school affiliated with the local state university. This school was considered a

public school and served as its own school district within the state. The population of the school was diverse with respect to race and ethnicity and was selected to match Florida's socioeconomic and racial-ethnic composition. The school served students from kindergarten through twelfth grade.

School 2, the site of Instructor 2, was a parochial school serving students in

kindergarten through eighth grade. Families of children attending both schools underwent

admission procedures and chose to have their children attend these particular schools.

In addition to the school settings where data was collected on whole classroom

LiPS instruction, additional data was collected in a clinical setting where LiPS was used

with children one-on-one. This clinical setting was a private facility in Central Florida

offering remedial services to children and adults with learning difficulties. Individuals

seeking assistance at this private center undergo a comprehensive evaluation, and

interventions are designed to address the particular academic weaknesses of each person.

The LiPS program is one of a number of remedial programs and interventions employed

at this facility.

Instructors

Two instructors participated in this research. One instructor taught at each school,

administering the Lindamood Phoneme Sequencing Program (LiPS) to students in her

respective two classrooms. Each was a licensed speech pathologist and had been









previously trained in the LiPS program. The two instructors varied in their amount of

overall clinical experience related to speech pathology, training received in the LiPS

program, and specific experience administering the LiPS program to individuals and

groups of students. An initial interview with each instructor was conducted early in the

semester prior to program implementation to determine the level of training (where,

when, number of hours) and experience (amount and type of experience -

individual/group, clinical/school) each had attained with the LiPS program. The two

instructors had worked collaboratively to offer speech/language services in the past;

however, each individual designed and implemented the LiPS program independently at

each school.

The LiPS instruction of two instructors was also observed in the clinical setting.

These two instructors participated in extensive training and supervision in the LiPS

program prior to their work with students at this facility. The two instructors at this

clinical site had a combined total of approximately ten years of experience working with

students using the Lindamood programs.

Procedure

Two variables, treatment integrity and student outcomes, were assessed throughout

this research. Each variable is discussed in turn below.

Treatment Integrity

Many of the previous research studies examining the efficacy of the LiPS program

employed experienced clinicians or classroom/intervention teachers trained directly by

the program developers (Conway et al., 1998; Torgesen et al., 1999; Torgesen et al.,

2001). Thus, while it may be assumed that these instructors strictly adhered to the

program as outlined in the LiPS manual, little has been done to document the specific









program sequence that was followed during treatment by these or other less experienced

instructors in the studies reviewed. In sections such as "Classroom and Clinical

Activities" (p. 24) and "Additional Ways to Practice Consonants in the Classroom and

Clinic" (p. 82), the LiPS program manual (Lindamood & Lindamood, 1998) includes

some information for classroom implementation to offer instructors ideas for practicing

or reviewing previously introduced material with students. This study seeks to document

and describe how closely each instructor adhered to the LiPS manual when implementing

the entire program to classes of students, the decisions made by each instructor as the

program was implemented in a classroom, and the types of modifications that were made

to the program for group instructional purposes. From most of the previous research

studies, it is unclear exactly how the treatment was implemented. Therefore, it is difficult

to interpret or make generalizations regarding student outcome data that is presented in

each study.

The daily and weekly lesson plans of each instructor were gathered in order to

assess treatment integrity, or adherence to the program, as described in the LiPS manual.

Additionally, the number of treatment hours each participant received was recorded, as

indicated by the instructor lesson plans. Periodic direct classroom observations were

conducted by the primary investigator to ensure that each instructor adhered to stated

lesson plans and that lesson plans were revised when necessary to accurately reflect

introduced material. Additionally, several forms were created by the primary investigator

to collect data during the classroom observations. These forms included the Record of

Program Delivery, the Classroom Observation Error Handling form, the Student









Opportunity to Respond form, and the Student Engagement/ On-Task Behavior form (see

Appendix A through D).

The Record of Program Delivery form was used by the primary investigator during

observations in each classroom at a minimum of eight points throughout treatment

implementation to document the occurrence or nonoccurrence of key program

components incorporated by each instructor into the instruction. The elements selected to

include on the Record of Program Delivery form were chosen based on the perceived

importance placed on these components by reviewing the LiPS training manual, as well

as the primary investigator's personal training and past experience in teaching the

Lindamood program. For each program or session component listed on the Record of

Program Delivery form, the page numbers from the LiPS manual are cited. During each

classroom observation, a Record of Program Delivery form was completed, and the

presence or absence of each component was recorded. For example, one point on the

form addresses whether the instructor avoided the use of the word "no" when a student's

answer was not the expected one. If a specific item was not applicable to that particular

session, then this was indicated on the form as well. For example, if the class lesson did

not include reading or spelling practice, then the assessment of student mastery was not

applicable.

The frequency or degree to which some of these key program components were

incorporated by each instructor into instruction was also assessed on at least eight

occasions throughout the intervention using the Classroom Observation Error Handling

form. From the Record of Program Delivery form, the frequency of occurrence for two

specific items was recorded. First, each time the instructor used a line of questioning to









lead the class or a particular student to a desired response, a tally mark was made. This is

referred to as Socratic questioning in the LiPS training manual (Lindamood &

Lindamood, 1998, p. 419). Also, each time the instructor questioned a student even

though a correct response was made, a tally mark was recorded.

Student engagement was evaluated at least seven times per classroom throughout

treatment using the Student Opportunity to Respond and Student Engagement/On-Task

Behavior forms. Using the Student Opportunity to Respond form, the frequency with

which each student orally responded during each classroom observation was recorded.

Class lists were maintained, and a tally mark was made for each instance that an

individual student responded directly to the instructor's question and the instructor

acknowledged that response. On the Student Engagement/On-Task Behavior form, the

number of students that were looking at the instructor at the end of each five-minute

period was recorded. Again, this form was completed on at least eight occasions in each

of the classrooms.
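To make the time-sampling rule above concrete, the following minimal sketch (a hypothetical helper written for this description, not part of the study materials) converts the counts recorded on the Student Engagement/On-Task Behavior form into an overall engagement rate; the function name and example numbers are illustrative only.

    # Hypothetical helper: converts momentary time-sampling counts (students
    # looking at the instructor at each five-minute check) into an overall
    # engagement rate for the observed session.
    def engagement_rate(counts_per_interval, class_size):
        checks = len(counts_per_interval)
        return sum(counts_per_interval) / (checks * class_size)

    # Example: a 20-student class checked at four five-minute marks.
    print(engagement_rate([18, 15, 17, 12], class_size=20))  # 0.775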

Furthermore, focused interviews were conducted with each instructor near the

beginning, middle, and end of treatment (see Appendix F). This was done to obtain

perspectives regarding what each individual instructor perceived to be effective during

their LiPS instruction, adaptations or accommodations that were made to the curriculum,

why specific curricular choices were made, and how they perceived implementation

would differ if they were instructing in a one-on-one setting.

In order to make comparisons of how LiPS treatment implementation in a large

group setting differed from a one-on-one instructional setting, additional observations

were conducted in a clinical setting where the program was employed with students one-









on-one. Using the Record of Program Delivery and Classroom Observation Error Handling forms,

observations were conducted of two instructors working individually with two different

students across approximately two to four sessions per student (totaling twelve

observations). The goal was to observe program presentation in a one-on-one

instructional setting until stability across observations was achieved. Comparisons were

then made of the similarities and differences in program implementation when conducted

one-on-one versus in larger group settings.

Student Progress/Outcomes

At the outset of the school year, in September, pretest measures were administered

to the participants individually over a two-week period prior to the initiation of the LiPS

program in their classrooms. Posttesting was conducted at the culmination of the

treatment period, during the month of February, again over a two-week period. The pre-

and posttesting was conducted by the primary investigator and two other recruited

volunteers trained in administering these measures. Total testing time was approximately

30 to 45 minutes each for pre- and posttesting. The order of the assessment measures

given was counterbalanced in order to account for order effects. Once introduced, the

LiPS program was implemented in each classroom in addition to the traditional

curriculum. Termination of the treatment at each school was at the discretion of each

instructor and was similar across both sites. The program was employed in each

classroom at both school sites from approximately September to February.
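As one concrete illustration of the counterbalancing mentioned above, the sketch below shows an assumed approach (not the study's documented procedure) in which the four measures are assigned using a balanced Latin square so that each measure appears in each serial position equally often.

    # Assumed illustration of counterbalancing: rotate four assessment orders
    # across students so each measure occupies each position equally often.
    MEASURES = ["WJ-III", "CTOPP", "DIBELS", "LAC"]

    LATIN_SQUARE = [  # rows are orders; entries index into MEASURES
        [0, 1, 3, 2],
        [1, 2, 0, 3],
        [2, 3, 1, 0],
        [3, 0, 2, 1],
    ]

    def order_for(student_index):
        return [MEASURES[i] for i in LATIN_SQUARE[student_index % 4]]

    for s in range(4):  # print the four distinct orders
        print(order_for(s))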

Measures

Pretesting and posttesting to assess student achievement were conducted with the

following measures:









Woodcock-Johnson Tests of Achievement (WJ-III)

The WJ-III (Woodcock, McGrew, & Mather, 2001) is an individually administered,

standardized, norm-referenced achievement measure. Two reading subtests of the WJ-III

were administered to participants in this study: Letter-Word Identification and Word

Attack. The Letter-Word Identification task required the individual to decode real words

in isolation. The Word Attack task required the student to identify individual sounds for

some letters and decode nonsense words, assessing phonemic awareness skills in reading

individual sounds and novel words.

From this measure, both raw scores and standard scores were obtained. This

assessment tool is a widely used measure of reading achievement and has demonstrated

adequate reliability and validity. For example, for children ages 5 to 19, the Broad

Reading cluster, which includes Letter-Word Identification and measures of reading

fluency and comprehension, has a median reliability of .93 (Woodcock et al., 2001). Test-

retest correlations on the Letter-Word Identification and Word Attack subtests were .92

(n=106) and .79 (n=104) respectively, with one year between administrations for children

ages four to seven at first testing (Woodcock et al., 2001). In addition, concurrent validity

for Broad Reading has been documented with validity coefficients ranging from .633 to

.857 with various measures of intelligence and achievement (Hintze et al., 2001).

Comprehensive Test of Phonological Processing (CTOPP)

The CTOPP (Wagner, Torgesen, & Rashotte, 1999) is an individually

administered, norm-referenced measure that is used to evaluate various facets of an

individual's phonological awareness and processing. The following subtests of the

CTOPP were administered to participants in this study: Elision, Blending Words, and

Sound Matching. The Elision task measured how well the student could identify and









manipulate word chunks or individual phonemes within orally presented words. For

example, in this task, the student was asked to complete items such as saying the word

hotdog without saying hot or saying the word goat without saying /g/. The Blending

Words task assessed the individual's ability to combine, or blend, orally presented

syllables, onset-rimes, or phonemes. Finally, the Sound Matching Task evaluated the

student's ability to identify objects that contained the same initial or final sound as a

presented word (e.g., which word starts with the same sound as cat: hat, car, or dog?). The Elision, Blending Words, and Sound Matching subtests comprised the

Phonological Awareness Composite, and both raw and standard scores were obtained for

the three tasks and composite.

Internal consistency reliability estimates have been reported to be .96 for the

Phonological Awareness Composite for children aged five to six years (Hintze, Ryan, &

Stoner, 2003). Moreover, internal consistency reliability for specific tasks, Elision,

Blending Words, and Segmenting Words, ranges from .84 to .89 (Rashotte, MacPhee, &

Torgesen, 2001). Regarding criterion-related validity, the correlation between the

Phonological Awareness Composite of the CTOPP and the Letter-Word Identification

task of the Woodcock Reading Diagnostic Battery was found to be .65 (Havey, Story, &

Buker, 2002).

Dynamic Indicators of Basic Early Literacy Skills (DIBELS)

DIBELS (Good, Kaminski, Laimon, & Johnson, 1992) involves brief curriculum-

based assessment probes that can be used to monitor student progress and to identify children with reading difficulties. National normative data corresponding to benchmarks are available for the various DIBELS tasks. The Letter Naming Fluency

(LNF) and Phoneme Segmentation Fluency (PSF) tasks were administered to the









participants in this study. The LNF task required the student to rapidly name as many

lower and upper case letters as possible from a provided page in one minute. The raw

score is the total number of letters correctly identified in one minute. The second task,

PSF, required the student to segment orally presented real words in a one-minute period.

The raw score achieved is the total number of phonemes correctly identified in one

minute.

Reliability and validity data exists to support the use of these curriculum-based

probes. Alternate forms reliability coefficients for the LNF task have been documented to

range from .86 (Speece, Mills, & Ritchey, 2003) to .93 (Kaminski & Good, 1996).

Regarding concurrent validity, correlations for the LNF task with the Letter-Word

Identification subtest of the WJ-III were .77 (Speece, et al., 2003), .75 for LNF with the

Woodcock-Johnson Skills cluster that included the Letter-Word Identification task, and

.60 for PSF with the same Woodcock-Johnson Skills cluster (Elliott, Lee, & Tollefson,

2001). Additionally, Hintze et al. (2003) examined the concurrent validity of the DIBELS

measures with the CTOPP using data from 86 kindergarten students. Data revealed that

the DIBELS kindergarten readiness tasks strongly correlated with most subtests and

composite scores of the CTOPP. Specifically, both LNF and PSF correlated with the

CTOPP Phonological Awareness Composite at .53.

Lindamood Auditory Conceptualization Test (LAC)

The LAC (Lindamood & Lindamood, 1971) is an individually administered

assessment tool that measures phonological awareness abilities through a series of tasks

involving the manipulation of colored blocks. After an elaborate training process wherein

the examiner teaches the examinee how the colored blocks can be used to represent

individual sounds that are sequenced from left to right, the individual's ability to identify









and represent phonemes and nonsense words with the blocks is measured. For example,

the examiner might ask the student to use the colored blocks to represent the following:

/p/ /b/ /t/. The student must recognize that three sounds were presented, and each sound

was different. Therefore, the student would present three different colored blocks to

represent the prompt. As the items become increasingly complex, the student must

use the colored blocks to represent phonemes within words and manipulate these blocks

to reflect changes made to the words (e.g., from /ap/ to /op/ or /sik/ to /siks/).

On the LAC, raw scores are entered into a formula to achieve a Total Converted

Score. This formula allows for items of greater complexity to be given greater weight.

The maximum score allowable is 99, and benchmarks, or Recommended Minimum

Scores, are offered for each grade level from kindergarten through adult. As stated on the

test protocol, by the end of the first half of kindergarten, a child should achieve a

minimum score of 31, and this score should be at least 40 by the end of the second half of

the kindergarten year.
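The test publisher's actual conversion formula is not reproduced in this study; purely to illustrate the idea of weighting more complex items more heavily and capping the result at the maximum of 99, a minimal sketch follows in which the category names and weights are hypothetical placeholders rather than the LAC's real values.

    # Hypothetical illustration of a weighted Total Converted Score; the
    # categories and weights are placeholders, not the LAC's actual values.
    HYPOTHETICAL_WEIGHTS = {"isolated_sounds": 1, "simple_patterns": 2, "word_changes": 3}

    def total_converted_score(items_correct):
        raw = sum(HYPOTHETICAL_WEIGHTS[cat] * n for cat, n in items_correct.items())
        return min(raw, 99)  # maximum allowable score is 99

    print(total_converted_score({"isolated_sounds": 10, "simple_patterns": 8, "word_changes": 5}))  # 41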

Of the pre- and posttest measures used in this study, the LAC was most closely aligned

with the LiPS program. The same individuals who devised the LiPS program developed

this assessment tool. Additionally, the tasks completed during the LAC assessment are

included in the instruction of the LiPS program as outlined in the training manual.

Published reliability and validity research relative to the LAC test is significantly

more scant than for the other assessment measures employed in this study. In one study

of 660 students ranging in grade from kindergarten through grade 12, correlations

between student LAC and Woodcock Reading Mastery Test (WRMT) reading and

spelling performance yielded coefficients of .66 to .81, with an average of .73 (Lindamood,









1972). In addition, test-retest reliability using alternate forms of the LAC at least four

weeks apart on a sample of 52 students in kindergarten through grade 12 was reported at

.96 (Lindamood & Lindamood, 1971).

Analysis of Data

One goal of this study was to document and describe the LiPS program and how it

was translated from a one-on-one to a large-group instructional setting. The lesson plans

and interviews were employed to create a description of how the two instructors adapted

the LiPS program to a classroom setting. Using data collected from the Record of

Program Delivery and the Classroom Observation Error Handling forms, a description

was developed detailing how closely the instructors adhered to the LiPS program as

described in the program manual and how rigorously the instructors incorporated key

components of the program into their classroom instruction. In addition, decisions the

instructors made during treatment were described (from observational and interview

data), and variance between instructors/classes was detailed.

A second goal of this study was to determine whether the students made progress

or demonstrated academic gains in a program that was adapted to a large group setting.

Pre- and posttest data were expressed descriptively (e.g., means across classes for each measure) and analyzed statistically. Raw score differences from the pre- and posttest measures (WJ-III, CTOPP, DIBELS, and LAC) were analyzed using ANCOVA procedures to control for pretest scores and examine posttest differences between instructors and between schools. Statistical analyses were conducted to determine whether the students of one instructor made significantly greater gains from pre- to posttest than those of the other instructor. Additionally, a Repeated Measures ANOVA (2x4) was conducted using all four assessment measures to determine whether student academic gains differed on a measure








more closely aligned with the LiPS program (i.e., the LAC) as compared to other

standardized, norm-referenced measures. Lastly, benchmark data were analyzed for the

measures best suited for monitoring student progress (i.e., DIBELS, LAC) to examine

percentages of students at each school meeting certain criteria at pretesting and

posttesting.
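For readers who want to see the shape of these analyses, a minimal sketch follows, assuming a data frame with one row per student; the file and column names are hypothetical, and the code illustrates the ANCOVA described above rather than reproducing the study's actual analysis.

    # Minimal ANCOVA sketch (illustrative; file and column names are hypothetical).
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("posttest_scores.csv")  # one row per student

    # Posttest score modeled with the pretest score as a covariate and
    # instructor as a categorical factor.
    model = smf.ols("posttest ~ pretest + C(instructor)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # Type II ANCOVA table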














CHAPTER 3
RESULTS

The primary purpose of this study was to examine how the Lindamood Phoneme

Sequencing Program for Reading, Spelling, and Speech (LiPS) was adapted and

implemented with large groups of kindergarten students in a classroom setting. Over the

course of a six month period, two instructors at two different school sites offered

supplemental reading instruction to two classrooms each (four classrooms total in study,

75 kindergarten students) using the LiPS program. Descriptive information, including

classroom observations, instructor interviews, and lesson plans, was gathered to better

understand what this program, a program initially designed for one-on-one clinical use,

looked like when it was modified to meet the needs of a group of students in a classroom

setting. Three specific questions guided this research to ascertain treatment integrity and

the delivery of instruction.

1. When implementing LiPS in kindergarten classrooms with large groups of students,
how closely do the instructors adhere to the program as described in the training
manual?

2. What decisions do instructors make about the program sequence in relation to
student needs?

3. How does program implementation vary across instructors when considering the
training and experience of the two instructors?

To answer these questions, information was gathered to monitor how this program

was implemented and the students' response to the intervention. In other words, which

program path or sequence did instructors choose, which program components were

included or omitted, what decisions did instructors make in response to student progress,









and how engaged were the students during this intervention time? Regarding treatment

integrity, two global factors were considered during the data collection as LiPS was

implemented with whole classrooms of kindergarten students: instructor decision-making

during intervention delivery (i.e., adherence to program design) and student engagement

or responsiveness during the LiPS instruction.

In addition, the results of observations conducted in a clinical setting where LiPS

was employed during one-on-one instruction are presented for comparison. Issues of

treatment integrity and adherence to the program manual can exist regardless of group

size. However, this program was initially designed for use with individual students in a

clinical setting, and it was important to consider whether program implementation varied

in these two different environments. Specifically, data were gathered to ascertain whether

differences existed in the inclusion of key program components between the one-on-one

and classroom-based LiPS instruction.

Lastly, pretest and posttest data were collected on the students in the kindergarten

classrooms to assess student outcomes. The pretest and posttest data provided a means to

quantify student reading gains across the treatment period. Various reading assessment

measures were used to evaluate such skills as phonemic awareness, decoding, and letter

naming fluency in the kindergarten participants. One measure of particular interest was

the Lindamood Auditory Conceptualization Test (LAC), a measure closely aligned with

the LiPS program and designed to assess a student's ability to detect sameness and

difference in sounds.









Descriptive Data

Whole Class Instruction

To ascertain how this program was implemented by the two instructors and the

students' response to the LiPS instruction, observations were conducted in the four

participating kindergarten classrooms. Over the six months of classroom intervention,

weekly observations of the intervention implementation were performed, and the

instructors' daily lesson plans were collected. In addition, interviews with the instructors

were conducted at the beginning, middle, and end of treatment. Four specific forms were

generated for this research to capture as much information as possible about treatment

integrity, student response to the intervention, and the decisions instructors made

throughout program implementation. The forms included a Record of Program Delivery,

Error Handling, Student Opportunity to Respond, and Student Engagement/On-Task

Behavior. Refer to Appendix A through D for each of these four observation instruments.

The Record of Program Delivery form was completed during each classroom

observation. Using this form, the presence or absence of specific program components

was recorded. Included on this form were critical aspects of the program that should be

present during each LiPS session. The Error Handling form was used to record the frequency

with which the instructors employed questioning in their instruction, particularly when

errors were made, to lead the student to the desired response. The Student Opportunity to

Respond form was employed to track the number of times each student engaged in

dialogue with the instructor during instruction. Lastly, the Student Engagement/On-Task

Behavior form employed a time sampling method to record the number of students

engaged in instruction at certain intervals during instruction.









Table 1 displays the number of observations that were conducted by school and

classroom over the intervention period. Each classroom was observed a minimum of

seven times, with a range from seven to sixteen distinct observations in each classroom.

Some classes were observed more often due to the frequency with which the instruction

was delivered (e.g., Instructor 1 was in her classrooms two to three times per week;

Instructor 2 delivered new instruction in her classrooms once per week).

Table 1. Number of Observations by Instructor for Whole Group Intervention

                                    Instructor 1               Instructor 2
                              Classroom 1  Classroom 2   Classroom 3  Classroom 4
Record of Program Delivery         11           15             8            8
Error Handling                     10           13            10            8
Opportunity to Respond             10           11             7            7
On Task                             8           16            10            9

In addition to the classroom observations, interviews were conducted with each

instructor prior to and throughout the intervention period. The Initial Instructor Interview

was conducted with both instructors prior to LiPS implementation to ascertain their

respective levels of training and prior experience with the program. Additionally, the

researcher met with each instructor near the beginning, midway through, and at the

conclusion of the intervention period to discuss their thoughts regarding student progress

and decisions about program implementation. Therefore, a total of four interviews were

completed with each instructor at various points throughout the study.

The presentation of the descriptive data for whole class LiPS instruction will be

organized under two concepts: treatment integrity and delivery of instruction. Treatment

integrity consists of the inclusion of key program elements and the program paths that

each instructor selected. Delivery of instruction includes a discussion of the decisions that









the instructors made based on the needs of the classroom teachers, the students, and their

respective levels of training and experience with the LiPS program. Subsequent to the

presentation of the descriptive data for whole class instruction, data will be presented

regarding the observations conducted in a one-on-one setting. Lastly, student outcomes

data related to whole class LiPS instruction will be presented.

Treatment Integrity

The purpose of conducting classroom observations and collecting daily lesson

plans was to assess the degree to which the instructors followed the program as it was

designed to be implemented. In other words, data were collected to determine how

closely the instructors adhered to the program scope and sequence as described in the

LiPS Trainer's Manual (Lindamood & Lindamood, 1998). This was assessed by

comparing the instructors' lesson plans with the sequence of skills to be introduced as

delineated in the program manual and using the devised observation forms to assess the

presence or absence of particular program components and delivery techniques.

Inclusion of key program elements

At the outset of the program, LiPS offers instruction in phonemic awareness at the

oral level. Students hear, see, and feel the physical characteristics of sound units and

work to compare and contrast them. "The major premise of the LiPS Program is that the

auditory element of speech sounds should not be separated from the more basic oral-

motor activity that produces the sounds" (Lindamood & Lindamood, 1998, p. 7). A

signature component of this program is the "mouth pictures." As soon as the student is

introduced to the first sounds, mouth pictures are paired with those sounds so as to offer a

visual representation of what the mouth should look like when specific sounds are

produced.









A focus on oral awareness and individual sound units was precisely what occurred

in the classrooms of both instructors. Students were introduced to individual sounds, and

mouth pictures were paired with each set of consonant sounds that was introduced. For

example, the first consonant sounds that were introduced by both instructors were /p/ and

/b/. These sounds are identified with the label "Lip Popper" because, in order to produce

these two sounds, one's lips are pushed together and then pop open. A picture card, or

mouth picture, was paired with the discussion of these sounds. The basic dialogue offered

in the LiPS manual to introduce this consonant pair and all others was employed by both

instructors.

Table 2. Record of Program Delivery, Percentages of Observations by Instructors Across
Intervention Period

                                                             Instructor 1   Instructor 2
GENERAL
T. reviews previously introduced material at beginning
  of session                                                      92%           100%
S. provided with/encouraged to use mirror when introduced
  to or practicing new sounds                                      0%             0%
All Ss. observed to be actively engaged in learning process        8%            13%

TRACKING, READING, SPELLING
S. instructed to follow 3 steps in Tracking (repeat words,
  touch & say, make change)                                        0%             0%
T. questions S. about label of sounds during Tracking            100%           100%
Real and nonsense words used in Tracking/Reading/Spelling         69%            50%
T. assesses S. mastery on T/R/S chains before new
  material introduced                                              0%             0%

ERROR HANDLING
T. incorporates responding-to-response (allows student
  to self-correct)                                               100%            94%
T. uses line of questioning to lead S. to desired
  response (Socratic)                                             96%            87%
T. avoids use of word "no" when student's answer is not
  the expected one                                                85%            75%
T. questions S. even when correct response provided                8%            31%
T. avoids providing correct answer for S. having difficulty       27%            88%









Differences were noted, however, in the inclusion of some key program elements,

as measured by the Record of Program Delivery form. These data are displayed in Table

2. The numbers noted in this table indicate the percentages of observations in which each

program component was present. One important element that can be employed to help the

students see what their mouths are doing during sound introduction was omitted by both

instructors. Mirrors are suggested as a way to support the student with more sensory input

until the mouth action can be felt strongly (Lindamood & Lindamood, 1998, p. 47). As

new sounds were introduced, both instructors discussed with the students what their

mouths looked like and paired the new sounds with mouth pictures to visually represent

the sounds, but neither instructor included mirrors in their instruction (0% of observed

sessions for Instructor 1 and Instructor 2 as measured by Record of Program Delivery

form).
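As a point of reference for how percentages of this kind can be derived from the Record of Program Delivery data, the sketch below uses a hypothetical data structure (not the study's actual records) in which each observation of a component is scored present, absent, or not applicable, with not-applicable observations excluded from the denominator.

    # Hypothetical illustration: each observation of a program component is
    # True (present), False (absent), or None (not applicable); N/A
    # observations are excluded from the denominator.
    def presence_percentage(observations):
        scored = [o for o in observations if o is not None]
        return 100 * sum(scored) / len(scored) if scored else None

    # Example: present in 11 of 12 applicable observations.
    print(round(presence_percentage([True] * 11 + [False] + [None] * 2)))  # 92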

Student engagement, as measured by the Record of Program Delivery form, was

another key program element that was calculated for each instructor. During each

observation, it was recorded whether all students were observed to be actively engaged in

the learning process, and the presence of this component was recorded only if all students

appeared to be engaged in the instruction. For Instructor 1, this occurred during 8% of the

observations; for Instructor 2, this occurred during 13% of the observations. More

detailed information regarding student engagement was collected using the Student

Engagement/On-Task Behavior form, and these data will be presented later in

the Delivery of Instruction section.

Instructor differences were present in the incorporation of another key program

element: Tracking. Tracking refers to the process of sequencing mouth pictures or









colored blocks to represent the number, order, and sameness of the sounds heard.

This task can be accomplished with isolated sounds, single syllable words, and

multisyllable words. "Tracking develops the students' ability to compare and contrast

sequences of speech sounds and represent them visually" (Lindamood & Lindamood,

1998, p. 93). While Tracking in the LiPS program is traditionally completed with colored

blocks and felts to represent individual sounds and syllables, it can be accomplished with

the mouth pictures for younger or severely impaired students. For example, with mouth

pictures, the examiner would say a series of sounds (e.g., /p/ /b/ /d/), and the student(s)

would identify the mouth pictures that represented that sequence of sounds (i.e., lay out

the mouth pictures to represent the lip popper, lip popper, tongue tapper).

It should be noted that Tracking with mouth pictures is essentially Spelling prior to

the introduction of letter symbols. In other words, mouth pictures are used to represent

the order and sameness of sounds prior to the introduction of symbols. Once letter

symbols are introduced, these two tasks become different. Tracking continues to hone in on those phonemic awareness skills, focusing the students' attention on the sounds that they hear and making the changes in the mouth pictures (or colored blocks) only where they hear the changes in the presented sequence (e.g., from /p/ /b/ /d/ to /p/ /p/ /d/, only the second sound changes). Once letter symbols are introduced, Spelling

then taps into different skill sets, assessing the student's ability to represent sounds with

letters. For both instructors, Tracking was only completed with mouth pictures. Colored

blocks were never employed for Tracking by either instructor.

Certain instructional elements of the Tracking process were specifically noted

during observations using the Record of Program Delivery form. First, when appropriate,









both instructors questioned the students about the labels for the mouth pictures. This

occurred during 100% of the observations during Tracking. Another observation worth

noting is the instructor variability in the selection of real and nonsense words employed

for Tracking, Reading, and Spelling. The necessity of incorporating nonsense, or pseudo

words, into LiPS instruction is explicitly stated in the program manual. Especially in the

beginning stages of the program, the inclusion of nonsense words allows the student to

experience more extensive Tracking, Reading, and Spelling practice with two sound,

consonant-vowel and vowel-consonant combinations than if the student were limited to

real words alone (Lindamood & Lindamood, 1998). Instructor 1 incorporated real and

nonsense words into her instruction all of the time for one classroom and half of the time

for the other classroom (for the other half of this instructional time, she sometimes used only real words and sometimes only nonsense words). Instructor 2 used this second

approach and presented both real and nonsense words to the students for Tracking,

Reading, and Spelling half of the time in her respective two classrooms. Lastly, regarding

Tracking, Reading, and Spelling, neither instructor employed any means of assessing

student mastery prior to introducing new material.
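The practice advantage of nonsense words can be illustrated with a short sketch. The Python fragment below simply enumerates two-sound consonant-vowel and vowel-consonant combinations from small, illustrative sound sets; the inventories are assumptions and do not reflect the program's full sequence.

    # A minimal sketch, assuming small illustrative sound sets, of why
    # nonsense words expand two-sound practice beyond real words alone.
    from itertools import product

    consonants = ["p", "b", "t", "d", "f", "v"]   # hypothetical subset
    vowels = ["ee", "o", "oe"]                    # hypothetical subset

    cv_items = [c + v for c, v in product(consonants, vowels)]  # e.g., "pee"
    vc_items = [v + c for v, c in product(vowels, consonants)]  # e.g., "eep"
    print(len(cv_items) + len(vc_items), "two-sound practice items")
    # 36 items, most of them nonsense words rather than real English words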

Even the process of Tracking was observed to look different in the respective

classrooms of the two instructors. The LiPS manual suggests that, while Tracking can be

introduced to a whole class of students, intensive practice should be conducted in small

groups or individually "to assure attention to individual differences in perceptual

difficulty" (Lindamood & Lindamood, 1998, p. 94). The way that this task was

implemented differed by instructor. Instructor 1 employed two different means to

accomplish Tracking during her lessons. At times, she would complete Tracking









classwide. Using mouth pictures printed on 8 1/2" by 11" sheets of paper, she would

provide a series of sounds or a given word and have students stand in front of the larger

group holding the mouth pictures that represented their sounds. For example, if the

sounds were /p/ /f/ /k/, three students would stand holding their respective mouth

pictures. If the sequence of sounds changed from /p/ /f/ /k/ to /t/ /f/ /k/, the first student

would be seated and another student would take her place with the new mouth picture to

represent the sound that changed. At other times, Instructor 1 would have the students

complete Tracking with their own individual sets of mouth pictures. Instructor 2 only

completed Tracking classwide. For Instructor 2, no student completed his or her own

individual Tracking chains.

A section of the Record of Program Delivery form was devoted to monitoring Error

Handling, or the ways in which the instructors offered verbal feedback to the students

during instruction. An important instructional element included under Error Handling that

was considered during observations of instruction was the use of "responding to the

response" (Lindamood & Lindamood, 1998, p. 14). The overarching goal for instruction

is to foster independence in reading and spelling, and this technique allows students to self-monitor and self-correct their work. "You cannot tell this information to your

students; you need to ask them questions and ask them to do things so that they use their

own sensory-cognitive systems to discover information and arrive at concepts"

(Lindamood & Lindamood, 1998, p. 47). According to Lindamood and Lindamood

(1998), this is the most critical element in the interactions between the instructor and the

students at every level of the program. As reported in Table 2, both instructors were

observed to consistently incorporate the "responding to the response" technique in their

instruction (Instructor 1 = 100% of observations, Instructor 2 = 94%). For example, on

one occasion, when a student responded incorrectly to a question, Instructor 2 redirected

the student back to the mouth pictures hanging at the front of the room and used a series

of questions to guide the student to the desired response.

Another program element related to Error Handling that was noted to occur

infrequently during instruction involved the level of questioning instructors included

when the students' responses were correct. It is deemed important to question students

about their responses and decisions regardless of their accuracy in order to promote self-monitoring and self-correcting. As stated in the LiPS manual (Lindamood & Lindamood, 1998), "Questioning students only when they are wrong gives them a set toward self-doubt and impulsive changing of answers when questioned" (p. 419). As measured by the

Record of Program Delivery form and noted in Table 2, both instructors included this

component inconsistently (during 8% of the observations for Instructor 1, 31% of the

observations for Instructor 2).

Program paths

Outlined in the LiPS Trainer's Manual are two distinct program paths, Vertical and

Horizontal, which can be followed in introducing new concepts to students as they move

through the program (Lindamood & Lindamood, 1998, p. 16). The content of each path

remains the same; variation occurs only in the sequence of concepts introduced. The

Vertical Path allows for the presentation of three consonant pairs and three vowels, then

moves the student quickly into Tracking, Reading and Spelling with these sounds. The

Horizontal Path presents all of the consonant sounds first, then the vowel sounds, and

then introduces Tracking, Reading, and Spelling with all of the sounds. Figure 1 offers a

visual depiction of the first few program elements as they would be introduced for each

path. The manual suggests the Vertical Path for young students (Lindamood &

Lindamood, 1998), such as those in this study.

Figure 1. Vertical Program Paths (Recommended). [Flowchart: Setting the Climate for Learning; Discover/Label 1st 3 Consonant Pairs; Track Isolated Consonants; Discover Vowel Circle/Label 3 Vowels; Track/Read/Spell Simple Syllables; Discover/Label Remaining Consonant Pairs]


Figure 2. Horizontal Program Paths. [Flowchart: Setting the Climate for Learning; Discover/Label 1st 3 Consonant Pairs; Discover/Label Remaining Consonant Pairs; Discover/Label Other Consonant Groups; Discover Vowel Circle/Label 3 Vowels]

Instructor 1

Instructor 1 chose the Vertical Path to introduce new concepts to the kindergarten

students. She moved through the Vertical Path, introducing new concepts in the following

manner: After setting the climate, this instructor introduced four consonant pairs (Lip

Poppers, Tongue Tappers, Tongue Scrapers, Lip Coolers) followed by three vowel

sounds (/ee/, /o/, /oe/), then proceeded to Tracking/Spelling and Reading with mouth

pictures. Next, two new consonant pairs were introduced (Skinny Air, Fat Steady Air),

followed by the vowel sounds /ae/ and /oe/. One consonant pair, Fat Pushed Air, was

omitted altogether. Subsequent to Tracking, Reading, and Spelling with the above

mentioned sounds, the Tongue Cooler and Tongue Lifter were introduced last, and

Tracking, Reading, and Spelling resumed with all of these sounds. Instructor 1 chose to

introduce letter symbols to the students at Lesson 6. As mentioned previously, Tracking

was only ever completed with the mouth pictures; blocks for Tracking were never

introduced. However, Instructor 1 did mention in her final instructor interview that she

would have incorporated blocks for Tracking had she been implementing this program

one-on-one. She expressed that small manipulatives were difficult to manage with the

larger group of students.

Instructor 2

Instructor 2 selected the Horizontal Path to introduce new concepts to the students.

Instructor 2 moved through the Horizontal Path, introducing new concepts in the

following manner: First, all of the consonant pairs, or "brothers," were presented (i.e.,

Lip Poppers, Tongue Tappers, Tongue Scrapers, Lip and Tongue Coolers, Skinny Air,

Fat Steady and Pushed Air sounds). Next, the three "cousins" were presented (i.e.,

Windy, Nose, Tongue Lifters). Then, the vowels /ee/, /o/, and /oo/ were introduced.

Lastly, students completed Tracking (mouth pictures only), Reading, and Spelling with

these sounds. Letter symbols were introduced to the students at Lesson 12. Similar to

Instructor 1, blocks for Tracking were never introduced during the course of program

implementation, and, during the instructor interviews, she made no mention of a desire to

include this component in her instruction.

Delivery of Instruction

Decisions based on needs of classroom teacher and school

The instructors negotiated with each of the four classroom teachers regarding how

the LiPS intervention would be delivered to the students. Therefore, the days and times of

instruction varied by school, as well as the total instructional time across the intervention

period. Table 3 presents how the LiPS intervention was delivered across instructors and

classrooms. While both instructors spent similar amounts of time in the classrooms, with

Instructor 1 averaging 15 total hours and Instructor 2 averaging 14 total hours per

classroom across the intervention period, the way that the instruction was delivered

varied by school site. For example, Instructor 1 delivered the LiPS intervention in her

respective classrooms three to four times per week in twenty-minute sessions. Instructor

2 spent forty-five minutes in each classroom one day per week. Furthermore, as reflected

in Table 3, the classroom teachers at Instructor 2's school site reviewed recently

introduced content with the students on days that Instructor 2 was not present. According

to the LiPS Trainer's Manual, "in a classroom situation, a formal work period and follow-up reinforcement should be provided daily for a minimum of 40 to 50 minutes if

competency in Tracking, Spelling, and Reading is desired into the complex syllable level

within 2 to 3 months" (Lindamood & Lindamood, 1998, p. 18). Due to the grade level of

the intervention students (i.e., kindergarten), the children were not expected to reach the

complex syllable level. Regardless, the intervention was not intensive enough at either school site to meet the criterion of 40 to 50 minutes daily.

Table 3. Description of Instruction: Sessions, Time, and Delivery

                            Instructor 1               Instructor 2
                         Class 1      Class 2       Class 3      Class 4
Whole class sessions       20            9            14           14
Small group sessions       19           --            28           28
  (concurrent)
Switch to small group      --           42            --           --
  sessions
Session length           20 min       20 min       30 min*      30 min*
Time in whole class     6 hr 40 min   3 hr          7 hr         7 hr
Time in small group     6 hr 20 min   14 hr         7 hr         7 hr

*30-minute sessions consisted of 15 min of whole class instruction followed by
15 min of small group table activities; separate small group sessions were 15 min.
Notes. Class 1: 3 times per week, 2 days whole class and 1 day with a small group
of 5. Class 2: 4 times per week, whole class in 2 groups of 10, then a switch to a
small group of 7. Class 3: 1 day per week, after which the teacher reviewed content
(~5 hours); on some occasions during observations, the instructor divided the class
in 1/2 for "whole class" instruction. Class 4: 1 day per week, after which the
teacher reviewed content; on some occasions during observations, the instructor
divided the class in 1/2 for "whole class" instruction.

As reflected in Table 3, the two instructors also varied in the delivery of the

intervention relative to group size (i.e., whole classroom versus small group). In fact,

Instructor 1 delivered the LiPS intervention to her two respective classrooms differently

based on the previously established curricular organizations of the classroom teachers.

In Classroom 1, Instructor 1 spent two of her days each week engaged in whole class instruction and one day per week with a small group of five students whom both she and the classroom teacher deemed most in need of additional instruction. In Classroom 2,

Instructor 1 introduced the LiPS program to the whole class. Then, after nine sessions,

she switched to small group instruction and continued to work only with the seven

students deemed most in need of the intervention by the classroom teacher. These seven

students continued with the intervention during their center time while the remainder of

the students in the class attended other centers during that time. For Instructor 2, the LiPS

intervention was delivered similarly across her two respective classrooms. She divided

each class in half. Then, half the students worked with Instructor 2 to learn new content

while the remainder of the class completed table activities to review previously learned

material. After 15 minutes, the groups switched (i.e., 15 minutes of instruction for those

previously working on small group table activities, 15 minutes small group table

activities for those previously engaged in instruction with Instructor 2).

The school sites also varied in their plans for the duration of the LiPS intervention,

and this affected the decisions the instructors made regarding the delivery of instruction.

At the school site of Instructor 1, the classroom teachers had no specific time frame for

program implementation or duration of instruction. Instructor 1 had discretion to continue

the intervention as long as she deemed necessary and appropriate. At the second school

site, the teachers desired to complete the LiPS program by the winter of the school year

(i.e., February) and introduce a different intervention program to the students at that time.

Therefore, Instructor 2 anticipated that she would have a specific number of weeks at the

outset to work with the students.

While both school sites agreed to have the instructors come into the classrooms to

work with the kindergarten students, the school in which Instructor 2 was working was

more enthusiastic about the process. Furthermore, the level of classroom teacher

involvement varied by school, and by classroom to some extent. At Instructor 2's school

site, both teachers desired to learn the program themselves as their students were

introduced to it. These two teachers reviewed the LiPS program manual and closely

followed the students' instruction. Additionally, both teachers at this site prominently

displayed large mouth pictures in their classrooms and independently reviewed

previously introduced material with the students on the days that Instructor 2 was not

teaching. In contrast, the classroom teachers at the school site of Instructor 1

demonstrated less interest in learning the program themselves and were available during

LiPS instruction primarily for classroom monitoring and management of student

behavior. During the instructor interview conducted only a few days into

the intervention period, Instructor 1 expressed some frustration with the limited amount

of classroom teacher involvement and support in the process. Specifically, she noted that

the teachers did not display the mouth pictures in the classroom or reinforce the LiPS

content with students at times when Instructor 1 was not in the classroom. Additionally,

Instructor 1 stated that it was difficult for her to bring her materials to the different

classrooms each day and negotiate space in the rooms. For example, she noted that even

finding markers and space on the board to write were difficult on some days. During

instructor interviews that were conducted throughout the intervention process, Instructor

2 did not mention any classroom or teacher factors that affected her choices in the

delivery of the LiPS intervention.

Decisions based on needs of students

During the course of the intervention period, both instructors made decisions

regarding the delivery of instruction based on student needs. First, decisions about the

sizes of the groups receiving instruction in the various classrooms changed during the

intervention period. For example, in Classroom 1, Instructor 1 previewed new material

with a small group of five students (deemed by herself and the classroom teacher as most

at-risk or in need of additional instruction) the day before the content was introduced to

the whole class of students. In another instance, Instructor 2 modified her LiPS

instruction to incorporate small group table activities to reinforce newly introduced

material. The decision of Instructor 2 to divide each classroom of students into two

groups was also made after she introduced new content to the entire classes of students

initially; she expressed that behavior management issues with whole classrooms of

students made it difficult to introduce new material effectively. Therefore, she modified

the instructional arrangements for her two classrooms and how she delivered LiPS

instruction based on student needs.

Second, regarding the pace of instruction, Instructor 1 had more discretion to

introduce material slowly and based on her perceptions of student mastery. Instructor 2,

however, was not able to consider student needs as much in her decisions regarding when

to introduce new content. From the outset and as mentioned previously, Instructor 2 was

aware that she had a specific time period in which to deliver the LiPS intervention to the

students at this school site. Therefore, Instructor 2 chose to introduce a new concept to

her students at each session and based this decision more on the needs of the teachers.

She did express, however, in the instructor interview at the outset of the intervention that

she desired a slower pace and recognized that it was not feasible in the classroom setting

in the time frame that was allotted for this intervention. Additionally, Instructor 2

mentioned that, if she were implementing this program one-on-one, she would have

followed more closely with the pace of the students in introducing new material. With the

larger groups of students in her classrooms, Instructor 2 expressed that she attempted to

aim the pace of her instruction to the "middle" students, while at the same time reviewing

previously introduced material and introducing something new each session. Regarding

the pace of instruction, Instructor 1 voiced similar comments during the interviews. Even

from the outset of the intervention, Instructor 1 felt that she would be further along in the

program had she been working with a student one-on-one. Regardless of the pace of the

students in each classroom, Instructor 1 stated that she attempted to keep both classrooms

at the same instructional pace.

While student engagement data were collected throughout the intervention period for research purposes, neither instructor employed a specific behavior management system or kept written records of student progress during the intervention. However,

during interviews, both instructors recognized from the outset that management of

student behavior was one of the most difficult aspects of implementing the LiPS

intervention with whole classes of students. In fact, as mentioned previously, this was one

reason Instructor 2 modified her instructional arrangements only a few sessions into the

intervention. Additionally, both instructors were able to elicit assistance from the

classroom teachers to manage student behavior, at least to some extent or on some

occasions.

While neither instructor collected specific data on student engagement, these data were collected throughout the intervention period by the primary

investigator using the Record of Program Delivery and the Student Engagement/On-Task

Behavior forms. The Record of Program Delivery form offered information regarding

whether all students were engaged in the LiPS instruction for a given observation period.

As mentioned previously, for Instructor 1, this occurred during 8% of the observations;

for Instructor 2, this occurred during 13% of the observations.

However, more detailed information was also collected using the Student

Engagement/On-Task Behavior form. With this form, a time sampling method was used

to record the number of students engaged in instruction at designated time intervals.

During each observation, the number of students looking at the instructor at the end of

each five-minute time period was recorded. The number of students looking at the

instructor was considered the best means of quantifying and recording student

engagement in a concrete, observable way. From this information, percentages were

calculated for students engaged based on the number of students in attendance during

each observational period, and an average was calculated across classrooms at each

school site. Similarities were noted across school sites. Table 4 displays the average

percentages of students engaged in the instruction in each of the four classrooms during

the intervention period.

Table 4. Percentage of Student Engagement by Instructor
Classroom 1 Classroom 2
Instructor 1 77% 73%
Instructor 2 72% 83%

The percentages across instructors regarding student engagement were similar and

generally consistent. For Instructor 1, percentages based on the Student Engagement/On-Task Behavior form ranged from 59 to 85 percent for Classroom 1 and 53 to 87 percent

for Classroom 2. For Instructor 2, percentages ranged from 50 to 87.5 percent for

Classroom 1 and 72 to 95 percent for Classroom 2.
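For readers interested in the computation, the time-sampling arithmetic can be summarized in a brief Python sketch. The counts and attendance figure below are hypothetical; the calculation simply divides the number of students looking at the instructor by the number in attendance at each five-minute mark and averages the resulting percentages.

    # A sketch of the time-sampling computation described above; the
    # looking counts and attendance figure are hypothetical.
    def engagement_percentages(looking_counts, attendance):
        """Percentage of students engaged at each five-minute sampling point."""
        return [100 * count / attendance for count in looking_counts]

    per_interval = engagement_percentages([14, 12, 15], attendance=18)
    observation_mean = sum(per_interval) / len(per_interval)
    print(f"Mean engagement for this observation: {observation_mean:.0f}%")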

Lastly, through the formal instructor interviews that were conducted throughout the

intervention period, more information was gleaned regarding the instructors' views on

how they were altering or tailoring their instruction to meet the needs of the students.

Both instructors mentioned incorporating activities involving movement in order to

involve more participants and maintain attention to the tasks. The instructors performed

such activities as Spelling with large mouth pictures and Reading and Spelling on a large

dry erase board. Additionally, both instructors discussed the usefulness of maintaining

close proximity to struggling students during instruction. Regarding treatment integrity, it

should be noted that Instructor 2 specifically stated during the interview that was

conducted midway through the intervention period that she would have adhered more

closely to the LiPS protocol, or manual, had she been implementing this program one-on-

one.

Decisions based on training and experience of instructors

Because most of the instructors' previous experiences with the LiPS program were

in a one-on-one setting, the instructor interviews conducted at the beginning, middle, and

end of the intervention period allowed them to reflect on how their instruction might be

different in this classroom setting than it would be if they were working one-on-one with

students. As gleaned from the Initial Instructor Interviews conducted with the instructors

prior to program implementation, the instructors had differing training and experiences

with the LiPS program, although the amount of experience each had with the program

was similar. Training for Instructor 1 in the LiPS program was included in her graduate

coursework and involved a combination of live and videotaped instruction followed by

clinical work that was supervised by a professional trained in the program. Instructor 2

had no formal training in the LiPS program. She had purchased the program kit, read the

manual, and reportedly taught herself the program. Subsequently, she attended training

in other Lindamood-Bell programs, and those trainings involved a discussion of the LiPS

program.

Regarding their experiences with the program, both instructors were speech-language pathologists and had previous experience implementing the program one-on-one in both clinical and school settings. Clinically, Instructor 1 had worked with several

clients, including children and adults, whom she had taken through the program. In the

year prior to this study, Instructor 1 spent one semester teaching the LiPS program to

small groups of third through fifth graders. In the Initial Instructor Interview, Instructor 2

reported that she had approximately twelve years of experience using elements of the

program in schools with individual children ranging in age from five to twelve.

Additionally, in the past, Instructor 2 had worked with small groups of kindergarten

students teaching components of the program. Instructor 2 stated that she had never

completed the LiPS program from start to finish with a student. Both instructors reported

limited experience teaching larger groups or whole classrooms of students in other

reading and writing curricula prior to this study. Overall, Instructor 1 had more

rigorous training and supervision in teaching the LiPS program, while Instructor 2 had

more experience teaching the program to students in the schools.

As can be seen in Table 2, some differences were noted between the two instructors

in the frequency of occurrence of some critical program elements. For example, the

frequency of questioning the students' responses even when correct responses were

provided varied by instructor (Instructor 1 = 8%, Instructor 2 = 31%). While both

instructors employed this teaching technique infrequently during instruction, Instructor 2

used this strategy to encourage student self-checking on occasion. For example,

Instructor 2 had the students cover their ears to confirm if a sound was quiet or noisy. In

addition, the frequency with which the instructors avoided providing correct answers for

students having difficulty varied (Instructor 1 = 27%, Instructor 2 = 88%). Often,

Instructor 1 would state the correct answer if a particular student was having difficulty or

she would elicit the answer from another student. Instructor 2 tended to remain with the

student having difficulty, leading them to the desired response, which is recommended in

the program manual (Lindamood & Lindamood, 1998, p. 419).

One-On-One Implementation

As mentioned previously, observations were conducted in a clinical setting where

clinicians worked one-on-one with individual students. The purpose of this activity was

to offer a comparison of what LiPS should look like in a clinical setting where the

program does not need to be modified or adapted to meet the needs of a larger group of

students, and it could be implemented as it was intended or designed based on the

program manual.

The particular setting where the one-on-one observations were conducted was a

private center offering remedial services to children and adults with learning difficulties.

Individuals seeking assistance undergo a comprehensive assessment, and interventions

are designed based on the particular needs of each person. The LiPS program is one of a

number of academic interventions or programs that are offered at this private center. The

two instructors observed at this facility underwent extensive training and supervision in

the LiPS program and had a combined total of approximately ten years of experience

working with students using the Lindamood programs.

Table 5 displays the number of observations that were conducted in the one-on-one

setting. Similar to the larger group observations, observations were conducted with two

instructors in the clinical setting. Moreover, the goal was to observe the program

presentation in a one-on-one instructional setting until stability across observations was

achieved. In other words, it was important that the observational data accurately reflect

typical behaviors or responses in this setting. Therefore, a total of twelve observations

were conducted across instructors in this setting.

Table 5. Number of Observations
Instructor 1 Instructor 2
Record of Program Delivery 6 6
Error Handling 6 6

The intent of conducting the observations in a one-on-one setting was to compare

the level of inclusion of key instructional components with the program as it was

designed (i.e., based on the program manual). While this program was originally

designed for clinical use with individual students and it was expected that treatment

integrity would be high in this setting, some variations or deviations from the LiPS

manual were anticipated during one-on-one observations as student differences exist and

instruction has to be modified. In other words, as the instructors worked to adapt the

instruction to their individual students, it was expected that the instructors would vary

some in their delivery of the LiPS program.

Table 6 displays the percentages by instructor for one-on-one instruction of the

inclusion of key program components as measured by the Record of Program Delivery.

Overall, certain program elements were consistently included in the sessions of both

instructors in the clinical setting. Both instructors in the clinical setting offered high

levels of the following key program components in their LiPS instruction: reviewing

previously introduced material, use of mirrors, following three specific steps in Tracking,

assessing student mastery, and incorporating various error handling techniques such as

responding-to-the-response and Socratic questioning.

There were, however, areas where the instructors differed from the LiPS manual or

from each other in their instruction as measured by the Record of Program Delivery form.

First, there were two program components where the instructors significantly differed

from the LiPS manual. The first is related to questioning the student about the label of the

sounds during Tracking (i.e., "I took out a Lip Popper, and replaced it with a Lip Cooler."

or "The new sound is a Lip Cooler.") Neither instructor working one-on-one with

students included this component with great frequency (Instructor 1 = 33%, Instructor 2 =

25%). While this program element may not be as critical as others based on the specific

needs of the students, it is nevertheless a component that is explicitly discussed in the

LiPS manual (Lindamood & Lindamood, 1998, p. 34) and was employed infrequently in

the clinical setting by both instructors.

A second program component that differed from the LiPS manual and was not

included to a high degree involved questioning students even when their responses were

correct. As mentioned previously, in order to promote self-monitoring and self-correcting

behaviors, the LiPS manual emphasizes questioning students regardless of the accuracy

of their responses (Lindamood & Lindamood, 1998). In this way, the students tend to

become less dependent on the instructor and more reliant on their own skills and

decision-making abilities. As measured by the Record of Program Delivery form and

noted in Table 6, both instructors included this component inconsistently (in 50% of the

observations for Instructor 1, 67% of the observations for Instructor 2).

Furthermore, while both instructors in the one-on-one setting tended to include

most of the measured elements of the LiPS program with similar frequency, they differed

from each other on two components. First, the instructors differed significantly in their

inclusion of real and nonsense words for Tracking, Reading, and Spelling (Instructor 1 =

33%; Instructor 2 = 100%). However, the lower percentage for Instructor 1 can be

attributed to her response to one of the students and the modification of the curriculum to

meet his needs. This particular student was older and had developed a great deal of sight word knowledge (i.e., had memorized many real words). Therefore, Instructor 1 included only nonsense words in the

beginning of his LiPS instruction to ensure that he had the opportunity to use the skills he

was learning to sound out new or unfamiliar words. Otherwise, the percentages gathered

on the Record of Program Delivery forms were similar across Instructors 1 and 2.

A second program element where the two instructors differed from each other in

their delivery of instruction was related to the use of the word "no." Instructor 1 avoided

the use of the word "no" during 100% of the observations, while Instructor 2 avoided this

word during only 33% of the observations. This difference may be attributed to

individual differences in the teaching styles of the two instructors. For example, in

avoiding the use of the word "no" during instruction, Instructor 1 was noted to use such

statements as, "Use your mirror. Do those sounds look the same?" and "That's not a bad

guess..."

Table 6. Record of Program Delivery, Percentages by Instructors for One-on-One Treatment

GENERAL                                                      Instructor 1  Instructor 2
T. reviews previously introduced material at beginning of
  session                                                        100%          100%
S. provided with/encouraged to use mirror when introduced
  to or practicing new sounds                                    100%          100%
All Ss. observed to be actively engaged in learning process       67%          100%

TRACKING, READING, SPELLING                                  Instructor 1  Instructor 2
S. instructed to follow 3 steps in Tracking (repeat words,
  touch & say, make change)                                      100%           75%
T. questions S. about label of sounds during Tracking             33%           25%
Real and nonsense words used in Tracking/Reading/Spelling         33%          100%
T. assesses S. mastery on T/R/S chains before new material
  introduced                                                     100%          100%

ERROR HANDLING                                               Instructor 1  Instructor 2
T. incorporates responding-to-response (allows student to
  self-correct)                                                  100%          100%
T. uses line of questioning to lead S. to desired response
  (Socratic)                                                     100%          100%
T. avoids use of word "no" when student's answer is not the
  expected one                                                   100%           33%
T. questions S. even when correct response provided               50%           67%
T. avoids providing correct answer for S. having difficulty      100%           67%

Summary of Descriptive Results

The purpose of this section was to describe how the LiPS program was delivered to

larger groups of students in kindergarten classrooms. Additionally, for comparative

purposes, data were presented regarding what the program looked like in a clinical setting

where clinicians worked with students one-on-one. While some LiPS program elements

were present across both settings, a number of differences existed in how this program

was implemented in the school versus clinical setting.

Certain program elements, as measured by the Record of Program Delivery form,

were present significantly more often in the one-on-one setting than in the classroom

setting. Table 7 displays the percentages for whole group versus one-on-one instruction

as measured by the Record of Program Delivery form. First, while mirrors were not

employed by either instructor in the classroom setting, both instructors in the one-on-one

setting consistently encouraged the use of mirrors for their students when introducing or

practicing new sounds. Second, during a majority of the instructional time in the clinical

setting, the students were instructed to use a specific process during Tracking (i.e., repeat

the old and new word, touch the blocks while stating the individual sounds, and make the

change with the blocks). In the clinical setting, Instructor 2 did not consistently have the

student touch and say the individual sounds with each new word, but she did have the

student state the change each time. In the classroom setting, the students were only

encouraged to make the changes that they heard. They were not encouraged or required

to complete the first two steps in the Tracking process.

Another important difference between the classroom and clinical settings was in the

monitoring of student progress. Clinicians in the one-on-one setting recorded individual

student performance on each task completed during each session. Students had to

demonstrate 80% or higher mastery of the material in order to move on to the next level

or receive new material. No specific records were kept regarding student progress or

mastery of the curriculum content in the classroom setting.

Lastly, differences were noted between the clinical and classroom settings in the

amount and type of questioning that was present. In the clinical setting, students were

questioned more frequently by the instructors, even when their responses were accurate.

Furthermore, instructors in the clinical setting were more inclined to avoid providing the

correct answers for the students and allowed the students to work toward the correct

answers via instructor questioning.

Table 7. Record of Program Delivery, Percentages for Whole Group versus One-on-One

GENERAL                                                      Whole Group  One-on-One
T. reviews previously introduced material at beginning of
  session                                                         95%         100%
S. provided with/encouraged to use mirror when introduced
  to or practicing new sounds                                      0%         100%
All Ss. observed to be actively engaged in learning process       10%          83%

TRACKING, READING, SPELLING                                  Whole Group  One-on-One
S. instructed to follow 3 steps in Tracking (repeat words,
  touch & say, make change)                                        0%          86%
T. questions S. about label of sounds during Tracking            100%          29%
Real and nonsense words used in Tracking/Reading/Spelling         65%          71%
T. assesses S. mastery on T/R/S chains before new material
  introduced                                                       0%         100%

ERROR HANDLING                                               Whole Group  One-on-One
T. incorporates responding-to-response (allows student to
  self-correct)                                                   98%         100%
T. uses line of questioning to lead S. to desired response
  (Socratic)                                                      93%         100%
T. avoids use of word "no" when student's answer is not the
  expected one                                                    81%          67%
T. questions S. even when correct response provided               17%          58%
T. avoids providing correct answer for S. having difficulty       50%          83%

It should be noted that one specific difference existed between the classroom and

clinical settings that was not reflected in the data collection forms but was noted by the

primary investigator during observations in both settings. This difference was in the

amount of work that was completed during each session. The amount of time devoted to

LiPS instruction in each session was similar across the classroom and clinical settings

(i.e., approximately 30 minutes per session). However, in the clinical setting, the

instructors managed to complete Tracking, Reading, and Spelling (typically ten words

each) during each session. In the classroom setting, often only one of these tasks was

completed. Therefore, students in the one-on-one setting had more practice with the tasks

than the students in the classroom setting.

In an effort to summarize the data, the researcher looked across all data sources

including the four observation instruments (Record of Program Delivery, Error Handling,

Opportunity to Respond, and Student Engagement/On-Task Behavior), instructor

interviews, and anecdotal observational notes. Table 8 reflects conclusions made by the

primary investigator regarding the treatment integrity maintained for instructors across

the classroom and clinical settings where LiPS was employed as compared to the

program as it was designed. A rating of low treatment integrity in either the classroom or clinical setting indicates that the component was demonstrated

infrequently or not at all. In other words, this instructional element appeared significantly

different from how the program was designed. Instructional elements noted as high in

treatment integrity were present on most or all occasions.

Table 8. Summary of Level of Treatment Integrity for Key Program Components Across Settings

                                             Classroom Setting       Clinical Setting
Instructional Elements                    Instructor 1  Instructor 2    One-on-One
Presence of key instructional materials
  (use of mirrors, incorporation of small
  mouth pictures, use of colored blocks)      Lo            Lo             Hi
Student engagement in learning process        Lo            Lo             Hi
Choice of program paths (vertical path
  is recommended for young students)          Hi            Lo             Hi
Tracking following a prescribed process       Lo            Lo             Hi
Formal assessment of student
  progress/mastery of concepts                Lo            Lo             Hi
Error handling techniques (e.g.,
  incorporation of responding-to-the-
  response and Socratic questioning)          Hi            Hi             Hi
Hi = high treatment integrity, Lo = low treatment integrity

On key instructional elements, or program components, a high degree of treatment

integrity was consistently maintained by the instructors in the clinical setting. In the

classroom setting, both instructors had similar amounts of previous experience with the

LiPS program and implemented the program as it was designed to similar degrees. Both

Instructor 1 and Instructor 2 in the classroom setting demonstrated low levels of

treatment integrity when adapting this program to teach larger groups of students.

However, Instructor 1 in the classroom setting did demonstrate some higher levels of

treatment integrity on certain key components, as can be seen in Table 8. For example,

Instructor 1 received a "high treatment integrity" rating based on her selection of the

vertical program path for LiPS implementation, as it was recommended in the program

manual for younger students. Finally, it should be noted that Instructor 1 also

attempted the use of small mouth pictures during instruction on one occasion. However,

she expressed that managing student behavior during this activity was very difficult as

each student was engaged in the task independently. Therefore, after this one attempt,

Instructor 1 discontinued the use of small mouth pictures and used larger mouth pictures

with the whole group working together to complete the task. This was just one of a

number of program elements that were modified or excluded as this program, which was

originally created for one-on-one use, was adapted to a classroom setting.

Student Outcome Data

The purpose of this research was to document and evaluate how the Lindamood

Phoneme Sequencing Program for Reading, Spelling, and Speech (LiPS) was

implemented in the schools with kindergarten students. This study included pretesting,

six months of intervention, and posttesting. The LiPS program was conducted in four

classrooms of kindergarten students by two different instructors at two school sites (i.e.,

one instructor at each school site working with two classrooms of students; 35 students at

School 1, 40 students at School 2). Pretest and posttest measures included the Lindamood

Auditory Conceptualization (LAC) test, the Phonological Awareness Composite of the

Comprehensive Test of Phonological Processing (CTOPP), the Word Identification and

Word Attack tasks from the Woodcock-Johnson Tests of Achievement (WJ-III), and the

Letter Naming and Phoneme Segmentation tasks from the Dynamic Indicators of Basic

Early Literacy Skills (DIBELS). The LiPS program served as a supplemental reading

intervention in these four kindergarten classrooms, offered in addition to the traditional

reading curriculum at each school.

Below are the results of student outcomes after six months of classroom

intervention using the LiPS program. The answers to three distinct research questions

were sought to determine the academic gains students made after exposure to this

curriculum. These questions included:

1. What gains do students demonstrate in reading after receiving instruction in the
LiPS program?

2. Do student academic gains differ on a measure more closely aligned with the LiPS
program (i.e., the LAC) as compared to other standardized, norm-referenced
measures?

3. Does student reading achievement differ significantly from instructor to instructor?

First, to analyze the results related to student outcomes, the academic gains of all

student participants collectively are examined. Then, student outcomes by school site,

and therefore by instructor, are considered. Lastly, an examination of student benchmarks

for certain measures offers a closer look at the data from some of the assessment

instruments that are more sensitive to the small changes in student achievement over the

intervention period.

Gains Demonstrated After LiPS Intervention for All Students

The raw score means and standard deviations at pretest and posttest for all students

across each measure are presented in Table 9. At pretest, the means ranged from 3.44 (SD

= 3.09) for the raw score of the Word Attack subtest of the WJ-III to 28.99 (SD = 17.13)

for the raw score of the Letter Naming task on the DIBELS. At posttest, the means

ranged from 5.56 (SD = 2.91) for the same Word Attack task to 47.47 (SD = 19.08) on

the LAC. When considering all participants collectively, positive gains from pretest to

posttest were achieved on all measures.

Table 9. Raw Score Means, Standard Deviations for Pretests/Posttests Across All Participants
Measure                        Pretest (n=75)     Posttest (n=72)
CTOPP Elision                  4.05 (2.97)        6.72 (3.48)
CTOPP Blending Words           4.71 (2.89)        9.68 (3.19)
CTOPP Sound Matching           6.79 (5.08)        12.63 (5.52)
LAC                            23.97 (17.36)      47.47 (19.08)
DIBELS Letter Naming           28.99 (17.13)      45.64 (15.89)
DIBELS Phoneme Segmentation    15.73 (13.17)      38.39 (14.26)
WJ-III Word Identification     15.56 (7.17)       22.47 (6.95)
WJ-III Word Attack             3.44 (3.09)        5.56 (2.91)

A two-way within-subjects analysis of variance (ANOVA) was conducted to assess

student gains across measures and over time (i.e., from pretest to posttest). The dependent

variable was the mean number of items correct on each measure across all participants.

The within-subjects factors were test with eight levels (Elision, Blending Words, and

Sound Matching from the CTOPP; the LAC; Letter Naming and Phoneme Segmentation

from the DIBELS; and Word Identification and Word Attack from the WJ-III) and time

with two levels (pretest and posttest). The main effect for time was statistically

significant, F (1, 71) = 480.93, p < .01. The main effect for test was also statistically

significant, F (3, 203) = 241.85, p < .01. Additionally, the interaction between time and

test was statistically significant, F (3, 243) = 78.35, p < .01. Therefore, when looking at

all students collectively, there was a statistically significant difference between mean test

scores from pretest to posttest (i.e., time) and between the means of at least one measure

(i.e., test). Furthermore, the statistically significant interaction indicates that the size of the gain from pretest to posttest differed significantly across measures.
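For illustration, an analysis of this form can be specified with the AnovaRM class in the Python statsmodels library. The sketch below assumes a hypothetical long-format data file with one row per student, measure, and testing occasion; the file and column names are assumptions, not the actual study data.

    # A sketch of a two-way within-subjects ANOVA (test x time) using
    # statsmodels; the data file and column names are hypothetical.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    scores = pd.read_csv("lips_scores_long.csv")  # columns: student, test,
                                                  # time, score (long format)
    model = AnovaRM(data=scores, depvar="score",
                    subject="student", within=["test", "time"])
    results = model.fit()
    print(results)  # F tests for test, time, and the test x time interaction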

Student Outcomes: A Comparison of Measures

To determine which measure or measures yielded the greatest academic gains from

pretest to posttest, follow-up procedures were conducted on the results of the ANOVA discussed previously. Of most interest were the gains achieved on the LAC, a

measure closely aligned with the LiPS reading intervention, compared to gains achieved

on the other assessment measures. Table 10 presents the mean test differences from

pretest to posttest for all students on each measure. Mean gains from pretest to posttest

ranged from 2.44 for the Word Attack task of the WJ-III to 24.24 on the LAC. In other

words, the mean increase from pretest to posttest for the WJ-III Word Attack task was

2.44 points, and the mean increase from pretest to posttest on the LAC was 24.24 points.

Mean gains from pretest to posttest on all measures were positive.

Table 10. Estimated Marginal Means
Measure Mean Test Difference Standard Error
(Posttest-Pretest)
CTOPP Elision 2.85 .27
CTOPP Blending Words 5.04 .34
CTOPP Sound Matching 6.07 .62
LAC 24.24 1.93
DIBELS Letter Naming 17.54 1.51
DIBELS Phoneme Segment. 22.92 1.61
WJ-III Word Identification 7.50 .49
WJ-III Word Attack 2.44 .24


In order to adjust for multiple comparisons and control for familywise error rate,

Bonferroni pairwise comparisons were calculated. Results of these comparisons are

presented in Table 11. Mean test differences were employed to determine if the gains

from pretest to posttest on the LAC were statistically different and greater than the mean

gains achieved on the other measures administered to participants. For all participants,

gains from pretest to posttest were statistically significant and greater on the LAC than on

the subtests of the CTOPP (Elision, Blending Words, and Sound Matching) and the tasks

from the WJ-III (Word Identification and Word Attack). The test differences, or gains

from pretest to posttest, on the LAC were not statistically significantly different from

those on the DIBELS Letter Naming and Phoneme Segmentation tasks. Student outcomes

relative to the LAC and DIBELS tasks will be discussed in greater detail later in this

chapter.

Table 11. Bonferroni Pairwise Comparisons
Measures                                 Mean Difference    Standard Error
LAC vs. CTOPP Elision*                        21.39              1.99
LAC vs. CTOPP Blending Words*                 19.19              1.90
LAC vs. CTOPP Sound Matching*                 18.17              1.99
LAC vs. DIBELS Letter Naming                   6.69              2.20
LAC vs. DIBELS Phoneme Segmentation            1.32              2.36
LAC vs. WJ-III Word Identification*           16.74              1.87
LAC vs. WJ-III Word Attack*                   21.79              1.93
*p < .01
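As an illustration of the adjustment, Bonferroni-corrected paired comparisons of gain scores can be computed as in the Python sketch below. The gain-score arrays are hypothetical, and the correction shown (multiplying each p-value by the number of comparisons) is one standard implementation, not necessarily the exact procedure used here.

    # A sketch of Bonferroni-adjusted paired comparisons of gain scores;
    # the gain arrays are hypothetical per-student (posttest - pretest)
    # differences for two measures.
    from scipy import stats

    def bonferroni_paired(gains_a, gains_b, n_comparisons=7):
        """Paired t test with a Bonferroni-adjusted p-value."""
        t, p = stats.ttest_rel(gains_a, gains_b)
        return t, min(p * n_comparisons, 1.0)

    lac_gains = [28, 19, 31, 22, 25]          # hypothetical
    word_attack_gains = [3, 1, 4, 2, 2]       # hypothetical
    t, p_adj = bonferroni_paired(lac_gains, word_attack_gains)
    print(f"t = {t:.2f}, Bonferroni-adjusted p = {p_adj:.4f}")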

Student Outcomes: Differences Between Instructors

As stated previously, the LiPS program was employed at two different school sites

with two different instructors. Each instructor taught the program in two kindergarten

classrooms. The means and standard deviations at pretest and posttest across each

measure for students at each school site (i.e., Instructor 1, Instructor 2) are reported in

Table 12. Additionally, the table includes means and standard deviations by classroom.

Table 12. Means and Standard Deviations by Instructor and Classroom

Instructor #1
Measure                  Class 1 Pre    Class 1 Post   Class 2 Pre    Class 2 Post
                         (n=17)         (n=17)         (n=18)         (n=17)
CTOPP Elision            3.88 (2.57)    6.59 (4.42)    4.61 (3.91)    7.82 (3.34)
CTOPP Blending Words     5.29 (3.37)    9.53 (3.50)    5.89 (2.89)    10.00 (2.62)
CTOPP Sound Matching     7.41 (4.99)    11.94 (5.79)   6.72 (4.76)    13.06 (4.70)
LAC                      25.70 (17.58)  42.53 (24.80)  20.50 (18.18)  44.88 (13.61)
DIBELS Letter Naming     21.76 (16.73)  36.12 (17.77)  33.39 (21.02)  43.88 (14.72)
DIBELS Phoneme Seg.      17.35 (13.00)  36.24 (12.54)  20.89 (13.01)  43.06 (5.88)
WJ-III Word Ident.       13.18 (6.11)   19.00 (6.21)   16.78 (10.10)  21.82 (6.32)
WJ-III Word Attack       2.65 (2.06)    5.41 (2.98)    3.72 (4.93)    5.47 (3.24)

Instructor #2
Measure                  Class 3 Pre    Class 3 Post   Class 4 Pre    Class 4 Post
                         (n=20)         (n=20)         (n=20)         (n=18)
CTOPP Elision            4.25 (2.97)    6.90 (2.97)    3.50 (2.37)    5.61 (3.05)
CTOPP Blending Words     4.75 (2.38)    10.55 (3.35)   3.10 (2.34)    8.56 (3.09)
CTOPP Sound Matching     6.20 (5.19)    12.55 (5.48)   6.90 (5.61)    12.94 (6.38)
LAC                      24.65 (16.09)  50.55 (21.35)  24.35 (18.54)  51.17 (14.20)
DIBELS Letter Naming     28.95 (9.66)   49.75 (9.91)   31.20 (18.66)  51.72 (17.06)
DIBELS Phoneme Seg.      16.85 (13.93)  40.10 (18.25)  8.60 (10.25)   34.11 (15.73)
WJ-III Word Ident.       16.45 (6.14)   24.25 (6.50)   15.60 (5.70)   24.39 (7.77)
WJ-III Word Attack       3.75 (2.45)    5.70 (2.05)    3.55 (2.28)    5.61 (3.50)

Two separate analyses were conducted in order to assess differences between

instructors on student outcomes. These included analyses of covariance (ANCOVA)

procedures and the calculation of effect sizes. In subsequent sections, the results of these

analyses are presented.

Analyses of Covariance

First, ANCOVA procedures were conducted on all academic variables using the

pretest score for each measure as the covariate and comparing the posttest score for

the students of Instructor 1 and Instructor 2. The independent variable, instructor, included two

levels: Instructor 1 and Instructor 2. The dependent variables were the mean test scores

on each measure at posttest, and the covariates were the mean test scores at pretest. The

resulting ANCOVA F values appear in Table 13. Data are reported separately for each

measure administered. Statistically significant differences between the students of Instructor 1 and Instructor 2 were noted for posttest scores on the DIBELS Letter Naming task and the WJ-III Word Identification task, where Instructor 2's students performed better on both tasks. For all other measures, no statistically significant

differences were noted.

Table 13. Student Differences at Posttest By Instructor
Measure Instructor 1 Instructor 2 F Effect Size
Mean (SD) Mean (SD)
CTOPP Elision 7.21 (3.91) 6.29 (3.04) 2.81 .04
CTOPP Blending Words 9.76 (3.06) 9.61 (3.34) 1.75 .03
CTOPP Sound Matching 12.50 (5.22) 12.74 (5.85) .19 .00
LAC 43.71 (19.73) 50.84 (18.08) 2.59 .04
DIBELS Letter Naming* 40.00 (16.54) 50.68 (13.61) 10.66 .13
DIBELS Phoneme Segment. 39.65 (10.25) 37.26 (17.14) .17 .00
WJ-III Word Identification** 20.41 (6.33) 24.32 (7.03) 4.23 .06
WJ-III Word Attack 5.44(3.07) 5.66 (2.79) 2.47 .04
*p<.01, **p<.05
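For illustration, an ANCOVA of this form can be specified with a regression formula in the Python statsmodels library, entering the pretest score as a covariate alongside the instructor factor. The data file and column names below are assumptions.

    # A sketch of the ANCOVA comparing instructors on a posttest score with
    # the pretest score as the covariate; file and columns are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    outcomes = pd.read_csv("lips_outcomes.csv")  # columns: instructor,
                                                 # pretest, posttest
    model = smf.ols("posttest ~ pretest + C(instructor)", data=outcomes).fit()
    print(anova_lm(model, typ=2))  # F test for instructor, pretest-adjusted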

Effect Sizes

Second, in addition to the ANCOVAs, effect sizes were calculated to assess student

outcomes. Also included in Table 13 are the effect sizes examining instructor differences

for each measure at posttest when pretest was held constant. Unlike the ANCOVAs, which are sensitive to sample size, effect sizes were calculated to determine whether differences existed on outcome measures between instructors independent of sample size.

The calculated effect sizes were uniformly small across posttest measures when pretest was held constant. In other words, when

correcting for pretest variability, no meaningful differences between instructors on

posttest measures were identified. Interestingly, effect sizes ranged from .00 for the

DIBELS Phoneme Segmentation task to .13 for the DIBELS Letter Naming task. While

previously reported ANCOVA results identified some statistically significant differences between instructors, specifically for the DIBELS Letter Naming and WJ-III Word Identification tasks, no practically significant differences were noted because all calculated effect sizes were small.
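The study does not specify the effect-size index used; one common choice for an ANCOVA term is partial eta squared, sketched below as a continuation of the hypothetical ANCOVA fragment above. Treat both the index and the code as illustrative assumptions.

    # Partial eta squared for the instructor term, continuing the
    # hypothetical ANCOVA sketch above (an assumed, not confirmed, index):
    # SS_effect / (SS_effect + SS_error).
    from statsmodels.stats.anova import anova_lm

    aov = anova_lm(model, typ=2)  # 'model' comes from the ANCOVA sketch above
    ss_effect = aov.loc["C(instructor)", "sum_sq"]
    ss_error = aov.loc["Residual", "sum_sq"]
    partial_eta_sq = ss_effect / (ss_effect + ss_error)
    print(f"Partial eta squared for instructor: {partial_eta_sq:.2f}")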

Student Progress: A Closer Look

Benchmark Comparisons

In the analyses of statistically significant differences for student outcomes between

measures, mean test differences were greater on the LAC than on the WJ-III or

CTOPP. However, no statistically significant differences were noted between the LAC

and DIBELS tasks. Both of these assessment instruments can be used for progress

monitoring and may be more sensitive to small changes in student performance relative

to an intervention. Therefore, in order to further investigate any differences that may exist

between these two measures, a comparison was made between mean pretest and posttest

scores for the students of each instructor and benchmarks for expected levels of reading

achievement or progress.

Benchmarks are typically employed for screening or grouping students (Good &

Kaminski, 2003) and can serve to demonstrate meaningful differences in progress

monitoring. For this purpose, the benchmarks offered a sense of student reading growth

for the kindergarten participants from the beginning of the school year until the

conclusion of the LiPS intervention, which was in February. Typically, "the benchmarks

represent minimal levels of satisfactory progress for the lowest achieving students"

(Good, Gruba, & Kaminski, 2001, as cited in Good & Kaminski, 2003). A comparison of student

changes in benchmark placement by instructor was considered for both the LAC and

DIBELS tasks. Three distinct benchmarks were considered for the DIBELS tasks:

students at-risk (< 20th percentile), students considered to have some risk (21-38th

percentile), and those considered at low risk (> 39th percentile) for reading difficulties.

The LAC offered a distinct set of benchmarks in the form of recommended minimum scores, which are discussed in more detail below.

Table 14 includes the percentages of students identified in each of the three

benchmark categories at pretesting and posttesting by instructor on the DIBELS tasks.

For both instructors, a majority (over eighty percent) of students were considered low risk at pretesting and posttesting on the Letter Naming task. However, on this particular

task, little change was noted from pretest to posttest. In other words, the number of

students identified in each of the three benchmark categories remained relatively stable,

and the percentages of students considered at risk or with some risk did not change from

pretest to posttest. On the Phoneme Segmentation task of the DIBELS, substantial improvements were noted in the percentages of students considered low risk from pretest

to posttest for both instructors. While half of Instructor 1's students were considered low risk at pretesting, over ninety percent were considered low risk at posttesting on the

Phoneme Segmentation task. For Instructor 2, approximately one quarter of the students

were considered low risk at pretesting, while over eighty percent were low risk at

posttesting on this DIBELS task. Moreover, while almost half of the students working

with Instructor 2 were at-risk for reading difficulties, as measured by the Phoneme

Segmentation task, at pretesting, only eight percent of the students remained in this

category at posttesting.

Table 14. Percentage of Students at Benchmarks at Pretest/Posttest on DIBELS

                              Instructor #1                        Instructor #2
                    At-Risk    Some Risk    Low Risk    At-Risk    Some Risk    Low Risk
                    (<20th     (21st-38th   (>39th      (<20th     (21st-38th   (>39th
                    %ile)      %ile)        %ile)       %ile)      %ile)        %ile)
Letter Naming  Pre   9 (3)      9 (3)       83 (29)      0 (0)      5 (2)       95 (38)
               Post  9 (3)      9 (3)       82 (28)      0 (0)      5 (2)       95 (36)
Phoneme Seg.   Pre  17 (6)     34 (12)      49 (17)     48 (19)    25 (10)      28 (11)
               Post  0 (0)      6 (2)       94 (32)      8 (3)     11 (4)       82 (31)
Note: Instructor 1: Pre n=35, Post n=34; Instructor 2: Pre n=40, Post n=38.
( ) indicates actual numbers of students in each category.

Table 15 includes benchmark data by instructor from pretest to posttest on the

LAC. Unlike the percentile classifications for the DIBELS tasks, the LAC offers

recommended minimum scores for students at each grade level from kindergarten

through the seventh grade. According to Lindamood and Lindamood (1971), these

recommended minimum scores were selected based on statistical data and clinical

experience. As stated in the test manual, "The recommended scores represent a level of

performance that correlates highly with adequate or better-than-adequate spelling and

reading skills for particular grades in typical American classrooms" (Lindamood &

Lindamood, 1971, p. 29). Unlike the DIBELS tasks, however, no specific percentile

equivalents are offered.

The recommended minimum score for the first half of kindergarten was used for

comparison of students working with the two instructors at pretest; the recommended

minimum score for the second half of kindergarten was used for comparison at posttest.

On the LAC, improvements were seen across instructors, and the percentages of students below the minimum decreased

from pretesting to posttesting as expected. However, while the pattern of percentages was

similar to that for the DIBELS tasks in that fewer students were of concern at posttesting,

the improvements made across participants were somewhat smaller than those for the

DIBELS tasks.

For the LAC, half of the students were below the recommended minimum score at

pretesting, and less than a third were below this minimum score at posttesting. For the

DIBELS tasks, however, typically less than twenty percent of the students were in the

higher risk categories (≤ 38th percentile) at posttesting. Furthermore, on the DIBELS,

these percentages were as small as five percent in some instances at posttesting (e.g.,

Letter Naming for Instructor 2).

Table 15. Percentage of Students Below Recommended Minimum at Pretest/Posttest on LAC

Measure                                         Instructor #1   Instructor #2
LAC   Pre  (first half of K, minimum = 31)         54 (19)         53 (21)
      Post (second half of K, minimum = 40)        32 (11)         24 (9)

Note: Instructor 1: Pre n=35; Post n=34
      Instructor 2: Pre n=40; Post n=38
      ( ) indicates actual numbers of students in each category
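
A comparable sketch for the LAC benchmark check reported in Table 15; the cutoff scores (31 for the first half of kindergarten, 40 for the second half) come from the table, while the function itself is only an illustrative rendering of the rule.

    def below_lac_minimum(score, second_half_of_k):
        # Flag a kindergarten LAC score that falls below the recommended
        # minimum for the relevant half of the year (Table 15 cutoffs).
        minimum = 40 if second_half_of_k else 31
        return score < minimum

    # Example: a pretest score of 28 falls below the first-half minimum of 31.
    print(below_lac_minimum(28, second_half_of_k=False))  # True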

CHAPTER 4
DISCUSSION

As schools move toward early identification and the prevention of reading

difficulties, reading curricula are being carefully selected and instructional time is

increasing. Especially for younger students or beginning readers, more instructional time

devoted to phonological awareness activities is critical. The reading research literature

suggests that phonological awareness, or the ability to recognize that spoken language

consists of smaller units, is a strong predictor of later reading achievement (Bus & van

Ijzendoorn, 1999). Therefore, curricula that emphasize or at least include direct

instruction in phonological awareness should be incorporated into beginning reading

instruction, as including this instruction early on may be more effective than waiting until

students are older (Bus & van Ijzendoorn, 1999). One such program that offers

phonological awareness training and is increasingly appearing in schools is the

Lindamood Phoneme Sequencing Program for Reading, Spelling, and Speech (LiPS).

While only a small base of empirical research exists to support it, the

LiPS program is employed nationally in both public schools and private facilities as

beginning reading instruction and in remedial efforts for children and adults. Limited

empirical research exists supporting the use of this program in one-on-one settings (e.g.,

Kennedy & Backman, 1993; Torgesen et al., 1999), and even less systematic research

exists documenting the efficacy of the LiPS program for small groups or classrooms of

students (e.g., McGuinness, McGuinness, & Donohue, 1995). No study to date has

documented in detail how the LiPS program, which was originally designed for clinical

use, is modified for use in the school setting or what LiPS instruction looks like when

employed with large groups of students.

The primary purpose of this study was to investigate how the LiPS program was

adapted and employed in a school or classroom setting with large groups of kindergarten

students. Along with documenting the program implementation, student outcomes after

approximately six months of LiPS instruction were examined. A second purpose of this

research was to compare LiPS instruction in the classroom settings with the more

traditional implementation in a clinical setting, in order to assess treatment fidelity

across the two settings.

To examine LiPS program implementation in a classroom setting, data were

collected at two school sites employing this program with kindergarten students. In the

school setting, participants included 75 kindergarten students from four different

kindergarten classrooms, two at each of the two school sites. Two LiPS

instructors, both trained as speech pathologists (one at each school site), also participated

in this study. One participating school was a laboratory school affiliated with the local

state university. This school served students in kindergarten through the twelfth

grade. The second school site was a parochial school serving students in kindergarten

through the eighth grade. For comparative purposes, data regarding treatment fidelity were

also collected at a clinical site. Observations were conducted with two instructors at a

private educational center offering remedial services to children and adults with learning

difficulties.