Evaluating Assistive Technology in Early Childhood Education:
The Use of a Concurrent Time Series Probe Approach
Howard P. Parette · Craig Blum · Nichole M. Boeckmann
Published online: 23 May 2009
© Springer Science+Business Media, LLC 2009
Abstract As assistive technology applications are
increasingly implemented in early childhood settings for
children who are at risk or who have disabilities, it is
critical that teachers utilize observational approaches to
determine whether targeted assistive technology-supported
interventions make a difference in children's learning. One
structured strategy that employs observations and which
has powerful child progress monitoring implications is the
concurrent time series probe approach. Requiring multiple
performance measures of a child engaged in a targeted task
over time—both with and without a specific assistive
technology device—the concurrent time series probe
approach can be used to evaluate the effectiveness of
assistive technology tools in supporting skill acquisition in
the classroom. This approach is described in the context of
a case study, with accompanying explanations of how to
interpret data and make decisions regarding the
effectiveness of the technology.
Keywords Assistive technology · Progress monitoring ·
Assistive technology consideration · Concurrent time series ·
Assistive technology outcomes · Classroom data management
Since enactment of the No Child Left Behind Act of 2001
(NCLB), early childhood education professionals have
increasingly recognized the need for ‘scientifically based
research’ and progress monitoring of children’s attainment
of educational skills (Grisham-Brown et al. 2005; Helm
et al. 2007; Neuman and Dickinson 2001; Sindelar 2006).
State and national standards (Copple and Bredekamp 2009;
Division for Early Childhood 2007; Sandall et al. 2005)
have been established in response to increasing demands
of accountability regarding young children’s learning
(Sindelar 2006). Such accountability assumes that in the
absence of effective classroom monitoring approaches,
teachers cannot make informed teaching decisions
(Grisham-Brown et al. 2005).
Use of scientifically based research and progress moni-
toring is particularly important for young children who are
at-risk or who have disabilities, and who must have indi-
vidual education programs (IEPs) developed for them
(Individuals with Disabilities Education Improvement Act
of 2004). Recent studies have consistently recognized that
educational decision-making must be grounded in assessment
approaches to evaluate children's learning (Odom
et al. 2005; Sindelar 2006). The National Association for
the Education of Young Children (NAEYC) and the
National Association of Early Childhood Specialists in
State Departments of Education (NAECSSDE 2004)
developed a position statement noting numerous indicators
of effective assessment practices. Among these are (a) the
need for developmentally and educationally significant
assessments, (b) use of assessment information to under-
stand and improve learning, (c) gathering assessment
information in naturalistic settings such that children’s
actual performance is addressed, and (d) use of data
gathered across time.

H. P. Parette (✉) · C. Blum
Department of Special Education, Illinois State University,
P.O. Box 5910, Normal, IL 61790-5910, USA
e-mail: [email protected]

C. Blum
e-mail: [email protected]

N. M. Boeckmann
Department of Communication Sciences and Disorders, Illinois
State University, P.O. Box 4720, Normal, IL 61790-4720, USA
e-mail: [email protected]

Early Childhood Educ J (2009) 37:5–12
DOI 10.1007/s10643-009-0319-y

While some early childhood education professionals may
feel uncomfortable with the practice
of incorporating both assessment and research-based
practices into their curricula, particularly data collection
strategies, there are numerous practical approaches that can
be easily implemented by most practitioners. More
importantly, assessment and use of scientifically based
approaches are both mandated by law (i.e., IDEIA) and are
best practices in the field (NAEYC/NAECSSDE 2004).
Assistive Technology Consideration During IEP
Development
A wide array of assistive technology (AT) devices has
been reported to support the learning and classroom
participation of young children who are at risk or who have
disabilities (Mistrett et al. 2005; Judge 2006). The federal
government has defined AT devices as "any item, piece of
equipment or product system, whether acquired commercially
or off the shelf, modified, or customized, that is used
to increase, maintain, or improve functional capabilities of
individuals with disabilities" [Individuals with Disabilities
Education Improvement Act of 2004 (IDEIA 2004), 20
U.S.C. § 1401(251)]. AT devices are compensatory and
enable children to perform tasks, at some expected level of
performance, that would not be possible without the devices
(Parette 2006; Parette et al. 2007). These devices have been
shown to compensate for difficulties exhibited by young
children in numerous areas including mobility (Butler
1986); communication (Schepis et al. 1998); enhanced
caregiving (Daniels et al. 1995); emergent literacy (Parette
et al. 2008); access to computers (Lehrer et al. 1986); and
play (Lane and Mistrett 1996).
The IDEIA requires that AT be ‘considered’ [20 U.S.C.
1401 § 614(B)(v)] by the team developing an individual
education program (IEP) for a particular child who is at
risk or who has disabilities. This process includes exami-
nation of a child characteristics, as well as the tasks the
child is expected to complete in the context of activities in
the classroom setting (e.g., communicating with the teacher
and others during Circle Time; creating a product during
Art; eating during Snack Time; identifying beginning
sounds during Literacy Time). Understanding what the
child can and cannot do in the context of natural settings
(i.e., activities and their embedded tasks to participate in
them) allows the team to consider specific AT devices that
help the child to successfully complete important educa-
tional tasks. While a full discussion of the consideration
process is beyond the scope of this article, numerous resources
have been reported to assist early childhood education
professionals to better understand this decision-making
process (Center for Technology in Education Technology,
Media Division 2005; Judge and Parette 1998; Mistrett
2004; Parette and VanBiervliet 1991; Watts et al. 2004).
Observational Data in AT Decision-Making
Use of observations across time (Brassard and Boehm
2007) has consistently been recognized as the primary
approach for assessing the learning needs and educational
progress of young children with or at-risk of disability
(Bagnato and Neisworth 1991; Cohen et al. 1997; Meisels
and Atkins-Burnett 2005). Including a method for recording
information gained throughout the observational process
is an important component of the data gathering
approach (Watts et al. 2004). Of particular importance in
AT decision-making is the need for collecting and
recording data both before and after an AT device is
implemented with any particular child (Parette et al. 2007).
Simply making a decision to purchase a device without
data examining whether it made any immediate impact on
a child’s performance, or failing to examine data related to
whether the AT device made any difference in the child’s
learning across time, would be ineffectual educational
practices. In either instance, observational data of child
performance is needed to make decisions about the AT
device and its use with a particular child.
Role of Concurrent Time Series Probe Approach
An emerging practice in early childhood education that can
help teachers with AT outcomes documentation is use of a
‘concurrent time series probe’ classroom approach (Smith
2000). This practical, data-focused approach involves the
teacher in collecting performance measures of a child
completing a specific task—both with and without AT—
over a reasonable period of time (Edyburn 2002; Parette
et al. 2008, 2007; Smith 2000). Measures of the child’s
performance with and without AT—both before a device is
purchased and after it has been integrated into the child’s
curriculum—provide performance lines for comparison
with what the teacher expects of the child for successful
completion of a targeted task within a classroom activity area
(Parette et al. 2007). In the concurrent time series
approach, probes are then used concurrently to assess a
student's performance with and without AT. Probes are the
assessment of a behavior (academic, social, or life skill) on
systematically selected occasions when there is no contingency
or support in effect for that behavior (Kazdin 1982).
The probe, or performance assessment, is considered con-
current because during the same day or time period the
child is evaluated both with and without AT.
A probe strategy is ideal for use in the early childhood
classroom because the teacher does not have to continu-
ously monitor both the target behavior (or desired outcome
for AT support) and performance without AT support. This
makes data collection much more efficient and practical for
early childhood educators. However, frequent assessments
should be made during the initial AT consideration process
to gather necessary information about the effectiveness of
the AT in enhancing the child’s performance and providing
needed compensatory supports. After an AT device or
support is selected (based on data collected that demon-
strate effectiveness), additional data should be collected on
a monthly basis to determine whether the AT remains
effective across time. Regularly scheduled data collection
ensures that the AT continues to positively impact the
child’s performance on targeted curriculum tasks over time
(Parette et al. 2007).
Presented in Fig. 1 is a graph using the concurrent time
series probe approach to assess the effectiveness of AT. In
this example, data would be collected (a) 1 week before an
AT device was tried (to determine performance levels for a
child without the AT device; this is sometimes called
baseline); (b) after the AT device was introduced during a
second week; and (c) again during a third week. Of par-
ticular importance when using this approach is securing a
‘probe’ periodically (a concurrent performance measure) in
which the child is asked to complete the targeted task
without the device to gain a data point that is then com-
pared to the child’s performance using the device for
completion of the same task. The probe should not be
conducted until there have been three to five data obser-
vations with AT support.
For example, a child who is nonverbal is presented with
questions regarding her preferences during Circle Time
over the course of a week to collect baseline data (see
Fig. 1). The teacher knows that the child has difficulty
communicating choices to others based on this data, and
expects all children to communicate five or more choices in
response to questions. A simple communication board
containing pictures of options for the child is considered
for integration in the curriculum, and systematically used
in Circle Time for a week (Intervention; see Fig. 1). The
teacher collects data on the child’s responses, given that
now she can simply point to her choice using the com-
munication board. Changes in the child’s ability to respond
are noted by the data, and after 5 days, the teacher conducts
a ‘probe’ in which questions are posed to the child without
the communication board (i.e., the communication board is
not available during the classroom activity), and data
collected. If the communication board had been effective,
the data would show an immediate decline in the child's
ability to perform the task (i.e., indicate preferences) when
the board was not available.
The teacher would then reinstitute use of the AT device and
continue to collect data on a regular basis.
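The probe logic in this example can be sketched in code. The following is a minimal illustration, not part of the original study: the counts, the five-choice expectation, and the function name are all hypothetical.

```python
# Sketch of the concurrent time series probe comparison described above.
# Data are hypothetical: choices communicated per Circle Time session.

def probe_suggests_at_is_working(with_at, probe_without_at, expected=5):
    """Return True if mean performance with the AT device meets the
    teacher's expectation while the concurrent probe without the
    device falls clearly below it."""
    mean_with_at = sum(with_at) / len(with_at)
    return mean_with_at >= expected and probe_without_at < expected

# Five intervention days with the communication board, then one probe
# day on which the board is not available during the activity.
with_board = [4, 5, 6, 6, 7]      # choices communicated with AT
probe_without_board = 1           # choices communicated without AT

if probe_suggests_at_is_working(with_board, probe_without_board):
    print("Reinstate the communication board and keep collecting data.")
else:
    print("Re-examine the AT: the probe shows no clear difference.")
```

The immediate decline at the probe, relative to stable performance with the device, is what supports the decision to reinstate the AT.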
Previous Usage of the Concurrent Time Series
Approach
Concurrent time series approaches have been reported in
documenting the effectiveness of AT devices in school-
age education settings and have been advocated for use
both by school psychologists and school-age special
education teachers (Parette et al. 2006). Mulkey (1988)
used a time series approach to measure student gains in
reading achievement. She investigated whether grouping
students requiring special education by educational needs
rather than disabling conditions increased student
performance. Anderson and Lignugaris/Kraft (2006) used a time
series approach to assess the effects of video-case
instruction for teachers of students with problem behav-
iors in general and special education classrooms. This
design made it possible to evaluate the effects of program
instruction on the analytical skill of participants on several
different occasions. They also added a control group to
allow for comparisons of skill acquisition and skill gen-
eralization. Schermerhorn and McLaughlin (1997) simi-
larly used a time series approach to evaluate the effects of
a spelling program across two groups of students, finding
that children’s test scores significantly increased while
using the program. Such studies have provided strong
support for use of a concurrent time series approach in
early childhood settings.
An Example of Implementation in a Preschool
Classroom Setting
The following brief case example describes how the con-
current time series approach could be applied to AT deci-
sion making in the early childhood classroom.
Fig. 1 Sample graph of data using a concurrent time series probe approach
Shanika is an African-American preschool student
who has been identified as at risk and attends an
early childhood education center funded by the state.
She is a lively and energetic child who enjoys
conversation with her friends at school, loves to
share, and wants to be part of the class. She is well
liked by her peers and her teachers. Sometimes she
does have difficulty focusing and listening to the
teacher. During small group lessons in the class she is
easily distracted and needs frequent reminders from the
teacher to follow directions and perform at the level
expected of children in the classroom. Her teacher has
noticed that she has difficulty listening and learning
skills related to hearing sounds. As part of its efforts
to improve its emergent literacy program, the early
childhood center has adopted the use of
curriculum-based measures (CBMs) for universal screening
and systematic data collection. As indicated by
Shanika's performance on the CBMs, she is having
difficulty with the phonological awareness skills of
onset (beginning consonants and consonant clusters)
and rime (the vowel and remaining sounds that provide
meaning, e.g., 'at' in 'cat' and 'bat').
The Approach to AT Decision Making
In order to address the case above, Shanika’s teacher
decided to use a concurrent time series probe approach as a
systematic problem-solving method to make decisions
about several AT devices and whether they made a dif-
ference in Shanika’s classroom performance. Initially, the
teacher made observations of children’s performance of
targeted skills for each of 5 days while a lesson in pho-
nological awareness was being taught in the classroom. In
the curriculum at Shanika's school, puppets are used in
conjunction with picture cards with animals on them to
teach phonological skills. The puppets were also used to
play rhyming games during Circle Time. Students are
expected to learn to match initial sounds with words on
picture cards using the puppets. After instruction was
provided to all the children during the instructional setting,
children were asked to identify sounds made by letters that
were targeted in the lesson. As children responded, the
teacher simply made tally marks to indicate correctness of
children's responses (see Fig. 2). As noted previously, this
process of collecting data before intervention takes place is
called baseline. In an instructional setting, baseline is the
natural occurrence of an academic, social, or life skills task
or behavior prior to the introduction of new instruction
and/or AT (Alberto and Troutman 2009). Baseline data
provide a benchmark against which data collected during
subsequent interventions can be compared, enabling the
teacher to make comparisons of child performance.
In Shanika's case, the IEP team chose to try the
IntelliTools® Classroom Suite 4 (IntelliTools® 2007a)—a
scientifically based AT tool (IntelliTools® 2007b)—to
compensate for Shanika's difficulty with phonological
awareness. The IntelliTools® Classroom Suite 4 supports
children's mastery of content and related literacy skill
acquisition by using a cadre of well-supported learning
strategies and premade templates for literacy skill building,
including auditory cues, pictures, movies, and
manipulatives (Howell et al. 2000). Teachers also can incorporate
individualized curriculum content into the activity, and use
an expanded keyboard for child access and control over
activities presented (cf. http://www.intellitools.com/implementation/archive.aspx
for guides and tutorials and http://aex.intellitools.com
[using Windows Explorer]) (Fig. 3).
The classroom task presented to Shanika using the
IntelliTools® Classroom Suite 4 required the teacher to use a
teacher-developed template (downloaded from the
Classroom Suite Activity Exchange at http://aex.intellitools.com/)
presented on a computer screen. Shanika used an expanded
IntelliTools® keyboard connected to the computer to view
the template presentation, which allowed her to make
choices in response to hearing the program say, "Click the
picture to hear its name. Say the name of the picture out
loud. Find the letter that spells the first word" (see Fig. 4).
A series of 10 screen presentations were made to Shanika
using the IntelliTools® Classroom Suite 4 template, and her
responses recorded on each of five subsequent days, with a
probe also being implemented on the 5th day using the
baseline classroom strategy (i.e., no AT). As reflected in
the data (see Fig. 4), Shanika's phonological awareness
skills did not improve markedly using the IntelliTools®
Classroom Suite 4 intervention. In fact, when a probe was
conducted, her performance without the AT intervention
was only slightly less than performance using the
IntelliTools® Classroom Suite 4 intervention.
The IEP team then decided that another intervention was
needed. The teacher used a Microsoft® PowerPoint™-based
curriculum—Ready-to-Go—which had been reported
in the literature as a research-based strategy that positively
impacted children's phonological awareness skill
development (Blum and Watts 2008). Utilizing direct instruction
strategies (Carnine et al. 1995; Rosenshine 1986), this
PowerPoint™ curriculum included features of scientifically
based early literacy instruction: (a) explicit modeling, (b)
guided practice with explicit corrective feedback, (c)
independent practice and evaluation with corrective feedback,
and (d) positive consequences for success.

Fig. 2 Excerpt of teacher data recording chart used during baseline

Animation features within the PowerPoint™ curriculum, coupled with
high quality graphic elements, were deemed to be elements
that might be engaging to Shanika. The curriculum
provided a structured approach, including specific statements
that the teacher was to say as each PowerPoint™ slide was
presented and the expected student response. Another
benefit of this program was that it could be utilized with the
entire class and delivered using the classroom computer and
LCD system, thus enabling a 'big screen' presentation.
For example, in using the curriculum during the sec-
ond intervention period, the teacher would show a pic-
ture of a cat, and emphasize the /k/ sound. Then as a cat
appeared on the screen multiple times, she modeled the
/k/ as each picture appeared. She also showed a slide
containing several pictures from which Shanika (and
other students) had to choose which one began with the
/k/ sound (see Fig. 5). Once this intervention was correctly
implemented, Shanika's performance greatly improved,
i.e., her performance exceeded both that noted in the group
activity-based intervention (baseline) and that noted when
the IntelliTools® Classroom Suite 4 intervention was initiated.
Based on these data, the IEP team could then make an
informed decision regarding which AT intervention made a
substantive difference in Shanika's educational program. In
this instance, the Ready-to-Go curriculum made a bigger
difference in Shanika's phonological awareness skill
acquisition than the first AT solution. Having data upon
which to make an informed
decision, the IEP team included the Ready-to-Go curricu-
lum in Shanika’s IEP, and noted the need to monitor her
progress in using the curriculum across time to ensure that
the AT solution remained effective.
Fig. 3 Sample screen presentation from IntelliTools® Classroom Suite 4 activity used to instruct phonological awareness

Fig. 4 Graph of Shanika's phonological awareness classroom performance across baseline and AT interventions
Special Graphing Considerations for Concurrent Time
Series Probes
When using the concurrent time series probe approach, the
teacher should always graph data. Both Microsoft® Excel™
and Microsoft® PowerPoint™ have extremely
useful graphing features. Barton et al. (2007) developed
guidelines for early childhood educators when using the
graphing features of Microsoft® PowerPoint™. Graphing
data provides a powerful visual support to teachers when
making data-based decisions about their teaching and AT
considerations. However, visual inspection of line graphs
can be tricky, and caution should be used when interpreting
them. While it is beyond the scope of this article to discuss
all of the issues, we will outline a few of the major points
when using concurrent time series probe approaches.
One of the most common problems that teachers may
encounter is extreme ‘data variability.’ If, during baseline
(typically three observations), the data is highly variable it
can be difficult to interpret progress during the imple-
mentation of AT support. In this case, the teacher may want
to extend baseline a few more sessions/days to see if
the baseline will stabilize around a consistent level of
performance.
Another potential problem is ‘increasing or decreasing
baseline.’ Given that young children are constantly learn-
ing (i.e., at the time the educational team decided to take
baseline), the student may have started demonstrating the
performance outcome during baseline data collection. If
this happens, the teacher should continue baseline for a few
more sessions/days, and it will help the team see if the
progress was just temporary or represents a real trend in the
desired behavior. Failure to do this could lead early
childhood professionals to make the erroneous conclusion
that their intervention was making a difference.
Finally, if the probe without AT support is well above
baseline (indicating that the student can perform the out-
come without AT well above baseline levels), the probe
should be conducted for at least three observation days to
ensure that the student can truly perform the targeted
behavior without AT. Sometimes young children are able
to perform an outcome without support on a single occa-
sion. When making decisions about AT support, it is
essential to make careful decisions that result in minimiz-
ing the support a child needs to be successful.
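The three cautions above can also be expressed as simple numeric checks on the graphed data. The sketch below is illustrative only; the thresholds (for example, treating a baseline as highly variable when its range exceeds half its mean) are assumptions, not values given in the article.

```python
# Illustrative checks for the three graphing cautions discussed above.
# Thresholds are hypothetical and should be set by the educational team.

def baseline_is_variable(baseline, ratio=0.5):
    """Flag extreme variability: range large relative to the mean level,
    suggesting the teacher extend baseline a few more sessions/days."""
    mean = sum(baseline) / len(baseline)
    return mean > 0 and (max(baseline) - min(baseline)) / mean > ratio

def baseline_is_trending(baseline):
    """Flag a steadily increasing or decreasing baseline, which would
    make an apparent intervention effect ambiguous."""
    diffs = [b - a for a, b in zip(baseline, baseline[1:])]
    return all(d > 0 for d in diffs) or all(d < 0 for d in diffs)

def extend_probe(probe_points, baseline, min_days=3):
    """If probes without AT sit above all baseline points, require at
    least three probe days before concluding the support is unneeded."""
    above = all(p > max(baseline) for p in probe_points)
    return above and len(probe_points) < min_days

baseline = [2, 7, 1]   # three highly variable baseline observations
if baseline_is_variable(baseline):
    print("Extend baseline before introducing the AT support.")
```

Each check mirrors one caution: variability and trend argue for extending baseline, and a high probe argues for repeating the probe before withdrawing support.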
Discussion
The concurrent time series probe approach is only one of
many classroom assessment strategies that can assist early
childhood education professionals to make informed deci-
sions regarding the impact of AT interventions considered
for children who are at risk or have disabilities. The IDEIA
places responsibility on all education professionals
working with these children both to develop an understanding
of the AT consideration process and to help choose
and implement AT solutions to support young children's
participation in the curriculum.
In the typical early childhood education setting, teachers
are familiar with assessment of a range of daily skills, and
use of the concurrent time series probe approach simply
provides needed data upon which decision making is based
for a particular child. The approach lends itself to a variety
of data collected using checklists, rating scales, samples of
the child’s work, electronic recordings, and other assess-
ment strategies (Cook et al. 2008). Teachers have great
flexibility to design their own data collection forms to
record child performance; the important consideration is
that data be collected. Otherwise, whether a particular
AT solution does indeed make a difference may not be
evident to the education professional.
Admittedly, such formal approaches for monitoring
child progress often meet with opposition by classroom
practitioners who may be managing large numbers of
children and have limited time. Use of classroom assistants
and/or volunteers to help with collecting data may be
necessary in some instances. For example, these individu-
als may observe a child’s performance in the context of a
group activity and make tally marks to document perfor-
mance on some targeted measure (see Fig. 2), thus freeing
the teacher to focus on instruction. However, even with
perceived time constraints and challenges of implementing
such formal data collection strategies, most early childhood
education teachers have the creativity to develop unique
forms for collecting data that complement both their
instructional styles and time commitments for the delivery
of instruction.

Fig. 5 Sample PowerPoint™-based Ready-to-Go curriculum slide presented to teach phonological awareness
As response to intervention (RTI) models become more
prevalent in early childhood settings (Coleman et al. 2006),
data collection will become an everyday part of the teaching
repertoires of early childhood professionals. The concurrent
time series probe approach allows practitioners to use
data collected as part of an RTI process for AT
considerations. When the concurrent time series probe
approach is properly implemented, it is a problem-solving
model that uses data-based decision making.
Of particular importance, however, is that early
childhood education professionals recognize that positive
outcomes are possible when AT is used to compensate
for disabilities exhibited by young children who are at
risk or who have specific disabilities. Given that these
children are being prepared to enter the public schools
and experience success in the general education curric-
ulum, critical developmental skills acquired in the early
childhood setting provide the foundation upon which all
future learning occurs as children move into academi-
cally oriented educational milieus. Ensuring that impor-
tant foundational skills are developed should be a
primary concern for education professionals. The
concurrent time series probe approach provides an important
tool both for documenting AT outcomes and for making
decisions about AT effectiveness in the short term and
over time.
Acknowledgments This article is supported through a grant from
the Illinois Children’s Healthcare Foundation to the Special Educa-
tion Assistive Technology (SEAT) Center at Illinois State University.
Content presented is based on a presentation at the National Asso-
ciation for the Education of Young Children 2008 Annual Conference
and Expo.
References
Alberto, P. A., & Troutman, A. C. (2009). Applied behavior analysis
for teachers (8th ed.). Upper Saddle River, NJ: Pearson
Education, Inc.
Anderson, D., & Lignugaris/Kraft, B. (2006). Video-case instruction
for teachers of students with problem behaviors in general and
special education classrooms. Journal of Special Education
Technology, 21(2), 31–45.
Bagnato, S. J., & Neisworth, J. T. (1991). Assessment for early
intervention: Best practices for professionals. New York:
Guilford.
Barton, E. E., Reichow, B., & Wolery, M. (2007). Guidelines for
graphing data with Microsoft® PowerPoint™. Journal of Early
Intervention, 29, 320–336. doi:10.1177/105381510702900404.
Blum, C., & Watts, E. H. (2008). Ready-to-go curriculum. Normal,
IL: Illinois State University.
Brassard, M. R., & Boehm, A. E. (2007). Preschool assessment.
Principles and practices. New York: Guilford.
Butler, C. (1986). Effects of powered mobility on self-initiated
behaviors of very young children with locomotor disability.
Developmental Medicine and Child Neurology, 28, 325–332.
Carnine, D., Grossen, B., & Silbert, J. (1995). Direct instruction to
accelerate cognitive growth. In J. Block, S. Everson, & T.
Guskey (Eds.), Choosing research-based school improvement
programs (pp. 129–152). New York: Scholastic.
Center for Technology in Education Technology, Media Division.
(2005). Considering the need for assistive technology within the
individualized education program. Columbia, MD and Arling-
ton, VA: Author.
Cohen, D., Stern, V., & Balaban, N. (1997). Observing and recording
the behavior of young children (4th ed.). New York: Teachers
College Press.
Coleman, M. R., Buysse, V., & Neitzel, J. (2006). Recognition and
response: An early intervening system for young children at-risk
for learning disabilities. Executive summary. Chapel Hill, NC:
University of North Carolina at Chapel Hill.
Cook, R. E., Klein, M. D., & Tessier, A. (2008). Adapting early
childhood curricula for children in inclusive settings. Upper
Saddle River, NJ: Pearson Merrill Prentice Hall.
Copple, C., & Bredekamp, S. (2009). Developmentally appropriate
practice in early childhood programs serving children from birth
through age 8 (3rd ed.). Washington, DC: National Association
for the Education of Young Children.
Daniels, L. E., Sparling, J. W., Reilly, M., & Humphrey, R. (1995).
Use of assistive technology with young children with severe and
profound disabilities. Infant-Toddler Intervention, 5, 91–112.
Division for Early Childhood. (2007). Promoting positive outcomes
for children with disabilities: Recommendations for curriculum,
assessment, and program evaluation. Missoula, MT: Author.
Edyburn, D. (2002). Measuring assistive technology outcomes: Key
concepts. Journal of Special Education Technology, 18, 1–14.
Grisham-Brown, J., Hemmeter, M. L., & Pretti-Frontczak, K. (2005).
Blended practices for teaching young children in inclusive
settings. Baltimore: Brookes.
Helm, J. H., Beneke, S., & Steinheimer, K. (2007). Windows on
learning: Documenting young children’s work. New York:
Teachers College Press.
Howell, R. D., Erickson, K., Stanger, C., Wheaton, J. E. (2000).
Evaluation of a computer-based program on the reading
performance of first grade students with potential for reading
failure. Journal of Special Education Technology, 15(4).
Retrieved January 26, 2009, from
http://www.intellimathics.com/pdf/research/Research_Literacy.pdf.
Individuals with Disabilities Education Improvement Act. (2004). 20
U.S.C. §§ 1400 et seq.
IntelliTools®. (2007a). IntelliTools® Classroom Suite 4 [computer
software]. Petaluma, CA: Cambium Learning, Inc.
IntelliTools®. (2007b). The research basis for IntelliTools products.
Petaluma, CA: Cambium Learning, Inc.
Judge, S. (2006). Constructing an assistive technology toolkit for
young children: Views from the field. Journal of Special
Education Technology, 21(4), 17–24.
Judge, S. L., & Parette, H. P. (Eds.). (1998). Assistive technology for
young children with disabilities: A guide to providing family-
centered services. Cambridge, MA: Brookline.
Kazdin, A. E. (1982). Single-case research designs: Methods for
clinical and applied settings. New York, NY: Oxford University
Press.
Lane, S. J., & Mistrett, S. G. (1996). Play and assistive technology
issues for infants and young children with disabilities: A
preliminary examination. Focus on Autism and Other Developmental
Disabilities, 11, 96–104. doi:10.1177/108835769601100205.
Early Childhood Educ J (2009) 37:5–12 11
Lehrer, R., Harckham, L., Archer, P., & Pruzek, R. (1986).
Microcomputer-based instruction in special education. Journal
of Educational Computing Research, 2, 337–355.
Meisels, S. J., & Atkins-Burnett, S. (2005). Developmental screening
in early childhood: A guide (5th ed.). Washington, DC: National
Association for the Education of Young Children.
Mistrett, S. (2004). Assistive technology helps young children with
disabilities participate in daily activities. Technology in Action,
1(4), 1–8.
Mistrett, S. G., Lane, S. J., & Ruffino, A. G. (2005). Growing and
learning through technology: Birth to five. In D. Edyburn, K.
Higgins, & R. Boone (Eds.), Handbook of special education
technology research and practice (pp. 273–308). Whitefish Bay,
WI: Knowledge by Design.
Mulkey, L. M. (1988). Using two instruments to measure student
gains in reading achievement when assessing the impact of
educational programs. Evaluation Review, 12, 571–587. doi:
10.1177/0193841X8801200506.
National Association for the Education of Young Children and the
National Association of Early Childhood Specialists in State
Departments of Education. (2004). Early childhood curriculum,
assessment, and program evaluation. Building an effective,
accountable system in programs for children birth through age
8. Washington, DC: Author.
Neuman, S. B., & Dickinson, D. K. (Eds.). (2001). Handbook of early
literacy research. New York: Guilford.
No Child Left Behind Act. (2001). 20 U.S.C. §§ 6301 et seq.
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson,
B., & Harris, K. R. (2005). Research in special education:
Scientific methods and evidence-based practices. Exceptional
Children, 71, 137–148.
Parette, P. (2006). Assessment for assistive technology. Workshop
presented at the National Association of School Psychologists
2006 Annual Convention, Anaheim, CA.
Parette, H. P., Blum, C., Meadan, H., & Watts, E. (2008a). Implementing
and monitoring assistive technology: How to use concurrent time
series designs and interpret outcomes. Poster presentation to the
National Association for the Education of Young Children 2008
Annual Conference and Expo, Dallas, TX.
Parette, H. P., Hourcade, J. J., Boeckmann, N. M., & Blum, C. (2008b).
Using Microsoft® PowerPoint™ to support emergent literacy skill
development for young children at-risk or who have disabilities.
Early Childhood Education Journal, 36, 233–239.
doi:10.1007/s10643-008-0275-y.
Parette, H. P., Peterson-Karlan, G. R., Smith, S. J., Gray, T., & Silver-
Pacuilla, H. (2006). The state of assistive technology: Themes
from an outcomes summit. Assistive Technology Outcomes and
Benefits, 3, 15–33.
Parette, H. P., Peterson-Karlan, G. R., Wojcik, B. W., & Bardi, N.
(2007). Monitor that progress! Interpreting data trends for
assistive technology decision-making. Teaching Exceptional Children,
39(7), 22–29.
Parette, H. P., & VanBiervliet, A. (1991). Assistive technology guide
for young children with disabilities. Little Rock, AR: University
of Arkansas at Little Rock. (ERIC Document Reproduction
Service No. ED324888).
Rosenshine, B. (1986). Synthesis of research on explicit teaching.
Educational Leadership, 43, 60–69.
Sandall, S., Hemmeter, M. L., Smith, B. J., & McLean, M. E. (2005).
DEC recommended practices. A comprehensive guide for
practical application in early intervention/early childhood
special education. Missoula, MT: Division for Early Childhood.
Schepis, M., Reid, D., Behrmann, M., & Sutton, K. (1998). Increasing
communicative interactions of young children with autism using
a voice output communication aid and naturalistic teaching.
Journal of Applied Behavior Analysis, 31, 561–578. doi:
10.1901/jaba.1998.31-561.
Schermerhorn, P. K., & McLaughlin, T. F. (1997). Effects of the add-
a-word spelling program on test accuracy, grades, and retention
of spelling words with fifth and sixth grade regular education
students. Child & Family Behavior Therapy, 19, 23–35. doi:
10.1300/J019v19n01_02.
Sindelar, N. (2006). Using test data for student achievement. Lanham,
MD: Rowman & Littlefield.
Smith, R. O. (2000). Measuring assistive technology outcomes in
education. Diagnostique, 25, 273–290.
Watts, E. H., O’Brian, M., & Wojcik, B. W. (2004). Four models of
assistive technology consideration: How do they compare to
recommended educational assessment practices? Journal of
Special Education Technology, 19, 43–56.