Computers & Education 81 (2015) 179–190

Controlling the slides: Does clicking help adults learn?
Kara Sage*, Nikole Bonacorsi, Sarah Izzo, Abigail Quirk
Hamilton College, Dept of Psychology, 198 College Hill Rd, Clinton, NY 13323, USA

Article history: Received 22 May 2014; received in revised form 7 October 2014; accepted 11 October 2014; available online 19 October 2014.

Abstract
When utilizing screen media as an educational platform, maintaining control over one's experience may
lead to more successful learning outcomes. In the current work, adults learned four new action sequences, each via a different slideshow type. The computer advanced slides automatically, but each
version had a different pausing mechanism: (1) free pause (viewers could click the mouse at any point to
pause the show), (2) subgoal pause (show paused after subgoals, viewer clicked to continue), (3) timed
pause (show paused every 20 slides, viewer clicked to continue), and (4) no pause (no viewer interaction). Participants completed a written memory test, live performance test, cognitive load measures, and
satisfaction measures. Results indicated that memory recall was significantly lower in the no pause
version when compared to the versions with pause capability. Also, over half of participants reported
that the no pause version was their least favorite format to learn from. Conversely, over half of participants selected the free pause as their favorite slideshow format, and participants reported that they felt
most in control of the free pause version. These reports occurred in spite of only one-quarter of all
participants actually using the click-to-pause feature in the free pause slideshow. Perhaps the mindset of
being in control, rather than the pausing itself, increased likeability of the program. This research has implications for program design and education, suggesting that flexible pacing features can enhance users' enjoyment of a program and their ability to extract novel information.
© 2014 Elsevier Ltd. All rights reserved.

Keywords: computers; slideshows; learning; pausing; students

1. Introduction
In recent years, the use of technology for learning purposes has skyrocketed. From MOOCs (massive open online courses) to smartphones, there has been an influx of new technology that offers potential for learning. Students are completing degrees online, teachers are
adapting coursework to appear in electronic formats, and job hunters are often seeking telecommuting positions. The rapid increase of
information technology has spurred research regarding the usefulness of the screen in complementing and enhancing learning outcomes.
Given the flexibility that screen learning can provide (such as working from home or on the go), there is a need for more research on digital
platforms and their role in the learning process. A key question concerns which type of learning platform is best. What electronic features
encourage, or inhibit, the learning process?
Research into online environments has consistently shown that interactivity leads to better learning outcomes. Zhang, Zhou, Briggs, and
Nunamaker (2006) compared college students' learning about Internet search engines under four different conditions: traditional classroom, e-learning with interactive video, e-learning with non-interactive video, or e-learning with no video. The interactive video, which
allowed users to proactively engage in the content and view clips at their own pace, resulted in the best learning outcomes. Furthermore,
students were more satisfied when utilizing the interactive e-learning platform when compared to the other media. The researchers
speculated that controlling one's pace through the learning process provided a more personalized learning experience for students.
There are many reasons why learning from a computer may be most successful with an interactive component. Navigating oneself might
be advantageous, and this ability to choose one's trajectory through the learning process might enhance intrinsic motivation (Becker &
Dwyer, 1994; Berge, 2002; Domagk, Schwartz, & Plass, 2010; Zhang, 2005). Interactive technology can make the experience more
learner-centered. Learners in control can discover for themselves how to learn; they make their own decisions regarding the material
(Merrill, 1975). To illustrate, Wang and Reeves (2007) looked at how an interactive web-based platform affected students' motivation in an
earth science course. Students reported enjoying the web-based learning environment. Classroom observations showed that students maintained focus on the e-learning activity and displayed enhanced curiosity by viewing non-mandatory portions of the program. Giving students a sense of self-responsibility over the assignment enhanced their motivation to learn.

* Corresponding author. Tel.: +1 315 859 4518. E-mail address: [email protected] (K. Sage).
http://dx.doi.org/10.1016/j.compedu.2014.10.007
0360-1315/© 2014 Elsevier Ltd. All rights reserved.
One way in which interactivity can be effectively incorporated into programs is by breaking information down into segments. For
instance, individuals often turn to YouTube videos to learn a particular skill (Jaffar, 2012; Lee & Lehto, 2013). YouTube videos have a pausing
function and allow the viewer to fast forward and rewind as needed. Schaffer and Hannafin (1986) utilized segmented videos and
continuous videos with adults, reporting that the segmented video took students longer to view but resulted in better learning outcomes.
One potential explanation is that students had more time to think through the information presented when given the segmented version,
thus leading to a deeper level of information encoding.
Videos are a common medium to investigate within the realm of e-learning. However, an alternative, but similar, medium that can also incorporate aspects of pausing and self-control is the slideshow. Slideshows have built-in pauses after each slide and can be controlled in a
variety of ways. Interestingly, results have been conflicting regarding how much control is optimal when learning from slideshows (Lawless
& Brown, 1997; Scheiter & Gerjets, 2007). Some findings show that slideshows might increase engagement in digital environments. To
illustrate, Mayer and Chandler (2001) used slideshows to teach college students about lightning formation. Students were given the option
to advance through the presentation twice in parts via clicking a mouse, or were allowed to click through the parts and then watch a
continuous animation. Those participants with more control were more successful on a knowledge task than their peers who watched
continuous animations or viewed a continuous animation before breaking it down into parts. Viewing the slides in this piecemeal format
either on their own or prior to launching into a continuous animation likely allowed learners to break down the information in a more
effective manner, and thus organize and apply the information with higher success.
Other work by Sage and Baldwin (2014) found that students learned better from a yoked-paced slideshow than from a self-paced slideshow. In their research, half of participants (self-paced) clicked a mouse to advance through images of a novel action sequence that they were tasked with learning. The other half of the participants (yoked-paced) watched the computer automatically advance the slides at
another person's pace (e.g., the first participant in the yoked version saw the slides automatically advance at the pace that participant one
had set in the self-paced version). One potential explanation for why self-control in these self-paced slideshows was not valuable is the
burden placed on the user. These slideshows included over 500 slides; attention was likely divided between the mouse in-hand and the
images on the screen. Given the extensive clicking, participants may have become frustrated and thus distracted from the task. Such usability issues with technology have been postulated to detract from the learning process (Scheiter & Gerjets, 2007).
In an illustration of these mixed findings, Höffler and Schwartz (2011) investigated the impact of representation type on the benefit of self-pacing. They showed either static pictures or animations to students, and varied whether the show was self-paced by the user or system-paced. Though no main effect of pacing or image type was observed, there was a notable interaction. When presented with animations, learners were more successful when self-pacing. When presented with static images, learners were more successful with system-pacing. This finding suggests that benefits of self-pacing might vary based on the content of the program. When looking at the other
research, the lightning formation stimuli in Mayer and Chandler (2001) seemed more reminiscent of an animation while Sage and Baldwin's
(2014) work extracted frames from a video, akin to static images. This difference in stimuli is one potential explanation for the mixed
findings.
In a recent follow-up study, Sage (2014) investigated how learning outcomes from the yoked-paced slideshow compared to a more
predictably paced slideshow and a video. It is possible that the users in Sage and Baldwin (2014) did better under computer control because
the program advanced at a realistic pace, i.e., the pace that a real user would go through those slides. Sage investigated four slideshow types: self-paced (user clicked mouse to advance), yoked-paced (program advanced slides at a pace matched to a prior self-paced user), set-paced (program advanced slides every 750 ms), and continuous video. Similar to the prior work, participants recalled more target actions
from the computer-controlled slideshows than the self-controlled slideshow. In terms of their ability to perform the actions, the set-paced
group produced the most successful performers while the self-paced group produced the poorest performers. Learning from a continuous
video tended to fall in the middle of learning outcomes, with yoked-paced slideshow users performing between the video and set-paced
slideshow users.
This work begs the question of why the set-paced version, which involved no learner control, produced the best learning outcomes. It
seems intuitive that participants learned more from the set-paced slideshow than the video, given that the set-paced slideshow broke up
the information to some extent. However, a learner might also want to be in control of their learning process. Still, this study had the same
usability issue as Sage and Baldwin (2014). The user had to click many times (500+) to advance through all slides, thus providing an explanation for why the self-paced users met with poor learning outcomes. Additionally, as Höffler and Schwartz (2011) mentioned, information from static images might be best learned from a system-controlled program. The predictable timing of the set-paced version was
perhaps helpful in organizing the information and in knowing how long one had to encode the information on the slide (i.e., no surprises in
how quickly or slowly the slide advanced). This past work thus seems to suggest that self-control is not always a helpful design option for a
program, but that reliable segments might ease the cognitive burden of learning new information. A remaining question is if and how some
user control can be combined with this reliable pace to produce a superior learning medium. Given that other work has implicated the
benefits of interactivity and user control (e.g., Mayer & Chandler, 2001; Wang & Reeves, 2007), combining a reliable pace with user control
might be a logical next step.
Furthermore, one must consider that these electronic learning programs can be designed with a variety of types of control over the
process. As described by Hannafin (1984), programs can lead learners through a particular path of information while, if given self-control, learners can determine a different individualized path and pace of learning, possibly different from the path intended by the programmer. A blend of system- and user-control might involve some room for freedom via pausing and selecting for oneself, while still following the general intended path through the program. The set-paced version of Sage (2014) was entirely program-controlled and automatically determined the path for the learner. The yoked-paced and video versions in her work were also
entirely controlled by the computer. Though the self-paced version was a blend where the program determined the path while the
user determined the pace, it seems likely that this blend of control was not optimal given the large number of slides utilized and thus
corresponding high cognitive load of the task. A different combination of learner and computer control might help improve learning
outcomes for students.

Thanks to the rapid growth in availability of emerging interactive platforms in recent years, the body of research on electronic control
options and interactivity is burgeoning. When looking at the potential of websites to interactively engage students, there have been mixed
findings about whether interactivity is valuable (e.g., Ghose & Dou, 1998) or not valuable (e.g., Bezjian-Avery, Calder, & Iacobucci, 1998).
Early in this body of work, Shackel (1991) pointed to three aspects of the interactive platform that determine its usefulness as a learning
medium: usability, likeability, and utility. Students must feel that the technology is understandable, enjoyable, and helpful in its purpose.
The program must actually do what it espouses to do. Further, Kristof and Satran (1995) put forth seven levels of interactive user control,
with control over pace (clicking to advance to the next segment) at the lowest level and complete control over the simulation (altering the
image/action) at the highest level. Even very early work in this area (e.g., Hannafin & Colamaio, 1987) supported the idea that adaptive
features and sequence options lead to more successful learning outcomes when compared to a more linear progression of information.
Considering this interactivity scale in the context of other research, slideshows similar to those used in Sage and Baldwin (2014), Sage (2014),
and Mayer and Chandler (2001) might be low on the interactivity scale, with control over pacing being present as the program directs users
through a particular path of information.
Further, in addressing the question of why control and interactivity might be helpful, attitude and satisfaction with the medium emerge
as important factors. Teo, Oh, Liu, and Wei (2003) showed that a Website with greater interactivity increases users' satisfaction and leads to
enhanced effectiveness and efficiency. They employed three levels of interactivity within sites from low interactive sites with navigational
links to high interactive sites with feedback forms, forums, and chat systems. Importantly, both cognitive aspects (e.g., the value users placed
on the program) and emotional aspects (e.g., users' satisfaction with the medium) affected users' attitudes towards the interactive websites.
In a similar vein, Kettanurak, Ramamurthy, and Haseman (2001) reported that interactivity has positive effects on user attitude, which can,
in turn, lead to better learning outcomes.
When considering the pace of information flow of these interactive programs, as previously mentioned, pauses or segments might help
facilitate the learning process. Such pauses are readily incorporated into slideshows or videos (e.g., Schaffer & Hannafin, 1986; Mayer &
Chandler, 2001). However, these pauses can also lead to varying effects on cognitive load (Sweller, van Merrienboer, & Paas, 1998), and
thus these interactive programs need to be designed carefully with this load aspect in mind. Vandewaetere and Clarebout (2013) reported
that learner control over the pacing might cause additional cognitive load when compared to program control, given the requirements of
interaction. In their work, learners were either guided through preselected exercises on English tenses or given full control over which
exercises to complete. Learning outcomes were higher in the full control version while difficulty ratings were lower when compared to the
guided version. Learners' mental effort ratings also indicated that learner control, in this case, did not impose higher cognitive load. This
finding runs counter to the research showing that full control can be burdensome (e.g., Moreno & Valdez, 2005; Neiderhauser, Reynolds,
Salmen, & Skolmoski, 2000; Sage, 2014; Sage & Baldwin, 2014; Scheiter & Gerjets, 2007). Thus, it seems plausible that there is an
optimal level of interactivity, one that requires neither too much nor too little of the learner, to foster positive learning outcomes.
1.1. Motivation for current study
It is evident that learner control that is too complicated or requires too much effort can overload a learner. Often, extraneous cognitive
load is evoked by the instructional design, frequently when split attention is required and too many elements must be stored simultaneously
in working memory (Jong, 2010). This idea speaks to the prior work by Sage and Baldwin (2014) and Sage (2014), where the excessive
clicking was likely placing extraneous cognitive load on the users, thus distracting them from learning the task at-hand. An important note
from design theory, as stated by Jong (2010), is that “eliminating characteristics of learning material that are not necessary for learning will
help students to focus on the learning processes that matter” (p. 109). In the present work, we sought to do just this; we eliminated excess
control in order to determine which pace and control features were most helpful to users in learning the slideshow material without simultaneously overloading their working memories.
Further, pausing a presentation between segments may highlight event boundaries and thus enhance a learner's grasp on procedural
information (e.g., Spanjers, van Gog, & van Merriënboer, 2010). However, these pauses can be inserted in a variety of ways: by the learner, by the computer program at sensible locations like subgoal starts and finishes, or by the computer program at locations that are reliable but perhaps less sensible for event understanding. Sage (2014) reported that slides displayed at regular intervals benefitted learning. Thus, perhaps pauses
occurring at regular intervals within that slide presentation may be advantageous as well. Additionally, based on the action segmentation
literature within cognitive psychology, we know that adults naturally parse intentional action scenarios into meaningful units that often
reflect the subgoals of that action, such as finishing one step of a multi-step process (Hard, Recchia, & Tversky, 2011). Segmentation ability
was positively related to memory recall in this prior work. Therefore, inserting pauses at event boundaries seems like it could be especially
efficacious for one's learning process.
1.2. Research questions and hypotheses
With these ideas and varied findings regarding control type and pacing in mind, the present study focused on the effectiveness of four
slideshow media.
(1) A no pause, set-paced slideshow identical to Sage (2014), given that this was the most successful version in that study
Three new comparison groups offered different blends of user and program control to investigate these questions of control and pacing
further:
(2) A free pause group, where users saw slides advance at a consistent and reliable pace by the computer, but could click the mouse at any
point to pause the show, and then click again when ready to continue
(3) A subgoal pause group, where users saw slides advance at a consistent and reliable pace until one subgoal was completed; the program
then automatically paused and users had to click when ready to continue

(4) A timed pause group, where users saw 20 slides advance at a consistent and reliable pace; the program then automatically paused and users had to click when ready to continue. This version essentially combined reliable pacing with reliable pausing
This research design investigated the interaction between user control and pacing more closely than previous research did. Only the no
pause set-paced group, where the slides advanced automatically, was identical to the Sage (2014) work, in order to provide a direct
comparison. The introduction of the free pause group gave the user the option of full control over slideshow pacing. The other segmented
groups offered two alternatives for pausing. Subgoals break down the actions at key moments critical for understanding. In the present
study, users had little prior information about the actions; by breaking down tasks by subgoal, this presentation could facilitate users
meaningfully organizing the information. The timed slideshow version with automatic pauses every 20 slides capitalized upon the reliable
pacing trait hypothesized to be most helpful in Sage (2014) while replacing perhaps the most hindering trait in that same study, the necessity to click 500+ times to advance through the slides, with a more reasonable click every 20 slides.
Also unlike the Sage (2014) research, the present study took a within-subjects approach so that each person experienced each learning
medium. Learning outcomes were similarly assessed via written memory and live performance. However, the new measures of cognitive
load and satisfaction were also included in the battery of tests. Given prior research showing that users are often more satisfied with
interactive media (e.g., Teo et al., 2003) but also that cognitive overload can interfere with learning in a multimedia environment (e.g.,
Moreno & Valdez, 2005; Neiderhauser et al., 2000), these additional measures seemed warranted in order to provide more specific information on why certain slideshow paces might be more or less effective than other paces. If extraneous cognitive load is placed on a user,
this load can hamper working memory and consequently one's learning process (e.g., Granger & Levine, 2010).
Given all this information, three key hypotheses guided the current work.
Hypothesis 1 was that slideshows with pause capability (free, subgoal, timed) would result in superior learning outcomes to the no pause
set-paced slideshow. In light of prior work highlighting the benefits of segmented action over continuous presentations (e.g., Mayer &
Chandler, 2001; Schaffer & Hannafin, 1986), it seemed likely that the pausing would allow users some time to encode and organize information with higher fidelity than when the information was presented in a continuous slideshow, even if at a reliable pace. Further, given
that the usability/extraneous cognitive load issue of excess clicking from Sage (2014) and Sage and Baldwin (2014) was removed and each
medium provided a blend of program and user control, this outcome seemed likely. Given the lack of pauses, it was further predicted that
users would report that they liked the no pause set-paced version least when compared to the other formats.
Hypothesis 2 was that learning outcomes would be highest in the free pause condition when compared to the other pause conditions
(subgoal/timed), given that its design made this slideshow format the most flexible and tailored to the individual user. In other words, one
user might opt to pause on every other slide, while another user might opt to pause only once during the whole show. The other slideshow
formats imposed a program-mandated structure onto the pauses, in that the program controlled when to pause even though the user
decided when to move on. Given the more personalized aspect of the free pause condition, it seemed likely that users might also report
that they preferred the free pause version to the other formats.
Hypothesis 3 was that learning outcomes in the subgoal condition would supersede learning outcomes in the timed condition. This
hypothesis was made in line with the action segmentation literature, supporting that segmentation abilities, which often align with
identifying goal structure, are positively related to memory recall (Hard, Recchia, & Tversky, 2011).
2. Method
2.1. Participants
Participants were 72 college undergraduates (median age = 19 years; 54 females). Given that this experiment had a within-subjects design, all participants experienced all four learning media. The order of the slideshow formats was counterbalanced across participants. An additional 3 individuals participated but were not included in analyses due to a disinclination to participate (n = 2) or experimental error (n = 1).
2.2. Stimuli
2.2.1. Magic tricks
In matching with prior work by Sage (2014) and Sage and Baldwin (2014), four magic tricks were selected. The Chinese Linking Rings, cups/balls, and cut and restored string tricks were identical to the tricks utilized in the past research, and an additional coin trick was selected for the current study to bring the total to four tricks. Tricks were counterbalanced within slideshow type.
Slides were extracted from videos of the same male actor performing each trick at a rate of 3 slides per second, consistent with the prior work (Sage, 2014; Sage & Baldwin, 2014). The actor's chest and hands were visible in the videos. See Table 1 for further information on the
tricks.
2.2.2. Computer program for slideshows
A computer programmer wrote a Java program to be run on Netbeans 7.3.1 on a 21.5-inch iMac desktop computer. An initial instruction
screen prompted the experimenter to enter the subject number and the slideshow type/trick pairings. Upon hitting return, the slideshow
launched.
In the no pause set-paced version, the images automatically advanced every 750 ms (in line with the average slide duration set by self-paced users in Sage (2014) and Sage and Baldwin (2014)). In the subgoal and timed versions, the images advanced every 750 ms as well, but
paused after each subgoal or 20 slides, respectively. The subgoals were agreed upon by a group of expert coders prior to launching the
experiment (see Table 1 for examples). In the free pause version, the images advanced every 750 ms unless the user clicked the mouse.
A click of the mouse could slow down or speed up the presentation. Many clicks in a row would advance the slides at a faster pace, while a single click would leave an image frozen on the screen for any desired duration. Within each version, viewers saw a set of practice images (twice through a rubber band magic trick) to acclimate them to the new slideshow format; then the program took them through the assigned trick twice before moving on.

Table 1
Trick descriptions.

Chinese Linking Rings. Original video length: 52 s; number of slides: 158. Trick explanation: The actor utilizes two metal rings, one with a gap hidden from the viewer. He interconnects the rings in a variety of ways that seem magical, given that the rings appear solid. Subgoals: 6 (e.g., rubbing the rings together then dropping them down; separating the rings to reset for the next portion of the trick). Target actions: 8 (e.g., rubbing the rings together; banging the rings together).

Cups/Balls. Original video length: 58 s; number of slides: 175. Trick explanation: The actor hides two large sponge balls in his hands, and utilizes a secret 4th ball to make it look like there is always a ball falling through the next cup. He has 3 cups in total, so this magic occurs three times before he secretly slips the larger balls into the outer cups, and then reveals all the balls at once. Subgoals: 6 (e.g., completing the first round of the magic ball falling through the cups; successfully slipping the larger balls into the cups). Target actions: 8 (e.g., stacking the cups on top of each other; revealing the two large balls).

Cut and Restored String. Original video length: 58 s; number of slides: 175. Trick explanation: The actor has a normal drinking straw with a pre-cut slit in it, unknown to the viewer. He slides a string through the straw and folds the straw, secretly tugging down on the string so it is hidden behind his finger. He then cuts the straw, holds the straw together in his hand, and magically pulls out the intact string. Subgoals: 6 (e.g., bending the straw; covering the string with his finger). Target actions: 8 (e.g., bending the straw; cutting the straw).

Coin Trick. Original video length: 58 s; number of slides: 174. Trick explanation: The actor has a normal playing card with a pre-cut slit in it, unknown to the viewer. The card has a hole not quite large enough for a quarter to pass through. The actor fails at dropping the coin through the hole several times. He then magically taps the quarter while holding it between his fingers at the top of the card, and it appears to magically fall through the too-small hole (but it actually fell through the slit). Subgoals: 5 (e.g., showing the back of the card with the thumb over the slit; displaying the intact card successfully at the end of the trick). Target actions: 6 (e.g., holding the quarter up to the hole; hitting the coin until it falls out).
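For readers interested in the mechanics, the pacing behavior described above can be sketched in a few dozen lines of Java. The following is a minimal, illustrative reconstruction only, assuming a Swing-based implementation; the class name, file handling, and interface details are hypothetical, as the article does not describe the original program's internals.

```java
import javax.swing.*;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.io.File;
import java.util.Arrays;

/**
 * Illustrative sketch (not the study's actual program): slides advance
 * automatically every 750 ms; in FREE_PAUSE mode a mouse click freezes the
 * current slide until the next click, while in TIMED_PAUSE mode the program
 * pauses itself every 20 slides and waits for a click before continuing.
 */
public class PacedSlideshow {

    enum Mode { NO_PAUSE, FREE_PAUSE, TIMED_PAUSE }

    private static final int SLIDE_INTERVAL_MS = 750; // pace reported in the study
    private static final int TIMED_PAUSE_EVERY = 20;  // timed-pause interval reported in the study

    public static void main(String[] args) {
        Mode mode = Mode.FREE_PAUSE;
        File[] slides = new File("slides").listFiles(); // hypothetical directory of extracted frames
        Arrays.sort(slides);

        JFrame frame = new JFrame("Slideshow");
        JLabel view = new JLabel(new ImageIcon(slides[0].getPath()));
        frame.add(view);

        final int[] index = {0};
        Timer timer = new Timer(SLIDE_INTERVAL_MS, null);
        timer.addActionListener(e -> {
            index[0]++;
            if (index[0] >= slides.length) { // end of the trick
                timer.stop();
                return;
            }
            view.setIcon(new ImageIcon(slides[index[0]].getPath()));
            // Timed-pause mode: the program pauses itself every 20 slides
            // and waits for the viewer to click before resuming.
            if (mode == Mode.TIMED_PAUSE && index[0] % TIMED_PAUSE_EVERY == 0) {
                timer.stop();
            }
        });

        // A click toggles the timer: in free-pause mode this freezes or resumes
        // the show at will; in timed-pause mode it resumes after a forced pause.
        view.addMouseListener(new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent e) {
                if (mode == Mode.NO_PAUSE) return; // no viewer interaction in this version
                if (timer.isRunning()) {
                    if (mode == Mode.FREE_PAUSE) timer.stop();
                } else {
                    timer.start();
                }
            }
        });

        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
        timer.start();
    }
}
```

A subgoal-pause version would follow the same pattern, stopping the timer whenever the slide index reaches one of a pre-specified set of subgoal boundary frames.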
2.3. Measures
2.3.1. Learning check-in
After each slideshow, participants completed a four-question learning check-in that consisted of cognitive load and satisfaction measures. On a 1-7 Likert scale, participants indicated how satisfied they were (1 = extremely dissatisfied, 7 = extremely satisfied), how much effort it cost them to complete the task (1 = very little, 7 = a lot), how in control of the slideshow they felt (1 = very much not in control, 7 = very much in control), and how difficult it was for them to learn the action from the slideshow (1 = extremely easy, 7 = extremely difficult).
Self-report measures of satisfaction are typical in learning research (e.g., Teo et al., 2003). It was also important to include a measure of
control, as the rating reveals students' attitude towards or belief about the medium, which can, in turn, affect their learning outcomes (e.g.,
Kettanurak et al., 2001). Self-report measures of cognitive load on Likert scales have been frequently used in past research and have been
shown to be reliable (e.g., Paas, Tuovinen, Tabbers, & van Gerven, 2003). Similar questions on mental effort and difficulty are common, as is
the 7-point scale utilized in the current work (e.g., Kablan & Erden, 2008; Moreno & Valdez, 2005; Vandewaetere & Clarebout, 2013). Some
of this prior work has also indicated that the “difficulty” question speaks to germane cognitive load while the “mental effort” question
speaks more to intrinsic cognitive load, thus providing information on two aspects of load. Lastly, we opted to give learners the load
measures immediately following each slideshow type, as past research has suggested that cognitive load assessment is more accurate if it is
measured more frequently, for load can vary as the situation changes (Paas et al., 2003; Stark, Mandl, Gruber, & Renkl, 2002; Van Gog, Paas, &
van Merrienboer, 2008).
2.3.2. Free recall
Participants were given a blank memory recall sheet and were instructed to recall as many of the actions as they could from the slideshow. Participants were given a score for each trick, reflecting the number of target actions they had successfully identified. These target
actions reflected key actions shown in the slideshows, and were determined by a group of expert coders prior to beginning the experiment
(examples in Table 1).
2.3.3. Performance
Participants were given 8 min to perform the four tricks they had witnessed in the videos. All materials were available in a box on the
table for participants. This portion of the experiment was videotaped and coded later, with the exception of 6 individuals who opted not to be video-recorded and who thus had their data excluded from the performance measure. The method for scoring performance was identical to
that for free recall.1
2.3.4. Interview
Following their completion of the experiment, participants were asked several questions about their impressions and past experiences.
Participants were asked which slideshow format was the easiest to learn from and why they liked it as well as which format was the hardest
to learn from and why. This information, along with participants' reasons for opting to use the click-to-pause feature, was used as a supplementary measure to get a sense of whether participants' opinions on the formats differed, and will be discussed in the subsequent
results section. The remainder of the questions were used to collect background information, and will thus be discussed here.
First, each participant was asked to rate their overall attention to the slideshows on a 1-7 scale, where 1 indicated not attending at all and
7 indicated attending perfectly the whole time. However, attention was relatively consistent across participants. The average self-reported attention was 5.24 (SD = 0.91), with 81% of participants reporting a 5 or 6. This limited variability with a relatively high average might reflect the typical attention level of a college student, or it might have been positively biased if participants did not want to admit to the experimenter that their attention had been waning.
Also in the interview, participants were asked to specify any past experience with magic, specifically whether they had seen any of these
tricks before. While 21% reported some past experience with magic, this experience was primarily either witnessing random tricks in
childhood or seeing a magic show at some point in their lives (e.g., "magic set in 3rd grade", "been to a couple basic shows", "a little bit at camp when 7", "very minimal", "card tricks, not this type of magic"). Only 14% reported having any knowledge of these tricks, and
this was limited to either the rings or cups/balls trick. This limited prior exposure confirmed that magic tricks were a relatively novel domain
for users.2
Participants also rated how difficult they felt each trick was to perceive and perform on a scale of 1-7, where 1 indicated extremely easy and 7 indicated extremely difficult. For perceiving the trick, the cups/balls was the hardest (M = 5.13, SD = 1.79), followed by the string/straw (M = 4.56, SD = 1.86), card/coin (M = 2.72, SD = 1.56), and lastly the rings (M = 2.61, SD = 1.57). For performance, the tricks followed the same pattern of difficulty: cups/balls (M = 6.17, SD = 1.28), string/straw (M = 5.29, SD = 1.72), card/coin (M = 3.29, SD = 1.80), and the rings (M = 2.50, SD = 1.59). This difference in difficulty merited determining whether outcomes on the dependent measures for
each learning medium in the present study varied as a function of trick. However, preliminary analyses suggested that results did not vary
accordingly, and thus analyses presented hereafter were collapsed across tricks.
Further, participants were asked if they had any strategy to remember what they saw in the slideshows; half reported they did (usually
citing that they tried to remember the steps or pay attention), and half reported they did not. Representative statements by participants
include: “Thinking motions in head before doing it”, “First time just observed; Second time broke down into steps”, “List of things the person
did”, “Repeating the steps in mind”, “Just paid attention”, along with the common “No” and “Not Really.” Given that participants were
instructed to pay attention and remember the steps prior to each trick, and this approach was the most commonly reported strategy, these
answers did not seem particularly meaningful.
2.4. Procedure
Participants completed the consent process and were then instructed that they would be viewing four slideshows on the computer, and
that they would see each trick twice. The experimenter explicitly instructed participants to pay close attention, as they would need to perform the
tricks later on. For each slideshow type, participants received the practice run and then advanced through the trick in the corresponding
slideshow version. Immediately following each trick, participants completed the learning check-in before moving on to the next slideshow.
After all four slideshows and learning check-ins were completed, participants completed the memory recall portion (with no time limit),
were given 8 min to perform the four tricks they had witnessed, and were then interviewed on their impressions and experiences. Participants were then debriefed and thanked for their participation.
3. Results
3.1. Analysis strategy
We used a repeated-measures multivariate analysis of variance (MANOVA) with Helmert contrasts. Slideshow type was entered as the
repeated measure, with six dependent measures (memory and performance scores along with self-reported satisfaction, effort, control,
difficulty). As mentioned previously, data were collapsed across trick. Helmert contrasts were selected given the three hypotheses. Contrast
1 provided insight on hypothesis 1, comparing learning from slideshows with and without pause capability. Contrast 2 provided information
on hypothesis 2, comparing free pause (user fully in control) to pre-determined pauses (program in control). Contrast 3 reflected hypothesis
3, comparing learning from the subgoal to timed pause slideshow.
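For concreteness, and assuming the conventional Helmert weighting implied by these comparisons (the article does not list the exact contrast coefficients), ordering the conditions as no pause, free pause, subgoal pause, and timed pause yields contrasts of the form:

$$
\begin{aligned}
C_1 &: \; \mu_{\text{no pause}} - \tfrac{1}{3}\,(\mu_{\text{free}} + \mu_{\text{subgoal}} + \mu_{\text{timed}}),\\
C_2 &: \; \mu_{\text{free}} - \tfrac{1}{2}\,(\mu_{\text{subgoal}} + \mu_{\text{timed}}),\\
C_3 &: \; \mu_{\text{subgoal}} - \mu_{\text{timed}}.
\end{aligned}
$$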
In addition to the MANOVA, two non-parametric chi-square goodness-of-fit tests were conducted to determine whether participants'
selection of a slideshow type as their favorite versus as their least favorite differed from chance (i.e., 25% reporting each type). In other words, did participants prefer a particular version at a rate higher than what would be expected by chance?

1. This method of collecting memory recall and performance was designed to be objective: looking for a particular set of actions that were pre-determined by the research group via group discussion and careful consideration of each magic trick sequence prior to completion of the research. There was no subjective rating included (e.g., we did not rate participants based on how subjectively successful we thought they were). Via this method, there were only a few statements made by participants in their written recall that were questionable in terms of whether they matched the list of actions; these few statements were discussed by the research team and decided upon accordingly.
2. Results of the current experiment did not vary when this past exposure was taken into account, likely given that prior exposure was limited and also often distant in time/relation to what participants experienced in the present work.

To garner further insight on how individuals were clicking, we then examined how many individuals clicked during the free pause
condition, and if that clicking led them to report the free pause medium as being the easiest version for them to learn from. We also
conducted comparisons between the subgoal and timed conditions via paired-samples t-tests to shed light on whether pause duration
differed across those two media.
3.2. MANOVA
The omnibus MANOVA on learning differences between the slideshow versions was significant (Wilks' Lambda = 0.818, F(18, 580.31) = 2.38, p = 0.001). Looking at the univariate tests, only control emerged as significant (F(3,210) = 10.32, p < 0.001). For hypothesis 1 (Did pause capability lead to better learning outcomes for students when compared to the no-pause version?), the contrast on control was significant (F(1,70) = 20.97, p < 0.001), as was the contrast on memory recall (F(1,70) = 4.45, p = 0.039). When given pause capability, learners recalled more target actions and felt more in control than without pause capability. For hypothesis 2 (Did the free pause version lead to better learning outcomes for students when compared to the subgoal/timed versions?), only the contrast on control reached significance (F(1,70) = 8.30, p = 0.005). Learners felt more in control of the free pause version than the subgoal/timed versions. For hypothesis 3 (Did the subgoal version lead to better learning outcomes for students when compared to the timed version?), no contrasts were significant. Table 2 displays the means and standard deviations for the dependent measures by slideshow format.
3.3. Chi-square tests
The chi-square goodness-of-fit tests compared participants' responses to which medium they liked best and least to an assumption of equality, i.e., 25% selecting each medium. For which slideshow format they liked least (and was hardest to learn from), this chi-square test was significant (χ²(3, N = 68) = 50.94, p < 0.001). Out of 68 participants who selected one medium on this question, 41 reported that the no pause set-paced slideshow was the hardest to learn from. As for the other media, 16, 9, and 2 reported the timed pause, subgoal pause, and free pause, respectively. As to why the no pause set-paced version was disliked, participants mentioned that they could not stop (e.g., "didn't allow you to look at what was happening in detail"; "no pauses, couldn't clear up confusion"), it moved quickly (e.g., "moved really quickly and didn't stop"; "too fast paced to see what's going on"), there was no control (e.g., "no control, not enough time to think"; "knew that there was no way to stop"), and/or their mind wandered when it did not stop (e.g., "if attention wandered, it didn't stop"; "couldn't focus on one particular point"). To briefly mention the next medium, respondents' dislike of the timed pause version stemmed from the distractions caused by the pausing mechanism, with users commenting: "it was hard to get real flow because starting and stopping, and lost concentration when stopped" and "trying to figure out why pausing, distracting, hard to get back into it".
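As a point of reference, the reported statistic follows directly from these counts: with 68 respondents and four formats, the expected frequency is 68/4 = 17 per format, so

$$
\chi^2 \;=\; \sum_i \frac{(O_i - E_i)^2}{E_i} \;=\; \frac{(41-17)^2}{17} + \frac{(16-17)^2}{17} + \frac{(9-17)^2}{17} + \frac{(2-17)^2}{17} \;\approx\; 50.94.
$$

The same computation with the "most liked" counts (38, 14, 12, 4) reproduces the value of 37.88 reported in the next paragraph.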
As for the chi-square test regarding which slideshow format was most liked (and easiest to learn from), a significant difference emerged between media (χ²(3, N = 68) = 37.88, p < 0.001). Out of the 68 participants who selected one medium in response to this question, 38 reported that the free pause slideshow was the easiest to learn from. As for the other media, 14, 12, and 4 users reported the no pause set-paced, subgoal pause, and timed pause, respectively. When asked for their reasoning, users often cited that the free pause was the easiest to learn from because they could stop during moments of confusion (e.g., "moments of confusion I could pause instead of computer dictating"; "you could control parts you didn't understand as much, slow it down"), were in control (e.g., "because I was in control"; "because if I knew I missed something I could pause"), could select their own focus (e.g., "could choose what to focus on"; "control over if there was a challenging spot, could stop if have to"), and could guide their own speed of learning (e.g., "set your own pace and get closer look at stuff"; "controlled speed of learning"). To briefly highlight the other significant categories, respondents favoring the subgoal version seemed to enjoy the placement of the pauses, making such comments as "pause at important points" and "because then I paid attention to that more and knew I had to do that step when I performed it". Respondents favoring the set-paced version seemed to prefer getting through the slides as quickly as possible, commenting "could watch straight through" and "no interruptions, more smooth, felt faster."
3.4. Click data
3.4.1. Free pause
In the free pause condition, only 16 of the 72 participants opted to utilize the click-to-pause option at least once during their viewing. With one outlier removed (an excessive clicker with 40+ clicks), the average user clicked 3.86 times (SD = 5.12) during one run through a trick. Out of these 16 clickers, 12 reported that the free pause was the easiest learning medium to learn from; this proportion of 75% was significantly higher than what would be expected from chance (25%, given four media) according to a non-parametric binomial sign test (p < 0.001), suggesting that users taking advantage of the pause functionality did tend to then report it as their favorite medium.
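For reference, treating this as an exact one-tailed binomial test of 12 or more clickers out of 16 against a 25% chance rate (an assumption about the precise test computed, which the article does not specify) gives

$$
P(X \ge 12 \mid n = 16,\; p = 0.25) \;=\; \sum_{k=12}^{16} \binom{16}{k} (0.25)^k (0.75)^{16-k} \;\approx\; 3.8 \times 10^{-5},
$$

which is consistent with the reported p < 0.001.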
Table 2
Means and standard deviations for dependent measures by slideshow format.

Measure          Free pause     Subgoal pause   Timed pause    No pause
                 M (SD)         M (SD)          M (SD)         M (SD)
Free recall(a)   0.55 (0.25)    0.55 (0.23)     0.50 (0.24)    0.54 (0.26)
Performance      0.54 (0.26)    0.53 (0.25)     0.52 (0.23)    0.52 (0.24)
Satisfaction     4.04 (1.35)    4.03 (1.37)     3.76 (1.14)    3.85 (1.34)
Effort           3.29 (1.65)    3.33 (1.70)     3.35 (1.56)    3.17 (1.79)
Control(a)       4.36 (1.34)    3.83 (1.38)     3.78 (1.40)    3.26 (1.45)
Difficulty       4.08 (1.46)    4.19 (1.47)     4.29 (1.40)    4.17 (1.65)

Note: Free recall and performance numbers represent the proportion of target actions; the bottom 4 measures were scored on 7-point scales. (a) indicates that a significant difference was found between versions on this measure.

Additionally, given that so few people utilized the clicking mechanism in the free pause condition, we compared practice "clickers" (n = 35) to "non-clickers" (n = 37) on their task performance. We homed in specifically on participants who had clicked during the practice run for the free pause condition, as this clicking behavior indicated that they had listened to the directions and understood that they could click to pause the show. Two findings of interest emerged. First, an independent samples t-test comparing the clickers to the non-clickers on their satisfaction with the free pause condition was marginally significant (t(70) = 1.88, p = 0.065). Clickers (M = 4.34, SD = 1.33) were somewhat more satisfied with this medium than non-clickers (M = 3.76, SD = 1.32). Second, an independent samples t-test also revealed that clickers (M = 4.77, SD = 1.35) felt more in control during the subsequent trick presentation than the non-clickers (M = 3.97, SD = 1.21, t(70) = 2.64, p = 0.01).
To supplement this quantitative data with qualitative information, participants were asked in the interview portion of the experiment
how they made the decision to click-to-pause (or not) during the free pause slideshow version. Many participants simply reported that they
did not use the pausing feature (e.g., “I didn't pause”; “Didn't think it would help”; “Didn't feel the need to”; “Made more sense to keep
going”). However, when participants did pause, there were three consistent reasons for that pause: to solve confusion (e.g, “Just at moments
of confusion”; “When I lost understanding of what he was doing”), to analyze changes or hard parts of the trick (e.g., “I chose moments
where I thought he was hiding something or doing any sort of trick”; “paused if needed to channel in on something”), or to observe the
secret being actively revealed (e.g., “Paused when something with the trick e secret e seemed to happen to look at his hands”; “Whenever
the key point in the trick was e every frame at key points”). Thus, when the click-to-pause option was utilized, viewers tended to have a
particular reason related to enhancing their understanding of the trick.
3.4.2. Subgoal and timed pause
Participants' average pause duration on the subgoal slides was 5.06 s (SD = 3.32) while the average pause duration in the timed version was 4.66 s (SD = 1.70), a non-significant difference according to a paired samples t-test (t(143) = 1.37, p = 0.17). Interestingly, the clicking trends between these two slideshow versions seemed to differ. Fig. 1 depicts participants' pause durations (some timed tricks had up to 8 breaks, but subgoal tricks had no more than 6 breaks). For the first 3 pauses, participants paused similarly across the two media, as confirmed by a paired samples t-test (t(143) = 0.99, p = 0.32). For the next set of 3 pauses, participants paused marginally longer in the subgoal version (M = 5.29 s, SD = 6.09) than the timed version (M = 4.34 s, SD = 1.94), as confirmed via a paired-samples t-test (t(143) = 1.80, p = 0.075).
4. Discussion
This research focused on college students' learning from slideshow formats that varied in control and pacing. There was some evidence to
suggest that having control (whether directly in the free pause version or via clicks in the subgoal/timed versions) resulted in better learning
outcomes and likeability of learning media. Participants reported that they liked the free pause version best, even though only a handful of
people used the click-to-pause mechanism, suggesting that mindset may be the key factor. They seemed to appreciate the possibility for
pausing, even if they did not utilize the function. Conversely, users often reported that the no pause version was the most difficult to learn
from. Furthermore, this no pause slideshow format resulted in the lowest memory recall scores. Perhaps a steady stream of information,
with no other involvement, was either monotonous (since not interactive) or frustrating (since the flow of information never slowed down)
for viewers. Furthermore, many users cited lack of control as a key reason why this particular medium was unappealing. Users who did report enjoying the no pause version often cited that it went the fastest (perhaps expressing that they were fatigued or annoyed by the length of the presentations more generally).
In the present study, one of the most notable trends was that students' mindset played a role in their perceptions. Though there was not
much variability in several of the measures between the media types, participants felt significantly more in control of the free pause version.
This finding was further enhanced when participants actually used the click-to-pause option. They reported that the free pause version was
the easiest to learn from, and had positive comments about why they enjoyed that format. Perhaps the most interesting result was that the
78% of participants who did not click viewed identical slideshow formats for the free pause and no pause set-paced tricks. Regardless, the
no-pause version resulted in lower memory recall and the least likeability when compared to the other versions. Thus, the mindset of being
in control, rather than clicking or pausing itself, seemed to enhance students' positive experiences with the learning medium. This finding
seems in line with prior work showing that attitude matters in determining one's satisfaction with a learning medium (e.g., Teo et al., 2003).
Fig. 1. Click duration in subgoal and timed conditions.

Though these reports set the free pause version apart from the other learning media, the free pause version did not differ in learning outcomes when compared to the subgoal/timed versions. No difference might have emerged as the subgoal/timed versions did both have some level of control built in (deciding when to move forward to the next set of slides). In the future, perhaps requiring participants to click a
few times so that they see their control in action, or teaching them novel material that has higher stakes (e.g., for a grade in a course) would
translate this enhanced likeability into enhanced performance benefits. Within the context of low stakes, one might not be as focused on
acquiring all the possible information within the slideshow. But when a student has to take that information and apply it elsewhere for a
grade (or in the longer-term), it seems plausible that the benefit of the free pause slideshow could be maximized.
When considering the timed versus the subgoal slideshows, there were few significant differences. This finding suggests that it was not
critically important where the pauses were inserted; seeing the pause immediately following a subgoal did not seem to aid memory and
performance over-and-above the program pausing after every 20 slides. However, it should be noted that more people said they favored the
subgoal version than the timed version, and, similarly, more people said they disliked the timed version than the subgoal version. Given that
individuals generally shortened their pause duration in the timed version while increasing the duration of their pauses in the subgoal
version, perhaps there was a difference in how participants viewed the utility of the pauses. This, in turn, could have affected their liking of
the slideshow format, favoring the breaking down by subgoals.
Interestingly, though memory recall varied slightly between conditions, no differences emerged in performance in the present study. In
response to this null finding, we looked at performance from a different angle to see if there was something more nuanced in the data.
Specifically, we timed how long participants spent on each trick to see if certain slideshow types led to longer performance duration,
regardless of the success of that performance. However, there again were no significant differences between slideshow formats. This null
finding could perhaps be due to the nature of the learning, considering that magic tricks are a novel and challenging domain. Relatedly, Sage (2014) reported that there were more high performers in the computer-controlled slideshow versions than in the self-paced version, but
learning was roughly on par performance-wise across the computer-controlled versions, which aligns with the slideshow types presented
here, given that all were computer-controlled to some extent.
To return briefly to this prior work, Sage (2014) reported that a computer-controlled set-paced slideshow led to superior outcomes when
compared to the self-paced slideshow presentation. The self-paced version presumably placed extraneous cognitive load onto users, as 500+ clicks were required to advance through the presentation. To improve upon this prior work, the current study utilized the same "best"
option from Sage (2014), a computer-controlled set paced slideshow, while also adding in three new slideshow versions to speak more
directly to how self-control can be instantiated in slideshows in a less cognitively burdensome manner. Instead of requiring excessive
clicking, one slideshow version allowed the user to pause at will (free pause), one version capitalized upon the action segmentation
literature (Hard, Recchia, & Tversky, 2011; Spanjers et al., 2010) to pause the show at moments depicting event boundaries (subgoal pause),
and one version explored how a reliable pausing mechanism (every 20 slides) compared (timed pause). Also new to this research was the
incorporation of cognitive load and satisfaction measures, to allow for concrete data to be referenced instead of speculation regarding why
students learned better from one slideshow than another slideshow type. The within-subjects design of the current work facilitated this
data collection, by allowing users, having experienced all versions, to comment on which slideshow version they liked best/least and why.
Based on our data presented here, the current study extended the prior research by suggesting that mindset matters. Giving the viewers the option to pause, but not forcing them to do so, seemed superior to having a presentation advance automatically on the screen with forced
pauses or no option at all for pausing. Using our self-report measures, it was clear that users felt more in control of the free pause presentation and liked it the best, given the control that it offered. It was also informative that users did not rate difficulty or mental effort as
being significantly different across the four slideshow versions, suggesting that it was not the cognitive load that varied across these new
slideshow types but indeed the control aspect that influenced performance and perception. As will be discussed shortly, this mindset of
control is thus an important design consideration for future learning media.
Also, it is worthwhile to note that participants' averages on the learning check-in hovered right around the neutral mid-point (4 on a 7-point scale). While it seems encouraging that users did not rate these media on the low end, it is somewhat concerning that they did not rate
them more highly, particularly on the satisfaction measure. How much individuals engaged with the medium might have played a role; we
found that individuals who actually utilized the click-to-pause mechanism in the free pause version reported higher satisfaction with the
medium. It is also possible that the design of this particular slideshow model was mediocre in terms of engaging students. Though not
placing too much or too little cognitive load onto the user (given that mental effort and difficulty were consistently reported around the mid-point across the measures), this design might not have reached "optimal" levels of interactivity if other aspects of the presentation failed to
engross students. One could consider that students might be motivated to put forth more effort if they were presented with a topic of high
interest to them or of some consequence. Given that our interactions with the participants were fleeting and magic tricks are not a domain
commonly used by college students, learners may have been less apt to be fully engaged or interested in the learning media.
4.1. Lessons for computer programmers
Results from this study suggest that incorporating self-control features, without forcing too much effort or extraneous cognitive load
upon students, might be a helpful design consideration. The key flaw of past work like Sage (2014) was that the self-control mechanism
required too much effort on the part of the user, creating usability issues and likely cognitive overload. In the present work, the mindset of
having some control enhanced liking for the program even when participants did not use the self-control option. This self-control option is a
simple feature that can be added to educational programs. Students who do want to interact with programs more are free to do so, while
those who do not want to are not forced to. This optional functionality honors students' individual differences in their preferences for
learning as well. It also seems helpful to incorporate some pauses into the slideshows, as learning outcomes were lower when the computer
presented the slides at a constant pace with no pauses. This finding is similar to work by Mayer and Chandler (2001) and Schaffer and Hannafin (1986), both suggesting that presenting information in segments is advantageous for learning.
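To make this concrete, the optional click-to-pause feature can be implemented with very little machinery. The sketch below (in TypeScript, with hypothetical names; it is not the program used in this study) auto-advances slides on a timer while letting the learner click at any moment to pause and resume, mirroring the free pause condition.

class FreePauseSlideshow {
  private index = 0;
  private paused = false;
  private timer?: ReturnType<typeof setInterval>;

  constructor(
    private slides: string[],              // slide contents or image URLs
    private intervalMs: number,            // automatic advance rate
    private render: (slide: string) => void,
  ) {}

  start(): void {
    this.render(this.slides[this.index]);
    this.timer = setInterval(() => this.tick(), this.intervalMs);
  }

  // Wire this to a mouse click: the control is available but never required.
  togglePause(): void {
    this.paused = !this.paused;
  }

  stop(): void {
    if (this.timer !== undefined) clearInterval(this.timer);
  }

  private tick(): void {
    if (this.paused) return;               // learner-initiated pause
    if (this.index >= this.slides.length - 1) {
      this.stop();                         // end of presentation
      return;
    }
    this.index += 1;
    this.render(this.slides[this.index]);
  }
}

Because pausing only flips a flag, the feature adds no required interaction for learners who prefer to simply watch.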
Though presenting information in segments might be helpful to the learning process, computer programmers must carefully consider
the different pausing mechanisms that they can use in segmented programs and the cognitive processes associated with these different
options. Extraneous cognitive load is created when there is too much information coming in or too much required of the learner to manage it
all simultaneously. This high demand is likely why researchers like Sage and Baldwin (2014), Sage (2014), and Moreno and Valdez (2005)
have reported on certain forms of interactivity interfering with the learning process. Moreno and Valdez believed that interactivity must encourage purposeful processing of the new information in order to generate active learning as opposed to harming learners' knowledge
acquisition. Thus, determining the appropriate pause between segments – where and when – is a meaningful and challenging task for
program designers. The current work seems to suggest that leaving the timing of those pauses up to the learner might be best. The idea of
being in control almost seemed comforting to participants, as they knew they could stop at moments of confusion.
Conversely, having pauses forced upon users, at whatever moments, sometimes seemed to be met with disdain even if performance was unaffected. Though that disdain varied in degree (e.g., perhaps less so for the subgoal pauses than for the timed pauses), users appreciated having this aspect left up to them.
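To illustrate the design space, the pausing mechanisms compared in this study can be expressed as interchangeable strategies that a segmented program consults before advancing. The sketch below uses TypeScript and illustrative names; it is an assumption about how such a program could be organized, not the authors' implementation.

type PauseStrategy =
  | { kind: "free" }                            // learner may pause at any slide
  | { kind: "subgoal"; boundaries: number[] }   // pause at event boundaries
  | { kind: "timed"; every: number }            // pause every N slides
  | { kind: "none" };                           // continuous playback

// Returns true when the program itself should stop and wait for a click.
function shouldPauseAt(slideIndex: number, strategy: PauseStrategy): boolean {
  switch (strategy.kind) {
    case "subgoal":
      return strategy.boundaries.includes(slideIndex);
    case "timed":
      return slideIndex > 0 && slideIndex % strategy.every === 0;
    default:
      // "free" pauses come only from the learner; "none" never pauses on its own.
      return false;
  }
}

// Example: the timed condition in this study paused every 20 slides.
const timed: PauseStrategy = { kind: "timed", every: 20 };
console.log(shouldPauseAt(40, timed)); // true

Expressing the pause policy as data in this way would let designers combine strategies, for example subgoal pauses plus an always-available learner-initiated pause, without restructuring the program.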
In sum, returning to the idea of system versus user control (Hannafin, 1984), the present research blended the two. This
combination allowed the user some freedom, such as choosing when to continue after a pause. While future research could look more
closely at the optimal blend between user and system control, computer programmers should carefully consider what this blend might look
like and incorporate both user and system control options in their programs. When fully in control, users might become exhausted due to
usability issues and extraneous cognitive load. When not in control at all, users might become frustrated and like the program less. When
the right blend is struck, perhaps cognitive load can be reduced and the best learning outcomes can be obtained.
4.2. Lessons for educators
Screen media can be used to enhance students' educational experiences. Reinforcing the idea that students can play an active role in their
learning and can control their own speed and path of learning might help create a positive learning environment and, as seen in prior
research, enhance intrinsic motivation (Lepper, 1985).
A noteworthy finding from the present research that is relevant for educators pertains to the item-order hypothesis, or the notion that
the order in which information is presented plays a key role in memory. The serial position effect refers to individuals having greater
memory for the first word (primacy effect) and last word (recency effect) in a list, particularly when given immediate assessments (Nairne &
Kelley, 2004). Past research has also suggested that item order especially affects memory with short lists of information (Mulligan & Lozito,
2007). Maintaining the item order in one's mind can help organize information and sustain a stronger memory trace. For instance,
the temporal contiguity of two items can help someone relate the different items to each other in order to more successfully remember
them.
In our research, we looked at whether participants were, as a potential learning strategy, recalling and performing the four tricks in the
same order they had experienced them during the learning phase. For their written memory recall, 60% of students matched the order of
their recall to the order of the presentation. It is interesting to note that 14% of students omitted one trick completely from their written
recall, and that the majority of these participants left out the third trick in particular. Perhaps the third trick was the most likely to be
forgotten given the serial position effect and fatigue. In other words, students might have been more motivated as well as had the primacy
effect working in their favor for slideshows 1 and 2 while they may have started to get annoyed or fatigued by slideshow 3. Then, when told
that slideshow 4 was the final show, this announcement conceivably led to a rejuvenation of attention, in addition to the recency effect enhancing their memory. For their live performance of the tricks, among those participants who consented to be videorecorded (n = 66), 47% matched
the order, while 10% left out a trick entirely. Interestingly, participants most frequently left out the fourth trick, perhaps because of the time limit. The recorded performance also occurred after free recall, and thus required individuals to
hold items in memory for a longer period of time.
When analyzing how this order matching affected learning outcomes, it is interesting to note that matchers in the written free recall
remembered significantly more of the target actions than the non-matchers on the provided paper (~60% vs. 46% of actions reported). They also wrote significantly more words; t-tests comparing matchers and non-matchers on both target action counts and word counts were significant (p's < 0.05). Matching did not, however, result in a benefit for the hands-on performance task. For the 39% of students who could be further classified as "double-matchers", meaning that they were true to the original order on both the free recall and recorded performances, double-matching conferred no added benefit.
One possibility for why matchers only did better on the written recall task is related to the specificity and difficulty level of the task.
While writing, they had no visual cues to help them remember. Therefore, one of the possible strategies may have been to sequentially sort
through the memory of the slideshows. For the performance task, participants had visual cues from the box of trick materials provided to
them. Thus, they might have remembered details regardless of matching or non-matching, which is in line with the lack of condition
differences for performance as well. Further, while one can write about the secrets of a trick, one might not be able to successfully perform
magic or could get more frustrated in the moment given that it is, literally, a tricky task to complete. In other words, when it comes to
performing the tricks, people often simply either can or cannot do it.
These findings regarding matchers and non-matchers extend the item-order hypothesis to slideshows. More information can be learned
by looking at hierarchical order as well. For instance, do we best remember order at the macro level of the total sequence of actions (e.g., this trick came first, this one came last), within the slideshows (e.g., within the ring trick, this action happened first, then this, then that), or at an even smaller level (the micro actions that might happen within one larger action of a trick)? We looked at the macro level of
item order, but it might be interesting to consider the more detailed levels in the future, and what ordering students find most important
and helpful to remember. Further, given that slideshows are often used in the educational realm, this additional finding offers important
information about one potentially useful strategy that educators could promote (paying attention to ordering) and one piece of advice for
how to guide students through a program (presenting key information at the beginning and end of the presentation).
4.3. Limitations and future directions
In everyday life, students often experience longer learning periods than in the present study. The learning phase of this study lasted
approximately 15–20 min, which is shorter than an average class period, and students were tested immediately. This brief exposure to the magic tricks may have been inadequate for developing a complete understanding of the nuances of the tricks. Given the rise in popularity of
online courses for college credit, further research should investigate the efficacy of more protracted online learning experiences. Future
work could also examine long-term learning to see whether retention is influenced by the learning medium.
Though magic tricks were utilized for the sake of consistency with prior work and to present a novel domain of learning to college
students, they are also not a topic of much importance to students. It seems possible that different domains of knowledge might be better
learned via different pausing mechanisms. The present work capitalized upon procedural knowledge, specifically learning a sequence of
events, where segments might act as event boundaries to enhance understanding (Spanjers et al., 2010). Relatedly, Marcus, Cleary, Wong, and Ayres (2013) recently showed that animation is helpful, compared to static graphics, when learning hand motor skills. Slideshows in the present work may have essentially created an animation out of static images in the viewer's mind as the slides advanced in
order. In a different domain, such as remembering declarative knowledge like language or facts, it is possible that a different pausing
mechanism might be superior, as such animation might not be necessary and perhaps even extraneous to the learning process. For instance,
it might seem distracting to have words animated and moving around a page; a simpler design might be more effective, such as static images
that can be advanced at a particular pace. Thus, future work might look at areas of knowledge more applicable to an average college student
and falling within a different knowledge type, such as how to prepare for the GRE via learning vocabulary terms or learning different facts
about a country's history.
The blend of user and program control in the present study allowed for a closer look at different types of pacing and control and how
these features can be implemented into a slideshow program. However, there are other forms of control that one might consider, such as
allowing the user to go backwards and forwards through the slides, zoom in and out, and use other program features. Given the work here, one may
hypothesize that having those options, regardless of whether one actually uses them, is helpful in creating an enjoyable program.
Furthermore, the research discussed throughout this paper has focused on slideshows on the computer as opposed to other contemporary
technological devices. Perhaps translating slideshows to a tablet, where users swipe and control the images with a finger as opposed to a
mouse in hand, might have different implications for the type of pacing and control that works best. It is also possible that pauses could be
capitalized upon to check in on the learner's current understanding. For instance, recent work by Cheon, Chung, Crooks, Song, and Kim
(2014) suggests that pauses that are more active, such as those incorporating free recall and short answers, lead users to higher success on
recall and transfer tests when compared to pauses that are more passive in nature, such as those in the present work. Taking these lessons
together, perhaps a program that has the option to pause and answer questions, while not forcing the user to do so, might be especially
worthwhile for learning.
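As a hypothetical sketch of that suggestion (again in TypeScript, with invented names and an invented example question, not drawn from Cheon et al., 2014, or the present materials), a pause could offer an optional recall prompt that the learner is free to skip:

interface PausePrompt {
  question: string;
  answer: string;
}

// Called when the show reaches a pause. The prompt is offered, never required:
// the learner may type an answer for feedback or simply continue.
function onPause(prompt: PausePrompt, learnerResponse: string | null): string {
  if (learnerResponse === null || learnerResponse.trim() === "") {
    return "Continuing without an answer.";   // passive pause, as in the present study
  }
  const correct =
    learnerResponse.trim().toLowerCase() === prompt.answer.trim().toLowerCase();
  return correct ? "Correct!" : `Not quite; one answer is: ${prompt.answer}`;
}

// Example usage with a made-up question.
console.log(onPause({ question: "What hides the ring?", answer: "the scarf" }, "the scarf"));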
Another possibility for future work is to explore how individual characteristics of the learner affect the type of screen media that works
best. It has been well-established in cognitive models of learning that presenting new information in a manner that matches well with the
individual is beneficial for producing the best learning outcomes (Bruner, 1973; Cronbach & Snow, 1977). For instance, Gay (1986) pointed to
differences based on aptitude; learners with more background information on the topic excel when given more control, while learners with
less of a background on the topic perform better when the online environment is more structured. In a more recent demonstration of this
effect with computers, Park, Lee, and Kim (2009) reported that students with background knowledge more readily benefit from highly
interactive computer simulations, and that students with high prior knowledge may err with non-interactive programs given that these
programs might not be very stimulating to advance through. Similarly, Homer and Plass (2014) were curious about how interactivity of
multimedia and executive function of the learner might interact to affect learning outcomes. They reported that students with a higher level
of executive function apply knowledge more readily following an exploratory presentation while students with lower executive function
apply knowledge more readily after a more guided, less interactive simulation. Future work might look at how individual learning style and
preferences interact with the type of control and pace that works best for the individual learner. Further, it seems pertinent to determine
what type of student is best suited for enrolling in online courses as opposed to face-to-face courses, as this knowledge might influence
course offerings within a college.
5. Conclusions
The use of technology in education is rapidly rising, given recent interest in expanding online college-level courses and the constant release of new and improved technologies like e-books and tablets. Research has pointed to the success of interactive
programs in enhancing learning outcomes, while also pointing to the flaws of such technology. The present work demonstrated that
allowing for control of a program, regardless of whether the user actually opts to use this control, might be a valuable design consideration.
Generally, presenting information at a reliable pace, in pausable segments, and in a way that does not overburden the user, seems optimal.
Moving forward, research must continue to disentangle helpful versus hindering design features for technological and online learning, and
determine what type of control is most beneficial for certain users.
Acknowledgments
Many thanks to Jonathan Sage for designing the computer program for use in this study. We also extend gratitude to Rebecca Rees and
Kara Krushel for collecting and coding data as well as providing comments on an earlier version of this manuscript. Thanks as well to Emily
Sherry for assisting in the data collection and coding process. We are also grateful to the undergraduate students who participated in this
research.
References
Becker, D., & Dwyer, M. (1994). Using hypermedia to provide learner control. Journal of Educational Multimedia and Hypermedia, 3(2), 155–172.
Berge, Z. (2002). Active, interactive, and reflective e-learning. The Quarterly Review of Distance Education, 3(2), 181–190.
Bezjian-Avery, A., Calder, B., & Iacobucci, D. (1998). New media interactive advertising vs. traditional advertising. Journal of Advertising Research, 14, 23–32.
Bruner, J. (1973). Beyond the information given. New York: Norton.
Cheon, K., Chung, S., Crooks, S., Song, J., & Kim, J. (2014). An investigation of the effects of different types of activities during pauses in a segmented instructional animation. Educational Technology and Society, 17(2), 296–306.
Cronbach, L., & Snow, R. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York: Irvington.
Domagk, S., Schwartz, R., & Plass, J. (2010). Defining interactivity in multimedia learning. Computers in Human Behavior, 26, 1024–1033.
Gay, G. (1986). Interaction of learner control and prior understanding in computer-assisted instruction. Journal of Educational Psychology, 78(3), 225–227.
Ghose, S., & Dou, W. (1998). Interactive functions and their impacts on the appeal of Internet presence sites. Journal of Advertising Research, 38, 39–43.
Granger, B., & Levine, E. (2010). The perplexing role of learner control in e-learning: will learning and transfer benefit or suffer? International Journal of Training and Development, 14(3), 180–197.
Hannafin, M. (1984). Guidelines for using locus of instructional control in the design of computer-assisted instruction. Journal of Instructional Development, 7(3), 6–10.
Hannafin, M., & Colamaio, M. (1987). The effects of variations in lesson control and practice on learning from interactive video. Educational Communication and Technology, 35(4), 203–212.
Hard, B., Recchia, G., & Tversky, B. (2011). The shape of action. The Journal of Experimental Psychology: General, 140(4), 586–604.
Hoffler, T., & Schwartz, R. (2011). Effects of pacing and cognitive style across dynamic and non-dynamic representations. Computers and Education, 57(2), 1716–1726.
Homer, B., & Plass, J. (2014). Level of interactivity and executive functions as predictors of learning in computer-based chemistry simulations. Computers in Human Behavior, 36, 365–375.
Jaffar, A. (2012). YouTube: an emerging tool in anatomy education. Anatomical Sciences Education, 5, 158–164.
Jong, T. (2010). Cognitive load theory, educational research, and instructional design: some food for thought. Instructional Science, 38, 105–134.
Kablan, Z., & Erden, M. (2008). Instructional efficiency of integrated and separated text with animated presentations in computer-based science instruction. Computers and Education, 51, 660–668.
Kettanurak, V., Ramamurthy, K., & Haseman, W. (2001). User attitude as a mediator of learning performance improvement in an interactive multimedia environment: an empirical investigation of the degree of interactivity and learning styles. International Journal of Human-Computer Studies, 54, 541–583.
Kristof, R., & Satran, A. (1995). Interactivity by design: Creating and communicating with new media. CA: Adobe Press.
Lawless, K., & Brown, S. (1997). Multimedia learning environments: issues of learner control and navigation. Instructional Science, 25, 117–131.
Lee, D., & Lehto, M. (2013). User acceptance of YouTube for procedural learning: an extension of the technology acceptance model. Computers and Education, 61, 193–208.
Lepper, M. (1985). Microcomputers in education: motivational and social issues. American Psychologist, 40, 1–18.
Marcus, N., Cleary, B., Wong, A., & Ayres, P. (2013). Should hand actions be observed when learning hand motor skills from instructional animations? Computers in Human Behavior, 29, 2172–2178.
Mayer, R., & Chandler, P. (2001). When learning is just a click away: does simple user interaction foster deeper understanding of multimedia messages? Journal of Educational Psychology, 93(2), 390–397.
Merrill, M. (1975). Learner control: beyond aptitude-treatment interactions. AV Communications Review, 23, 217–226.
Moreno, R., & Valdez, A. (2005). Cognitive load and learning effects of having students organize pictures and words in multimedia environments: the role of student interactivity and feedback. Educational Technology Research and Development, 53, 35–45.
Mulligan, N., & Lozito, J. (2007). Order information and free recall: evaluating the item-order hypothesis. The Quarterly Journal of Experimental Psychology, 60(5), 732–751.
Nairne, J., & Kelley, M. (2004). Separating item and order information through process dissociation. Journal of Memory and Language, 50, 113–133.
Neiderhauser, D., Reynolds, R., Salmen, D., & Skolmoski, P. (2000). The influence of cognitive load on learning from hypertext. Journal of Educational Computing Research, 23, 237–255.
Paas, F., Tuovinen, J., Tabbers, H., & van Gerven, P. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38, 63–72.
Park, S., Lee, G., & Kim, M. (2009). Do students benefit equally from interactive computer simulations regardless of prior knowledge levels? Computers and Education, 52(3), 649–655.
Sage, K. (2014). What pace is best? Assessing adults' learning from slideshows and video. Journal of Educational Multimedia and Hypermedia, 23(1), 91–108.
Sage, K., & Baldwin, D. (2014). Looking to the hands: where we dwell in complex manual sequences. Visual Cognition, 22(8), 1092–1104.
Schaffer, L., & Hannafin, M. (1986). The effects of progressive interactivity on learning from interactive video. Educational Communication and Technology, 34(2), 89–96.
Scheiter, K., & Gerjets, P. (2007). Learner control in hypermedia environments. Educational Psychology Review, 19, 285–307.
Shackel, B. (1991). Usability – context, framework, design, and evaluation. In B. Shackel, & S. Richardson (Eds.), Human factors for informatics usability (pp. 21–38). UK: Cambridge University Press.
Spanjers, I., van Gog, T., & van Merrienboer, J. (2010). A theoretical analysis of how segmentation of dynamic visualizations optimizes students' learning. Educational Psychology Review, 22(4), 411–423.
Stark, R., Mandl, H., Gruber, H., & Renkl, A. (2002). Conditions and effects of example elaboration. Learning and Instruction, 12, 39–60.
Sweller, J., van Merrienboer, J., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296.
Teo, H., Oh, L., Liu, C., & Wei, K. (2003). An empirical study of the effects of interactivity on web user attitude. International Journal of Human-Computer Studies, 58, 281–305.
Vandewaetere, M., & Clarebout, G. (2013). Cognitive load of learner control: extraneous or germane load? Education Research International, 2013, 1–11.
Van Gog, T., Paas, F., & van Merrienboer, J. (2008). Effects of studying sequences of process-oriented and product-oriented worked examples on troubleshooting transfer efficiency. Learning and Instruction, 18, 211–222.
Wang, S., & Reeves, T. (2007). The effects of a web-based learning environment on student motivation in a high school earth science course. Educational Technology Research and Development, 55(2), 169–192.
Zhang, D. (2005). Interactive multimedia-based e-learning: a study of effectiveness. The American Journal of Distance Education, 19(3), 149–162.
Zhang, D., Zhou, L., Briggs, R., & Nunamaker, J. (2006). Instructional video in e-learning: assessing the impact of interactive video on learning effectiveness. Information & Management, 43, 15–27.
