
E-Learning and Digital Media

Volume 10, Number 4, 2013


www.wwwords.co.uk/ELEA

INTRODUCTION

New Media, New Learning and New Assessments

BILL COPE & MARY KALANTZIS


University of Illinois, Urbana-Champaign, USA

This special issue of E-learning and Digital Media provides an overview of the work of the Assess-as-
You-Go research group in the College of Education at the University of Illinois. Over the past
several years, this group of professors, postdoctoral researchers and graduate students has been
focusing on the question of the future of assessment and its relationship to learning. Our work has
been both practical, developing software tools and trialing them in classrooms, and theoretical,
reflecting on the changing shape of learning.
For the authors of this brief introduction and editors of this special issue, the journey began
with our move from Australia to the United States at the height of the era of ‘No Child Left
Behind’, the 2001 reauthorization of the US Elementary and Secondary Education Act, championed by President George W. Bush. We encountered an educational landscape formally dominated by state-
decreed summative assessment, and assessment practices where the nature of learning was skewed
by what could be practicably and economically assessed at scale.
Take ‘literacy’, for instance, our area of scholarly interest. Effectively, literacy had been
reduced to reading, and reading to comprehension, because you could assess comprehension with
multiple-choice or selected-response questions, using machine-readable answer sheets. We’d go
into literacy classrooms and find wall charts explaining how to answer multiple-choice questions –
eliminate the things that are definitely wrong, look for the trick distractor which seems right but is
put there to trip you up, be sure that the answer you give is the best by comparing all four potential
answers. And if you don’t have a clue, answer anyway because you have one chance in four of
getting the answer right.
This seemed a travesty of the spirit and purpose of literacy learning on so many levels. To start with, the smallest granular unit of measurable knowledge, the selected-response comprehension question, could in general only ask about the least important things, those with unambiguously clear answers – textual specifics such as setting and sequence. These are not the things you are likely to regard as most central, because if you are a good reader you will be travelling with a character or being swept along by a plot’s momentum. The questions were by nature peripheral, and in order to answer them you often had to scan back over the text to find the name of a place, the time something happened, the name of a minor character, or some such. The most vital aspects of the text went unexamined because they raised questions that could not be answered with simplistically right or wrong answers. These were things which any half-decent theory of reading will say are more important than the empirical specifics of a text: why you as a girl relate to one of the characters or you as a boy relate to another, or why you, as a person with a certain cultural or life experience, relate to the drift of the narrative while someone else finds it alien. This is the more important stuff of reading, the stuff of interpretation. For students who had become used to the game of guessing the answer a teacher expects to her question, reading assessment was a matter of guessing what was in the test-makers’ heads when they asked a question which, in turn, was ostensibly framed to get at what the author ‘really’ meant. No theorist of reading thinks this is how reading happens –
reading is a process of active communicative engagement, of interpreting meanings rather than finding them, fixed and ready to be delivered to every reader in the same way, as textual ‘truth’. Yet this is how reading is framed by selected-response comprehension tests.
And what of writing? Just after we arrived in Illinois in 2006, the state abandoned its
statewide, standardized writing test because, in the era of shrinking government, it was simply too
expensive. Writing tests meant having to employ an army of expert readers, train them, and put in
place moderation processes to ensure inter-rater reliability. As a consequence, literacy was reduced
to reading, and as we have just argued, a reduced form of reading at that. Writing almost dropped
out of the school curriculum because it was not going to be tested. This, we thought, was a
dreadful irony. In the era of the so-called ‘knowledge economy’, when our schools were supposed
to be nurturing creative, innovative, responsible, risk-taking intellects, our education system had
stopped assessing writing and, for this reason, virtually stopped teaching the more actively
productive side of literacy. At the same time, the system seemed to have reframed the receptive
side of literacy, reading, as understanding meanings that were supposed to be absorbed without
interpretation, a series of isolated fragments representing what the author definitively meant to say.
So we came to the conclusion that, as the assessment tail was wagging the educational dog,
we had better explore the possibilities of creating assessments that more effectively tested the
outcomes of the learning process. For us, this could not just be a program of disinterested research.
We decided to embark on an agenda of creating and testing new assessment tools, deeply
embedded into a ‘social knowledge’ web learning space, exploring the affordances of the
burgeoning world of new media. We took this path because we had come to believe that these might be productive spaces for what we came to call a ‘new learning’. This special issue tells the story of the
first stages of this endeavor.
Our overarching objectives have been threefold, framed as responses to the following
questions. First, how do we assess higher-order disciplinary practice and complex epistemic
performance that more closely align with the broader objectives of education for a ‘knowledge
society’? Second, how do we create assessments that are more directly useful to learners than summative tests, which offer retrospective judgments for primarily institutional-managerial purposes? Third, how do we assess learning in the era of collaborative intelligence and
social knowledge media? The web offers today’s learners the ever-present possibility of reaching for empirical, heuristic and algorithmic epistemic ‘stuff’, making anachronistic the isolated deductive and memory work valued by heritage tests. Today, knowledge is more social than ever. Examples include the recursive feedback processes of wikis, blogs and other media spaces on the web; the collective ‘knowledge management’ processes of new-economy workplaces; and the now-digitized peer review processes of scholarly knowledge production. Other examples abound. These social and technical transformations require more than the assessment of the individual, in-your-head knowledge that is characteristic of traditional tests.
Our starting point for this ambitious endeavor was an analysis of state-of-the-art
developments in assessment technologies. To address the high cost of assessing writing, a number
of ‘automated essay grading’ systems have been developed using natural language processing technologies. At times these have proven to be as reliable as human raters and, on this evidence, they are increasingly used in high-stakes tests. However, on the measure of our three overarching
questions, their achievements are disappointing. They measure different things from human raters,
and in different ways. These are intellectually shallow textual things that are then taken to be
circumstantial correlates and thus proxies for higher-order thinking capacities. For instance, you
can write a text which is intellectually garbled but with correct spelling and grammar and get a high
score. You can write something which is intellectually incisive, but poorly written in terms of
conventions and get a low score. One of the most important elements of a score, it transpires, is word count, because it happens to be the statistical case that better writers and thinkers usually write more. The overall scores may prove reliable when compared with human raters in summative assessments, but because these systems measure the surface features of writing, they cannot give useful feedback to learners – in other words, they cannot offer constructive formative assessment (Vojak et al, 2011).
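To make the point concrete, here is a minimal, purely illustrative sketch of a surface-feature scorer of the kind we are criticizing. It is not the method of any of the commercial systems reviewed in Vojak et al (2011); the function names and the single-feature design are our own assumptions for illustration. It fits a simple linear model of human ratings against word count, so it can track summative scores while engaging with none of the substance of the writing.

```python
# Illustrative sketch only: a naive surface-feature essay scorer, assuming a
# small training set of essays with human-assigned scores. It shows why such
# models can correlate with human raters (length dominates) while having
# nothing formative to say back to the writer.
import re
import statistics


def surface_features(text: str) -> list[float]:
    # Purely mechanical features: word count, average word length,
    # average sentence length. Nothing here reads ideas or evidence.
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    word_count = float(len(words))
    avg_word_len = statistics.mean(len(w) for w in words) if words else 0.0
    avg_sentence_len = word_count / len(sentences) if sentences else 0.0
    return [word_count, avg_word_len, avg_sentence_len]


def fit_length_model(essays: list[str], human_scores: list[float]) -> tuple[float, float]:
    # Least-squares fit of human score against word count alone - the feature
    # that, in practice, tends to carry most of the predictive weight.
    counts = [surface_features(e)[0] for e in essays]
    mean_x = statistics.mean(counts)
    mean_y = statistics.mean(human_scores)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(counts, human_scores))
             / sum((x - mean_x) ** 2 for x in counts))
    return slope, mean_y - slope * mean_x


def predict_score(text: str, slope: float, intercept: float) -> float:
    # Longer essays simply get higher predicted scores.
    return intercept + slope * surface_features(text)[0]
```

Because nothing in such a model engages with argument, evidence or interpretation, it can reproduce summative rankings reasonably well and still offer the learner no constructive feedback at all.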
So early in the project, we decided to create a roadmap of emerging technology-mediated
assessments that we might deploy in a new learning and assessment environment. We came up
with six: natural language analytics, corpus comparison, in-text network-mediated feedback, rubric-based review and rating, semantic web processing, and survey psychometrics (Cope et al, 2011).
Three years later, we have developed and trialed some of these technologies, and have others in
design.
We were fortunate to have arrived in the United States just before the moment of the
Common Core State Standards. Created under the aegis of the National Governors Association and the Council of Chief State School Officers, these are among the defining educational policies of the Obama administration. The standards have now been adopted by 45 of the 50 states (Common Core State
Standards Initiative, 2010). The Common Core Standards are designed to up the intellectual ante
with their focus on higher-order thinking skills and disciplinary practice. From the specific
perspective of our scholarly interest in literacy, the new standards elevate the role of writing so that
it is clearly on a par with reading. They define three canonical text types at a high level of cognitive
abstraction – argument, informative/explanatory, and narrative. And they proclaim that writing in
informative/explanatory and argumentative genres is a key aspect of disciplinary work in Science,
Social Studies/History and Technical Subjects. We want to call these processes of writing across
the subject areas ‘knowledge representation’. In fact, we want to argue that writing is a key site for
representing knowledge and demonstrating complex disciplinary performance – the write-up of the
science experiment, the argument from scientific evidence about a critical environmental question,
the social studies community survey, the local history, the argument about historical causation, the
presentation of a product or architectural design ... and so on.
We found no extant machine-supported testing environments that were capable of measuring
these foundational disciplinary practices, no matter how sophisticated – not item-based testing and
not automated essay grading. Nor did these so-called ‘automatic essay scorers’ offer the immediate
feedback to learners that is required of any formative assessment worthy of the name. This has been our challenge: to imagine and create a new technology-mediated learning and formative assessment environment that genuinely reaches toward the intellectual ambitions of the Common Core Standards. We call the environment that we have developed and trialed Scholar.
This special issue reports on our findings in the first three years of this research and
development endeavor.[1] The first article outlines the most ambitious of our goals, with some
practical examples of the first steps we have taken in this direction, in the form of Scholar’s ‘social
knowledge’ technology. The following articles describe some of the early findings in the schools
and classrooms with which we have engaged in the process of collaborative co-design of the Scholar
environment. This is very much a work in progress, and the articles reflect upon our first steps in an ambitious program. The final article in the issue looks forward, offering a framework for analyzing
the learning dynamics of technology-mediated learning environments.

Note
[1] We wish to acknowledge funding support from the US Department of Education Institute of
Education Sciences: ‘The Assess-as-You-Go Writing Assistant: a student work environment that
brings together formative and summative assessment’ (R305A090394); ‘Assessing Complex
Performance: a postdoctoral training program researching students’ writing and assessment in digital
workspaces’ (R305B110008); ‘u-learn.net: an anywhere/anytime formative assessment and learning
feedback environment’ (ED-IES-10-C-0018); ‘The Learning Element: a lesson planning and
curriculum documentation tool for teachers’ (ED-IES-10-C-0021); and ‘Infowriter: a student feedback
and formative assessment environment for writing information and explanatory texts’ (ED-ED-IES-
13-C-0039). We also wish to acknowledge funding support from the Bill and Melinda Gates
Foundation. Scholar code owned by the University of Illinois has been licensed by Common Ground
Publishing LLC, directed by Bill Cope and located in the Research Park at the University of Illinois.

References
Common Core State Standards Initiative (2010) Common Core State Standards for English Language Arts
and Literacy in History/Social Studies, Science, and Technical Subjects. National Governors Association
Center for Best Practices, Council of Chief State School Officers, Washington DC.
http://www.corestandards.org/

Cope, Bill, Kalantzis, Mary, McCarthey, Sarah, Vojak, Colleen & Kline, Sonia (2011) Technology-mediated
Writing Assessments: paradigms and principles, Computers and Composition, 28, 79-96.
http://dx.doi.org/10.1016/j.compcom.2011.04.007
Vojak, Colleen, Kline, Sonia, Cope, Bill, McCarthey, Sarah & Kalantzis, Mary (2011) New Spaces and Old
Places: an analysis of writing assessment software, Computers and Composition, 28, 97-111.
http://dx.doi.org/10.1016/j.compcom.2011.04.004

BILL COPE is a professor in the Department of Educational Policy Studies at the University of
Illinois. He is also Director of Common Ground Publishing, located in the Research Park at the
University of Illinois, developing the internet publishing software Scholar for schools and scholarly
publications. Recent books include The Future of the Academic Journal, edited with Angus Phillips
(Chandos, 2009) and Towards a Semantic Web: connecting knowledge in academic research, co-authored
with Kalantzis and Magee (Woodhead, 2010). With Mary Kalantzis, he is co-author or editor of:
Multiliteracies: literacy learning and the design of social futures (Routledge, 2000); New Learning: elements
of a science of education (Cambridge University Press, 2008/2012); Ubiquitous Learning (University of
Illinois Press, 2009); and Literacies (Cambridge University Press, 2012).
Correspondence: bill.cope@illinois.edu

MARY KALANTZIS is Dean of the College of Education at the University of Illinois at Urbana-
Champaign. She was formerly Dean of the Faculty of Education, Language and Community
Services at RMIT University in Melbourne, Australia, and President of the Australian Council of
Deans of Education. With Bill Cope, she is co-author or editor of: Multiliteracies: literacy learning and
the design of social futures (Routledge, 2000); New Learning: elements of a science of education
(Cambridge University Press, 2008/2012); Ubiquitous Learning (University of Illinois Press, 2009);
and Literacies (Cambridge University Press, 2012).
