The term ‘backwash effect’ describes the influence of assessment on student learning: what a student learns, and how they learn it, depends on what they expect will be assessed (Tiwari et al., 2012). If a student perceives that the assessment will mostly require recalling facts, they are more likely simply to memorise disconnected facts in a surface learning approach that can be recalled at the time of assessment; this is negative backwash. Positive backwash occurs when the assessment prompts a deeper approach to learning, which happens when the assessment is perceived as requiring personal interpretation of the facts learnt (Tiwari et al., 2012).
Most universities still use the essay as a major form of assessment (Attwood, 2008), and proficient essay writing is an expected ability in higher education. Being able to achieve good marks in written exams and to write an essay confidently are both important skills, as they allow the writer to express ideas and facts in a logical and clear manner. These traditional methods not only allow knowledge to be assessed through the content of the text but also enable further assessment of critical thinking and analysis (Riddell, 2015).
A chosen assessment method must be both valid and reliable. Reliability means producing consistent and equivalent results over time; validity means the assessment actually measures what it claims to measure (Bannigan & Watson, 2009). Written exams and essays are generally reliable forms of assessment; they have been tried and tested over generations of students.
Van der Vleuten and Schuwirth (2005) built on validity and reliability in the context of assessment in medical education, adding acceptability, feasibility and educational effect to these existing principles. Acceptability is the extent to which those involved in the process (e.g. students, faculty and patients) recognise and are happy with the form of assessment; feasibility is the degree to which the assessment method is affordable and efficient; and educational effect is the extent to which the assessment drives learning towards its goal. For example, if the goal is to increase knowledge, then a written assessment will appropriately motivate students to study from a book (Norcini & McKinley, 2007).
There has been some concern that these traditional, tried and tested methods of assessment may not be the most powerful means of challenging students and promoting in-depth learning, as the student can rely heavily on the work of others, which raises possible issues around validity (Attwood, 2008).
Increasing numbers of students are entering higher education, generally with one aim: to find a job following graduation. This is especially evident in health professional education, where the focus throughout the course is on working towards an end goal and career (The Higher Education Academy, 2012).
With this in mind, although traditional written exams can provoke critical thinking and analysis, it raises the question of whether these are really the most relevant forms of assessment. Traditional assessment focuses on memorising and repeating knowledge. There is no doubt this is important; however, the main skills required in health professional employment tend to be the ability to apply knowledge by solving problems, thinking critically, analysing cases and performing in the professional setting, and conventional assessment strategies such as end-of-year written exams do not generate this kind of active learning. Can the qualities vital for working in healthcare really be best assessed using written exams, or are alternative methods more appropriate? (Joy & Nickless, 2007; The Higher Education Academy, 2012).
Nicola-Richmond,
Richards and Britt (2015) defined simulated learning activities as ‘an
educational technique that allows interactive, and at times immersive, activity
by recreating all or part of a clinical experience without exposing patients to
associated risks’.
They may involve the use of simulated patients (actors
role-playing a patient), role-play using peers or staff, use of mannequins,
video-recorded or written case studies and interactive computer based programs
(Nicola-Richmond et al., 2015).
The OSCE (Objective Structured Clinical Examination) is a type of simulation-based assessment. Introduced by R.M. Harden of the University of Dundee in 1975, it is an objective method that uses a standardised model to test clinical performance in a simulated situation, with equal emphasis on knowledge, skills and attitude (Du, Yu, Li, Wang & Wang, 2011).
Simulation is widely used in health professional education and is rooted in adult learning theory (Rutherford-Hemming, 2012). Roberts (2012) argued that healthcare is constantly evolving and that, to be a proficient healthcare professional, improvement in the ‘links among knowledge, practice and clinical reasoning skills’ must occur.
The clinical environment is unpredictable and things can change very quickly, so knowledge needs to be learnt in a way that allows it to be accessed when needed. Biggs (cited in Tiwari et al., 2012) describes this as functioning knowledge and argues that it can only be acquired through a deep approach to learning, not a surface one.
Traditionally, assessment of practical skills was undertaken in clinical practice and relied on the observation of the skill by a single individual, creating high levels of bias and halo/horn effects (where one good or bad aspect tends to overshadow the rest of the performance) and thus reducing reliability. The introduction of the OSCE has allowed simulation to be used as a valid and reliable form of assessment, and advances in technology have enabled the development of high-fidelity simulators (Joy & Nickless, 2007; Norcini & McKinley, 2007).
However, simulation as a form of assessment is not without its limitations. Feasibility can be an issue: standardised patient examinations are expensive to develop and maintain. When actors are used as aids for simulation over long periods of time, “performance drift” sometimes occurs, which can affect the validity of an assessment, and scores for simulated assessments tend to be less reliable than those from other, traditional forms of assessment (Norcini & McKinley, 2007).
Simulation is now commonly used in health professional assessment and can effectively assess knowledge, clinical practice, critical thinking, communication skills and clinical decision making, all skills vital for working in healthcare (Omer, 2016; Roberts, 2012).
Stress and anxiety are key factors that affect student performance during practical and simulated assessments, and there have been concerns about the negative effects on learning caused by the anxiety these assessments induce. This raises the question of whether an assessed simulated experience can really reflect how a student would perform in a real-life situation (Nicola-Richmond et al., 2015; Tiwari et al., 2012).
Much of the literature on simulation as a teaching method is positive, reporting that regular simulation improves self-confidence and reduces the anxiety students associate with it (Nicola-Richmond et al., 2015).
A study by Tiwari et al. (2012) found that regular assessment minimised the anxiety associated with a one-off exam situation and can increase accountability and self-reliance, again essential attributes in a healthcare professional. The use of feedback has also been thought to be essential to using simulation successfully as a form of assessment. Feedback used as formative assessment can promote learning and improve confidence (Tiwari et al., 2012). Feedback can also be used as a technique for preparing the student for the assessment, promoting familiarity and reducing anxiety. The use of formative assessment is something I will come back to discuss in more depth (Joy & Nickless, 2007).
Historically,
teachers have started with a limited number of available assessment methods and
used these methods to assess all the skills required to become a qualified
healthcare professional (Norcini & McKinley, 2007).
The introduction of new technologies in society has created the potential for many more methods of assessment, and the use of mobile technologies in healthcare is rapidly increasing (Dearnley, Haigh & Fairhill, 2007; Noemi & Maximo, 2014).
Assessment methods such as blogs, wikis, computer-based assessments, video and even virtual reality are novel assessment methods that reflect the rapid progression of technology. Using these methods as assessment tools instils a skill base that will be expected in the workplace. As the NHS introduces more and more technology into its ways of working and hospitals become paperless, staff need to be familiar with technology to be safe in the workplace. Using technology in assessment therefore allows not only knowledge and critical analysis to be assessed but also technological proficiency.
Ferris and O’Flynn (2015) state that there is a need to keep up with ‘generation Y’, defined in the Oxford Dictionary as “the generation born in the 1980s and 1990s, comprising primarily the children of the baby boomers and typically perceived as increasingly familiar with digital and electronic technology”. As generation Z, children who have grown up surrounded by fast-moving technological advances (W.J. Schroer), starts to enter higher education, there seems even more reason to use novel forms of assessment, which are likely to be not only more relevant but also more enjoyable for students. Another advantage of forms of assessment that incorporate technology is that they are often more adaptable for students with learning disabilities or specific learning needs, making them a more inclusive form of assessment (The Higher Education Academy, 2012).
There are, however, criticisms that can be made of these methods of assessment. Because these forms of assessment are only just being introduced, there is not yet a large body of research to confirm that they are valid, reliable and feasible (Norcini & McKinley, 2007). In a case study conducted by Dearnley et al. (2007), in which students used an electronic mobile portfolio in place of a paper format, it was found that some students were anxious about the reliability of the device and the possibility of losing assessment data.
Another issue is that although the majority of students now in higher education are part of generation Y, and increasingly generation Z, many of their teachers and assessors are not. Assessors are more likely to be older, especially in an area such as health professional education, where teachers have generally come from a clinical background and taken on a teaching role later in their career, meaning that, in general, they are less proficient and comfortable in the use of newer technologies.
I have noted this in my own experience as a student in the clinical environment. My foundation training was assessed using an e-portfolio with electronic work-based assessments. This worked well for me and was fairly easy to access and navigate. However, I found that many of the more senior colleagues completing my assessments struggled with it; sometimes even logging into the portfolio to assess my work could be quite challenging for them!
Technology is increasingly being used across the NHS, and this will inevitably create some barriers for those less confident with these skills. I think that, to overcome this, there must be adequate teaching for the teacher in these areas, as well as ongoing support.
Academic integrity is another longstanding and important issue, one which is thought to have ‘eroded the higher-education-system’ (Starovoytova & Namango, 2016). Although there is a notion that if a student is going to find a way to ‘cheat’ on an assessment they will do so regardless of its format, this is surely easier in some settings than others. Traditional methods such as the essay offer the opportunity to plagiarise, passing off the work of others as one’s own. Unseen written exams and multiple choice questions may make this more difficult; however, it isn’t impossible, especially in a large exam hall where the student isn’t observed closely. Similarly, when writing a blog or wiki, it is possible to plagiarise information or have someone else write the work for you.
In the setting of simulation, this is far more difficult: the student does not leave the simulation during the assessment and is closely observed by the examiner, and often also by the actor or patient.