The importance of being e-Pedagogical: Reflections on the OECD report on Students, Computers and Learning

The recently released OECD report Students, Computers and Learning: Making the Connection seems to have been widely misrepresented in the press. The emphasis has been on headline-grabbing statements such as “Computers ‘do not improve’ pupil results” and “iPads in school a waste of money? OECD says yes”. However, although the report does explore the negative impacts of poorly used technology, its focus is much more on how we can use technology appropriately to support learners to succeed in a digital world (a point conveniently buried in the news reports).

As the report makes clear, dropping technology from education is simply not an option. Technology is such a central element of our culture that being unable to navigate it confidently leaves you at a disadvantage socially and economically, with some level of digital literacy now expected of the majority of the UK workforce.

Returning to analogue classrooms in a sea of digitisation just isn’t practical, and it would fail our children, who need help and support to face and confidently handle the challenges and opportunities that the digital world presents.

However, we also cannot ignore the report’s message that when technology is used inappropriately in the classroom it has a negative impact on learners. That is a powerful statement and one that we need to examine more closely. Technology isn’t neutral, and it certainly isn’t a panacea for poor teaching. The OECD report is saying that in some instances learners would learn more effectively if technology were removed from the classroom (hence the headline grabbers that popped up all over the BBC).

This statement in itself could be enough to have swathes of teachers hanging up their iPads for good. It certainly doesn’t read like a ringing endorsement for technology enhanced learning. But this is also a message that educational technologists, like myself, have been trying to get heard for years. The technology alone does not improve learning. It is a tool, like any other tool: if it is ignored it will do nothing, if it is used badly it will detract, and if it is used well it will improve learning. Placing a paintbrush in your living room will not improve the state of the decoration unless you use it appropriately. Using it badly, like poor use of educational technologies, will likely leave you with something worse.

Adding a computer to a classroom is not going to do anything unless its use is carefully and considerately designed into learning activities. Handing an iPad to every student at the start of term without explaining how and, possibly more importantly, why it is going to be used is a waste of time and money. The same argument could be made for the use of a VLE – if students are not told how and why it should be used, we should not be surprised when they don’t use it.

The OECD calls for a “21st century pedagogy”: we should be using technology not to prop up 20th century teaching practices but to take full advantage of the opportunities it offers to support “experiential learning, foster project-based and inquiry-based pedagogies”. In other words, we need to focus much more support for educators on how to use technology pedagogically. This is not a call for computers to be banned from classrooms but a plea for technology to be seen for what it is: a tool. Without great teachers, the technology will never enhance learning. We need to help and support teachers to design learning activities that are in line with the digital demands of the 21st century.


How the TEF do you measure good teaching?

This week I attended an in-house teaching and learning conference. The theme of the day was “What is good teaching?” and was framed around discussions about the Teaching Excellence Framework (TEF) that may be introduced into Higher Education.

But what is “good” teaching and how do you recognise it?

The terms used to discuss good teaching are fascinating and inspiring. They demonstrate the passion people hold for teaching and acknowledge that “good” teaching is important. However, it is a slippery beast to define, and the language that surrounds it does not lend itself to easy measurement.

What is “good” teaching?

…enabling discoveries, mentoring, supportive, encouragement, developing relationships, helping to achieve potential, challenging, inspirational, constructive, motivating, active, personable, patient, experimental, innovative, authentic, creative…and much more beyond.

These are just some of the words and phrases used during the conference to describe what good teaching means to people – but how should this be measured, and should it be measured at all?

Will the TEF try to measure the unmeasurable?

Where would you start to measure creativity or innovation? What is creative to one person is standard practice for another. Will a matrix-based approach simply lead to people teaching to the matrices, stifling innovation and creativity? Or will it help to improve quality and enhance the learning experience of students?

The matrix itself cannot be neutral as, by its very nature, it makes value judgements about what we as a society should value as good and what as bad…who is responsible for making these judgements and defining them? How will they shape how learning happens? Will it be for better or worse? Should higher education be free to challenge the dominant stances in society, or is the role of HE to provide employable workers shaped to the needs of businesses and corporations?

Perhaps a TEF is needed to support staff to invest time in improving their teaching practices and to be recognised for excellence in the field? Teaching is often seen as the poorer relation to research, and maybe a TEF would provide an incentive for staff development in this area, and for management to reward and recognise excellence in teaching. There is certainly an issue that being an excellent teacher is not as strongly rewarded or recognised by institutions as being an excellent researcher…and yet students are core to HE, and their main interaction with academic staff is through being taught.

The Times Higher Education (@timeshighered) is running an interesting Twitter discussion around these issues; take a look at #TeachingMetric to see answers to their question ‘how would you measure uni teaching quality?’.


What do we mean by “engagement”?

I’m currently studying for an MA in Academic Practice as part of my CPD. At the moment I’m working on research methods – not necessarily the stuff dreams are made of, but useful and interesting nevertheless. Our weekly task has been to critically read and reflect on a journal paper.

I’ve decided to post a blog about this, as the article I read has made me reconsider a term that gets bandied about a lot in my line of work: engagement.

The paper I have read is Ella Kahu’s “Framing student engagement in higher education” (Kahu, 2013). The paper is a discussion around different types of engagement and reading it has made me stop and think.

What do I mean by engagement? Is it what other people mean by engagement?

I talk about student engagement, I talk to others about student engagement, I sit in meetings about student engagement, but are we all talking about the same thing? I have always had a reassuring assumption that there was a shared understanding, but after reading Kahu’s paper, I’m not so sure. The OED presents numerous definitions of the term engage, which perhaps illustrates the trouble with the term.

Can engagement be measured through clicks in Moodle? Or attendance at lectures?

I would argue not. Click counters in Moodle demonstrate that a student has visited a page or resource, but not that they have internally reflected on and considered the content, debated it with their peers, or contextualised it within their own understanding. Similarly, attendance at lectures demonstrates a physical, but not an emotional, engagement with the subject. However, both of these have been used as ‘measures’ of student engagement.
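To make the point concrete, here is a minimal sketch of the kind of aggregation that produces a click-based “engagement” figure. The log format is invented for illustration (Moodle’s real log store is far richer), but many engagement reports boil down to exactly this sort of count:

```python
from collections import Counter

# Hypothetical access log: one (student, resource) pair per page view.
log = [
    ("alice", "week1-notes"),
    ("alice", "week1-notes"),
    ("alice", "forum"),
    ("bob", "week1-notes"),
]

# Total clicks per student -- this is all the metric captures.
clicks_per_student = Counter(student for student, _ in log)

# alice: 3, bob: 1 -- the numbers say nothing about reflection,
# discussion with peers, or understanding of the content.
print(clicks_per_student)
```

Whatever threshold you pick, a count like this can only ever distinguish more visits from fewer visits; it cannot distinguish deep engagement from idle clicking.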

When I consider engagement, it is definitely something more than a physical attendance or interaction with a web page. To me, to be engaged, truly engaged, must be an emotional state. There needs to be some connection with what is being taught, beyond that of ‘going through the motions’ of what is needed to achieve the qualification.

As discussed by Kahu (2013), depending on your definition, engagement is influenced by many factors outside the control of a teaching institution…and perhaps, cynically, it isn’t important if a student is paying lip service to the ‘engagement’ requirements of the organisation, provided they, the student and the institution, achieve the outcome they desire? But surely education that provokes emotional reactions from students, that forces them to challenge their viewpoints and understanding of the world, that excites and intrigues them, creates a more rewarding educational experience? And I think that is what I would look for in engagement: engagement that has an emotional impact on the student, not something that can be measured by a Moodle click counter.

In future, when people talk about student engagement, I’ll explore what that means to them!

Anyway, I’m rambling…

What does engagement mean to you? Do you think engagement matters? I’d love to hear your thoughts.

References:

Kahu, E.R. 2013, “Framing student engagement in higher education”, Studies in Higher Education, vol. 38, no. 5, pp. 1-16.

What does your Moodle area say about you?

It is important to remember that VLEs, like Moodle, are not neutral environments. Research by Rubin et al. (2010) and Maltby & Mackie (2009) suggests that how we design and use a VLE sends students a strong message about what the unit values in terms of teaching and learning. It helps to shape where teaching and learning happens.

How you design your Moodle areas will influence the behaviour of your students and how frequently they will interact with it.

It is important to consider what sort of message your Moodle area is sending to your students. Is it supporting, or even enabling, desirable behaviours? Does it reflect your teaching and learning values?

How do you use yours?

I’ve been doing a bit of research recently into how Moodle and other VLEs are being used. Research has shown that many VLEs, across the board, are content-driven environments used to support traditional lecture models of teaching; Moodle and other VLEs are being used as storage devices for files (Barker & Gossman, 2013; Blin & Munro, 2008; Maltby & Mackie, 2009). It is difficult to say that this use of a VLE represents a truly ‘blended learning environment’, as is the aim in my institution.

These environments provide students with content interaction only: they have access to lecture notes and readings, but interactions between students or with staff are not fostered or encouraged within the VLE. This matters because, in online courses where VLEs encourage student-to-staff and student-to-student interactions, a positive correlation has been found between use of the VLE and student attainment. Where the interactions are predominantly student-to-content, the same positive correlation has not been found (Agudo-Peregrina et al., 2014).

What do they want?

Students’ expectations of Moodle are fairly modest. Generally they are looking for areas that are easy to navigate, with relevant and timely content and responsive staff (Naveh et al., 2012). It is also interesting to note that students’ perceptions of good practice are shaped by the areas they are exposed to: if a student sees something they like being used in one Moodle area, they are more likely to request it in another (Henderson et al., 2015). This has important implications for programme teams, and I recommend that you approach your Moodle design as a team, both to create a consistent experience for students across a programme and to share good practice between colleagues.

Look at it as a student

Have a critical look at your Moodle area and try to view it through the eyes of your students. What message does your Moodle area send to students as to what you value? Does your Moodle area reflect your approaches to teaching and learning? How does it sit within the Moodle areas across the programme? Consider ways in which you could align your Moodle area to your pedagogical principles.

Moodle has the potential to be much more than a content repository!

This blog post from York St John University explores a Moodle designer tool created by the Institute of Education.

References:

Agudo-Peregrina, A., Iglesias-Pradas, S., Conde-Gonzalez, M. & Hernandez-Garcia, A. 2014, “Can we predict success from log data in VLEs? Classification of interactions for learning analytics and their relation with performance in VLE-supported F2F and online learning”, Computers in Human Behavior, vol. 31, no. 1, pp. 542-550.

Barker, J. & Gossman, P. 2013, “The Learning Impact of a Virtual Learning Environment: Students’ views”, Teacher Education Network Journal (TEAN), vol. 5, no. 2, pp. 19-38.

Blin, F. & Munro, M. 2008, Why hasn’t technology disrupted academics’ teaching practices? Understanding resistance to change through the lens of activity theory, Computers & Education, vol. 50, no. 2, pp. 475-490.

Henderson, M., Selwyn, N. & Aston, R. 2015, “What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning”, Studies in Higher Education, DOI: 10.1080/03075079.2015.1007946.

Maltby, A. & Mackie, S. 2009, “Virtual Learning Environments – Help or Hindrance for the ‘Disengaged’ Student?”, ALT-J: Research in Learning Technology, vol. 17, no. 1, pp. 49.

Naveh, G., Tubin, D. & Pliskin, N. 2012, “Student satisfaction with learning management systems: a lens of critical success factors”, Technology, Pedagogy and Education, vol. 21, no. 3, pp. 337-350.

Rubin, B., Fernandes, R., Avgerinou, M.D. & Moore, J. 2010, The effect of learning management systems on student and faculty outcomes, The Internet and Higher Education, vol. 13, no. 1, pp. 82-83.

Why hasn’t technology revolutionised teaching in Higher Education?

I’m currently working towards an MA in Academic Practice – it’s been quite interesting so far and is focusing on research methods (however, it is taking up a fair chunk of my time, which is why I have been quieter recently!). This is just a short post exploring some ideas that recent readings have set whirring around my head.

This weekend I’m reading Why hasn’t technology disrupted academics’ teaching practices? Understanding resistance to change through the lens of activity theory (Blin & Munro, 2008). This is a follow-up reading after Technology enhanced learning and teaching in higher education: What is ‘enhanced’ and how do we know? A critical literature review (Kirkwood & Price, 2014), which highlighted to me how many technological interventions in learning and teaching try to replicate existing teaching and learning processes.

In this post I’d like to quickly explore a couple of ideas raised by Blin and Munro (2008) as to why learning technologies haven’t had more of an impact on how and where teaching and learning happens.

Is teaching really valued by the institution?

The authors draw on research by Brill and Galloway (2007), which suggests that the room for technological innovation in an institution that rewards staff only for research, and not for teaching, will be slight. I think this insight is useful, and it points towards the necessity for institutions to truly support and encourage excellence in teaching. If the institution explicitly or implicitly (i.e. through promotion guidelines) values research over teaching practices, then perhaps we should not be surprised if the full transformative potential of technology is not explored.

Academics have many different claims on their time, and if innovative teaching is not rewarded or recognised, and time and support for exploration and research are not provided, then why would things change? At an institutional level there needs to be support for evidence-led explorations of how technology can truly be used to enhance teaching and learning practices. Without this support and recognition, the potential transformational effects of technology will be left to occasional enthusiasts. I have put the emphasis on ‘evidence led’ because Kirkwood and Price (2014) also highlight that many instances of technology enhanced learning are technology driven.

Is the customer always right?

The authors also explore the conflict between the emerging binary student status: student as pupil vs student as customer. This is another interesting area to explore in terms of its potential impact on technology enhanced learning, and not one I had considered before reading this paper. The discussion draws on research by Scanlon and Issroff (2005). Technology is frequently seen as a way of providing students with additional resources. For example, VLEs are often discussed as a way of adding value to the student experience, a way of providing a little more flexibility with their studies – yet the VLE is typically used as a method of passing files to, and receiving files from, students (Kirkwood & Price, 2014). Generally the VLE is used to support traditional lecture/seminar approaches to higher education teaching, not as a way to transform student experiences. That’s not to say academic staff do not have the student experience at heart, just that we are trapped within a mindset of replication.

Blin and Munro (2008) use Scanlon and Issroff’s (2005) research to explore the impact of students’ expectations of what efficient teaching and learning is. All students will have expectations of what teaching and learning in higher education looks like, modelled on their own culturally based experiences of previous education and on images of university teaching portrayed in the media. If students place value on being lectured to in a “sage on the stage” model, because this most closely reflects their concept of higher education, then is a technological approach that disrupts this going to be welcomed? Do we, by making students customers, feel more compelled to provide them with an educational experience that they recognise and value – even if there may be more effective ways to teach?

Across institutions we place significant value on student satisfaction, and rightly so. However, if students dislike being challenged by new methods of teaching that they do not recognise, does their position as customer make them always right? Do they feel ‘short-changed’ by self-directed learning because it challenges their expectations of the student/tutor roles? I do not have any answers to these questions right now, but the paper has raised an interesting consideration, and one I will discuss with my academic colleagues to see if it has a conscious or subconscious impact on their willingness to experiment with more innovative approaches to learning and teaching technologies. Please do feel free to share your thoughts on these topics – I’d love to hear them!

References:

Blin, F. & Munro, M. 2008, “Why hasn’t technology disrupted academics’ teaching practices? Understanding resistance to change through the lens of activity theory”, Computers & Education, vol. 50, no. 2, pp. 475-490.

Brill, F. & Galloway, C. 2007, “Perils and promise: University instructors’ integration of technology in classroom-based practices”, British Journal of Educational Technology, vol. 38, no. 1, pp. 95-105.

Kirkwood, A. & Price, L. 2014, “Technology-enhanced learning and teaching in higher education: what is ‘enhanced’ and how do we know? A critical literature review”, Learning, Media and Technology, vol. 39, no. 1, pp. 6-36.

Scanlon, E. & Issroff, K. 2005, “Activity theory and higher education: evaluating learning technologies”, Journal of Computer Assisted Learning, vol. 23, no. 1, pp. 83-94.

5 Tips for Successful Webinars

Whether you are looking to use webinars (web seminars) with staff, students or even customers, the following tips will be useful for all.

There are many different webinar systems available, and they can be used very effectively for distance learning, remote training or disseminating information. The main advantage of a webinar is that it can be accessed remotely – no need for you or your attendees to travel, and people can easily attend from all over the world. If you need to give information to a large group of people, it also removes the hassle of finding a physical location large enough to accommodate everyone. If you are just starting out, you will want to spend a bit of time comparing the features and limitations of the various options.

I have had experience of using LiveMeeting, WebEx, Citrix GoToMeeting, Microsoft Lync, Adobe Connect and Google Hangouts, and there are many others available besides those. They are all broadly similar in the core features they offer, but they differ in user experience and in some of the added features available. Before you start looking for a webinar system, consider which features are most important to you. What do you need your webinar tool to do? This will help you decide which tool is appropriate for your use.

However, selecting a webinar system is a whole other blog post; this one is focused on helping you get the most out of your webinar system once you have picked one. The tips listed here are not specific to any one system and should translate to whichever webinar tool you decide to use.

5 Tips for Successful Webinars:

1. Define your objectives

Before you start to develop, promote or deliver your webinar, spend some time thinking about why you are doing this. What are the objectives of your webinar? What are your intended learning outcomes or key messages you want your attendees to take away from the session? Who is your audience? Do you know where they will be located or could they be connecting from all over the globe? Why will they want to attend? Is the session voluntary or compulsory?

Figuring out the answers to these questions, before you start to design your webinar, will help to ensure that the session you deliver meets your objectives and also the objectives of your audience.

2. Tailor your content

Once you have the answers to the questions above, you can begin to tailor your content for the webinar. Always, always, always revise content that was designed for face-to-face delivery before delivering it online. A webinar is a very different experience from a face-to-face training course, lecture or seminar. It is likely that your audience is surrounded by distractions – they may be in an office with others, checking emails on their phones or computers, or fielding calls. Unlike face-to-face delivery, where you can clearly see whether your audience is engaged and attentive, webinars can sometimes feel as though you are talking into the dark. This can be overcome to a point with the use of webcams, although at present this is really only appropriate for small groups and, depending on your target audience, the use of webcams could be off-putting.

The most effective way to avoid this is to try and make your webinars as relevant and engaging as possible. Completing the questions in the previous tip will help to keep your sessions focused and relevant to your audience. Try to limit the length of your session to around an hour and definitely no longer than 90 minutes without a break. Ensure you incorporate interaction points into your seminar – many systems include poll tools you can use or chat boxes for questions and answers. From my experience, attendees can be very reluctant to talk on a webinar – particularly when the audience do not know each other. Providing a text chat window can be a good way to collect questions during the session. If you intend to answer questions as you move through the session, you may want to have a buddy on-hand to let you know when a question has been asked in a chat window, as this can be easily missed when you are presenting.

3. Provide practice sessions

As always with technology, there can be glitches. It is useful to provide attendees with access to a practice session so that they can confirm they can login to the system and navigate the tools they will be using. Depending on the consequences of the session (i.e. a freely available marketing seminar vs a paid-for online training session), you may wish to have a rehearsal session which replicates exactly how you will be using the technology in the live session. This provides you and your attendees with an opportunity to iron out any technical difficulties and become familiar with the software. Trying to assist attendees with technical difficulties on the day can be seriously disruptive to your delivery. It can be useful to have a buddy on hand to tackle any technical difficulties that attendees encounter, leaving you free to continue with the session.

Different systems also come with different audio options. If you are using a conference call system ensure you know how to mute and unmute the lines of the attendees. If your webinar is to a large group, the background noise from the attendees can become very distracting. Alternatively if you want people to be able to ask questions at will during the session, ensure everyone knows how to mute/unmute their own lines – if this is important to the success of your session, ensure you cover how to do this in the rehearsal or with any instructions you send out to the attendees.

4. Record your session

If you can, consider providing a recording of your webinar session after the event. This can be a great way to make your session accessible to those who were unable to attend or those who want to use it for revision purposes after the event. If you are going to record the session, make sure you let the attendees know they are being recorded and how the recordings will be used.

5. Follow up with a survey

To help you refine and develop your webinar delivery skills and content, ensure that you invite your attendees to provide feedback on your session. This can be quickly and easily set up using an online survey tool such as SurveyMonkey. It provides an opportunity to collect feedback on what attendees liked about the session, what they would improve, and how they found the technology you were using. Using the feedback will help you establish whether you need to add more interaction to your sessions, make them longer or shorter, or refine the content of your delivery.

When you are creating your survey, keep it as short and as focused as possible. Keep asking: why do I want an answer to this question? What am I going to do with the answer? If the response adds nothing, or provides information you will do nothing with, remove the question. Check that your survey isn’t accidentally repetitive (many surveys ask the same question in a variety of formats to elicit the same answer). Always put your key questions at the start of the survey: if a participant drops out halfway through, you will at least have collected the answers to the questions most important to you.

I hope these tips are of some use to you, I’d be interested to hear any of your top webinar tips too.

Go Swivl! Exploring Swivl for lecture capture

We’ve got a new piece of kit to play with: it’s called a Swivl. I’m in the process of evaluating how best to use it, but it seems naturally to lend itself to lecture capture and recording demonstrations. In this post I’m going to explore lecture capture and also share some tips on using Swivl.

But they’ll stop coming to lectures!

This is a commonly raised objection to lecture capture – that students will stop turning up if they can watch the lectures online. However, many studies have found that providing lecture recordings has at most a minimal negative impact on attendance (Gorissen, Van Bruggen & Jochems, 2012; Larkin, 2010; Al Nashash & Gunn, 2013; Sloan & Lewis, 2014), and students report that they use the recordings mainly for revision purposes (Al Nashash & Gunn, 2013; Gorissen, Van Bruggen & Jochems, 2012; Sloan & Lewis, 2014).

What happens if they don’t attend?

I suppose the bigger question here is: if there is no real benefit to the student in attending the lecture in person, why do we need them to attend? If the same learning can be achieved by watching the lecture, then perhaps either it should be acceptable for students to choose their method of learning, or lectures need to be less passive and more interactive to exploit the fact that they are face-to-face. For students commuting long distances to attend lectures in person, or those juggling caring responsibilities, there is potentially huge added value in being able to watch the lecture online rather than attend in person. Research has shown that lecture capture does not generally have a negative impact on student attendance, and even where it does, the effect is minimal and offset by the gains made in student attainment, satisfaction and engagement (Gorissen, Van Bruggen & Jochems, 2012).

If you have real concerns about student attendance dropping off, then consider making elements of your lectures available in video format for revision – the most commonly cited reason for watching lecture recordings (Gorissen, Van Bruggen & Jochems, 2012; Larkin, 2010) – or as a means to catch up on lectures missed for legitimate reasons (Larkin, 2010; Riismandel, 2011). Providing videos of practical demonstrations can be of huge value to students, giving them the opportunity to watch, pause and re-watch a demonstration which, particularly in a large cohort, could be difficult to see or understand fully in a lecture environment.

It’s too time consuming

The easiest way to make lecture capture time efficient is to record your live lectures and not do the recording as a separate event. This has an added advantage of picking up any questions and answers or interactions in the session and also does not add any additional time onto your session. If you are only interested in recording your demonstrations and not the whole presentation, do this within the live sessions also. Once the session is complete, resist the urge to edit. Unless there is a significant reason to do so, editing your live lecture recordings should not really be necessary, students will be able to skip and rewind to the areas of particular relevance to them.

What is a Swivl?

A Swivl basically turns your mobile device (e.g. iPhone, iPad or Android) into your own personal cameraman. Whilst using a Swivl you wear a marker, which doubles as a microphone for recording your audio and a tracking device – as you move around the room the Swivl (somewhat disconcertingly) follows you, so you are never out of shot!

This YouTube video made by Swivl gives a good idea of the concept:

Swivl Tips

The following are some tips I have picked up whilst playing with the Swivl over the last couple of weeks:

  1. Slides can only be uploaded as pictures, so if you are using video, animations or audio clips use the Swivl to record your presentation as it is projected. You can always upload the slide-deck to your LMS as an additional resource.
  2. If you want to project using Swivl you’ll either need to use Apple TV or a display cable.
  3. The tracker is good at following the marker, but only when it is in line of sight so try not to turn your back to the Swivl as you are moving around.
  4. The rear-facing camera will pick up your slides and whiteboard text more clearly than the front-facing one, but will create larger files.
  5. Make sure you have enough space on your device for the recording – one minute of rear-facing video is around 100 MB.
  6. If you take questions during your presentation remember to repeat the question so it is picked up by the mic!
  7. Tracking can be switched off if you want the camera to be fixed; remember you’ll still need to wear the marker to pick up the audio.
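Tip 5 above means storage fills up quickly. As a rough sketch (assuming the ~100 MB per minute figure quoted above, which will vary with device, resolution and settings), you can estimate the free space a recording needs before you start:

```python
# Back-of-the-envelope storage estimate for a Swivl recording.
# Assumes ~100 MB per minute of rear-facing video, as quoted above;
# the real figure will vary with device, resolution and settings.
MB_PER_MINUTE = 100

def storage_needed_gb(minutes, mb_per_minute=MB_PER_MINUTE):
    """Approximate free space needed, in gigabytes (1 GB = 1024 MB)."""
    return minutes * mb_per_minute / 1024

# A typical 50-minute lecture needs roughly 5 GB free.
print(f"{storage_needed_gb(50):.1f} GB")  # 4.9 GB
```

In other words, a device with only a couple of gigabytes spare will not get you through a full lecture on the rear-facing camera.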

What are your experiences of using lecture capture or additional videos? Have you used a Swivl?

References

Al Nashash, H. & Gunn, C. 2013, “Lecture capture in engineering classes: Bridging gaps and enhancing learning”, Educational Technology and Society, vol. 16, no. 1, pp. 69-78.

Gorissen, P., Van Bruggen, J. & Jochems, W. 2012, “Students and recorded lectures: Survey on current use and demands for higher education”, Research in Learning Technology, vol. 20, no. 3, pp. 297-311.

Larkin, H.E. 2010, “‘But they won’t come to lectures …’: The impact of audio recorded lectures on student experience and attendance”, Australasian Journal of Educational Technology, vol. 26, no. 2, pp. 238-249.

McAlister, R.B. 2014, “Use of instructor-produced YouTube videos to supplement manual skills training in occupational therapy education”, AJOT: American Journal of Occupational Therapy, vol. 68, no. S2, pp. 567.

Riismandel, P. 2011, “Capture Lecture, Skip Class?”, Streaming Media Magazine, [Online], pp. 82.

Sloan, T.W. & Lewis, D.A. 2014, “Lecture Capture Technology and Student Performance in an Operations Management Course”, Decision Sciences Journal of Innovative Education, vol. 12, no. 4, pp. 339-355.

The power of voice – Providing audio feedback to students

Tone, intonation, emphasis, expression – these are all extremely important elements of communication that can be lost in the written word. In fact, some have argued that the way that you say something expresses more meaning than the words that you use (Mehrabian, 1971). If how we say something adds extra meaning, then why is so much of our feedback to students in written form?

I have been working with colleagues this week who are using recorded audio feedback for students, and the students love it! They say that they feel more connected to the lecturer, that they understand the feedback better and that they feel more supported – in summary, that they value the feedback more. This anecdotal evidence is supported by research in several studies (Attenborough et al. 2012; Merry & Osmond 2008; Brearley & Cullen 2012).

Sounds great, but I don’t have the time!

Time is commonly cited as a reason for not using audio feedback; however, users have reported in studies that it can actually speed the process up. Rotherham (2007) and Cullen (2011) both found that audio feedback could be as efficient or even more efficient than written feedback. Anecdotally, lecturers I have worked with have also claimed that it saves them time and that they are able to provide students with much richer feedback.

Give it a go!

If you’d like to have a go at providing audio feedback, have a look at these tips:

  1. Decide why you want to use audio feedback – when introducing any new technology it’s always a good idea to spend some time thinking about why you are doing it. What are you hoping to achieve and how will you measure the impact/success?
  2. Choose your tech – make sure the technology is easy to use and produces a file type that is accessible on multiple devices; an MP3 or MP4 can be played on most devices.
  3. Practice using the technology – once you have selected the technology that you want to use, have a play with it and make sure you are comfortable using it. This will help to build your confidence when using it with your students.
  4. Don’t bother to edit – speak to the student as if they are in the room, like it is a conversation you are having with them. It doesn’t need to be perfect, so don’t worry about editing; it’s not going to be broadcast anywhere.
  5. Name your files – if you are recording the audio files outside of a built-in coursework tool, make sure you name each file consistently and include the student name or ID. This will make it much easier when you come to distribute feedback to students.
  6. Consider file sizes – you want to make sure you’ll be able to easily upload and store the feedback files, and you also want to make sure the files are accessible to students, particularly if they are likely to be accessing them over mobile internet. Try to limit your recordings to around 3-4 minutes to keep the file size small.
  7. Don’t forget to tell the students – make sure they know how to access the feedback in the new format, what they can expect and why you are doing it.

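Tip 5 can be as simple as agreeing a predictable naming pattern up front. Here is a minimal sketch in Python – the `module_studentID_feedback.ext` pattern and the example IDs are invented for illustration, not a prescribed convention:

```python
# Sketch of a consistent naming scheme for audio feedback files (tip 5).
# The pattern 'module_studentID_feedback.ext' is an invented example -
# use whatever convention your own workflow or VLE expects.
def feedback_filename(student_id, module_code, ext="mp3"):
    """Build a predictable filename so files are easy to sort and distribute."""
    return f"{module_code}_{student_id}_feedback.{ext}"

print(feedback_filename("s1234567", "ENG101"))  # ENG101_s1234567_feedback.mp3
```

Files named this way sort naturally by module and student, which makes bulk uploading to a VLE or attaching to emails far less error-prone.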
Technology:

Turnitin: If you are using Turnitin for your assignments, then you can easily record audio feedback using the built-in tool. Feedback can be recorded using Turnitin’s GradeMark app for iPads (reviewed by David Hopkins in his excellent blog, Don’t waste your time). If you are going to be recording using your PC or laptop, consider investing in a headset with a mic, as this will greatly improve the audio quality of your recording.

If you are not using Turnitin, you can still easily record audio feedback using your iPhone or iPad. Have a look on the app store for recorders. My new favourite app for audio feedback is Voice Record Pro, freely available on iTunes and compatible with iPhones and iPads. It’s really easy to use and the recordings can be emailed or saved to various locations, including Google Drive, DropBox and OneDrive – have a look at this review by Dave Yearwood on Faculty Focus.

Your recordings could be shared with the students by email or uploaded as feedback files in your VLE.

References

Attenborough, J., Gulati, S. & Abbott, S. (2012). Audio feedback on student assignments: boon or burden?. Learning at City Journal [online] Vol. 2, Available at http://openaccess.city.ac.uk/1638/ [Accessed 18 December 2014]

Brearley, F.Q. & Cullen, W.R. (2012). Providing students with formative audio feedback. Bioscience Education, Vol. 20, 22-36. DOI: 10.11120/beej.2012.20000022

Mehrabian, A. (1971). Silent messages. Belmont, California: Wadsworth

Merry, S. & Osmond, P. (2008). Students’ attitudes to and usage of academic feedback provided via audio files. Bioscience Education eJournal [online]. Vol. 11, Available at www.bioscience.heacademy.ac.uk/journal/vol11/beej-11-3.pdf [Accessed 18 December 2014]

Rotherham, B. (2007) ‘Using an MP3 recorder to give feedback on student assignments’, Educational Developments, Vol. 8, No. 2, pp.7–10.

Learning from your assessments

Assessments can be very revealing and tell you much more than just what the student knows.

The value and importance of providing feedback to students has been well documented and researched, so in this blog post I want to look at the formative value of assessments to instructors (lecturers, teachers, trainers and anyone else running assessments!).

How frequently do you evaluate your assessments? 

Once you have designed your assessment (formative or summative) and it has been deemed fit for purpose by whatever “powers that be” exist in your environment, it is very tempting (and common) to see the process as finished, and the same assessment may run, without change, for years. So if you’re not regularly evaluating your assessments, you might find this blog post useful.

Assessments provide an opportunity for you to evaluate three main things:

1. The students – Can the students do what you want them to be able to do?

2. The assessment – Does your assessment actually assess what it is supposed to?

3. The teaching and learning process – Do your teaching and learning materials align with your assessment criteria?

The first point is the most common use of assessment – establishing that the student can or can’t do what you have identified as key criteria.

The other two are less common uses of assessments (but no less important!), so today I’m looking at the ways we can use assessment data to help to develop fairer and more valid assessments.

The assessment: 

The data you get from students completing assessments gives you a very powerful insight into how well the assessment itself performed. Have a look over your results holistically: do they look like you expected them to? Have more people passed or failed than you anticipated? Did you get a lot of skipped questions or common incorrect answers? Investigating this regularly can help you to determine whether your assessment is actually assessing what you wanted it to assess.

Electronic assessments can make the evaluation of an assessment easier to do, especially in the world of selected response questions (MCQ, true/false, select a blank etc.). If you are working with project work, essays or other more variable student submissions, then it’s always a good idea to look out for trends in either the feedback you are providing or the grades you are giving.

How has the group as a whole performed on your assignment? Take a look at the standard deviation of the grades. Has your assessment resulted in a normal distribution, or are the scores generally low or high – and what factors could be causing this? Are your usually strong students struggling with this assignment? Any unusual trends or patterns in your assessment data act as a flag for you to review the assessment.

I’m not getting the results I expected

One of the first things to check is that the assessment design matches your assessment criteria – i.e. you are genuinely assessing what you wanted to assess. Have a look to see if there could be any subtle influences at play, for example the use of unfamiliar software or students not having access to the necessary resources.

Secondly, check that the assessment criteria are actually aligned to your learning outcomes i.e. what you are teaching the students in your sessions actually matches what they are being assessed on (this is a very common mismatch which can result in invalid and unfair assessments). For example, if you are assessing students on a presentation, are presentation skills an actual learning outcome for your course? If not, you may be assessing your students unfairly.

If you are using computer software to run quizzes and tests, then many of these come with extremely useful statistical reports that help you to review how valid your questions are. Have a look at the statistical reports in Moodle for more information. This is a particularly strong benefit of using e-assessments: you can have access to extremely valuable and insightful statistical information in seconds that would have taken weeks or months to compile from paper-based assessments.

The teaching and learning process

Assessment data can also give you a huge amount of insight into how the students are responding to your teaching and learning materials. If you find that you are being presented with the same incorrect answers or skipped answers, as well as addressing this in feedback with the group, use it to think about how you teach these concepts. It might be that not enough time is dedicated to the area in classes, or that the learning outcome, which is aligned to the assessment criteria, is not covered in the same way in the learning materials. This can result in a subtle, but sometimes significant, misalignment of assessments.
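Spotting those repeated incorrect or skipped answers is straightforward to do programmatically for selected response questions. A small sketch – the response data and option letters below are invented for illustration:

```python
# Sketch of spotting commonly repeated incorrect (or skipped) answers to
# a selected-response question, as described above. The response data
# and option letters are invented examples; None means a skipped question.
from collections import Counter

def common_wrong_answers(responses, correct, top=3):
    """Count the most frequent incorrect or skipped responses."""
    wrong = [r if r is not None else "skipped"
             for r in responses if r != correct]
    return Counter(wrong).most_common(top)

responses = ["B", "C", "C", "A", "C", None, "C", "A"]
print(common_wrong_answers(responses, correct="A"))
# → [('C', 4), ('B', 1), ('skipped', 1)]
```

If one distractor dominates, as "C" does here, that is exactly the kind of flag worth taking back into your teaching materials.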

Focus On Assessments

Assessments.

Not exactly a small topic, but working on the old adage ‘write about what you know’, it seems like a good starting point. Although this is my first role where I am exclusively supporting technology enhanced learning in HE, I have been working with learning technologies – and assessment technologies in particular – for many years.

I first cut my learning technology teeth whilst working for a vocational awarding body back in 2006. It was here that I was exposed to the complicated and occasionally painful world of assessment theory and practice. I had been brought in to help make the transition from paper-based portfolios and exams to e-portfolios and e-assessments. Whilst there I worked with examiners, moderators, colleges and students and quickly got a sense of where the challenges were (in both the paper-based and online worlds). From there I moved on to working for a company that created e-assessment software. I worked in a consultative capacity with clients all over the world, assisting them to make the transition from paper-based to online assessments. It was during this time that I completed my PGCert in Online and Distance Education and created some online courses of my own.

To start off my first blog post I thought I would share a few good practice pointers that I have picked up over the years (NB these will probably not be a revelation, but could help if you’re starting out).

Considering eAssessments? Three questions to ask yourself:

If you are considering changing from paper-based to online assessment, thinking about the following three questions can be a good starting point.

Of course, and this goes without saying, before you make any changes to how your assessment is delivered, make sure it is approved by following your faculty’s processes for changing assessment methods.

Why do you want to use e-assessment?

This is really a rule for introducing any learning technology: make sure you have a clear understanding of why you are considering moving to e-assessment. Ideally you should be looking at ways in which it will improve the student experience, e.g. you will have access to reports that’ll help you ensure your questions are valid and reliable, or the students will be able to receive their marks and feedback more quickly. Whatever it is, make sure you have a clear understanding of the benefits you are looking to achieve. It is also very helpful to explore the potential risks involved in the change, so you can factor them into your assessment design and delivery processes.

Will your e-assessment still meet your assessment objectives?

This is a tricky one, and easily overlooked. When you are deciding which technology to use, consider how the technology might affect your assessment objectives and criteria. For example, are you changing from constructed response questions (essay/short answer) to selected response questions (multiple choice, select a blank, true/false etc.)? This may still be an entirely valid way of assessing your students, but make sure you consider the impact on your assessment objectives – are you still assessing the same things? If not, is this OK? Also be careful that being able to use the technology has not become an inadvertent assessment criterion. This can happen if the technology is not intuitive or familiar to the students, placing them at a disadvantage to peers who are comfortable with the technology.

One way around the last point is to make sure that students have ample access to practice tests using the software you have picked. This gives them a chance to familiarise themselves with the software and helps you to identify any potential issues with the technology.

Why are you replicating your paper-based processes? 

When you move to e-assessments it is natural to try and replicate as closely as possible your paper based process – these are the processes you are familiar with and have probably been using for years. However, you could be missing out on the opportunity to introduce some real benefits to staff and students.

When you first try to transpose your paper-based assessment into an online version, examine the functionality of the software and consider what is appropriate for your assessment. For example, if you do not currently give students their results for four weeks, to give markers a chance to grade the scripts, consider whether you really need to impose the same delay for an e-assessment. Conversely, there may also be good reasons to restrict e-assessments to the conditions of paper-based exams – for example, if you don’t want the students to collaborate on their assessments then you will still need to conduct the assessment under exam conditions (even if technically they could take it on their mobile). It is important to establish which of the processes are a core part of the assessment, to ensure it is still valid to its design, and which are simply the result of the paper-based process. It all depends on what your assessment objectives are!

I have worked with many, many people who have spent a lot of time, effort and money replicating what are essentially unnecessary elements of paper-based assessments, simply to produce an online assessment that is almost identical (including all the flaws) to their current paper-based one.