
THE OCEANOGRAPHY CLASSROOM

TAP: TEACHING ANALYSIS POLL FOR STUDENT FEEDBACK

By Robert Kordts, Mahaut de Vareilles, Kjersti Daae, Eirun Gandrud, Anne D. Årvik, and Mirjam S. Glessmer

Many university instructors receive end-of-semester responses to standardized student questionnaires (student evaluations of teaching, SETs) collected through online systems. But how well do SETs work to improve teaching and student engagement in learning? Research has identified numerous challenges and problems with SETs, including: (1) they do not assess teaching quality; (2) they often use quantitative, predefined scales that leave little space for additional comments; (3) they often have unclear goals, with course improvement not being the main one; and (4) student engagement is often low, as indicated by low response rates for online evaluations.

At the Geophysical Institute (GFI), University of Bergen (UiB), we consider high-quality feedback from students to instructors important for improving course outcomes. However, we wanted to move away from SETs, and so we looked for alternative feedback methods that would better represent student views (respecting both their qualitative and quantitative aspects) and could be presented to instructors in a motivating way.

We chose to experiment with the Teaching Analysis Poll (TAP; Hawelka, 2019), which was, to our knowledge, developed at the University of Virginia and has since been used across higher-education institutions, countries (e.g., the United States, Germany, Switzerland), and disciplines. The recommended TAP procedure for face-to-face classes takes about 30 minutes and is performed by an external facilitator who collects student feedback on three questions, the answers to which are then communicated back to the class instructor:

1. Which aspects of the course facilitate your learning?
2. Which aspects of the course hinder your learning?
3. What suggestions do you have for improving the obstructive aspects?

Box 1 provides a detailed description of the TAP procedure as employed by the authors.

The method can easily be adapted to different teaching situations. As facilitators, we have experience with TAP in both small courses with two or three student groups and very large courses with several hundred students; in both face-to-face and online settings (using online collaborative writing and poll tools); and with TAP at both the course level and the study-program level (with students commenting on aspects related to the program curriculum). See the variants described in Johannsen and Meyer (2023).

At GFI, TAP implementation was part of a larger education initiative, the iEarth Center for Integrated Earth Science Education, and of an ongoing collaboration with the UiB university pedagogy group. Between 2022 and 2024, we conducted seven TAPs in selected geoscience courses (many of which had a focus on active learning); two courses repeated the TAP after one year. Those involved included administrative staff at GFI, a university pedagogy colleague, and two students who served as TAP co-facilitators and helped analyze the data.

Because one of TAP's characteristics is confidentiality, we will not detail TAP results here. However, to provide an overview of the topics mentioned, we analyzed all TAP reports based on the categories identified by Hawelka (2019). Hawelka's system includes eight main categories and several subcategories, ranging from interactions between students and instructors, to students' understanding of the task, their motivation, their learning strategies, and their self-regulation for learning, to general resources and overall ratings of the course and its structural conditions. Table 1 shows samples of the Hawelka (2019) categories that appeared most often in the TAPs, together with examples of positive and negative student quotes.

TAP results provide not only general positive or negative views (Category 7) but also comments on more specific points, such as the learning materials (Category 6.2) or the lecturer's presentation style (Category 1.1). In fact, most comments found in the TAPs were about aspects that instructors typically can change. Rather surprisingly to us, students also commented on aspects that support their own learning progress (Category 5.2), giving both positive and critical examples. This indicates that the TAP stimulates students to evaluate not only what others, such as the instructors, do, but also what they themselves need for their own learning success, a major advantage of the TAP over traditional SET methods. Finally, some TAP feedback relates to aspects that instructors alone typically cannot change (Category 8).