“O”-Level English students can struggle with conversational skills due
to limited access to tutors, practice partners, or English-speaking environments.
This lack of practice affects language ability, clarity of communication,
and the confidence to engage in meaningful dialogue. It can also hinder
their performance in oral examinations.
Teachers have limited bandwidth inside or outside the classroom to support
students because the 1-to-1 nature of conversational practice doesn’t scale
as well as, for example, listening practice, where an entire class of students
can listen to an audio recording simultaneously. This limited bandwidth
also impacts the quality of learning feedback teachers are able to provide:
“Teachers can give a lot of practice but it’s whether you have the time to give the feedback.” - P02, secondary school English teacher
“It is hard for us in a class of 40 to listen to every single student.” - P03, secondary school English teacher
The 2023 changes to the “O”-Level English oral syllabus add to this challenge
by shifting emphasis further towards critical thinking. Developing critical
thinking skills alongside conversational practice requires differentiated
approaches for each school level, and teachers now need to further tailor
their approaches to the different needs and abilities of their students.
“We need to cater to different needs of students. We are lacking on differentiation.” - P01, upper primary English teacher
Using two-way speech conversion (speech-to-text and text-to-speech) and
the power of generative AI,
Converse actively engages students in verbal discussion on their topic
of choice, providing them the practice and feedback they need to enhance
their conversational skills.
Large Language Model (LLM) technology enables Converse to initiate conversational
prompts, process diverse student responses and flexibly continue the conversation
across different levels of language ability. LLM-based generation also
relieves teachers of having to write prompts manually: from a small set
of teacher-provided seed prompts, Converse generates a wide range of new ones.
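Converse's implementation isn't published here, but the seed-prompt expansion step could be sketched roughly as follows. All names below are hypothetical, and the LLM call is stubbed so the example is self-contained:

```python
# Illustrative sketch only: function names are hypothetical, and
# `generate_text` stands in for a real LLM completion call.

SEED_PROMPTS = [
    "Describe a hobby you enjoy and explain why you enjoy it.",
    "Should students be given less homework? Give your reasons.",
]

def build_expansion_request(seed_prompts, n_new=5):
    """Assemble one instruction asking the LLM to write new practice
    prompts in the same style as the teacher-provided seeds."""
    seeds = "\n".join(f"- {p}" for p in seed_prompts)
    return (
        "Here are example oral-practice prompts:\n"
        f"{seeds}\n"
        f"Write {n_new} new prompts in the same style, one per line."
    )

def generate_text(request):
    # Stub in place of a real LLM call, so the sketch runs on its own.
    return (
        "What is your favourite food, and why?\n"
        "Describe a place you would recommend to a visitor."
    )

def expand_prompts(seed_prompts, n_new=5):
    raw = generate_text(build_expansion_request(seed_prompts, n_new))
    return [line.strip() for line in raw.splitlines() if line.strip()]

new_prompts = expand_prompts(SEED_PROMPTS)
```

The point of the design is that teachers only maintain the short seed list; the system does the fan-out.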
After every conversation, Converse provides instant actionable feedback.
User research with teachers informed the nature and amount of feedback
to provide, with Converse eventually focusing on providing examples and
suggesting subtopics to explore, because these directly address teachers'
bandwidth limitations and build students’ confidence and ability to engage.
“This really really helps students with content, which is a big part of confidence going into oral exam.” - P03, secondary school English teacher
“Good at picking up specific ideas that students can elaborate on. When it comes to Oral, we expect them to elaborate and it gives direction on how to improve and how to elaborate” - P03, secondary school English teacher
The feedback is informed by frameworks and “O”-Level rubrics that are
provided through the LLM prompt, with the potential for further customisation
depending on a teacher’s pedagogical style.
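One way rubric criteria might be folded into the feedback request is sketched below. The criteria and function names are purely illustrative, not the actual "O"-Level rubric or Converse's code:

```python
# Illustrative sketch only: placeholder criteria, hypothetical names.

RUBRIC_CRITERIA = [
    "Clarity and fluency of expression",
    "Ability to elaborate on ideas with examples",
    "Engagement with and development of the topic",
]

def build_feedback_request(transcript, criteria=RUBRIC_CRITERIA):
    """Fold rubric criteria into the feedback request sent to the LLM,
    asking for examples and subtopics rather than a grade."""
    rubric = "\n".join(f"- {c}" for c in criteria)
    return (
        "Assess this student's side of an oral-practice conversation "
        f"against these criteria:\n{rubric}\n\n"
        f"Conversation transcript:\n{transcript}\n\n"
        "Give example phrasings the student could have used, and "
        "suggest two subtopics they could explore to elaborate further."
    )

request = build_feedback_request("Student: I like football because...")
```

Because the rubric is just data in the request, a teacher could swap in their own criteria, which is the customisation path mentioned above.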
What users said
In user research and conversations during Demo Day, teachers we spoke
to saw the potential of Converse immediately and had confidence in bringing
it into their classrooms in its current state. One teacher expressed interest
in running a live pilot trial with their class.
“7/10 I would be okay to bring it into classrooms now. No show stoppers, I would use it as it is.” - P03, secondary school English teacher
The core use case we identified was assigning conversational practice
as homework. Users saw Converse supporting continued conversational practice
and learning throughout the year, instead of the current one-off focus
right before oral exams.
Beyond core conversational and feedback elements, users wanted Converse
to support teachers’ ability to assign homework, track students’ progress
and identify who needs additional guidance the most.
Check out the prototype here.
Team members: Arshad Ali, Antariksh Mahajan, Rachel Tan, Darren Ng