Executive Summary
This report documents our six-month journey developing a conversational artificial intelligence (AI) chatbot for migrant workers in Singapore. What began as a financial literacy project transformed into a profound lesson about designing with, rather than for, marginalized communities. Although migrant workers are essential to Singapore’s economy, they face significant financial challenges: recruitment debt averaging SGD 16,000 per worker, complex remittance obligations, and limited access to culturally appropriate financial tools. While previous interventions assumed these workers lacked financial knowledge, our research showed that they manage their finances in unique and unconventional ways; what they lack are tools to help systemize and support their existing practices.
To address that gap, we adopted participatory co-design frameworks and worked directly with migrant workers to create a personalized AI-powered chatbot. Through three in-person focus group discussions (FGDs) and hands-on testing, they became the co-designers of the product, not just users. This approach fundamentally shaped our technological development and heightened our passion for strengthening marginalized populations’ financial literacy.
Three key insights shaped our chatbot’s development. First, context is important. We found that the migrant workers didn’t have a single preferred way of interacting with the chatbot — they used both voice and text options in different circumstances. This pushed us to design a more flexible, multimodal interface that could adapt to their routines.
Second, collectivist values deeply influence how financial decisions are made. For many migrant workers, money belongs to the family; it isn’t personal. Understanding this, along with the common life stages they go through (education, marriage, homemaking), was crucial to offering support that made sense to them. Existing tools often fail to recognize and adapt to the migrant workers’ unique financial habits. By shifting our chatbot to appreciate and accommodate their ways of managing money, we saw a meaningful change in how they engaged with it. The migrant workers became more comfortable using the chatbot, showed greater interest in exploring its features, and shared that the suggestions felt more relevant to their needs and priorities.
The chatbot underwent a significant transformation driven by several key design changes. We learned that trust building is not a separate step from design; it’s embedded in every decision and interaction. We also saw clearly that marginalized communities expect and deserve the same level of quality and reliability as any other user group. Every technical choice has human consequences that either include or exclude. Co-creation requires meaningful collaboration from the beginning, with space for community members to shape outcomes at every stage.
Building meaningful technology for vulnerable populations requires building relationships instead of rushing toward deployment, because extended engagement is essential. And because financial decisions are often collective, financial literacy tools should include family networks. The most important takeaway from this project was a shift in mindset: from being prescriptive to recognizing the financial knowledge migrant workers already have. Instead of building for them, we built with them.
Context
Migrants in Context: Economically Vital but Socially Marginalized
Migrant workers play a vital role in Singapore’s economy, yet their social standing and lived experiences reveal a complex web of challenges marked by exclusion and marginalization. Despite being essential to sectors such as construction, marine, process, and domestic work, migrant workers often occupy a low social status and are stereotyped as uneducated “outsiders.” Even migrant workers with diploma or degree qualifications reside in segregated dormitories, not only physically distant from local residential areas but also socially isolated from locals.1 This separation fosters a perception of migrant workers as temporary economic contributors rather than integrated members of society, isolating them and causing them to form tight-knit support networks within their own communities.2
While these networks provide emotional refuge, they also risk entrenching “parallel communities” and limiting broader social integration. Besides integration issues, financial struggles are another persistent challenge for many migrant workers. Attracted by the prospect of higher earnings, they often pay huge recruitment fees — sometimes up to SGD 16,000 each — through high-interest loans from informal lenders.3 This debt forces workers into cycles of overwork and exploitation for years before they can repay. Wage theft and illegal deductions by unethical employers exacerbate their financial insecurity.4
A 2020 randomized study of Filipino domestic workers in Singapore found that invitations to financial education programs did not significantly improve their savings behaviours. In fact, participants reported lower self-reported savings and more disagreements over finances, suggesting that offering financial education alone does not guarantee improved financial literacy or behaviour change.5
Moreover, the Ministry of Manpower explicitly encourages employers to send their helpers for financial literacy programs, highlighting that financial education helps them manage money responsibly.6 Without access to financial education, many prioritize remittances to support families back home over personal savings or financial planning, increasing their vulnerability upon returning home.7
Our project aimed to improve migrant workers’ financial literacy by developing a chatbot to support them in managing their money more effectively. Specifically, we focused on goal setting and expense tracking, key areas identified through conversations with migrant workers who revealed that they rarely set financial goals and, at most, track their expenses manually in a notebook. These real-life observations appear to align with broader literature findings, demonstrating a clear unmet need for accessible, culturally relevant, and low-barrier tools that can empower migrant workers to build healthier financial habits, precisely what our chatbot is designed to do.
While chatbots have been studied extensively in educational settings, particularly in relation to self-regulated learning and personal development (with some of these studies seeing success), such findings cannot be directly applied to the financial context of migrant workers.8 Their distinct demographic characteristics and different goal orientations may limit how well such chatbots transfer to this setting. Our project therefore also narrows this research gap by specifically investigating the application of chatbots to financial goal setting.
To delve deeper into understanding their financial struggles, we worked with the migrant workers who attended DBS’s Digibank Ambassador workshop.9 Learning from last year’s Reach Alliance team’s experiences, we noted the importance of and actively sought “community champions” who could serve as a bridge between the migrant workers and the research team. Their strong presence and influence in their communities make them valuable collaborators; working with them eased our efforts to establish regular and meaningful communication with the wider migrant worker population. While migrant workers can be skeptical of researchers they have just met, they are more open when introduced by someone they already trust. Community champions served as translators, communicating our research goals and procedures in ways that were clear and contextualized to other migrant workers.
Working with these participants, we used an iterative process grounded in direct user engagement. After developing our initial prototype, we conducted focus group discussions (FGDs) to gather feedback from the migrant workers. This feedback was largely qualitative, ranging from usability impressions to personal preferences and cultural insights. These narrative-based reflections offered rich, context-specific understanding that shaped the chatbot’s features, tone, language, and delivery. We continuously refined the chatbot prototype by cycling between feedback collection, analysis, and feature adjustments.
Co-Creating with Migrant Workers
What Is Co-creation?
At the heart of our project is the belief that the people most affected by a solution should have a say in shaping it. Such co-creation means designing alongside the product user — in this case, the migrant workers. Rather than just collecting feedback at the end, we found it important to meaningfully involve the workers throughout the design process as collaborators. Migrant workers contributed to our project not only as testers, but also as co-designers who offered ideas, flagged issues, and helped us understand what financial literacy means in the context of their lived experiences. This approach helped us create a chatbot that is technically functional, culturally relevant, accessible, and grounded in real needs.
Note: This group of workers was invited by another migrant worker whom we first met through the DBS digital literacy workshop. Workers bringing along their peers was an effective way to organically expand our reach and engage a more diverse group of participants.
Figure 1. Migrant workers reviewing the IRB agreement in a focus group discussion
How Did We Co-create?
We approached co-creation through a series of engagement activities designed to encourage open dialogue, build trust, and gather feedback in an interactive and meaningful way.
Because many migrant workers face language barriers, may not be digitally fluent, and are often unfamiliar with AI tools like chatbots, we had to be intentional in how we designed each co-creation activity. Every engagement needed to meet two key criteria. First, the activities had to be accessible and inclusive. We avoided overly technical instructions and kept tasks simple. We also created space for participants to engage in ways that felt natural to them, whether through discussion, drawing, or even just pointing or reacting. Second, each activity needed to be purposeful. Given our limited timeline, we could afford only three in-person engagement sessions. So everything we planned had to serve a clear function to surface insights that would shape the chatbot’s design.
These interactions stimulated conversation and sometimes surfaced differing perspectives, and the activities helped shape some of the key themes in our chatbot. One of our earliest and most effective activities was called “Draw Your Achievements.” In this exercise, we asked participants to illustrate something they were proud of. This low-pressure prompt helped ease participants into the session while giving us valuable insight into their aspirations. The migrant workers drew houses, their children, and other things they keep close to their hearts. Their drawings surfaced common themes of family and stability, insights that later shaped how we framed the chatbot’s goal-setting feature. Instead of using generic financial goals, we began tailoring prompts to reflect the kinds of milestones that migrant workers talked about, like building homes, paying off debts, and supporting children’s education. As Figure 2 shows, the drawing in blue is the migrant worker’s response to the question “What is your proudest achievement?” while the note in green is his response to the question “What are some of the things you want to achieve in the next five years?”
Figure 2. A migrant worker’s response to “draw your achievements”
In a subsequent session, we introduced “spectrum mapping” where participants physically positioned themselves along a line in response to different statements such as “I set a monthly budget and stick to it.” This activity allowed us to gauge attitudes and behaviour without relying on written surveys or one-on-one interviews. It sparked spontaneous group discussions, where the migrant workers explained why they struggled to follow budgets despite having good intentions. These candid reflections led us to rethink how the chatbot could support budget setting by offering more flexible, relatable prompts. When we offered the statement “I have a financial goal that I am actively working towards,” all the participants strongly agreed and shared that they were working toward various goals such as saving up for marriage, building a house, etc. They also acknowledged the importance of these goals. However, when asked if they had actually set financial goals or tracked their expenses, many said they didn’t. In other words, agreeing something is important does not mean that people will act on that goal.
Another insightful activity was called “Break It Down,” where participants were given printed notifications from existing financial apps and asked to annotate them. Some circled phrases they liked and pointed out aspects they felt were confusing or unhelpful. This activity made it clear that preferences around tone, length, and detail varied significantly across users.
Across all these activities, what mattered most was the framing. We didn’t ask participants to evaluate a finished product but rather invited them to help build it with us. By using simple activities, we were able to capture a wide range of perspectives that we might otherwise have missed. The participants’ active involvement, sustained through constant dialogue, made proactive co-creation much easier.
Figure 3. “Spectrum mapping” scale where all participants and Reach members collectively positioned themselves at the “strongly agree” end of the spectrum
Figure 4. A migrant worker’s annotations to an existing financial app
Obstacles in the Process
While our co-creation approach yielded valuable insights and fostered a sense of ownership over the chatbot’s improvements, it also came with its fair share of challenges, both structural and technical. These challenges revealed important limitations in both our design process and the assumptions we carried into our on-site sessions, underscoring the multifaceted nature of designing an inclusive technology for diverse and underserved populations.
One of the more persistent challenges we encountered was the variation in language proficiency across different groups of migrant workers. While some were comfortable communicating in English and navigating digital interfaces, others had limited exposure to both, particularly those newer to Singapore. This disparity shaped not only how participants engaged with the chatbot, but also how they interacted during our co-design sessions.
It highlighted the importance of designing activities that could accommodate different levels of language literacy. For example, “Draw Your Achievements” worked well in bridging these gaps. It allowed participants to express their financial aspirations visually, giving us meaningful insight into their goals without relying on any written or verbal fluency. These experiences demonstrated the need for flexible, multimodal facilitation strategies when working with diverse user groups.
Another challenge we encountered was managing the scope of each engagement session. In our eagerness to maximize each focus group, we often packed in multiple objectives, ranging from user testing to co-design and feedback collection. While well-intended, our zest for productivity was at times counterproductive. The second half of our sessions showed a clear drop in participant energy and engagement, particularly during cognitively demanding activities like co-designing prompts. These prompts play a critical role in shaping the chatbot’s responses, influencing aspects like content, length, tone, and language. In hindsight, we realized that co-creation is not just about the right activities, but also about sequencing and pacing. Giving participants space to reflect, and introducing complex tasks earlier when people’s attention is highest, would have allowed for deeper and more meaningful participation.
In our chatbot testing, two key shortcomings emerged: the lack of contextual depth in the conversational flow and the inadequacy of generic financial advice. For example, early iterations of the chatbot followed a rigid structure with minimal onboarding. This failure to uncover the diverse financial circumstances of the users resulted in advice that felt impersonal and disconnected. As one participant noted, “If it doesn’t understand my situation, how can it help me make decisions?” Despite being technically functional, the chatbot lacked the context and tone needed to establish trust and relevance with its users.
On that note, a “50/30/20” budgeting rule (a widely recommended method for managing personal finances by breaking down one’s after-tax income into three broad categories: 50 per cent needs, 30 per cent wants, and 20 per cent savings and debt repayment) or a generic recommendation to save 50 dollars a month were too generalized to be meaningful for our targeted users. Such suggestions ignored the realities that many migrant workers face, including remittance obligations, debt, and daily expenditure. Without sufficient cultural and economic context, the advice not only felt redundant but risked overlooking the constraints they navigate daily. Together, these early iterations revealed a crucial gap: the need for a more personalized, context-aware approach that acknowledges and responds to the lived experiences of our target users.
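To make the mismatch concrete, consider a minimal sketch with hypothetical figures (the salary, remittance, and debt numbers below are illustrative assumptions, not participant data):

    # Illustrative only: hypothetical figures, not drawn from our participants.
    income = 700  # monthly salary in SGD
    rule_split = {"needs": 0.50, "wants": 0.30, "savings": 0.20}
    rule_budget = {k: income * v for k, v in rule_split.items()}
    # rule_budget -> {"needs": 350.0, "wants": 210.0, "savings": 140.0}

    # A more typical month for an indebted, remitting worker:
    remittance, debt_repayment, essentials = 400, 150, 120
    leftover = income - remittance - debt_repayment - essentials
    # leftover -> 30, far below the SGD 140 the rule labels "savings"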
Beyond the limitations in content and structure, many participants also struggled to navigate the chatbot confidently on their own. While its structure was designed to be simple and accessible, users often hesitated when they felt uncertain about what the chatbot could do or how to phrase their inputs. Some participants expressed confusion over the range of functions available, while others were unsure how to continue the conversation when they didn’t receive an expected response.
As a result, they often turned to Reach team members for clarification during testing sessions. This reliance highlighted a gap in user guidance and suggested that the chatbot needed clearer instructions and more intuitive interaction flows to help users navigate smoothly. These challenges reminded us that co-creation isn’t just about gathering input — it’s about providing an appropriate medium for that input to be communicated.
Accessibility, timing, and trust all play a role in determining the quality of engagement. While not every activity landed the way we hoped, each experience gave us a clearer understanding of what it takes to design not just with communities, but alongside them.
Chatbot Design and Evolution of Its Features
Technical Architecture and Conversational Flow
Our chatbot employs a multi-layered architecture built on GPT-4 (a Large Language Model or LLM — an AI system that generates human-like text) that integrates conversation memory, cultural context, and multimodal communication processing to deliver personalized financial advice that respects collectivist decision-making patterns.
Our chatbot relies on five key technical components: contextual awareness, database integration, multimodality, multilingualism, and a hybrid approach. These work together to enable culturally aware financial guidance and a fluid experience for the user. Several components were integrated using Application Programming Interfaces (APIs), which act like bridges that allow different software systems to work together. The chatbot was built using Python with the OpenAI API for GPT-4 integration, Firebase for secure data storage, and the Twilio API for WhatsApp integration alongside the python-telegram-bot library for Telegram support. The system handles voice processing through OpenAI’s Whisper API for multilingual speech recognition. The entire system was deployed on the Google Cloud platform to ensure reliability and scalability.
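As a rough sketch of how these pieces fit together, the core reply loop can be reduced to a single GPT-4 call that carries a culturally aware system prompt plus the conversation history. The prompt text and function names below are illustrative assumptions, not our production code:

    # Minimal sketch of the core reply loop, assuming the OpenAI Python SDK v1.x.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a financial-literacy assistant for migrant workers in Singapore. "
        "Respect collectivist financial decision-making: remittances, family "
        "obligations, and recruitment debt come before generic savings advice."
    )

    def reply(user_message: str, history: list[dict]) -> str:
        """Generate one chatbot turn, keeping earlier turns as context."""
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]
        messages += history  # prior {"role": ..., "content": ...} entries
        messages.append({"role": "user", "content": user_message})
        response = client.chat.completions.create(model="gpt-4", messages=messages)
        return response.choices[0].message.content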
Figure 5. Conversational flow showing the six-stage process from user input to multimodal output
Contextual Awareness
The first version of the chatbot was too prescriptive. For example, it would suggest fixed goals like “Save $50 this month” without knowing anything about the user’s financial situation. When one of the migrant workers tested it, he pointed out: “It told me to save $50, but after I send money home and pay for food and transport, I don’t even have that much left. It makes no sense.” When another worker stated they spent a lot of money on food, the chatbot responded with a suggestion to cook at home instead. As the worker explained: “No time to cooked. If I cook the cost very low” — acknowledging that cooking would save money but it wasn’t feasible given their work schedules and living situation. No matter how well the chatbot functioned, it wouldn’t be useful unless the advice it gave was realistic and grounded in the user’s actual circumstances.
As Figure 6 shows, the chatbot often made suggestions on goals the migrant workers should undertake based on a very superficial understanding of their situation: for example, inferring from the list of expenses the migrant workers provided (a list that is often incomplete) that food was the hardest category for them to manage. We therefore re-examined the chatbot design and included a thorough onboarding process. This time, when first using the chatbot, users were asked about their family responsibilities, monthly expenses, and financial goals. This allowed the chatbot to tailor its suggestions based on each user’s specific situation. One participant shared that the updated version “understands my situation” and “the advice makes sense for my life.” This reinforced the importance of gathering contextual information before offering financial guidance.
Figure 6. Initial iterations of the chatbot made hasty suggestions
This was how we developed contextual awareness within our chatbot. Unlike generic financial chatbots, our system integrated cultural context (debt stages, family obligations, remittance patterns) into every conversation turn. The memory system preserves the answers the users input during the onboarding process, enabling advice that respects collectivist financial decision-making patterns rather than imposing individualistic frameworks. This architecture evolved through iterative testing with migrant workers, incorporating their feedback to create a system that respects cultural financial practices while providing personalized guidance.
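In practice, this can look like folding the stored onboarding answers into the system prompt on every turn. A minimal sketch, with illustrative field names rather than our exact schema:

    # Sketch: injecting the onboarding profile into every conversation turn.
    # Field names and values are illustrative assumptions.
    def build_system_prompt(profile: dict) -> str:
        return (
            "You are a financial assistant for a migrant worker in Singapore.\n"
            f"Monthly salary: SGD {profile['salary']}.\n"
            f"Monthly remittance to family: SGD {profile['remittance']}.\n"
            f"Outstanding recruitment debt: SGD {profile['debt']}.\n"
            f"Stated goals: {', '.join(profile['goals'])}.\n"
            "Never suggest savings targets that exceed what remains after "
            "remittances, debt repayment, and essential expenses."
        )

    prompt = build_system_prompt({
        "salary": 700, "remittance": 400, "debt": 5000,
        "goals": ["build a house", "children's education"],
    })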
Database Integration
One common criticism of LLMs is that they have the tendency to hallucinate — that is, AI sometimes generates false or inconsistent information. This is especially problematic during prolonged conversations. For migrant workers managing tight budgets, such errors might lead to serious financial miscalculations. The LLM might forget what the user has said or think the user said something that they never did.
To overcome this, our chatbot was integrated with a cloud database using Firebase. We logged the different messages the migrant workers sent; for example, goals and expenses were each saved into their own separate collection. To protect users’ sensitive financial information, all data were encrypted and stored only for the duration of active conversations. This allows for future enhancements in which the chatbot queries the database to provide personalized responses and ensure that financial advice remains consistent across multiple conversations.
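A minimal sketch of this logging step, assuming the firebase_admin SDK with Cloud Firestore (collection names are illustrative, and the encryption layer is omitted here):

    # Sketch: logging classified messages to Firestore via firebase_admin.
    # Collection and field names are illustrative; encryption is omitted.
    import firebase_admin
    from firebase_admin import credentials, firestore

    firebase_admin.initialize_app(credentials.ApplicationDefault())
    db = firestore.client()

    def log_message(user_id: str, kind: str, payload: dict) -> None:
        """Store a goal or an expense under its own subcollection per user."""
        db.collection("users").document(user_id).collection(kind).add(payload)

    log_message("worker_123", "expenses", {"item": "food", "amount": 8.50})
    log_message("worker_123", "goals", {"goal": "save for house", "target": 2000})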
Multimodality
Communication preferences aren’t fixed — they depend entirely on context. When we asked participants directly whether they preferred text messages, voice messages, or both options together, all migrant workers selected both. One participant explained: “Voice is good when I’m working. Text is better when I’m checking my budget on the bus.” Another noted: “I want to record expenses by voice but read my goal progress as text.” There was no one-size-fits-all solution. The mode of communication greatly depended on the different preferences and varying circumstances each migrant worker faced throughout the day.
Our discovery process began when we noticed several migrant workers using voice messages to communicate with one another rather than text messages. They could often be seen holding their phones near their ears in crowded places when attempting to listen to voice messages.
We later held interviews and focus groups to verify this observation. In one interview, a participant explained: “After a long day on site, my eyes are tired. Voice is easier.” This feedback helped steer us toward implementing voice-messaging capabilities using the Whisper API for speech recognition. Our end solution was to include an option for migrant workers to express their preference during the onboarding process, allowing the chatbot to adapt to their situational needs.
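The transcription step itself is a single Whisper API call. A sketch, assuming the voice note has already been downloaded from WhatsApp or Telegram:

    # Sketch: transcribing an incoming voice note with OpenAI's Whisper API.
    from openai import OpenAI

    client = OpenAI()

    def transcribe(audio_path: str) -> str:
        with open(audio_path, "rb") as audio_file:
            result = client.audio.transcriptions.create(
                model="whisper-1",
                file=audio_file,  # Whisper handles multilingual speech
            )
        return result.text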
Multilingualism
During testing, we discovered a fascinating asymmetry in how the migrant workers preferred to use language. When typing messages, they naturally mixed languages for convenience. They would type things like “taka save korbo” (mixing English save with Bengali words for money and will do) or simply use romanized Bengali because it’s much faster than switching between keyboard layouts on a phone.
However, when receiving messages from the chatbot, these same participants wanted responses in proper Bengali script (বাংলা) or Tamil (தமிழ்), not romanized versions. As one participant explained: “When I type, I mix because it’s easy. But when I read your message, I want to see my language properly written. It feels more respectful.”
Figure 7. Evolution of our understanding about communication
This revealed an important design insight: the chatbot needed to understand casual mixed-language inputs while responding in formal, properly written native scripts. Participants weren’t being inconsistent; they had different standards for informal input (where speed mattered) versus formal output (where respect and clarity mattered). The chatbot also needed to handle voice messages, which added another challenge. The text-to-speech system sometimes produced unnatural accents or even switched languages mid-sentence. While we couldn’t fully solve this technical limitation, we did fine-tune our prompts to better handle multilingual inputs.
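The kind of prompt instruction this involved can be sketched as follows (the wording is illustrative, not our exact production prompt):

    # Sketch: encoding the input/output language asymmetry in the system prompt.
    # The wording below is illustrative, not our exact production prompt.
    LANGUAGE_INSTRUCTIONS = (
        "Users may type in romanized Bengali or Tamil, or mix English with "
        "their native language (e.g. 'taka save korbo'). Interpret such "
        "mixed-language input normally. "
        "When replying, write in the user's preferred language using its "
        "proper native script (e.g. বাংলা for Bengali, தமிழ் for Tamil), "
        "never romanized text, unless the user writes purely in English."
    )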
Relating back to multimodality, workers also noticed when the computer-generated voices didn’t sound quite right. The accent was off, sometimes switching to the wrong language entirely. We viewed this as a limitation of the current technology. Given time, it may have been possible to fine-tune the LLM to produce better responses. Our solution ultimately consisted of a mix of providing a few hard-coded responses and altering the system prompts to encourage the LLM to consider that migrant workers may not communicate in a single language.
Figure 8. An option to express preference
Hybrid Approach
Our chatbot also adopted a mix of rule-based and LLM-based approaches. From focus group feedback, we discovered that the chatbot was unable to capture sufficient contextual information about the migrant workers before suggesting a goal to set. This meant it might suggest saving a certain amount on food or transport before even understanding whether such a goal was realistic for the particular worker. To counter this, we implemented a rule-based approach using conditionals wherein the user would undergo a predetermined onboarding process.
The users would input information such as their salary, whether they had debt, and other financial information. Only after the onboarding process was complete in its entirety would the chatbot attempt to make a suggestion and move into a more open-ended flow.
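A minimal sketch of such a conditional onboarding flow (the steps and question wording are illustrative; the real flow was refined through focus-group feedback):

    # Sketch: rule-based onboarding that runs before any LLM suggestion.
    # Steps and wording are illustrative assumptions.
    ONBOARDING_STEPS = [
        ("salary", "What is your monthly salary in SGD?"),
        ("debt", "Do you still have recruitment debt? If yes, how much?"),
        ("remittance", "How much do you send home each month?"),
        ("goal", "What is one thing you are saving for?"),
    ]

    def next_onboarding_question(profile: dict) -> str | None:
        """Return the next unanswered question, or None once onboarding is done."""
        for field, question in ONBOARDING_STEPS:
            if field not in profile:
                return question
        return None  # complete: hand over to the open-ended LLM flow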
Evolution of Features
Interface Design
When testing one of our prototypes, we realized the notion of inputting “1” or “2” wasn’t very intuitive for migrant workers. They were sometimes unsure how to proceed when faced with a message. Our idea to counteract this was to implement buttons (illustrated in Figure 9). However, this brought us to our next issue: WhatsApp as a platform is highly regulated, so implementing buttons was difficult. Nevertheless, we deemed it important to test alternative solutions, so we decided to experiment with a different platform: Telegram.
As a less-regulated platform, Telegram allows for an easier developer experience by removing some of the barriers in implementing certain features. We constructed an essentially identical version of our chatbot on Telegram but with buttons instead. While it may seem that typing 1 or 2 is like pressing a button, there is an important distinction in user experience. On WhatsApp, users are expected to type their answers manually, even if prompted with options like “1 for Yes, 2 for No.” This requires a key press and relies on the user interpreting the instructions correctly, then inputting them in the expected format. In contrast, Telegram supports interface buttons that clearly display available options as tappable elements, reducing cognitive load and ambiguity.
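A sketch of what these tappable options look like with the python-telegram-bot library (v20-style async API; handler names, callback data, and question text are illustrative):

    # Sketch: tappable Yes/No buttons with python-telegram-bot (v20+ async API).
    # Register on_button with a CallbackQueryHandler when building the app.
    from telegram import InlineKeyboardButton, InlineKeyboardMarkup, Update
    from telegram.ext import ContextTypes

    async def ask_has_debt(update: Update, context: ContextTypes.DEFAULT_TYPE):
        keyboard = InlineKeyboardMarkup([[
            InlineKeyboardButton("Yes", callback_data="debt_yes"),
            InlineKeyboardButton("No", callback_data="debt_no"),
        ]])
        await update.message.reply_text(
            "Do you still have recruitment debt?", reply_markup=keyboard
        )

    async def on_button(update: Update, context: ContextTypes.DEFAULT_TYPE):
        query = update.callback_query
        await query.answer()  # stops the loading spinner on the tapped button
        has_debt = query.data == "debt_yes"  # branch the onboarding flow here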
“Button is nicer than typing 1,” one participant said during our third focus group when testing the updated interface. However, this apparent solution ultimately led us to a dilemma because everyone used WhatsApp. “It’s how we talk to family back home and friends here in Singapore,” one worker explained. Aside from the button system, we also wanted to implement a reminder system based on the results from Figure 4. However, WhatsApp’s platform has several formal review processes. It requires developers to submit and get approval for every message format before it can be used, and these approval processes can take days or weeks. Given our limited timeline and the importance of responsive iteration, we needed a platform that would allow us to implement changes quickly between focus group sessions. We also learned that migrant workers used different platforms for different purposes.
Figure 9. Participant inputting numbers during onboarding
While WhatsApp was universally familiar for family communication, many participants were already active on Telegram for other community groups and information sharing. We therefore decided to support both platforms: WhatsApp, a platform accessible to more people, and Telegram, where we could build and iterate more rapidly during our co-design process. To gather feedback on this decision, we put it to the test during our focus group discussion. The key differences between platforms included:
• WhatsApp: “Universal” adoption but limited interface options and slow approval processes
• Telegram: Better developer flexibility and button support but lower initial familiarity.

When we asked for ratings, 66.7 per cent of participants gave Telegram the highest score of 5 for ease of use, compared to only 33.3 per cent for WhatsApp. Despite differences across platforms, the real insight here was meeting people across the multiple digital spaces they inhabit, and ensuring our development timeline could respond to their feedback in real time.
Latency
Building voice-message processing and responses proved more challenging than expected. Processing a voice message increased response time from two or three seconds to between 10 and 15 seconds, a noticeable delay that frustrated users during WhatsApp testing. It was not uncommon to observe migrant workers reporting that the bot was down or not working, when in reality there was just a long delay between responses.
We attempted to tackle this by having the bot send a text message containing the transcription of the voice message that was about to be sent, before sending that voice message. This way, the migrant workers would have something to read while waiting. Relating back to the previous platform issue, we also noticed that migrant workers expressed a preference for Telegram because the chatbot would respond faster there than on WhatsApp. Once again, this was resolved by our decision to support both platforms.
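A sketch of this ordering with the Twilio WhatsApp API (the sender number is a placeholder, and hosting the generated audio at a public URL is assumed to happen elsewhere):

    # Sketch: masking text-to-speech latency by sending the transcript first.
    # Phone numbers are placeholders; audio hosting is assumed elsewhere.
    from twilio.rest import Client

    twilio = Client()  # reads TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN from env

    def send_reply(to: str, text: str, audio_url: str | None = None) -> None:
        # 1. The text arrives within seconds, giving the user something to read...
        twilio.messages.create(from_="whatsapp:+6580000000", to=to, body=text)
        # 2. ...while the slower voice message follows once it is ready.
        if audio_url:
            twilio.messages.create(
                from_="whatsapp:+6580000000", to=to, media_url=[audio_url]
            )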
Lessons for Prototyping Technological Solutions
Democratization of Technology
Because we had no prior experience in building chatbots, we learned a lot during the three-month development and prototyping phase of the project. In today’s technological era, open-source tools and cloud computing have significantly lowered the barriers to fast prototyping, enabling teams to build meaningful solutions without requiring deep technical expertise.
Advanced technologies such as large language models can easily be leveraged when using APIs — standardized ways for different software systems to communicate. Modifications to the chatbot’s behaviour, including incorporating features such as voice modality, translation, and interface tweaks, were done within short development cycles of just a few days.
This democratization of technology reconfigures who gets to participate in solution building. It positions communities not simply as passive recipients of pre-built tools, but as co-creators in an iterative, dialogic process. In our project, even individuals with limited technical backgrounds could engage meaningfully with the development of the chatbot: suggesting features, testing prototypes, and providing feedback that directly shaped implementation.
Figure 10. A comparison of WhatsApp (above, manual input) versus Telegram (below, with buttons)
Note: User satisfaction ratings and feature differences based on focus group testing (n=6)
Figure 11. An ideal LLM process
Note: The input/output asymmetry indicates how workers prefer to type in mixed/romanized languages but receive responses in native scripts
Figure 12. Language communication complexity
Humans in the Loop
While LLMs have provided a breakthrough in AI, their capabilities do not automatically translate into useful tools, especially not for marginalized communities. During our testing, we quickly learned how generic LLM models tend to offer one-size-fits-all advice. As we mentioned earlier, preliminary versions of our chatbot suggested saving $50 a month or using the 50/30/20 budgeting rule. However, these suggestions did not reflect the realities of migrant workers who manage debt, send remittances, and deal with irregular incomes.
This disconnect highlighted a key lesson: effective financial tools need to be grounded in the lived realities of their users. In our case, the most important input came from the migrant workers themselves. As members of the very community the chatbot was built for, they understood what financial pressures others like them faced, what digital platforms they used, and how advice would land based on their daily routines. Their insights shaped not only what the chatbot could do, but how it communicated.
Building a useful chatbot requires engaging the community from the start, creating space for honest feedback, and treating users as decision makers, not just participants. For us, that meant adapting activities to work across language barriers and giving migrant workers space to lead on identifying what would be helpful and what wouldn’t.
Importantly, co-design does not end once the first version of the chatbot is built. If the goal is long-term usefulness, the product must evolve with its users. This approach applies far beyond our chatbot. When building technological solutions for marginalized communities, the most meaningful solutions come from involving the community at every stage. Those living the challenges are the best positioned to shape the tools that address them. LLMs may power the system, but it’s the community experts’ lived experiences that make it work.
Trust and Rapport
One of our key learnings was that trust and familiarity are critical before meaningful feedback can be gathered. We found that migrant workers were more willing to share honest thoughts, raise concerns, or critique the chatbot only after we’d built a certain level of rapport. Without this foundation, co-creation becomes surface level at best. It is crucial to invest in relationship building as part of the design process, rather than as a side effort.
Adaptive Facilitation
Not all co-design activities are equally effective.
We learned that hands-on, structured tasks like annotating real examples allowed migrant workers to contribute more confidently and clearly. On the other hand, abstract or open-ended tasks that required writing or imagination often fell flat, particularly when participants were fatigued, unsure of what was being asked, or struggling with a language barrier. These sessions reminded us that successful co-design depends not only on the activity itself, but also on when and how it is delivered. Flexibility in facilitation is essential to meet participants where they are.
Technology That Respects and Facilitates
One team member reflected: “I came into this project thinking we were building a tool to ‘help’ migrant workers manage money better. I’m leaving with the understanding that we’re really creating a platform that respects how they already manage money while addressing specific pain points they’ve identified.” The shift in perspective became foundational. The chatbot was no longer conceived as an advisor dispensing expert knowledge, but as a facilitator of reflection, built around the context and aspirations of the user. Instead of framing technology as a consultant that prescribes, we began to see it through the lens of a coach that inquires and facilitates self-discovery and action.
Our chatbot went through three major transformations, each triggered by feedback that challenged what we thought we knew about helping people with money. These transformational stages are illustrated in Figure 14.
Figure 13. The Reach team hosted a picnic for the migrant workers
Impact
The know-it-all phase. Our first version sounded like a strict instructor, offering rigid goals like “Save $50 this month,” no matter what a person’s actual circumstances were. During one of our focus groups, one of the migrant workers revealed his salary ranged between $600 and $800 a month, and that after accounting for debt, transport, and meals, he wouldn’t be able to save anything at all.
The curious student phase. His reaction sparked a more comprehensive redesign. Instead of starting with advice, we began with questions. The new version asked about family responsibilities, current expenses, and dreams for the future. “Now it understands my situation,” another participant said after testing the updated version. “It knows I send money to my wife and am paying for my sister’s education. The advice makes sense for my life.”
The respectful guide phase. The third major change focused on tone and respect. Instead of telling workers what to do, the chatbot learned to ask guiding questions: “You mentioned wanting to save for your children’s education. Based on your current expenses, what feels like a realistic monthly amount to start with?” This approach let workers set their own goals rather than accepting targets someone else had chosen for them.
Lessons Learned
Recommendations
Designing meaningful technology for marginalized communities requires a fundamental shift in how we approach design and implementation. Rather than quick fixes or imposed solutions, we need sustained commitment to understanding and respecting existing practices.
Figure 14. Three-phase chatbot evolution from generic advice to contextual, respectful financial guidance based on user feedback
For researchers and developers entering this space, the most critical investment is time: not just for building technology, but for building relationships. Our experience showed that trust isn’t just a prerequisite for co-creation; it establishes a foundation that accelerates and makes possible many other processes. This means planning for engagement periods that extend well beyond typical project cycles, and recognizing the importance of forming relationships with users’ families and social networks, beyond the usual developer-to-user relationship.
When we design financial tools without considering the brother who needs education fees or the mother awaiting remittances, we miss the entire context that shapes workers’ financial decisions. Organizations and practitioners working with vulnerable populations face a particular challenge: how to create genuinely inclusive processes within institutional constraints. Co-creation can’t be a checkbox exercise squeezed into the final weeks of development. It requires creating spaces where users become co-designers from the very beginning, where their expertise in their own lives is valued as highly as any technical knowledge. This might mean rethinking project timelines, budget allocations, and success metrics to prioritize relationship building and iterative design over rapid deployment.
For policymakers, our findings suggest that current approaches to financial literacy often miss the mark by applying universal frameworks to diverse realities. For example, the “50/30/20” budgeting rule isn’t just ineffective for someone sending 80 per cent of their income home; it reflects a fundamental misunderstanding of their financial life. Rather than mandating one-size-fits-all solutions, policy should support tools and programs that adapt to the complex realities of transnational financial management.
Perhaps most importantly, we need to recognize that technology designed for social good must be held to a different standard than commercial products. It’s not enough for it to work — it must work in ways that respect and enhance the dignity of its users. Every interface choice, every automated message, every feature carries the weight of either inclusion or exclusion. When our WhatsApp prototype failed to recognize a worker’s input for the third time, it wasn’t just a technical glitch — it was another moment of a system failing someone who’s already navigating multiple systems not designed for them.
The path forward isn’t about scaling quickly or reaching metrics. It’s about deepening our understanding of what meaningful support looks like for different communities. The domestic workers managing household finances across continents will teach us different lessons than construction workers planning for retirement. Each community brings its own wisdom, its own strategies, and its own needs. Our role isn’t to homogenize these differences but to create technology flexible enough to honour them.
Conclusion
When we first began this project, we believed we were building a tool to help migrant workers manage their finances. Now, we understand that we were really learning how to listen and empathize with the people we wanted to serve. This shift in perspective from teaching to learning, from building for to building with, turned out to be the most important feature we developed.
Our work grew out of a curiosity about how conversational AI can meaningfully support migrant workers living in a context like Singapore. We successfully developed a chatbot prototype through a co-creation process that positioned migrant workers not just as users, but as collaborators. The democratization of AI today enables citizen development that focuses on design and specification instead of lower-level technical work. Our prototype demonstrated the potential for conversational AI to offer personalized financial guidance. However, the more significant outcome was our evolving understanding of technology as a facilitator of human agency. In addition, we uncovered nuanced preferences, such as the flexible use of voice and text depending on work context, diversity in language use, and reliance on multiple platforms. Amid advancements in generative AI, keeping humans in the loop is crucial for personalized responses that recognize individual and cultural contexts.
We acknowledge this project as a prototyping phase: an early but promising step. The approach, grounded in co-creation, human-in-the-loop processes, and contextual sensitivity, offers a model for future efforts aiming to democratize access to AI tools in meaningful, grounded ways.
We know that generative AI can produce rapid results. But we also caution, especially when designing for vulnerable populations, that guard rails are essential in any deployment. The risks of misinformation or disempowerment are real and must be proactively mitigated. As we move forward, we carry with us not just technical lessons about multimodality and platform preferences, but a fundamental understanding: meaningful technology emerges not from our assumptions about what people need, but from deep listening to what they already know.
Acknowledgements
We are profoundly grateful to our faculty mentor, Dr. Andrew Koh Tze Ki, whose support, patience, and belief in us guided this project from the very beginning. We also extend our sincere thanks to Larry Yeung Man Ki, whose expertise in participatory design helped shape our approach with care and integrity, and to Lyn-Marie Farley, whose thoughtful coaching on team dynamics and personal growth helped us navigate both challenges and reflections with empathy.
Our deepest appreciation goes to the Reach Alliance and College of Integrative Studies for granting us this opportunity and for fostering an inspiring community of change makers. We are also thankful to Mr. Wong Loke Yeow and Rishma Theru from Development Bank of Singapore (DBS), whose generosity in connecting us to the digital literacy workshops opened the door to many opportunities during this journey. Above all, we owe our deepest gratitude to the migrant workers who shared their time, experiences, and perspectives with us.
Footnotes
1. Zachariah Chan, “An Outsider’s Look into Migrant Workers’ Healthcare Challenges in Singapore,” Healthserve; Sreyneath Poole, “Migrant Workers Rights in Singapore: Advocacy, Legal Frameworks and Prospects for Change,” Weatherhead East Asian Institute, December 2022; Mick Yang, Heleena Panicker, and Nabilah Said, “No Place to Work,” Kontinentalist, May 2023.
2. Ng Jun Sen, “Migrant Worker Housing: How Singapore Got Here,” TODAY Online, May 2020.
3. Natasha Ganesan and Davina Tham, “High Fees, Unlicensed Agents: The Price Migrant Workers Pay to Work in Singapore,” CNA, July 2024.
4. “List of Migrant Worker Dormitories That Can Accommodate 1,000 or More Residents Licensed (Class 4) in Accordance with the Foreign Employee Dormitories Act (FEDA),” Ministry of Manpower Singapore, May 2023.
5. Rik Barua, Abhijit Shankar Shastry, and Dean Yang, “Financial Education for Female Foreign Domestic Workers in Singapore,” Economics of Education Review 74 (2020): 101935.
6. “Useful Courses to Benefit You and Your Helper,” Singapore Ministry of Manpower.
7. Mety Rahmawati, “Indonesian Worker Protection from Labour Exploitation in Singapore,” Journal Dinamika Hukum 19, no. 1 (2019): 169.
8. Nicky Terblanche, Joanna Molyn, Kevin Williams, and Jeanette Maritz, “Performance Matters: Students’ Perceptions of Artificial Intelligence Coach Adoption Factors,” Coaching: An International Journal of Theory, Research and Practice 16, no. 1 (2022): 100–14.
9. DBS is a leading Singapore bank.