The non-profit settlement sector in Canada supports newcomers in achieving successful integration. This sector faces increasing operational pressures amidst rising immigration targets, highlighting a need for enhanced efficiency and innovation, potentially through reliable AI solutions. The ad-hoc use of general-purpose generative AI, such as ChatGPT, may become common practice among newcomers and service providers seeking to address this need. However, these tools are not tailored for the settlement domain and can have detrimental implications for immigrants and refugees. We explore the risks that these tools might pose to newcomers in order to, first, warn against the unguarded use of generative AI and, second, incentivize further research and development into AI literacy programs as well as customized LLMs that are aligned with the preferences of the impacted communities. Crucially, such technologies should be designed to integrate seamlessly into the existing workflows of the settlement sector, ensuring human oversight, trustworthiness, and accountability.