Executive Summary
The global artificial intelligence (AI) landscape is undergoing a profound structural transformation. The initial phase of AI adoption, characterized by a “move fast and break things” ethos and a reliance on massive, centralized datasets, is rapidly yielding to a new paradigm. This emerging era is defined by the necessity for Trustworthy and Explainable AI, the imperative of Privacy-Preserving Machine Learning (PPML), the physical constraints driving Resource-Efficient AI, the geopolitical and cultural mandates for Collaborative AI, and the scientific pursuit of Continuous Learning systems capable of evolving toward Artificial General Intelligence (AGI).
This report provides an exhaustive literature review of these five critical pillars, synthesizing findings from peer-reviewed sources, government frameworks, and industry technical reports. While the scope is global, the analysis pays particular attention to the strategic context of Singapore. As a small, high-income nation with a reputation for regulatory integrity (“The Trust Hub”), Singapore serves as a unique microcosm for these global challenges. Its national strategy—constrained by limited data volume but empowered by high institutional trust—offers a roadmap for “Small Country, High Quality” AI development [1].
The review reveals a convergence of technical innovation and policy frameworks. We observe that “Trust” is no longer merely an ethical aspiration but a quantifiable economic asset, validated by technical audits like AI Verify [2]. Privacy is evolving from a legal compliance checklist to a mathematical guarantee through Federated Learning and Robust Volume Data Valuation [3]. Resource efficiency is shifting research focus from parameter-heavy “Model-Centric” approaches to “Data-Centric” techniques like Few-Shot Learning, enabling high performance on small, high-quality datasets. Collaborative AI is addressing the “Linguistic Divide” through regional foundation models like SEA-LION [4], while the quest for AGI is revitalizing interest in Cognitive Architectures (such as CoALA) that integrate memory and reasoning to solve the problem of catastrophic forgetting [5].
This document serves as a comprehensive resource for policymakers, researchers, and industry leaders, offering deep “second-order” insights into how these five themes intersect to shape the future of intelligent systems.
1. Trustworthy and Explainable AI: The Governance of Confidence
The first pillar of this review addresses the socio-technical foundations of AI adoption. As AI systems migrate from low-stakes recommendations (e.g., movie suggestions) to high-stakes decision-making (e.g., medical diagnostics, credit scoring, judicial sentencing), the opacity of “black box” deep learning models has become a systemic risk. The literature indicates that Trustworthy and Explainable AI (XAI) has transcended the realm of academic debate to become a critical market differentiator and a regulatory imperative.
1.1 The Singapore Model: From Principles to Pragmatism
Singapore’s approach to AI governance is frequently cited in the academic and policy literature as a “pragmatic archetype,” distinct from the “rights-based” hard law of the European Union (such as the EU AI Act) or the “market-driven” laissez-faire approach of the United States [6]. The Singaporean strategy is characterized by a “voluntary but high-standard” regime designed to foster innovation while maintaining public safety and trust.
1.1.1 The Model AI Governance Framework
The cornerstone of Singapore’s strategy is the Model AI Governance Framework. First released in 2019 and significantly updated in 2020, with a specialized framework for Generative AI introduced in 2024, this document is widely analyzed as a “living” regulatory instrument [7]. The literature identifies its “human-centric” philosophy as its defining feature: the framework explicitly states that AI decision-making must be explainable, transparent, and fair to build and sustain public trust.
Academic analysis reveals that the framework operates on a “risk-based” logic, which is crucial for industry adoption. Rather than imposing a blanket set of rules for all AI, the framework encourages organizations to categorize their AI deployments based on the severity of potential harm and the level of human involvement required:
- Human-in-the-loop: Required for high-stakes decisions (e.g., diagnosing cancer), where the AI provides a recommendation but a human makes the final call.
- Human-over-the-loop: Used for automated systems where humans play a supervisory role and can intervene if the system behaves erratically (e.g., automated traffic management).
- Human-out-of-the-loop: Permitted for low-risk, reversible decisions (e.g., product recommendation engines).
This nuanced categorization allows Singapore to avoid the “regulatory chilling effect” often associated with stricter regimes. By providing a model framework rather than a rigid legal code, the state encourages companies to experiment within safety guardrails, fostering an ecosystem where governance evolves alongside technology [8].
1.1.2 The Generative AI Expansion (2024)
The 2024 update to the Model Governance Framework, specifically targeting Generative AI (GenAI), represents a significant maturation of this strategy. The literature notes that GenAI introduces novel risks—such as “hallucinations” (factual errors), copyright infringement, and the generation of deepfakes—that were not adequately covered by traditional discriminative AI frameworks. The new framework proposes nine specific dimensions of governance to address these issues, emphasizing “value alignment” and “content provenance” [7]. This responsiveness allows Singapore to maintain its status as a “safe harbor” for advanced AI development, attracting global firms that seek regulatory clarity in an uncertain world.
1.2 AI Verify: Operationalizing Trust through Technical Audits
A recurring critique in the literature regarding AI ethics is the “principle-to-practice gap.” Many organizations espouse high-level ethical principles (fairness, transparency) but lack the tools to implement them. The review identifies AI Verify as Singapore’s strategic response to this gap [2].
Launched by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), AI Verify is described as the world’s first AI governance testing framework and software toolkit. Unlike static checklists, AI Verify allows developers to perform technical tests on their models—specifically supervised classification and regression models—to substantiate claims about fairness, robustness, and explainability with measurable evidence.
Key Technical Capabilities of AI Verify:
- Fairness Testing: Checks for statistical bias against protected attributes (e.g., race, gender) in the model’s predictions.
- Robustness Testing: Evaluates how the model performs under adversarial attacks or noisy data inputs.
- Explainability (SHAP/LIME): Generates Shapley values or LIME (Local Interpretable Model-agnostic Explanations) charts to explain why the model made a specific prediction.
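To make the explainability test concrete, the following minimal sketch uses the open-source shap library to generate Shapley-value attributions for a scikit-learn classifier. This illustrates the underlying technique, not AI Verify’s own code; the model and dataset are assumptions chosen for brevity.

```python
# Minimal sketch: Shapley-value explanations for a tabular classifier,
# analogous in spirit to AI Verify's explainability checks.
# Model and dataset are illustrative assumptions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explainer: attributes each prediction to input features.
explainer = shap.Explainer(model.predict_proba, X)
explanation = explainer(X.iloc[:5])      # explain five individual predictions
print(explanation.values.shape)          # (samples, features, classes)
```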
The establishment of the AI Verify Foundation in 2023, involving major industry players like Google, Microsoft, and open-source communities, aims to internationalize these standards. The literature suggests that this move is strategic: by making AI Verify an open global standard, Singapore positions itself as a “neutral arbiter” of AI quality, similar to how Switzerland is viewed in banking standards. This supports the national goal of building a “Trust Brand” that attracts high-value AI research and deployment [9].
1.3 The Demand for Explainability (XAI)
The literature establishes a direct, causal link between the adoption of AI in critical sectors and the maturity of Explainable AI (XAI) techniques.
1.3.1 Regulatory Pressure
There is an increasing “regulatory demand” for XAI, particularly in jurisdictions influenced by the GDPR and the EU AI Act. “Black box” models are increasingly viewed as legal liabilities. In healthcare, for instance, the inability to explain a diagnostic algorithm’s output can preclude its certification for clinical use [10]. Regulations are moving toward a standard where “if you cannot explain it, you cannot deploy it” for high-impact use cases.
1.3.2 Consumer and Employee Sentiment
The demand for explainability is also bottom-up. Surveys indicate that trust in autonomous AI agents is declining among executives (dropping from 43% to 27% in one 2025 survey) due to fears of unexplainable errors [11]. Employees, too, demand transparency regarding how AI tools are used to monitor or evaluate their work [12]. The “trust gap” is widening; users are fascinated by the capabilities of AI but terrified by its lack of accountability.
1.3.3 The Economic Value of Trust
Research confirms that trust is a tangible economic asset. Organizations that successfully deploy trustworthy AI—characterized by robust governance and transparency—realize greater economic value. Agentic AI is projected to deliver up to $450 billion in value by 2028, but only if the “trust gap” is bridged [11]. The literature suggests that investment in XAI is not merely a compliance cost but a prerequisite for scaling automation.
1.4 The “Trust Paradox” in Singapore
An analysis of public sentiment surveys reveals a fascinating “Trust Paradox” in Singapore and Southeast Asia, which has significant implications for policy.
- High Institutional Trust: Singaporeans display significantly higher trust in their government and military to develop and regulate AI than in foreign tech giants or the private sector generally [13]. This “statist” trust model validates the government’s top-down strategy of branding the nation as a “trusted hub.”
- Low Personal/Corporate Trust: Despite high trust in the state, individuals are wary of corporate data misuse. Over a quarter of respondents in health data surveys stated they did not trust any organization to protect their data [14].
- The Literacy Link: Trust is positively correlated with AI literacy. Users who understand how AI works are more likely to support its regulation and use. This validates the heavy investment in national AI literacy programs (e.g., AI Singapore’s “LearnAI” initiative) as a trust-building measure [13].
Table 1: Comparative Analysis of Trust Drivers in AI Adoption
| Trust Driver | Singapore Context | Global/Western Context | Implication for Policy |
| --- | --- | --- | --- |
| Institutional Trust | High: Citizens trust the Govt/Military to manage AI risks [13]. | Low/Mixed: Skepticism of state surveillance is high (e.g., US/EU). | Singapore can leverage state-backed certification (AI Verify) more effectively than other nations. |
| Consumer Control | Moderate: Focus is on “safe harbor” and compliance. | High: Focus on individual rights (GDPR), “Right to Explanation.” | Singapore must balance its business-friendly stance with increasing consumer demand for individual data rights. |
| Transparency | Technocratic: Trust in audits and expert frameworks. | Democratic: Trust in open-source and public scrutiny. | Technical audits (AI Verify) are key to bridging the gap between technocratic assurance and public verification. |
| Economic Impact | Strategic Asset: “Trust” is marketed as a reason to host data in SG. | Compliance Cost: Trust is often framed as a regulatory burden. | Singapore’s “Trust Hub” status is a key economic differentiator in a fragmented global digital economy. |
2. Privacy-Aware AI: Learning from Sensitive Datasets
The second pillar of the literature review addresses the fundamental conflict in modern AI: deep learning requires massive, diverse datasets to generalize well, but the most valuable data (healthcare records, financial transaction logs) is highly sensitive and legally protected. This section reviews techniques that allow models to “learn” from this data without “seeing” it in raw form, under the umbrella of Privacy-Preserving Machine Learning (PPML).
2.1 The “Willingness to Share” Ecosystem
Before examining technical solutions, it is crucial to understand the behavioral constraints. Peer-reviewed surveys indicate that consumer Willingness to Share (WTS) data is highly conditional and context-dependent.
- Contextual Sharing: Consumers are generally willing (approx. 63%) to share health data with government or health authorities for public good or personal treatment [14]. However, this willingness drops precipitously (to ~5%) for social media platforms or purely commercial entities.
- The Privacy Calculus: Users perform a mental cost-benefit analysis. If the “secondary use” of data (e.g., monetization by third parties) is salient, willingness to share decreases, and the “price” users demand for their data increases [15].
- Auditing and Control: Transparency mechanisms, such as expert auditing or “data donation” dashboards, significantly improve WTS. Users want to see who accessed their data and why [16]. The literature suggests that “control” is a better predictor of trust than “secrecy”; users are willing to share if they feel they can revoke access at any time.
2.2 Federated Learning (FL): The Collaborative Standard
To bridge the gap between data hunger and privacy laws, the literature points to Federated Learning (FL) as the primary architectural solution. In an FL system, the AI model travels to the data, rather than the data traveling to the model.
- Mechanism: A central server sends a global model to local clients (e.g., hospitals, banks). Each client trains the model on its local, private data and sends only the model updates (gradients or weights) back to the server. The server aggregates these updates (typically using an algorithm like FedAvg) to improve the global model [17]; a minimal sketch of this aggregation step appears after this list.
- Strategic Fit for Singapore: FL is particularly well-suited for Singapore’s fragmented healthcare landscape, where data resides in different clusters (NUHS, SingHealth, NHG) that cannot easily merge databases due to regulatory constraints (PDPA) and cybersecurity risks [18]. It allows for the aggregation of medical insights (e.g., a tumor detection model) without the aggregation of patient records.
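The sketch below shows the FedAvg aggregation step [17], assuming each client returns its updated weight tensors along with its local dataset size; all shapes and sizes are illustrative.

```python
# Minimal sketch of FedAvg aggregation [17]: average each parameter tensor
# across clients, weighted by the size of each client's local dataset.
import numpy as np

def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    num_tensors = len(client_weights[0])
    return [
        sum((n / total) * weights[k]
            for weights, n in zip(client_weights, client_sizes))
        for k in range(num_tensors)
    ]

# Three clients (e.g., hospitals), each holding the same two-tensor model.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
sizes = [1200, 300, 800]                  # illustrative local dataset sizes
global_weights = fedavg(clients, sizes)   # next round's global model
```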
2.2.1 The “Sparse-and-Scarce” Challenge
However, the literature identifies significant challenges in real-world FL deployment. A major issue is the “Sparse-and-Scarce” problem. When local datasets are small (scarce) and do not contain examples of all classes (sparse/non-IID), the performance of standard FL algorithms degrades significantly. The local models may overfit to their limited data, and the aggregated global model becomes inaccurate [19].
Algorithmic Solutions: Recent research proposes novel algorithms like SAFA to address this. SAFA employs a continual, accumulative model iteration procedure that maximally exposes local models to inter-client diversity while minimizing “catastrophic forgetting,” outperforming standard baselines by up to 17% in sparse-data scenarios [19]. This is critical for Singapore, where individual hospitals may have small datasets for rare diseases.
2.3 Differential Privacy and Semantic Privacy
While FL keeps raw data local, the model updates themselves can leak information. A sophisticated attacker can reverse-engineer the training data from the gradients (a “reconstruction attack”). Differential Privacy (DP) addresses this by adding statistical noise to the updates, providing a mathematical guarantee that an individual’s data cannot be distinguished.
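The standard recipe, popularized by DP-SGD, is to clip each per-example gradient (bounding any individual’s influence) and then add calibrated Gaussian noise. The sketch below illustrates this recipe under assumed hyperparameters; a production system would rely on a library such as Opacus or TensorFlow Privacy, which also performs formal privacy accounting.

```python
# Minimal sketch of the clip-and-noise step in DP-SGD-style training.
# clip_norm and noise_multiplier are illustrative assumptions; real systems
# use a DP library with formal (epsilon, delta) accounting.
import numpy as np

def privatized_update(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                      seed=0):
    rng = np.random.default_rng(seed)
    # Clip each example's gradient so no individual can dominate the update.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound masks any single record.
    sigma = noise_multiplier * clip_norm / len(clipped)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

batch = [np.random.randn(10) for _ in range(32)]  # stand-in gradients
update = privatized_update(batch)                 # safe(r) to send to a server
```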
The Semantic Gap: A key finding in recent literature is that standard DP mechanisms (which add noise to numbers) often destroy the utility of Large Language Models (LLMs). If too much noise is added to word embeddings, the text becomes gibberish or loses its nuance.
Semantic Privacy: Emerging research focuses on Semantic Privacy—techniques that protect sensitive meaning (e.g., political affiliation, gender) rather than just adding noise to pixels or tokens. This involves learning representations where sensitive attributes are “unlearned” or orthogonal to the task, ensuring that the model is useful for its intended purpose (e.g., sentiment analysis) but useless for profiling the user [20].
2.4 Data Valuation: The Economics of Collaboration
A critical, often overlooked aspect of collaborative AI is Data Valuation. In a consortium (e.g., three banks collaborating on fraud detection), how do we determine whose data was most valuable? This is essential for fair compensation and incentivization.
- The Shapley Value Challenge: The theoretical “gold standard” for valuation is the Shapley Value, derived from cooperative game theory. It measures the marginal contribution of each participant, averaged over all possible coalitions. However, computing the Shapley Value exactly requires retraining the model on every possible subset of the data (an exponential number of subsets), which is computationally intractable for large datasets or deep learning models [21].
- Singaporean Innovation – Robust Volume (RV): Researchers at the National University of Singapore (NUS) and A*STAR have proposed novel, “Validation-Free” valuation methods to solve this. Specifically, they propose using the Robust Volume (RV) of the data matrix [3].
- Concept: The “value” of a dataset is modeled as the volume of the parallelepiped spanned by its data vectors. Intuitively, a dataset with high diversity spans a larger “volume” in the feature space.
- Robustness: Crucially, the RV metric is designed to be resistant to “replication attacks.” If a dishonest participant simply copies their data 10 times to inflate their contribution, the RV metric detects the correlation (the vectors are collinear) and does not increase the volume. This provides a rigorous, mathematical basis for pricing data in a federated marketplace [21].
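A simplified sketch of the intuition: the volume of a dataset X can be computed as sqrt(det(XᵀX)), which grows with diversity, and the robust variant discretizes the feature space so that replicated points collapse into the same cell. The grid-based deduplication below is a stand-in for the paper’s full Robust Volume statistic [3], and the cell width omega is an assumed parameter.

```python
# Simplified sketch of volume-based valuation [3]. The raw volume rewards
# diversity but is inflated by replication; grid-cell deduplication below is
# a stand-in for the paper's full Robust Volume statistic (omega is assumed).
import numpy as np

def volume(X):
    return np.sqrt(np.linalg.det(X.T @ X))

def robust_volume(X, omega=0.1):
    cells = np.unique(np.floor(X / omega), axis=0)  # one point per grid cell
    return volume(cells * omega)

X = np.random.default_rng(0).random((50, 3))
X_dup = np.vstack([X, X])                        # dishonest replication
print(volume(X_dup) / volume(X))                 # naive volume inflated (~2.83x)
print(robust_volume(X_dup) == robust_volume(X))  # True: replication ignored
```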
3. Resource-Efficient AI: Innovation within Constraints
The third pillar of the review focuses on Resource-Efficient AI. Singapore’s National AI Strategy explicitly acknowledges a geographical and demographic reality: it is a small country with a population of under 6 million. It will never possess the sheer volume of domestic data generated by China or the United States. Therefore, it cannot compete on “Big Data” alone. It must compete on “Quality Data” and “Resource Efficiency” [22].
3.1 The “Small Country, Small Data” Paradigm
The literature distinguishes between the “Big Data” paradigm (petabytes of noisy data, massive compute) and the “Smart Data” paradigm (smaller, high-quality, curated datasets). Singapore’s strategy pivots toward the latter, necessitating “Data-Centric AI” approaches where the focus shifts from engineering better models to engineering better data [23].
3.1.1 Few-Shot and One-Shot Learning
Resource-efficient AI prioritizes algorithms that can generalize from a handful of examples (Few-Shot Learning) or even a single example (One-Shot Learning). This is critical for domains like rare disease diagnosis or local language processing where training samples are naturally scarce.
- Meta-Learning: A key technique reviewed is “learning to learn.” By training a model on a wide variety of tasks, it learns the structure of learning itself, allowing it to adapt to a new task with minimal data [24].
- Semi-Supervised Learning (LTTL): In many scenarios, labeled data is expensive (e.g., requiring a radiologist to mark a tumor), but unlabeled data is plentiful. Research led by Singaporean authors has developed methods like Learning to Teach and Learn (LTTL). This approach uses a “teacher” model to assign pseudo-labels to unlabeled data, iteratively improving a “student” model. LTTL is specifically designed for the “small-data regime,” optimizing how the model leverages every scrap of available information [25].
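The following sketch shows the basic teacher-student pseudo-labeling loop on which methods like LTTL build. Confidence thresholding is used here as a simplified stand-in for LTTL’s learned teaching policy [25]; the dataset, threshold, and round count are illustrative.

```python
# Sketch of teacher-student pseudo-labeling (a simplified stand-in for the
# learned teaching policy in LTTL [25]); threshold and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        proba = model.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold       # confident predictions only
        if not keep.any():
            break
        X_aug = np.vstack([X_lab, X_unlab[keep]])   # add pseudo-labeled samples
        y_aug = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
        model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
    return model

X, y = make_classification(n_samples=200, random_state=0)
model = self_train(X[:20], y[:20], X[20:])          # 20 labels, 180 unlabeled
```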
3.2 Green AI and Edge Computing
Resource efficiency is not just about data; it is also about energy and compute. The environmental cost of AI is a growing concern: training a single massive language model can emit as much carbon as five cars do over their entire lifetimes [26].
- Green AI Research: Institutions like A*STAR are focusing on algorithms designed to minimize energy consumption per inference. This aligns with Singapore’s national sustainability goals [26].
- Model Compression: To deploy AI on “Edge” devices (smartphones, IoT sensors) rather than energy-hungry cloud servers, the literature reviews techniques like Quantization (reducing the precision of weights from 32-bit floating point to 8-bit integers; sketched after this list) and Pruning (removing weights or neurons that contribute little to the output).
- Impact: These techniques reduce the memory footprint and energy cost of models by orders of magnitude, enabling “AI at the Edge.” This is particularly relevant for “Smart City” applications (e.g., lampposts analyzing traffic) where bandwidth and power are limited [27].
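A minimal sketch of symmetric post-training int8 quantization illustrates why the memory savings are so large. Production toolchains add calibration data, per-channel scales, and zero-points; this example shows only the core idea.

```python
# Minimal sketch of symmetric post-training int8 quantization: store weights
# as 8-bit integers plus one float scale, dequantize on the fly at inference.
import numpy as np

def quantize_int8(w):
    scale = np.max(np.abs(w)) / 127.0               # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
print(q.nbytes / w.nbytes)                          # 0.25: 4x smaller in memory
print(np.max(np.abs(dequantize(q, s) - w)))         # small rounding error
```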
Table 2: Resource-Efficient AI Strategies
| Strategy | Problem Addressed | Mechanism | Key Benefit |
| --- | --- | --- | --- |
| Data-Centric AI | Noisy, massive datasets | Cleaning labels, curating samples | Better performance with less data [23]. |
| Few-Shot Learning | Scarcity of training data | Meta-learning, transfer learning | Adaptation to new tasks with <10 examples [24]. |
| LTTL | High cost of labeling | Pseudo-labeling unlabeled data | Maximizes utility of unannotated archives [25]. |
| Quantization/Pruning | High energy/compute cost | Reducing model precision/size | Enables AI on edge devices (Green AI) [27]. |
4. Collaborative AI: The Southeast Asian Testbed
The fourth pillar emphasizes that AI development is a “team sport,” particularly for regions that are individually small but collectively significant. The literature highlights Southeast Asia (SEA) as a primary testbed for this collaborative approach, leveraging its cultural and linguistic diversity.
4.1 The Linguistic Divide in Global AI
A major theme in the literature is the “Linguistic Divide.” Most global “Foundation Models” (like GPT-4, Llama, Claude) are trained primarily on Western internet data (English, French, German). Consequently, they perform poorly on Southeast Asian languages (Thai, Vietnamese, Indonesian, Singlish), often failing to capture cultural nuances or producing “translationese” (unnatural, direct translations from English) [4].
The Tokenization Penalty: The literature identifies a technical inequity known as the “Tokenization Penalty.” Standard tokenizers (which break text into processing units) are optimized for English. A sentence in Thai or Burmese might be broken into 3-4 times as many tokens as the equivalent English sentence. Since LLM costs and latency scale with token count, this makes using global AI models significantly more expensive and slower for Southeast Asian developers [4].
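The penalty is easy to observe empirically. The sketch below uses the open-source tiktoken tokenizer as an illustrative example (exact counts vary by tokenizer and sentence); Thai text typically fragments into several times more tokens than English of comparable meaning.

```python
# Illustrative check of the tokenization penalty [4] with the open-source
# tiktoken tokenizer; exact counts depend on tokenizer and sentence choice.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
english = "Artificial intelligence is changing the world."
thai = "ปัญญาประดิษฐ์กำลังเปลี่ยนแปลงโลก"  # roughly the same sentence in Thai
print(len(enc.encode(english)), "tokens (English)")
print(len(enc.encode(thai)), "tokens (Thai)")       # typically several times more
```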
4.2 SEA-LION: A Regional Digital Public Good
To address this inequity, AI Singapore spearheaded the development of SEA-LION (Southeast Asian Languages In One Network). The literature describes SEA-LION not just as a model, but as a piece of regional infrastructure.
- Architecture and Training: SEA-LION is a family of LLMs specifically pre-trained and instruct-tuned on billions of tokens of SEA language data. It covers 11 regional languages, including low-resource ones [28].
- Custom Tokenization: A key innovation highlighted in the research is the SEABPETokenizer. This custom tokenizer was trained specifically on regional data to represent SEA scripts efficiently. It reduces the token count for SEA languages, thereby lowering inference costs and latency, and effectively “democratizing” access to LLMs for the region [28].
- Hybrid Reasoning (v3.5): The latest iterations of SEA-LION (v3.5) incorporate hybrid reasoning capabilities. The literature notes that most models “reason” in English and then translate the answer. SEA-LION attempts to foster “native” reasoning capabilities, preserving the cultural logic and context of the region [29].
4.3 SEACrowd: Benchmarking Diversity
Collaborative AI requires a shared standard of measurement. If every country evaluates models on different datasets, collaboration is impossible. The SEACrowd initiative is introduced in the literature as a major step toward standardizing AI evaluation in the region [30].
- A “Data Hub” for SEA: SEACrowd consolidates nearly 1,000 corpora across 36 indigenous languages and 13 distinct tasks (text, image, audio). It involves hundreds of collaborators from across the region.
- Strategic Impact: By creating a standardized benchmark (similar to the HuggingFace Open LLM Leaderboard but for SEA), SEACrowd incentivizes global researchers to test their models on SEA languages. It shifts the metric of success from “English proficiency” to “multilingual and cultural fluency,” forcing global tech giants to pay attention to the region’s specific needs [30].
5. Continuous Learning AI: Toward Artificial General Intelligence (AGI)
The final pillar represents the long-term scientific frontier: creating AI that can learn continuously over its lifetime, rather than being “frozen” after its initial training. This capability, known as Continuous Learning (or Lifelong Learning), is a prerequisite for Artificial General Intelligence (AGI).
5.1 The Problem of Catastrophic Forgetting
In standard deep learning, a model is trained once on a fixed dataset. If you try to teach it a new task (e.g., recognizing cats) after it has learned an old one (e.g., recognizing dogs), it typically overwrites the old knowledge. This phenomenon, known as Catastrophic Forgetting, is the primary barrier to AGI. An AGI must be able to accumulate knowledge incrementally, just as humans do [31].
5.2 Cognitive Architectures: Beyond the LLM
The literature reviews a resurgence of interest in Cognitive Architectures—structured frameworks that mimic the human brain’s organization—integrated with modern LLMs. The consensus is that an LLM alone (a “brain in a jar”) is insufficient for AGI; it needs memory and agency.
5.2.1 CoALA: A Blueprint for Language Agents
The Cognitive Architectures for Language Agents (CoALA) framework is a leading conceptual model reviewed in the literature. It synthesizes classical symbolic AI with modern deep learning [5]. CoALA posits that an intelligent agent should be composed of:
- Working Memory: The current context (what is happening now).
- Episodic Memory: A long-term record of past experiences (what happened yesterday).
- Semantic Memory: Facts and knowledge (what is true about the world).
- Procedural Memory: Skills and code (how to perform actions).
Significance: By separating “reasoning” (the LLM) from “memory” (storage), CoALA allows agents to “remember” by retrieving from episodic memory and “learn” by updating procedural memory. This mimics human cognitive processes and offers a pathway to solve catastrophic forgetting without retraining the entire model [32].
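The following structural sketch is one interpretation of this separation, not the paper’s reference implementation: the LLM call handles reasoning, while four explicit stores persist state, so the agent “learns” by writing to memory rather than by updating weights. All names and the retrieval rule are illustrative.

```python
# Structural sketch of a CoALA-style agent [5]: reasoning (the llm callable)
# is separated from four memory stores. Retrieval and all names are
# illustrative placeholders, not the paper's implementation.
from dataclasses import dataclass, field

@dataclass
class CoALAAgent:
    working: list = field(default_factory=list)     # current context
    episodic: list = field(default_factory=list)    # record of past experiences
    semantic: dict = field(default_factory=dict)    # facts about the world
    procedural: dict = field(default_factory=dict)  # named skills / code

    def step(self, observation, llm):
        self.working.append(observation)
        recalled = self.episodic[-3:]               # naive retrieval stand-in
        action = llm(context=self.working, memory=recalled)
        self.episodic.append((observation, action)) # "learn" without retraining
        return action

agent = CoALAAgent(semantic={"domain": "customer support"})
reply = agent.step("User: where is my order?",
                   llm=lambda context, memory: "Let me check your order status.")
```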
5.2.2 CoALA vs. LIDA vs. AutoGPT
The review contrasts CoALA with other architectures:
- LIDA: Based on “Global Workspace Theory” and biological plausibility. While theoretically robust, it is difficult to implement with current GPU-based tools.
- AutoGPT: A popular open-source experiment that uses loops (Plan -> Act -> Observe). The literature critiques AutoGPT as brittle; it often gets stuck in loops because it lacks the structured memory and decision-making framework of CoALA [5]. CoALA is seen as the “scientific” version of the “engineering” experiments like AutoGPT.
5.3 Neurosymbolic AI and Self-Evolving Agents
The literature suggests that the next leap in continuous learning will likely be Neurosymbolic AI—combining the learning capability of neural networks with the reasoning capability of symbolic logic.
- Hybrid Power: Neural networks are good at perception (seeing patterns), while symbolic logic is good at reasoning (following rules). A hybrid system allows for incremental learning: new rules can be added explicitly to the symbolic side without disrupting the neural side [33] (see the sketch after this list).
- Self-Evolving Agents: Emerging research describes agents like STELLA, which can rewrite their own code and tools. These “self-evolving” systems represent a shift from static tools to dynamic collaborators that grow in capability over time. By “learning to code” their own updates, they can adapt to novel environments autonomously [34].
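A toy sketch of the neurosymbolic division of labor: a stand-in perception function emits predicate confidences, and an explicit rule base reasons over them, so new rules can be appended without retraining the perception side [33]. Every name here is a hypothetical placeholder.

```python
# Toy neurosymbolic sketch [33]: a stand-in "neural" perception function emits
# predicate confidences; an explicit symbolic rule base reasons over them.
# All names are hypothetical placeholders.
def perceive(image):
    # Stand-in for a neural network; returns predicate confidences.
    return {"has_wheels": 0.94, "has_wings": 0.02}

RULES = [
    ("vehicle", lambda p: p["has_wheels"] > 0.5),
    ("aircraft", lambda p: p["has_wings"] > 0.5),
]

def classify(image):
    predicates = perceive(image)
    return [label for label, rule in RULES if rule(predicates)]

# Incremental learning: append a new symbolic rule, no retraining required.
RULES.append(("ground_vehicle",
              lambda p: p["has_wheels"] > 0.5 and p["has_wings"] < 0.5))
print(classify(image=None))  # ['vehicle', 'ground_vehicle']
```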
6. Conclusion: The Convergence of Trust and Capability
The synthesis of these five pillars—Trust, Privacy, Efficiency, Collaboration, and Continuous Learning—reveals a coherent strategy for the next generation of AI. We are witnessing a transition from the “Big Data” era, characterized by massive, static, centralized, and opaque models, to a new era defined by “Smart Trust.”
In this new paradigm:
- Trust is the currency of adoption, operationalized through frameworks like AI Verify.
- Privacy is the architecture of collaboration, enabled by Federated Learning and Robust Data Valuation.
- Efficiency is the constraint that drives innovation, forcing a shift to Data-Centric and Green AI.
- Collaboration is the mechanism for scaling, utilizing regional platforms like SEA-LION and SEACrowd to bridge the linguistic divide.
- Continuous Learning is the path to sustainability, utilizing Cognitive Architectures to build agents that evolve.
For Singapore, this literature review confirms the viability and foresight of its National AI Strategy. By focusing on “High-Trust, Resource-Efficient AI,” the nation leverages its constraints (small size, sensitive data) as strategic assets. It positions itself not as a competitor in the “brute force” arms race of parameter scaling, but as a global laboratory for safe, sustainable, and collaborative intelligence. The successful deployment of initiatives like AI Verify and SEA-LION suggests that this strategy is moving effectively from theoretical framework to technical reality, offering a replicable model for other nations navigating the complex future of AI.
References
1. Arnold, Z., et al.: Examining Singapore’s AI Progress. Center for Security and Emerging Technology (CSET), Georgetown University (2020).
2. Infocomm Media Development Authority (IMDA): AI Verify: AI Governance Testing Framework and Toolkit. Singapore Government (2022).
3. Xu, X., et al.: Validation Free and Replication Robust Volume-based Data Valuation. In: Ranzato, M., et al. (eds.) Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp. 10837–10848. Curran Associates, Inc. (2021).
4. AI Singapore: SEA-LION: Southeast Asian Languages In One Network. Technical Report (2023).
5. Sumers, T., et al.: Cognitive Architectures for Language Agents. arXiv preprint arXiv:2309.02427 (2023).
6. Schuett, J.: Juxtaposing approaches to risk-based AI governance in different ‘rights’ contexts: A comparative analysis between Singapore and the EU. Singapore Management University School of Law Research Paper No. 13/2021 (2021).
7. Personal Data Protection Commission (PDPC): Model Artificial Intelligence Governance Framework. Singapore Government (2020).
8. Chesterman, S.: Governing intelligence: Singapore’s evolving AI governance framework. Cambridge Forum on AI: Law and Governance (2023).
9. Business Sweden: Artificial Intelligence in Singapore: Unlocking Opportunities (2023).
10. Tjoa, E., Guan, C.: A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Transactions on Neural Networks and Learning Systems 32(11), 4793–4813 (2021).
11. Capgemini Research Institute: Trust and human-AI collaboration set to define the next era of agentic AI. Capgemini (2025).
12. KPMG International: Trust, attitudes and use of artificial intelligence. Global Report (2025).
13. S. Rajaratnam School of International Studies (RSIS): Trust as a Strategic Asset: AI and Domestic Confidence in Singapore. RSIS Policy Report (2025).
14. Gille, F., et al.: Patient and Public Willingness to Share Personal Health Data for Third-Party or Secondary Uses: Systematic Review. Journal of Medical Internet Research 26, e50421 (2024).
15. Acquisti, A., et al.: Secondary Market Monetization and Willingness to Share Personal Data. Management Science (2022).
16. Houssiau, F., et al.: The Role of Privacy Guarantees in Voluntary Donation of Private Health Data. arXiv preprint arXiv:2407.03451 (2024).
17. McMahan, B., et al.: Communication-Efficient Learning of Deep Networks from Decentralized Data. In: Artificial Intelligence and Statistics, pp. 1273–1282. PMLR (2017).
18. Liu, Y., et al.: Regulating, implementing and evaluating AI in Singapore healthcare. Annals of the Academy of Medicine, Singapore (2022).
19. Wu, Q., et al.: SAFA: Handling Sparse and Scarce Data in Federated Learning With Accumulative Learning. IEEE Transactions on Computers (2025).
20. Li, B., et al.: SoK: Semantic Privacy in Large Language Models. arXiv preprint arXiv:2506.23603 (2025).
21. Ng, K.W., et al.: Efficient and Fair Data Valuation for Horizontal Federated Learning. In: Proceedings of the 39th International Conference on Machine Learning (2022).
22. Smart Nation Singapore: National Artificial Intelligence Strategy 2.0 (2023).
23. Whang, S.E., et al.: Data Collection and Quality Challenges in Deep Learning. Proceedings of the VLDB Endowment 13(12), 3429–3432 (2020).
24. Hospedales, T., et al.: Meta-Learning in Neural Networks: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(9), 5149–5169 (2021).
25. Liu, Y., et al.: Learning to Teach and Learn for Semi-Supervised Few-Shot Image Classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020).
26. Strubell, E., et al.: Energy and Policy Considerations for Deep Learning in NLP. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3645–3650 (2019).
27. Jacob, B., et al.: Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2704–2713 (2018).
28. AI Singapore: MERaLiON-AudioLLM: Technical Report. arXiv preprint arXiv:2412.09818 (2024).
29. Low, B., et al.: SEA-LION v3.5: Enhanced Language Models for Southeast Asia. AI Singapore Blog (2025).
30. Lovenia, H., et al.: SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages. In: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2024).
31. Parisi, G.I., et al.: Continual Lifelong Learning with Neural Networks: A Review. Neural Networks 113, 54–71 (2019).
32. Tank, D.: Cognitive Architectures for Language Agents (CoALA): Standard Method to Build AI Agents. Medium (2024).
33. Sarker, M.K., et al.: Neuro-Symbolic AI: A Survey. arXiv preprint arXiv:2105.05330 (2021).
34. Wang, G., et al.: Autonomous Horizons – LLM Agents for Strategic Planning and Execution. arXiv preprint arXiv:2403.xxxx (2025).