Open Access

Re-coding Community: Designing AI-Native Platforms for Trust, Belonging, and Collective Agency

Miami, USA

Abstract

This article analyzes the fundamental challenges posed by the progressive erosion of trust and the weakening of collective agency in digital communities, and develops an integrated paradigm for designing AI-native platforms that counteract these effects. The relevance of the study stems from a paradoxical configuration of the current technological landscape: while generative artificial intelligence (GenAI) is being rapidly deployed in the corporate sector (71% of companies reported using such solutions by mid-2024), user anxiety and concern remain high (82% in 2025), which limits opportunities for scaling, hinders the formation of sustainable practices of joint action, and undermines the accumulation of social capital. The aim of the work is to develop a conceptual Hybrid AI-Based Community Governance Architecture (HCA-Architecture) capable of institutionalizing structural trust and expanding collective agency through the redistribution of roles between human participants and AI agents. The methodological basis of the study is an interdisciplinary synthesis that combines a systematic literature review across leading scientific databases (Scopus, WoS, ACM, IEEE) with a comparative analysis of empirical data on decentralized forms of governance (DAOs) and practices of human–algorithm interaction. Within the proposed approach, a model of the AI-Native Community Flywheel (AICF) is constructed, which provides a framework for describing and calibrating key mechanisms of coordination, attention allocation, and infrastructural trust.
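The three calibration axes named above can be made concrete with a minimal sketch. The class name, field names, weights, and composite score below are illustrative assumptions for this summary, not part of the published AICF model:

```python
from dataclasses import dataclass

# Hypothetical sketch of the AICF's three calibration axes: coordination,
# attention allocation, and infrastructural trust. All names and weights
# here are assumptions made for illustration.

@dataclass
class AICFState:
    coordination: float            # 0..1, quality of joint action among members
    attention_allocation: float    # 0..1, alignment of attention with community goals
    infrastructural_trust: float   # 0..1, reliability of platform-level guarantees

    def composite(self, w=(0.4, 0.2, 0.4)) -> float:
        """Weighted calibration score; the weight vector is an assumed example."""
        return (w[0] * self.coordination
                + w[1] * self.attention_allocation
                + w[2] * self.infrastructural_trust)

state = AICFState(coordination=0.8, attention_allocation=0.5, infrastructural_trust=0.9)
print(round(state.composite(), 2))  # 0.78
```

A real calibration would derive such weights empirically rather than fixing them by hand; the sketch only shows how the three mechanisms could be combined into a single diagnostic signal.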
The final part of the work demonstrates that the proposed framework makes it possible to recode the algorithmic incentives of digital platforms: from a logic of maximizing attention retention and monetization to a logic of maximizing collective coordination, reliability of interactions, and the reproduction of trust, which is a necessary condition for the sustainable development of digital public spheres. The presented results and the developed architecture are intended for application in research on Human–Computer Interaction, in the design and development of Web3 platforms, in practices of algorithmic governance, and in the architecture of DAO systems, wherever formalized mechanisms for maintaining trust and distributed agency are required under conditions of high algorithmic mediation.
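The contrast between the two incentive logics can be illustrated with a toy ranking example. The item data, signal names, and weightings below are invented for illustration and do not come from the article:

```python
# Toy contrast between an engagement-maximizing ranker and a
# coordination/trust-weighted ranker, illustrating the "recoding" of
# algorithmic incentives described above. All data and weights are invented.

items = [
    {"id": "viral-thread",       "engagement": 0.95, "coordination": 0.10, "trust": 0.20},
    {"id": "working-group-call", "engagement": 0.30, "coordination": 0.90, "trust": 0.85},
    {"id": "governance-vote",    "engagement": 0.45, "coordination": 0.80, "trust": 0.90},
]

def rank_by_engagement(feed):
    # Logic of maximizing attention retention: sort by raw engagement.
    return sorted(feed, key=lambda x: x["engagement"], reverse=True)

def rank_by_collective_value(feed, w_coord=0.6, w_trust=0.4):
    # Recoded logic: sort by a weighted blend of coordination and trust signals.
    return sorted(feed,
                  key=lambda x: w_coord * x["coordination"] + w_trust * x["trust"],
                  reverse=True)

print(rank_by_engagement(items)[0]["id"])        # viral-thread
print(rank_by_collective_value(items)[0]["id"])  # working-group-call
```

The same inventory yields opposite front-page items under the two objectives, which is the mechanism by which re-weighting the ranking function "recodes" what the platform structurally rewards.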

Keywords

References

AI 2025 statistics: Where companies stand and what comes next. (2025). Aristek Systems. https://aristeksystems.com/blog/whats-going-on-with-ai-in-2025-and-beyond/ (Retrieved December 7, 2025)
As generative AI gains ground, consumers choose the innovators they trust. (2025). Deloitte. https://www.deloitte.com/us/en/about/press-room/connectivity-mobile-trends-survey.html (Retrieved December 7, 2025)
A dynamical measure of algorithmically infused visibility. (2025). PubMed Central (PMC). https://pmc.ncbi.nlm.nih.gov/articles/PMC12646761/ (Retrieved December 7, 2025)
Trust and AI weight: Human-AI collaboration in organizational psychology. (2025). Frontiers in Organizational Psychology. https://www.frontiersin.org/journals/organizational-psychology/articles/10.3389/forgp.2025.1419403/full (Retrieved December 7, 2025)
Algorithmic amplification for collective intelligence. (2025). Knight First Amendment Institute. https://knightcolumbia.org/content/algorithmic-amplification-for-collective-intelligence (Retrieved December 7, 2025)
The state of AI in 2025: Agents, innovation, and transformation. (2025). McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai (Retrieved December 7, 2025)
AI governance: Themes, knowledge gaps and future agendas. (2025). Emerald Publishing. https://www.emerald.com/intr/article/33/7/133/178343/AI-governance-themes-knowledge-gaps-and-future (Retrieved December 7, 2025)
The ethics of artificial intelligence: Issues and initiatives. (2020). European Parliament. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf (Retrieved December 7, 2025)
Understanding algorithmic bias and how to build trust in AI. (2025). PwC. https://www.pwc.com/us/en/tech-effect/ai-analytics/algorithmic-bias-and-trust-in-ai.html (Retrieved December 7, 2025)
Social capital in the creation of AI perception. (n.d.). ResearchGate. https://www.researchgate.net/publication/340211362_Social_capital_in_the_creation_of_AI_perception (Retrieved December 7, 2025)
Development and validation of the sense of online community scale. (2025). ERIC – Education Resources Information Center. https://files.eric.ed.gov/fulltext/EJ1410763.pdf (Retrieved December 7, 2025)
DAOs and token-based governance: A case study analysis. (2025). Medium. https://medium.com/@syedhasnaatabbas/daos-and-token-based-governance-a-case-study-analysis-76e439e90c0d (Retrieved December 7, 2025)
Proceedings of the JPS Conference. (2025). JPS Journals. https://journals.jps.jp/doi/abs/10.7566/JPSCP.44.011008 (Retrieved December 7, 2025)
The algorithmic hand: Artificial intelligence, democracy, and collective action at scale. (2025). ePrints Soton. https://eprints.soton.ac.uk/506915/1/2025-02_The_Algorithmic_Hand_-_FINAL.pdf (Retrieved December 7, 2025)
Democracy for DAOs: An empirical study of decentralized governance and dynamics—Case study Internet Computer SNS ecosystem. (2025). arXiv. https://arxiv.org/html/2507.20234 (Retrieved December 7, 2025)
Sovereign snapshot – AI in a tribal context: A brief review of the literature. (n.d.). University of Oklahoma. http://www.ou.edu/nativenationscenter/research/sovereign-snapshot-ai-in-a-tribal-context.html (Retrieved December 7, 2025)
Flywheel: A new digital marketing model. (n.d.). ResearchGate. https://www.researchgate.net/publication/380042282_Flywheel_A_New_Digital_Marketing_Model (Retrieved December 7, 2025)
Ensuring Indigenous Peoples' rights in the age of AI. (2025). United Nations – DESA. https://social.desa.un.org/issues/indigenous-peoples/news/ensuring-indigenous-peoples-rights-in-the-age-of-ai (Retrieved December 7, 2025)
Adaptive onboarding: eLearning as a customized tool. (n.d.). eLearning Industry. https://elearningindustry.com/adaptive-onboarding-elearning-as-a-customized-tool (Retrieved December 7, 2025)
The benefits of an adaptive, personalized onboarding strategy. (n.d.). eLeaP LMS. https://www.eleapsoftware.com/the-benefits-of-an-adaptive-personalized-onboarding-strategy/ (Retrieved December 7, 2025)
A better way to build a brand: The community flywheel. (2025). McKinsey & Company. https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/a-better-way-to-build-a-brand-the-community-flywheel (Retrieved December 7, 2025)
INTO transforms the flywheel effect into reality with its new Web3 engine. (2025). Medium. https://medium.com/@intoverse/into-transforms-the-flywheel-effect-into-reality-with-its-new-web3-engine-fb853b56c097 (Retrieved December 7, 2025)
DAO-AI: How agentic systems learn to vote. (2025, November). Medium – DeXe Protocol. https://dexenetwork.medium.com/dao-ai-how-agentic-systems-learn-to-vote-38aece6f55a9 (Retrieved December 7, 2025)
