Users’ perceptions of algorithmic communicative agency in the United Arab Emirates: An exploratory study of the impact of recommendation and trending systems on digital public discourse
Abstract
Digital platforms now serve as crucial spaces for public communication, shaped heavily by algorithmic recommendation systems that determine visibility and engagement. This study examines user perceptions of algorithmic communicative agency in the context of the United Arab Emirates, using an exploratory survey of two samples: general users and digital content creators. Rather than auditing algorithms directly, it emphasizes perceived influence, because users’ beliefs about algorithms often shape their trust and engagement more than their technical understanding does. The survey assessed several dimensions, including perceived agency, diversity of exposure, polarization, and the credibility of algorithmically promoted content. Results show that both groups see platform algorithms as active agents shaping visibility and discourse. Stronger perceptions of algorithmic agency correlate with reduced information diversity, heightened polarization, and greater conformity in expression, alongside growing concern about toxic interactions. Algorithmically promoted content is viewed with ambivalence, revealing a complex relationship between perceived legitimacy and concerns about bias. Content creators in particular report a stronger algorithmic impact on their visibility and reach. The study underscores the need for user-centered insights in debates over platform regulation and digital literacy, and lays the groundwork for larger-scale research on algorithmic influence in the Arab public sphere.
Article Details

This open-access article is distributed under a Creative Commons Attribution (CC BY) 4.0 International License.
You are free to: Share — copy and redistribute the material in any medium or format. Adapt — remix, transform, and build upon the material for any purpose, even commercially. The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms: Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
No additional restrictions: You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.