To evaluate the Bubblemetrix Index, a multidimensional instrument was constructed and pre-tested, with the initial version including five factors: “Echo Chamber Exposure”, “Algorithmic Curation Awareness”, “Source Diversity Perception”, “Ideological Homogeneity & Confirmation Bias”, and “Repetitive Interaction & Engagement”. Because preliminary data indicated low internal consistency in the “Echo Chamber Exposure” factor, and the removal of individual items did not raise reliability to an acceptable level, this factor was excluded from the instrument. The final instrument demonstrated good psychometric properties in pre-testing.
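The reliability screening described above is conventionally done with Cronbach's alpha, which compares the variance of individual items with the variance of their sum. The sketch below is illustrative only and does not use the study's data: the factor structure, loadings, and sample are simulated to show how a coherent factor yields high alpha while an inconsistent one (like the excluded “Echo Chamber Exposure” factor) does not.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses (hypothetical, not the study's data):
# four items sharing one latent factor vs. four unrelated items.
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 1))
coherent = latent + rng.normal(scale=0.5, size=(200, 4))
inconsistent = rng.normal(size=(200, 4))

print(f"coherent factor alpha:     {cronbach_alpha(coherent):.2f}")
print(f"inconsistent factor alpha: {cronbach_alpha(inconsistent):.2f}")
```

A common rule of thumb treats alpha above roughly 0.7 as acceptable; a factor whose items do not reach that threshold even after item removal is typically dropped, as was done here.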

A central theoretical implication of the results is that user agency alone cannot account for the observed patterns of ideological reinforcement on social media platforms. While prior research has variously emphasised users’ capacity to seek diverse information or the constraining role of algorithmic curation, the present findings suggest that these dimensions are closely intertwined. Users with high levels of algorithmic awareness, that is, those who explicitly recognise personalisation mechanisms, are not necessarily less vulnerable to ideological encapsulation. On the contrary, one of the most prominent user profiles identified in this study combines strong algorithmic awareness with elevated confirmation bias and repetitive engagement. This counterintuitive pattern challenges optimistic assumptions underlying media literacy–based approaches to mitigating polarisation and calls into question policy strategies that prioritise user education as a standalone solution (Bechmann & Nielbo, 2018; Dubois & Blank, 2018).

This study set out to examine how algorithmic personalisation shapes electoral perceptions by focusing on users’ experiences within filter bubbles. Through the development and application of the Bubblemetrix framework, the research provides empirical evidence that filter bubbles are structured, differentiated, and politically consequential phenomena. Rather than affecting users uniformly, algorithmic reinforcement emerges through specific configurations of awareness, engagement, and ideological alignment.

One of the most significant conclusions is that algorithmic awareness does not function as a protective factor against ideological encapsulation. Users who recognise personalisation mechanisms may nonetheless remain deeply embedded in self-reinforcing informational environments. This finding directly challenges policy approaches that prioritise media literacy and transparency as sufficient tools for mitigating polarisation (Bechmann & Nielbo, 2018; Dubois & Blank, 2018). While such measures are important, they are insufficient to counter structurally embedded dynamics of amplification.

The study further demonstrates that perceived source diversity does not necessarily weaken filter bubble effects. Exposure to multiple outlets can coexist with pronounced ideological homogeneity when algorithmic systems consistently prioritise engagement-congruent content. This insight has direct implications for regulatory strategies that equate diversity with pluralism without addressing how content is algorithmically selected and amplified (Bruns, 2019; Kaiser & Rauchfleisch, 2020).

From a governance standpoint, the findings underscore the need to reconceptualise filter bubbles as systemic risks rather than individual shortcomings. Approaches centred exclusively on user responsibility overlook the structural role of platform architectures in shaping political discourse. Effective mitigation therefore requires interventions that address engagement-driven ranking systems and feedback loops that entrench ideological segmentation.

The identification of distinct user profiles further suggests that regulatory approaches should account for heterogeneity in vulnerability and engagement patterns. Highly engaged users exhibiting strong ideological reinforcement may require more robust safeguards than users who interact selectively with political content. This observation raises broader questions about proportionality, responsibility, and personalisation in platform governance.

Beyond its substantive findings, the study offers a methodological contribution by demonstrating how psychometric instruments and person-centred analyses can inform policy debates. The Bubblemetrix framework provides a means of assessing algorithmic risks in a manner that is empirically grounded and normatively relevant, supporting the development of evidence-based governance strategies.
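Person-centred analyses of the kind invoked above group respondents by their configurations of factor scores rather than by single variables. The sketch below is a minimal illustration, not the study's actual procedure: the two simulated profiles (an “aware but encapsulated” group with high awareness, homogeneity, and engagement, and a “selective engager” group) and their score means are hypothetical, and a basic k-means routine stands in for whatever clustering or latent profile method the framework employs.

```python
import numpy as np

# Hypothetical factor scores on four retained factors (1–5 scale):
# awareness, source diversity, homogeneity/bias, repetitive engagement.
rng = np.random.default_rng(0)
profile_a = rng.normal(loc=[4.2, 2.5, 4.0, 4.1], scale=0.4, size=(120, 4))
profile_b = rng.normal(loc=[2.0, 3.8, 2.2, 1.9], scale=0.4, size=(80, 4))
X = np.vstack([profile_a, profile_b])

def kmeans(X: np.ndarray, k: int, n_iter: int = 50, seed: int = 0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each respondent to the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids as the mean profile of each cluster.
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels

centroids, labels = kmeans(X, k=2)
for j, c in enumerate(centroids):
    print(f"profile {j}: n={int(np.sum(labels == j))}, factor means={np.round(c, 2)}")
```

The recovered centroids summarise each profile's characteristic configuration, which is what makes such analyses useful for differentiated governance: safeguards can be targeted at the profile combining high awareness with high reinforcement rather than applied uniformly.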
