Content Warning: Racism, classism, eating disorders, self-harm, body dysmorphia, misogyny.
If I were to try to explain the use of social media algorithms to someone from the year 2000, or even 2010, it would sound dystopian. But systems that analyse our online interactions and serve each user a perfectly curated feed of content have become entrenched in our daily lives. As Yuval Noah Harari says in the video below, with sufficient biometric data, a technology company can now ‘create an algorithm that knows you better than you know yourself’.
With such a reliance on these technologies, we rarely take a minute to stop and reflect on the consequences for our well-being. In this piece, I’ll explore how algorithms perpetuate beauty standards on social media platforms like TikTok, the effect this can have on the mental health of young women today, and, most importantly, what we can do about it.
Beauty standards manifest themselves through an algorithm in two discernible ways.
The first comes from a social media company’s own agendas and policies, which revolve around the brand image the platform is trying to embody. This was recently exemplified by leaked documents confirming allegations that TikTok used its algorithm to suppress certain creators. According to The Intercept, moderators were instructed by the company to filter out users who they viewed as ‘unattractive, poor or disabled’. In doing so, the platform favoured content deemed more likely to be successful – i.e. posts featuring those who conformed to a narrowly defined margin of visual acceptability.
The more insidious way in which beauty standards are upheld, however, is largely driven by us as users. Our feeds (products of the algorithm) function as reflections of our own unconscious biases seeping into our online engagements. These conceptions of what we regard as ‘beautiful’, or not, are regularly informed by Eurocentric standards, meaning features like dark skin are viewed as less desirable. It is no surprise, therefore, that women of colour often internalise these biases from an early age. The result of this is that our feeds build on and unintentionally reinforce negative beliefs about ourselves, only exposing us to posts by users who conform to Eurocentric beauty standards.
The effects of these standards are particularly compounded by harmful online trends and tags. An early example was the ‘thigh gap’ fad circa 2014, which spread on Tumblr as part of the ‘thinspiration’ content that still plagues the Internet.
Today, problematic trends concerning physical characteristics are able to spread far more quickly by using the algorithm as a vehicle; recent obsessions include back profiles and hip dips. Under specific hashtags, users express distress or dissatisfaction at their characteristics, and in doing so, (inadvertently) spread negative body images. These are often features that many viewers would not otherwise have known they had, let alone felt they should be ashamed of. What is all the more tragic is the proliferation of supposed ‘solutions’ to these manufactured ‘problems’, some of which range from simply bizarre to outright dangerous. Examples include subliminals to reshape your facial structure, pricey facial creams to cure skin conditions, and corsets to constrict your waist and achieve an hourglass look. #nosejobcheck on TikTok, for example, a hashtag under which people share their rhinoplasties, has nearly 700 million views at the time of writing.
Of course, pathologising every aspect of the female body to capitalise on the insecurities of young women is by no means a new phenomenon.
In ‘The Beauty Myth’ (1990), Naomi Wolf refers to the endless pursuit of an unattainable physical ideal as the ‘ideology of self-improvement’, a concept which seems all the more pertinent in 2020. But these age-old societal pressures are rendered far more potent by the pervasive nature of today’s technology. With a personalised feed, we are shown not only relevant content that we enjoy, but also content that we have previously engaged with, for whatever reason. A feature that someone may never have paid attention to before can grow into an insecurity as it appears again and again in their feed. These insecurities are reinforced by echo chambers in the comment sections of posts, where other users also express concerns about said feature, propose ‘solutions’ and validate each other’s toxic thought processes.
Even though many of these posts will be intended as self-deprecating jokes, the reality is that they can still negatively influence someone else’s beliefs about themselves and lead to low self-esteem. In some instances, the process of falling down this online rabbit hole contributes to body dysmorphic disorder, and other mental illnesses such as depression and anxiety. When taking into consideration that many of those viewing these posts will be adolescents, the issue becomes all the more worrying. Rates of mental health conditions among this demographic have risen over the last few years, and multiple studies have demonstrated a link to social media use.
The solution to these issues seems deceptively simple – take a break from social media, and if you see a post you think affects you negatively, a) don’t interact with it (i.e. by watching, liking or sharing it) and b) click ‘not interested’.
While this approach makes perfect sense, it’s not entirely satisfactory. Firstly, it ignores the realities of a modern world where it is increasingly difficult for everyone, particularly young people, to opt out of digital technologies. Secondly, it fails to account for fundamental human nature, or rather, our tendency to magnify our flaws and disregard our positive attributes. Algorithms play right into our insecurities by enabling us to fixate on our supposed faults. Especially considering the sociocultural pressures on women to strive for beauty, the decision to disengage from content we know affects us is a psychological challenge.
And what about preventative measures from the platforms themselves? Well, looking to social media companies – who prioritise profit margins over ethical standards – for answers is ultimately an exercise in futility. If we can learn one thing from Microsoft’s AI Bot experiment, it’s that these technologies are not inherently malicious; they simply reflect what we put in. So, it falls to users to reconstruct the current narrative and to dismantle hegemonic beauty standards. We need to advocate for cultural change, both online and offline, that enables young women to appreciate that their self-worth does not depend on the extent to which they conform to these ideals.
There are many creators who help to cultivate more diverse and welcoming environments, and who use their platforms to promote body positivity and neutrality. Jameela Jamil began the ‘I Weigh’ movement, which is about ‘radical inclusivity, so that no one feels alone’. Radhika Sanghani created the #sideprofileselfie, which helps to normalise all kinds of side profiles, especially those of women of colour. Danae Mercer posts unedited photos and raises awareness of how images can be manipulated to produce false or unrealistic depictions of people. These are only a few names of many. By collectively uplifting these creators, we can help to drown out negative voices in a sea of positivity.
There is also enormous potential to use hashtags tactically, flooding areas of social media usually reserved for toxic trends with uplifting content. This reduces exposure to negative content; when users click on the tag, they will be confronted with a wider context that reminds them that the trend may not be healthy, or even realistic.
Of course, individual streams of content will still be up to individual users. But, by creating an informative campaign focused on the power of clicking ‘not interested’, we can educate people about gradually overhauling their feeds. Hopefully, the next young woman who sees a toxic post will know she has the option to scroll away. After all, algorithms and social media are here to stay: it’s time we use them to our advantage.
Written by Shayahi Nathan
Edited by Eden Szymura
Get Help and Support: