The Mandela Effect: can we trust our memories?

Human lives can be summed up by memories, since memories are our reliable connections to the people and things around us. Humans have always relied on memory to give meaning and names to things. Independence Day, for example, is a major day in the history of any country, celebrated by remembering the people and events that led to the country's freedom.

The account of that independence, however, becomes unreliable or false if a segment of the population holds that the independence never took place, or if a group of people remembers the events leading up to it incorrectly. This is referred to as the Mandela Effect.

The Mandela Effect, as Holly Schiff, a licensed clinical psychologist in Greenwich, Connecticut, puts it, "occurs when many different people incorrectly remember the same thing, so basically a collective false memory." It is a real phenomenon, prevalent in popular culture, and characterized by shared and consistent false memories.

The effect is named for the well-known South African politician and anti-apartheid activist Nelson Mandela, whom many people still remember as having died in prison in the 1980s, despite plenty of evidence to the contrary. After being freed in 1990, Mandela ran for president of South Africa and won in 1994. He died in 2013. The Mandela Effect has been found to be characterized by:

Occurrence of false memories

False contextualization of an event that occurred

Misremembering how words or names are spelled

Distortion of existing memories, and

Inability to find an explanation for the false memory.

A new study shows that the internet has played a huge role in the spread of the Mandela Effect: by making it easy to share information, it has allowed misconceptions and false memories to keep spreading. Closely related are "deepfakes," altered representations of events or people created using digitally or artificial intelligence (AI) generated data and imagery. Deepfakes are now all over the internet and social media, especially with the development of AI, and because they present altered images or events as the truth, they spread extensively and are afterwards treated as fact.

Many of the things or occurrences subject to the Mandela Effect would be important enough to alter the course of human history if they were real, and they can be frightening for those who find it difficult to accept the truth.

There are numerous examples of tales that have changed as a result of being widely shared as fact, much like the Mandela example. One frequent instance is a line from the Snow White fairytale: many people are sure the queen says "Mirror, mirror on the wall," but the correct line is "Magic mirror on the wall," and several movies based on the story use the misremembered version rather than the original, reinforcing the error.

The risks of deepfakes are generally underappreciated, because a significant portion of the material shared or taught nowadays comes from the internet and is frequently modified or changed. The Mandela Effect is also said to be a product of the way our brains are wired, a tendency known as cognitive bias: our minds are frequently influenced by what other people think and by our own pre-existing ideas. Cognitive bias is also at work when someone accepts a particular explanation of an event just because it is presented by a well-known source, even though it is not accurate.

This raises the questions: "Can we trust the information on the internet to be true?" and "How do we spot deepfakes?" The internet today, which is tilting more and more towards AI-generated information, cannot be entirely trusted to provide a true account of events. Recent reports have shown that the popular AI chatbot ChatGPT provides wrong answers to certain questions. Regardless of calls for regulation of this new technology, tech companies keep venturing into these waters and building products and features around it; the most recent is Google's announcement of its AI chatbot, Bard.

Deepfakes can alter human memories and how people remember things, increasing the spread of the Mandela Effect. They could create fake news stories that are believed to be real, or fake reports of terror attacks or natural disasters that throw people into a state of panic, with the aim of driving a socio-political agenda or government campaign.

Deepfakes could even help sway public opinion about political candidates, and many social media platforms, such as Facebook and Twitter, have deployed measures to combat false news and stories during and after political campaigns. For example, in 2018, ahead of the U.S. midterm elections, a deepfake video of Barack Obama calling then-president Donald Trump a bad name went viral. The video, which was entirely fake, was created by Jordan Peele, an Oscar-winning director. Many of Donald Trump's supporters nevertheless believed the video and took to social media to express their displeasure, which could have escalated into an all-out protest or worse.

The dangers of deepfakes are numerous. They can be used in many other ways, such as altering historical footage to fit a particular narrative. They can also drive cyberbullying or propaganda campaigns, for example when a celebrity's fanbase circulates doctored images aimed at defaming another celebrity. Deepfakes could also be used to fabricate scientific evidence to back a false hypothesis; false information about climate change is spreading today despite the verified evidence that it is caused by human activity.

Information on the internet can be cross-checked to see whether it is a deepfake or otherwise false. Ways to spot a deepfake include:

Look for facial distortions or unnatural transformations.

Verify the source of the information by looking for another version of the same story and comparing the two.

Zoom in on the image or rewatch the video to check the mouth and lip movements for alterations. Subject it to audio analysis, which can help detect AI-generated or altered voices.

Look out for inconsistencies in the surroundings, such as lighting and reflections that do not match the subject.

Check the video's metadata, which usually provides the time and place of recording (see the sketch after this list).
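As a concrete illustration of the metadata check, the minimal Python sketch below uses ffprobe (part of the FFmpeg toolkit, assumed here to be installed and available on the system path) to dump a video's container metadata, such as its creation time and encoder tags. The specific tags inspected are illustrative assumptions rather than a definitive forensic procedure; missing or freshly rewritten metadata does not prove a video is fake, but it is a useful first red flag.

```python
# Minimal sketch: inspect a video's container metadata with ffprobe
# (assumes the FFmpeg suite is installed and ffprobe is on PATH).
import json
import subprocess
import sys


def probe_metadata(path: str) -> dict:
    """Return the container and stream metadata ffprobe reports for `path`."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


if __name__ == "__main__":
    info = probe_metadata(sys.argv[1])

    # Container-level tags: creation time and encoder are common red-flag fields.
    tags = info.get("format", {}).get("tags", {})
    print("creation_time:", tags.get("creation_time", "<missing>"))
    print("encoder:", tags.get("encoder", "<missing>"))

    # Listing the streams helps spot re-encoded or spliced audio/video tracks.
    for stream in info.get("streams", []):
        print(stream.get("codec_type"), stream.get("codec_name"))
```

Running a downloaded clip through a quick check like this, alongside reverse image searches and audio analysis, makes it easier to decide whether a piece of footage deserves closer scrutiny before it is shared.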