Is the world of fake growing in an AI-driven consumer space?

The world isn’t black and white. Grey areas always play a role; nothing is 100% good or bad.


AI, for instance, presents an array of benefits to businesses and the consumer spaces they serve. The various technologies at play (e.g., machine learning and generative models) can help companies deliver more streamlined, personalised products, services, and content on a mass scale.

Given the greyness of society, though, it should come as no surprise that AI has its detractors and pitfalls.


Look no further than what AI actually is. It’s artificial intelligence. That means it’s manufactured and not organic.


While AI can undoubtedly generate tangible, 100% real results, it is–by its very nature–artificial.


AI’s artificial state of being isn’t a knock against the technology–it’s merely a fact.

Moreover, it’s a factor we must always weigh whenever we incorporate the technology into our day-to-day practices. We should also consider these notions before relying on AI as a failsafe solution for various functions and procedures.

Can artificial intelligence create an artificial existence?

You don’t need to go far back in time to find an era when we could trust that the things we saw were 100% authentic. Photoshopping and even more antiquated image-editing methods never took away from the fact that an original image existed.


In recent years, AI advancements have flipped the above sentiments on their head. The related technologies can emulate our current reality in a manner that pulls the wool over even the keenest of eyes.


We’ve arrived at a phase in AI’s evolution where people can’t be 100% certain whether what they’re hearing or seeing is genuine or an AI/machine-learning fabrication.


None of this is meant as doom and gloom. This level of precision in emulating “real things” can be, and is being, used for society’s betterment.


However, inherent artificiality raises inevitable questions about deception.

What makes AI fabrication ethical and acceptable?

Fabrication and deception aren’t the same. People won’t feel hoodwinked by an AI fabrication when they’re aware of it.


Look no further than the entertainment industry. Over the past decade, we’ve seen the dawn of “Deepfakes”, in which younger versions of older actors are generated to match their appearances in iconic roles from decades ago. Star Wars is the most immediate example that comes to mind, using AI fabrication to recreate younger versions of Carrie Fisher and Mark Hamill in recent additions to the IP.


Nowadays, though, Hollywood is taking its use of AI fabrication to a new, and potentially scary, place. Entertainment executives have begun utilising algorithms and machine learning to parse vast pools of data. They’re using these tools to help them more quickly grasp the components required to generate award winners and blockbuster hits.


That said, as long as use cases have AI playing a pivotal role as a supplemental tool instead of a primary driver of creative content, the human touch won’t be lost.


Below, we’ll explore other scenarios where AI fabrication exists in a grey area and how it can be used for good. We’ll also examine how it could be applied to create a fake existence.

Is Smart Content too smart?

Personalised (or smart) content can transform depending on who’s viewing, listening to, or reading it. Both TikTok and Netflix are testing this technology. Its potential to trick users is evident; however, such innovation can just as easily empower people.


Tailoring content to someone’s unique tastes and needs with algorithms isn’t a brand-new concept. Search engines have been doing this for a while. However, the content itself is now changing based on who’s consuming it.
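For the technically curious, here is a minimal, hypothetical Python sketch of the scoring step behind that kind of personalisation: each user and each piece of content gets a handful of weighted interest tags, and content is ranked by how well it overlaps with the user’s profile. The titles, tags, and weights below are purely illustrative; real platforms use far richer signals and learned models.

# Toy personalisation: rank content by how well its tags match a user's interests.
user_profile = {"science": 0.9, "film": 0.6, "sport": 0.1}

catalogue = [
    {"title": "New telescope images released", "tags": {"science": 1.0}},
    {"title": "Blockbuster sequel announced", "tags": {"film": 0.8, "science": 0.1}},
    {"title": "League final preview", "tags": {"sport": 1.0}},
]

def score(item, profile):
    # A simple dot product between the item's tags and the user's interests.
    return sum(weight * profile.get(tag, 0.0) for tag, weight in item["tags"].items())

# Highest-scoring items are shown first for this particular user.
for item in sorted(catalogue, key=lambda i: score(i, user_profile), reverse=True):
    print(round(score(item, user_profile), 2), item["title"])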

AI generated image, © Midjourney artist Danny Song


Voice Emulation blurring the lines between real and fake

Machine learning has made it possible to mimic someone’s voice with the help of mere audio snippets. While such a tool would be genuinely valuable for fixing misspoken lines in television or film without reshoots, it can just as easily be abused to manipulate consumers.
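As a rough illustration of only the first step such systems take, the sketch below uses the open-source librosa library to turn a short audio snippet into MFCC features, a crude numerical “fingerprint” of a voice. The file name is hypothetical, and real voice-cloning systems layer large neural models on top of features like these.

import librosa
import numpy as np

# Load a short (hypothetical) snippet of the target speaker at 16 kHz.
audio, sr = librosa.load("speaker_snippet.wav", sr=16000)

# Mel-frequency cepstral coefficients: a compact description of vocal timbre.
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)

# Averaging over time gives a crude per-snippet "voice fingerprint".
fingerprint = np.mean(mfcc, axis=1)
print(fingerprint.shape)  # (20,)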

Generating fake images that look 100% authentic

Take a quick visit to the “This Person Does Not Exist” website. All faces on the website look 100% authentic, but they’re entirely AI-generated. Whichfaceisreal.com performs a similar function, proving how AI can raise questions about what’s actually real or artificial.


In both instances, tell-tale signs of AI-generated faces exist–but the tech remains convincing. Moreover, misuses of the tech remain concerning.
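For readers curious about the mechanics, sites like these are typically powered by generative adversarial networks (GANs): a generator network turns random noise vectors into images. The untrained PyTorch sketch below only shows the shape of that idea; a real system such as StyleGAN uses a vastly larger, carefully trained network.

import torch
import torch.nn as nn

latent_dim = 128            # size of the random "noise" vector fed to the generator
image_pixels = 3 * 64 * 64  # a small 64x64 RGB image, flattened

# Untrained, illustrative generator: noise in, image-shaped tensor out.
generator = nn.Sequential(
    nn.Linear(latent_dim, 512),
    nn.ReLU(),
    nn.Linear(512, image_pixels),
    nn.Tanh(),  # pixel values in [-1, 1], as most GAN generators produce
)

z = torch.randn(1, latent_dim)             # sample a random latent vector
fake_image = generator(z).view(3, 64, 64)  # reshape the output into an image
print(fake_image.shape)                    # torch.Size([3, 64, 64])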

The complex case of deepfake text

OpenAI, an American artificial intelligence research laboratory, created an AI model called GPT-2, which generates text matching the tone and style of its training data, emulating anything from a newsfeed item to a work of fiction or any other form of writing.
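For a sense of how accessible this kind of generation has since become, the short sketch below uses the open-source Hugging Face Transformers library to continue a prompt with a publicly available GPT-2 checkpoint. The prompt is purely illustrative, and the sampled outputs will differ on every run.

from transformers import pipeline

# Load the smallest publicly available GPT-2 checkpoint for text generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "In consumer technology news today,"
results = generator(prompt, max_length=60, num_return_sequences=2, do_sample=True)

# Each result continues the prompt in the style GPT-2 learned from its training data.
for result in results:
    print(result["generated_text"])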


We’ve recently seen the evolution of this innovation in the form of ChatGPT, which has opened Pandora’s box.


Unsurprisingly, the full model wasn’t initially released to the public, since its realistic results raised concerns about misuse.

The advent of fake videos being presented as real

Anyone with a thirst for innovation or interest in digital transformation will naturally be enamoured by the capabilities of the new generation of AI. Creating realistic images, videos, and voices that blur the lines of reality is undoubtedly intriguing and even inspiring at first glance.


Yet, these technologies could easily be used maliciously. Look no further than the Deepfake tech we discussed earlier that utilises computer-generated video and audio to fabricate something that never happened.


Recent examples of nefarious Deepfake use include the faces of Scarlett Johansson and other well-known figures being superimposed into pornographic videos.


Lawmakers are also worried this technology can spread misinformation over the internet.


Look no further than this video of Barack Obama to see how audio/video can be altered to make it seem like an authoritative figure is saying something they never said. National security and personal reputations are at stake.

China’s state-run press agency, Xinhua, has introduced AI anchors that resemble real humans and present the daily news. From experienced experts to the general population, viewers struggle to tell that these anchors aren’t human.

Will AI send us spiralling into a world of fake?

Only time will tell where AI will take us as a society. Most importantly, we must remain aware of the potential setbacks and of how AI fabrications can manipulate people. Consumers, perhaps more than anyone, must remain vigilant, as unethical companies can leverage these technologies to drive our spending.
