
Artificial generative intelligence risks a return to cultural colonialism


Apr 25, 2023
The explosive rise in the popularity of AI chatbots has brought us to a pivotal moment that will have lasting effects on our collective worldview. ChatGPT is now the fastest-growing app in history, with over 100 million users and over 10 million queries a day. Fortune 500 companies are adopting it with little consideration of the long-term implications, creating a flywheel effect that multiplies the technology's reach with each new adopter.

Generative AI can have unintended consequences, and one of the most significant is the power it bestows upon the culture that creates it. The unprecedented popularity of ChatGPT has, ironically, coincided with OpenAI's shift to a for-profit, closed-source company that exercises tighter control over ChatGPT and discloses little about its training approach, as evidenced by the recent release of GPT-4. As this technology proliferates, so does its ability to propagate misinformation with alarming confidence. Left unchecked, its implicit biases pose a risk of undoing decades of progress toward a diverse and multicultural society.

The warning lights are flashing; it is imperative that those producing this technology acknowledge and take responsibility for its potentially profound impact on society.

ChatGPT is biased — just ask it

AI is not a neutral entity. ChatGPT, for instance, acknowledges that its responses are inherently biased and laden with implicit prejudices. This is primarily due to the fact that the dataset used to develop it is replete with human biases and prejudices.

ChatGPT was developed through human-designed algorithms that learn from the digital world, which disproportionately reflects the most interconnected and digitized societies — a skew that will inevitably exacerbate the digital divide. Additionally, OpenAI employees must manually patch the model to prevent ChatGPT from producing inappropriate responses. The judgment of its creators guides it to behave in ways that make them feel comfortable and confident. Consequently, the model's views on political issues, the state of society and technology, and its preferences for activities (affluent pursuits such as vacationing in Hawaii or visiting national parks in California, as hypothetical examples) come not from a universal perspective but from specific points of view. In short, ChatGPT's value system reflects the value systems of the people who developed it.

Why is this problematic? Because it imposes a particular set of principles on individuals around the globe who use the technology. People have vastly different lives, economies, cultures and understandings of the world, and these are not reflected in the output. ChatGPT aggregates language data from a vast number of authors to create its answers to questions. Whenever large amounts of data are aggregated and statistically modeled, the representation of marginalized groups tends to be minimized. We risk erasing the details of the global tapestry by unifying it under a singular algorithmic vision.

It’s not just the function of the tool that pushes toward this outcome; it’s also the interface. The modern digital age has conditioned us to embrace the safety of our online interactions, predisposing us to assume benign intent from the services that power our everyday lives. When was the last time you read and understood a EULA you were asked to accept? Or checked who made the edits to the Wikipedia page where you gathered information? The attractive interfaces of search engines, social media platforms and now generative AI applications build a sense of trust in the user, which suppresses our curiosity about the commercial or ethical intent of the service.

Nobody would have predicted that ChatGPT or its competitors would explode in popularity so quickly. More time is needed to build awareness of the risks inherent in their wide adoption. With recent tech layoffs in the field of AI ethics and safety (like those at Microsoft), and less transparency from OpenAI as ChatGPT's development moves forward, there are no clear guardrails governing how ChatGPT can impact the world. Widespread public adoption, combined with a lack of corporate transparency and self-regulation, poses the threat that the global population will inadvertently catalyze a monolithic worldview that overrides the myriad expressions of human culture.

ChatGPT and digital colonialism

Cultural assimilation and propaganda have long been used as tools to impose certain values on populations. Colonized countries were often made to dismiss their cultural heritage and adopt the rules and standards of their suzerains.

With ChatGPT’s global reach, the risk is that its output will be seen as fact and the standard to follow, tacitly propagating the values and worldviews of its designers. This is not a new phenomenon in human history. For example, when the British introduced Western medicine into India, they discouraged and invalidated older practices of traditional Indian medicine. During the colonization of North America, much of the indigenous population was forced to abandon its native languages and adopt Christianity in education and commerce, which in turn homogenized communities and population groups.

ChatGPT has an even greater capability to erase the cultural identity of non-western views through its easy accessibility at massive global scale. When a student on another continent uses ChatGPT to ask a question, the answer that comes back is instantaneously filtered through algorithms built on an aggregated, singular point of view. The tool does not have the nuance to distinguish the user’s personal experience, core familial values or differing worldview. Yet the presuppositions to which those results lead will have a profound cultural impact. Users who don’t share the same western upbringing risk unintentionally putting themselves in a position to be brainwashed.

However, creating countless echo chambers and isolating individuals within them is not the solution. Rather than haphazardly promoting technology at maximum speed, we need to prepare the world and its audience to use and appreciate it critically. The Silicon Valley motto of “move fast and break things” should not apply here. The world’s culture is too fragile and precious to be broken under that motto, and the potential damage would be irreversible. We must recognize these risks, proceed with caution, and work to ensure that AI is developed responsibly to create a more equitable and diverse society. The harm these tools cause may be unintentional, but the work of remediating it cannot be ignored.

Society is unprepared for the road ahead

We have fought for generations to embrace diversity and respect each other’s cultural heritage. We have celebrated that virtue arises through a plurality of forms. However, AI technology like ChatGPT risks inadvertently reversing all we have accomplished by offering perspective through one dominant lens. It therefore becomes imperative for those producing this technology to acknowledge and take responsibility for its unintended but potentially profound harm to society. Will they? Do they have enough personal motivation and societal incentive to do so? That is the big question and concern.

Companies developing chatbots and other generative AI products, along with the governments regulating these technologies, must adopt a communal standard of ethics and transparency in the development of these tools so that users can fully understand the implications of their power. The diverse collective of the modern world that we’ve all fought so hard to create is at stake.

Dr. Songyee Yoon is NCSoft’s president and chief strategy officer.
