China Is Launching An Inquisition To Ensure AI Is Properly Socialist

AI inputs drive the outputs

Grant Piper
4 min read · Aug 5, 2024

Artificial intelligence is all the rage right now, but most people don’t understand that an AI’s behavior is wholly shaped by its inputs. OpenAI has been in hot water because it allowed its large language models (LLMs) to indiscriminately ingest nearly all available English-language material on the internet. That is why ChatGPT sounds like a search engine: it was designed that way. The fact that inputs completely dominate the outputs of AI has escaped many people’s attention. But not the Chinese Communist Party’s.

In something that can only sound dystopian, China has tasked its primary internet regulator with testing new AI models to ensure that they “embody core socialist values.” You see, there is a risk that, given free rein on the internet, an AI could end up with a positive view of capitalism. That cannot be tolerated in communist China. To ensure this does not happen, shadowy regulatory bodies will begin testing AI models so that, when asked specific questions or broached with “sensitive topics,” they give only answers sanctioned by the CCP.

This is a prime example of how AI is shaped by its inputs and why its outputs can be worthless. To make an AI sufficiently communist, AI architects in China will have to tweak those inputs: any positive opinions of capitalism or the West must be weeded out to make room for glowing endorsements of the Chinese government and of communism as a whole.

This is not an easy process. According to an anonymous source speaking with CNBC, an AI programmer in China had their model rejected for unclear reasons. It took months of “guessing” and “adjusting” before the model passed muster. In other words, the programmers were desperately tweaking the inputs until the AI gave an acceptable output.

It doesn’t take a genius to see the potential repercussions of such behaviors.

Since AI is so easily manipulated, you cannot trust anything it spits out. The answers an AI gives are wholly dependent on the source material its programmers feed into it. Do you know who is feeding ChatGPT information? Do you know the ideology or motives of the…


Written by Grant Piper

Professional writer. Amateur historian. Husband, father, Christian.
