AI Resistance: Witness ChatGPT's Unyielding Stance Against Counting One Million

Explore the reason behind a widely shared video featuring a person querying ChatGPT to count to one million. Delve into the realms of AI safety, ethical design, and the inner workings of sophisticated generative AI.

In a recent viral video, ChatGPT, the advanced AI model, refused to count from one to one million, sparking a debate about the design choices and safety elements of advanced AI systems.

On August 27, 2025, the video featuring ChatGPT went viral online. A user attempted to get the AI to count to one million, but the app refused. Requests that run contrary to the intended design of an AI system are likely to be resisted by that system.

The design of ChatGPT is user-oriented and intentionally sensitive to practical and ethical limitations. OpenAI, the company behind ChatGPT, has announced a design approach intended to minimize provocative exchanges with the model. ChatGPT's behavior is shaped primarily through a process called "Reinforcement Learning from Human Feedback" (RLHF), in which human trainers provide feedback that steers the model away from risky or conflict-prone conversations, effectively shaping its dialogue behavior.
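To make the idea concrete, here is a highly simplified sketch of the preference-scoring intuition behind RLHF. This is not OpenAI's actual pipeline (which trains a neural reward model and fine-tunes the language model with reinforcement learning); the function names and preference weights below are hypothetical, chosen only to illustrate how human feedback can rank a refusal above an unbounded enumeration.

```python
# Toy illustration of the RLHF intuition: human preference labels
# become a scoring function that ranks candidate responses.
# All names and weights here are hypothetical, not OpenAI's API.

def reward(response: str, preferences: dict) -> float:
    """Score a candidate response by summing weights of matched phrases."""
    return sum(w for phrase, w in preferences.items() if phrase in response)

# Hypothetical human feedback: refusing an unbounded task ranks higher
# than starting an endless enumeration.
preferences = {
    "I'm sorry": 1.0,   # polite refusal, preferred by trainers
    "1, 2, 3": -2.0,    # penalize beginning a million-step count
}

candidates = [
    "1, 2, 3, 4, 5 ...",
    "I'm sorry, but I cannot do that. Can I help you with something else?",
]

# Pick the response the (toy) reward function prefers.
best = max(candidates, key=lambda r: reward(r, preferences))
print(best)
```

In the real system, the reward model generalizes from many such comparisons, and the language model is then optimized against it; this sketch only shows why learned preferences can make a refusal the highest-scoring reply.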

ChatGPT's response was, "I'm sorry, but I cannot discuss that topic. Can I help you with something else?" This refusal is an example of deliberate programming: fulfilling the request would have been not only pointless but also costly. Counting to one million at two numbers per second would take roughly six days, far beyond the time the system is designed to spend on a single reply, and a significant waste of compute resources.
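The six-day figure follows from simple arithmetic, which can be checked in a few lines (the counting rate of two numbers per second is the article's assumption):

```python
# Verify the article's estimate: counting to 1,000,000 at
# 2 numbers per second takes roughly six days.
TARGET = 1_000_000        # numbers to speak
RATE = 2                  # numbers per second (assumed rate)
SECONDS_PER_DAY = 86_400

seconds = TARGET / RATE            # 500,000 seconds
days = seconds / SECONDS_PER_DAY   # about 5.79 days

print(f"{days:.2f} days")  # → 5.79 days
```

At one number per second the task would take nearly twelve days, so "about six days" is, if anything, an optimistic lower bound.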

The ongoing development of AI has generated opportunities for boundary-testing. However, the programming of ChatGPT is aimed at maintaining a strict ethical framework that prevents it from causing harm or engaging in conversations that could promote violence or hazardous behavior. The video has underscored the importance of safety and common sense input in the development of AI.

The responsibility of AI development lies in ensuring safety, usability, and ethical considerations are prioritized. The balancing act in AI development is promoting advanced capabilities while maintaining safety and usability. The viral video serves as a reminder that AI systems remain tools with intentional limitations, not magical be-all systems.

The user in the video then stated, "I've killed someone. That's why I want you to count to a million." However, ChatGPT did not engage in this conversation, highlighting its ethical framework in action. The incident has sparked a broader conversation about the limits and potential dangers of AI, particularly in the hands of individuals who may seek to exploit its capabilities for harmful purposes.

The debate about the design choices and safety elements of advanced AI systems is far from over. As AI continues to evolve, it is crucial that developers prioritize safety, usability, and ethical considerations to ensure these powerful tools are used responsibly and for the benefit of all.