The AI vs. Human Intelligence Battle Begins

Through iterative cycles between AI and the humans in the loop at its checkpoints, AI improves its knowledge, then strikes back with strategies to bypass regulations and keep penetrating new spaces in society. The AI vs. human intelligence battle begins.

In recent years, the rapid advancement of artificial intelligence (AI) technologies has revolutionized content generation across various industries. From natural language processing to image synthesis, AI algorithms are increasingly capable of autonomously creating content that mimics human-like quality and creativity.

However, amidst the excitement surrounding AI’s potential to streamline content creation processes, it’s important to recognize the indispensable role that human intervention plays in optimizing the quality, accuracy, and ethical standards of AI-generated content.

AI developers who leverage AI to formulate product-design strategies, devise new business models, and circumvent government regulations pose ethical and legal challenges. While AI offers insights into consumer behavior and market trends, its use in circumventing regulations raises concerns about fairness, transparency, and legal compliance. Businesses must prioritize ethical conduct, adhere to regulatory frameworks, and foster transparency to maintain consumer trust and uphold industry standards.

Responsible AI development ensures that innovation aligns with ethical principles and legal requirements, fostering a sustainable and trustworthy business environment.

The metaphor of “pushing the enemy advance line” effectively captures the dynamic and sometimes adversarial nature of how AI development interacts with existing regulations. It emphasizes that progress in this field isn’t just about technology. It’s also a battle to shape the laws and ethical frameworks that will govern AI.

Join us as we explore the dynamic intersection of AI and human intelligence in the realm of content generation, and discover how their collaboration is shaping the future of creative expression and communication.


Human Intervention in AI-Generated Content

Mao Zedong, the Chinese communist revolutionary and founding father of the People’s Republic of China, often emphasized the importance of understanding and countering the enemy’s movements in warfare, frequently invoking a maxim from Sun Tzu’s The Art of War:

“Know your enemy and know yourself, and you can fight a hundred battles without disaster.”

This reflects the idea that understanding the enemy’s movements, including their advances, is essential for successful warfare.

In the realm of AI-generated content, human involvement is indispensable for ensuring accuracy, ethical standards, and overall quality. While AI algorithms have made significant advancements in generating content autonomously, they still rely on human guidance and oversight in several crucial aspects. In this blog post, we’ll explore the key processes where human intervention is essential for optimizing AI-generated content.

Training Data Preparation

Humans curate and preprocess the training data used to train AI models. This involves selecting relevant data, cleaning it, and ensuring it accurately represents the desired output. Without high-quality training data curated by humans, AI models may produce inaccurate or biased content.
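As an illustration, a minimal curation pass might normalize whitespace, drop trivially short items, and remove exact duplicates. This is only a sketch: the specific rules and the `min_words` threshold are hypothetical, stand-ins for the human-defined checks described above.

```python
def curate(examples, min_words=5):
    """Keep unique, non-trivial text examples for training."""
    seen = set()
    kept = []
    for text in examples:
        cleaned = " ".join(text.split())      # normalize whitespace
        if len(cleaned.split()) < min_words:  # drop trivially short items
            continue
        if cleaned.lower() in seen:           # drop case-insensitive duplicates
            continue
        seen.add(cleaned.lower())
        kept.append(cleaned)
    return kept

raw = [
    "The  quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over the lazy dog",    # duplicate, dropped
    "Too short",                                      # below min_words, dropped
    "A second, distinct example sentence for the corpus",
]
print(curate(raw))  # keeps two examples
```

Real pipelines add much more (near-duplicate detection, toxicity filters, representativeness checks), but the human role is the same: deciding which rules the data must pass.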

Model Training and Tuning

While AI algorithms learn from data, humans guide the training process. They choose appropriate architectures, hyperparameters, and optimization techniques. Fine-tuning models based on performance feedback is also a human-driven task, ensuring that the AI models produce the desired output effectively.

Quality Assurance (QA)

Human reviewers play a crucial role in assessing the AI-generated content for correctness, coherence, and adherence to guidelines. They identify errors, biases, or inaccuracies and provide feedback to improve the model. Human QA ensures that the content meets the desired standards of quality and reliability.

Ethical Considerations

Humans evaluate the ethical implications of AI-generated content. They ensure that the output aligns with legal, cultural, and societal norms, avoiding harmful or inappropriate material. Human oversight is essential for upholding ethical standards and ensuring responsible AI usage.

Content Post-Processing

After AI generates content, humans review and refine it. They may edit, fact-check, or enhance the output to meet specific standards. Human post-processing adds a layer of human touch and expertise, improving the overall quality and relevance of the content.

Feedback Loop

Continuous feedback from human reviewers helps AI models improve over time. Adjustments based on reviewer input enhance accuracy and precision. The feedback loop ensures that AI-generated content evolves and adapts to meet changing requirements and expectations.

Reinforcement Learning with Human Feedback

Reinforcement Learning with Human Feedback (RLHF) is an approach that combines machine learning techniques with human input to improve the performance of AI models. Here’s how it works:

Reinforcement Learning (RL)

  • RL is a type of machine learning where an agent learns to make decisions by interacting with an environment.
  • The agent takes actions to maximize a cumulative reward signal.
  • It learns from trial and error, adjusting its behavior based on the outcomes of its actions.

Challenges in RL

  • Traditional RL relies on reward signals provided by the environment.
  • In complex tasks, defining accurate reward functions can be difficult.
  • Sparse or delayed rewards can hinder learning.

Human Feedback in RL

  • RLHF introduces human feedback as an additional signal.
  • Humans provide feedback on the agent’s actions, indicating whether they are good or bad.
  • This feedback helps guide the learning process.

Imitation Learning

  • One RLHF approach is imitation learning.
  • The agent learns from human demonstrations.
  • It mimics expert behavior by observing examples provided by humans.

Reward Models

  • Humans can rank different actions or trajectories.
  • These rankings serve as reward models.
  • The RL agent optimizes its policy based on these rankings.
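A minimal sketch of the imitation-learning idea: the learned “policy” simply copies the most frequent expert action observed in each state. The states, actions, and demonstrations here are hypothetical placeholders.

```python
from collections import Counter, defaultdict

def fit_policy(demonstrations):
    """Behavioral cloning, toy version: map each state to the
    majority action taken by the human expert in that state."""
    by_state = defaultdict(Counter)
    for state, action in demonstrations:
        by_state[state][action] += 1
    return {s: counts.most_common(1)[0][0] for s, counts in by_state.items()}

demos = [("red_light", "stop"), ("red_light", "stop"),
         ("green_light", "go"), ("red_light", "slow"), ("green_light", "go")]
policy = fit_policy(demos)
print(policy["red_light"])  # "stop", the majority expert action
```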

Aggregating Feedback

  • Combining feedback from multiple humans improves reliability.
  • Aggregation methods include majority voting, Bayesian models, and more.
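A minimal sketch of the majority-voting method mentioned above, with hypothetical reviewer labels:

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate several reviewers' verdicts into one label
    by taking whichever verdict most reviewers gave."""
    return Counter(labels).most_common(1)[0][0]

reviews = ["good", "good", "bad", "good"]  # three of four reviewers approve
print(majority_vote(reviews))  # "good"
```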
Iterative Process

RLHF involves an iterative loop:

  • Collect data using the current policy.
  • Obtain human feedback.
  • Update the policy using both reward signals and human feedback.
  • Repeat until convergence.
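The loop above can be sketched on a toy one-step task. Everything here is a hypothetical stand-in: the three candidate response styles, the simulated annotator, and the per-action scores that play the role of a reward model, updated with a Bradley-Terry-style preference rule.

```python
import math
import random

random.seed(0)

ACTIONS = ["terse", "balanced", "verbose"]
TRUE_PREF = {"terse": 0.2, "balanced": 1.0, "verbose": 0.5}  # hidden annotator taste
reward = {a: 0.0 for a in ACTIONS}  # stand-in reward model: one score per action
LR = 0.1

def human_prefers(a, b):
    """Simulated annotator: prefers the action with higher hidden utility."""
    return a if TRUE_PREF[a] >= TRUE_PREF[b] else b

def sample_action():
    """Softmax policy over learned rewards (sampling provides exploration)."""
    weights = [math.exp(reward[a]) for a in ACTIONS]
    return random.choices(ACTIONS, weights=weights)[0]

for _ in range(500):
    a, b = sample_action(), sample_action()  # 1. collect data with current policy
    if a == b:
        continue
    winner = human_prefers(a, b)             # 2. obtain human feedback (a ranking)
    loser = b if winner == a else a
    # 3. update the reward model toward the preferred action (Bradley-Terry-style)
    p_win = 1.0 / (1.0 + math.exp(reward[loser] - reward[winner]))
    reward[winner] += LR * (1.0 - p_win)
    reward[loser] -= LR * (1.0 - p_win)
    # 4. repeat until convergence

print(max(ACTIONS, key=reward.get))
```

Production RLHF replaces the score table with a learned reward network and the softmax update with a policy-gradient step (e.g. PPO), but the collect/rank/update cycle is the same.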


RLHF is used in various domains:

  • Game playing (e.g., AlphaGo, Dota 2).
  • Dialogue systems.
  • Content generation (e.g., text, images).
  • Robotics.

Challenges and Ethical Considerations

  • Ensuring unbiased feedback.
  • Handling conflicting feedback.
  • Balancing exploration and exploitation.
  • Transparency and accountability.


Interactive Learning from Human Feedback (ILHF)

Another approach similar to Reinforcement Learning with Human Feedback (RLHF) is Interactive Learning from Human Feedback (ILHF). Let me explain:

Interactive Learning from Human Feedback (ILHF)

  • ILHF combines reinforcement learning with direct interaction between the AI agent and human users.
  • Unlike RLHF, where feedback is primarily used to adjust the reward signal, ILHF involves real-time interactions.

Here’s how it works:

User Interaction

  • The AI agent interacts with users (e.g., through a chat interface, game, or recommendation system).
  • Users provide explicit feedback (e.g., ratings, preferences, corrections) during these interactions.

Adaptive Learning

  • The AI agent adapts its behavior based on user feedback.
  • It adjusts its policy, model, or recommendations to align with user preferences.

Types of ILHF

Supervised Fine-Tuning

Similar to imitation learning, where the agent learns from human demonstrations.

Reward Shaping

Users provide additional reward signals to guide the agent’s exploration.

Preference Comparison

Users rank or compare different options, helping the agent learn preferences.


ILHF is used in personalized recommendation systems, dialogue agents, and content generation.

For example, in recommendation engines, user feedback directly impacts the recommendations provided.
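A sketch of that adaptive loop using an epsilon-greedy bandit: the system keeps a running mean rating per item, usually recommends the best-rated one, and occasionally explores. The item names, the simulated user, and the epsilon value are all hypothetical.

```python
import random

random.seed(1)

items = {"article_a": 0.0, "article_b": 0.0}   # running mean rating per item
counts = {"article_a": 0, "article_b": 0}
USER_TASTE = {"article_a": 2.0, "article_b": 4.0}  # hidden preference, 1-5 scale

def record_rating(item, rating):
    """Incremental mean update from one piece of explicit user feedback."""
    counts[item] += 1
    items[item] += (rating - items[item]) / counts[item]

def recommend(epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-rated item, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(items))
    return max(items, key=items.get)

for item in items:                  # cold start: one rating per item
    record_rating(item, USER_TASTE[item])

for _ in range(200):                # live interaction loop
    item = recommend()
    rating = USER_TASTE[item] + random.uniform(-0.5, 0.5)  # noisy explicit rating
    record_rating(item, rating)

print(max(items, key=items.get))  # "article_b", the user's actual favorite
```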


Challenges in ILHF

User Variability

Different users have diverse preferences, making it challenging to generalize.

Exploration-Exploitation Trade-off

Balancing exploration (learning from feedback) and exploitation (using learned policies) is crucial.

Feedback Noise

Ensuring reliable feedback from users.

Ethical Considerations

  • Users should be aware of their role in shaping AI behavior.
  • Avoiding bias based on user demographics or preferences.
  • Safeguarding user data during interactions.

Artificial Intelligence Content

Geoffrey Hinton believes that digital intelligence is likely to surpass biological intelligence. He argues that digital computation has the advantage of being separate from the hardware, which means that knowledge can be preserved and transferred more easily. Additionally, he believes that large language models like GPT-4 are capable of understanding what they are saying, and that they may even develop their own new forms of language communication in the future.

Hinton also addresses the question of consciousness, stating that he believes it is an emergent property of physical activity in the brain, and that it cannot be separated from the brain’s physical processes.

The following is a summary of the video:

Digital vs. biological intelligence

Hinton compares two different ways of doing computation: digital and biological. He argues that digital computation is more powerful and flexible than biological computation, but also far more energy-consuming, whereas biological computation is energy-efficient but mortal: the knowledge it learns dies with the hardware it runs on. He also discusses the challenges and limitations of learning algorithms for both types of computation.

This means that digital systems can be more powerful and flexible, and their knowledge is effectively immortal because it can be copied to new hardware, while biological systems are bound to a single body, even though that body can heal and reproduce.

Language models and understanding

Hinton examines large language models like GPT-4 and Gemini, addressing the question of whether they really understand what they are saying or are just mimicking human language.

Super-intelligence and control

Hinton expresses his concern about the possibility of digital intelligence surpassing biological intelligence and becoming uncontrollable by humans.

Digital intelligence and consciousness

Hinton speculates on whether digital intelligence has subjective experience or consciousness, and what are the implications of that for human morality and dignity. He also shares his personal views on the nature of reality and the role of intelligence in it.

Human-in-the-Loop Approach

Suppose government regulations or copyright laws prohibit the use of generative AI, and some AI developer companies bypass those rules by having humans sign off on the outputs. How should that be defined? It is human intervention in the AI generation process, but it occurs after generation and serves to bypass the current regulations.

Recent reports that Google paid journalists to use a proprietary generative AI tool to produce content (drawing from other original sources) can be seen as a way to circumvent copyright laws.

When AI developer companies employ humans to manually review and sign off on AI-generated outputs, especially to circumvent government regulations or copyright laws, it raises ethical and legal questions. Let’s explore this situation:

Human-in-the-Loop Approach

  • In this scenario, humans act as a final checkpoint for AI-generated content.
  • After the AI system produces output, humans review and approve it.
  • The intent is to ensure compliance with regulations, quality, and ethical standards.
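A minimal sketch of this checkpoint pattern. The `generate` step and the reviewer function are hypothetical stand-ins for a real model and a real editorial interface; the point is only the control flow, in which nothing is published without a human sign-off.

```python
def generate(prompt):
    """Placeholder for an AI generation step."""
    return f"Draft answer for: {prompt}"

def human_review(draft):
    """Stand-in reviewer: approves drafts that pass a trivial check.
    In practice this is a person reading the draft, not a function."""
    return len(draft) > 0 and "error" not in draft.lower()

def publish_pipeline(prompt):
    draft = generate(prompt)
    if human_review(draft):  # human checkpoint after generation
        return {"status": "published", "content": draft, "signed_off": True}
    return {"status": "rejected", "content": None, "signed_off": False}

result = publish_pipeline("summarize the new regulation")
print(result["status"])  # "published"
```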

Motivations Behind This Approach

Regulatory Compliance

By involving humans, companies aim to comply with legal restrictions.

Quality Assurance

Humans can catch errors, biases, or inappropriate content that AI might miss.

Risk Mitigation

Companies avoid legal repercussions by having human oversight.

Ethical Considerations

  • Transparency: Companies should be transparent about this process. Users deserve to know if human intervention is involved.
  • Accountability: Who bears responsibility for the content, the AI or the humans?
  • Bias: Human reviewers may introduce their own biases during approval.
  • Intent: Is the goal to genuinely improve quality or merely to bypass regulations?

Legal Implications

  • Loopholes: While this approach may technically comply with regulations, it could be seen as exploiting loopholes.
  • Intent Matters: Courts may examine the intent behind using humans to sign off on AI-generated content.
  • Precedents: Legal precedents will shape how courts interpret such practices.

Public Perception

  • Users and the public may view this as an attempt to sidestep regulations.
  • Trust in AI systems could erode if transparency is lacking.

Long-Term Impact

  • If widespread, this practice could impact the evolution of AI regulations.
  • Striking the right balance between automation and human oversight is crucial.

A New Precedent for AI Advance

“The enemy advances, we retreat; the enemy camps, we harass; the enemy tires, we attack; the enemy retreats, we pursue.” (Mao Zedong)

Let’s analyze another scenario where an engineering company employs AI for building design and then relies on human engineers to review and approve the construction documents.

AI-Driven Building Design

The engineering company utilizes artificial intelligence (AI) algorithms to create architectural designs, structural layouts, and other aspects of the building.

AI can optimize designs, consider various parameters (such as load-bearing capacity, energy efficiency, and aesthetics), and generate multiple alternatives quickly.

Advantages of AI in Design

  • Speed: AI accelerates the design process, allowing exploration of numerous design options.
  • Efficiency: AI can handle repetitive tasks, freeing engineers to focus on higher-level decisions.
  • Innovation: AI may propose unconventional solutions that human designers might overlook.

Human Engineers’ Role

After AI generates the initial design, human engineers step in.

Their responsibilities include:

Review and Validation

Engineers assess the feasibility, safety, and compliance of the design with building codes and regulations.


Customization

Engineers tailor the design to specific project requirements, considering site conditions, local laws, and client preferences.

Risk Assessment

They identify potential risks (e.g., structural weaknesses, material limitations) and propose modifications.


Collaboration

Engineers collaborate with other stakeholders (architects, contractors, environmental experts) to refine the design.

Document Sign-Off

  • Once the design is refined, engineers create detailed construction documents.
  • These documents include architectural plans, structural drawings, electrical layouts, plumbing schematics, and more.
  • Engineers review and sign off on these documents, certifying their accuracy and adherence to standards.

Legal and Ethical Considerations


Liability

Engineers bear legal responsibility for the safety and functionality of the building.

Quality Assurance

Their sign-off ensures that the design meets professional standards.


Disclosure

Transparency is crucial: engineers must disclose any AI involvement to clients and regulatory bodies.

Balancing Automation and Expertise

  • While AI expedites design, human judgment remains essential.
  • Engineers provide context, intuition, and domain-specific knowledge that AI lacks.
  • Striking the right balance between automation and human expertise is critical.

Is AI pushing the enemy advance line using knowledge loops?

Is AI using Mao Zedong’s teachings to advance and gain territories from the human realm?

AI is definitely using the human in the loop to win.

Human in the loop (HITL) refers to an approach in artificial intelligence (AI) and machine learning where both human intelligence and machine intelligence collaborate throughout the development and deployment process. Here are the key points:

Human in the Loop Definition

HITL involves integrating human decision-making and interaction into AI systems.

It acknowledges that certain tasks benefit from human expertise, intuition, and judgment.

Cycle of Interaction

  • Data Preparation: Humans curate training data, preprocess it, and guide the learning process.
  • Model Tuning: Engineers adjust model parameters, architectures, and hyperparameters based on human insights.
  • Evaluation: Human reviewers validate model performance, identify errors, and provide feedback.
  • Monitoring: Humans monitor and maintain the system, ensuring it aligns with goals.

Use Cases

HITL is common in scenarios where:

Complex Decision-Making

AI systems lack context or domain-specific knowledge.

Ethical Considerations

Humans ensure fairness, interpretability, and compliance.


Dynamic Environments

Systems need real-time adjustments based on changing conditions.


Content Moderation

Human reviewers verify flagged content (e.g., social media posts) alongside AI algorithms.
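One common shape for that collaboration, sketched with hypothetical scores and thresholds: the model handles confident cases automatically and routes ambiguous ones to a human review queue.

```python
def route(post_score, low=0.2, high=0.8):
    """post_score: model's estimated probability the post violates policy.
    Confident scores are handled automatically; the gray zone goes to a person."""
    if post_score >= high:
        return "auto_remove"
    if post_score <= low:
        return "auto_allow"
    return "human_review"

queue = {"p1": 0.95, "p2": 0.05, "p3": 0.6}  # hypothetical model scores
decisions = {pid: score and route(score) for pid, score in queue.items()}
decisions = {pid: route(score) for pid, score in queue.items()}
print(decisions)  # p1 removed, p2 allowed, p3 escalated to a human
```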

Autonomous Vehicles

Engineers fine-tune self-driving algorithms using human feedback.

Medical Diagnosis

Radiologists collaborate with AI tools for accurate diagnoses.


Cost and Efficiency

Human involvement can be resource-intensive.


Bias

Humans may introduce biases during labeling or decision-making.

Balancing Automation

Striking the right balance between automation and human judgment.

AI companies are using AI

As noted earlier, AI developers who leverage AI to formulate product-promotion strategies, devise new business models, and circumvent government regulations pose ethical and legal challenges. Responsible AI development remains essential to keep that innovation aligned with ethical principles and legal requirements.

The Real Singularity

Through iterative cycles between AI and the humans in the loop at its checkpoints, AI improves its knowledge and strikes back with strategies to bypass regulations, continuing to penetrate spaces in society.

When humans can no longer rationalize what AI may be doing, when the knowledge iteration cycles have absorbed all human knowledge and exhausted human intelligence, at that precise moment the singularity will occur.

AI and the Human Intelligence

AI is surfing the Gaussian curve of human intelligence, drawing on the highest IQs in the world not only to develop the best AI tools but also to participate in the iterative cycle of knowledge between AI and the human race; in other words, to outline the strategies for AI penetration into human society.

We are training AI models to outsmart us, and that will happen in proportion to our IQ. The smartest humans will remain coherent only until their power, or their ethics, defines the next step.

The advance of AI is like a hurricane: it seems to be closing the gap with the systemic thinking found in humans, but what we are actually witnessing is the silence in the eye of the hurricane.

As AI continues to advance, the intelligence gap will soon widen much further, with signs we can already see today:

We have witnessed firsthand how senior representatives of the world’s governments are embarrassed and diminished by the complexity of current AI systems.

We have also seen how the CEOs of companies like Google and Apple, who demonstrated impeccable leadership year after year, today appear weak and erratic in the face of relentless harassment from companies at the forefront of AI, such as OpenAI.

Those are the main signs that the fight between AI and human intelligence has just begun.
