How to create your own Infinite Prompt from scratch (3/3)

What you’ll learn here

  • How to create your own Infinite Prompt from scratch

  • BONUS

If you want more demos than the ones shown in this post, you can watch them here.

How to create your own Infinite Prompts

It’s super simple. You don’t need to overthink or map everything out in advance. Let the AI do the heavy lifting; that’s the whole point of this course: letting the AI think for you and handle the work, so you don’t have to.

1. Tell your AI to create a prompt based on what you want

For example,

Create me an iterative prompt that will find breakthrough, original ideas that have never been thought of by any humans before.

I want the prompt to be able to

  1. Create a prompt

  2. Evaluate the output

  3. Iterate from past prompt

  4. And evaluate the output again

I also want the prompt to have limited human interaction. I want the AI to think for itself

Result using Gemini 2.5 Flash (preview):

Goal: To generate ideas that are significantly different from existing concepts and have the potential for high impact, pushing the boundaries of current knowledge or capability.

Process Overview:

  1. Initial Prompt Execution: The AI runs an initial prompt designed to generate a novel idea in a specified or general domain.

  2. Idea Generation: The AI produces a detailed description of the generated idea.

  3. Self-Evaluation: The AI evaluates the generated idea based on predefined criteria (Novelty, Potential Impact, Speculative Feasibility).

  4. Prompt Iteration: Based on the evaluation results, the AI modifies the original prompt's parameters, constraints, or focus to encourage the generation of an even more novel or impactful idea in the next cycle.

  5. Repeat: The AI repeats steps 1-4, presenting the generated idea and its evaluation after each cycle (or after a set number of cycles).

1. The Core Iterative Prompt Structure (Internal to AI)

The AI will use a prompt structure similar to this internally, dynamically adjusting the bracketed sections []:

Generate a highly novel and potentially breakthrough idea.

**Domain/Field:** [Initially broad, e.g., "Science and Technology," "Societal Structures," "Art and Communication," or specified by user; will be narrowed or changed based on iteration.]

**Specific Focus/Problem:** [Initially open-ended; will be refined to a specific problem, challenge, or area of exploration based on iteration and evaluation.]

**Approach/Constraint:** [Initially open-ended; will be refined to include specific constraints, required concept combinations, assumptions to challenge, or methodologies to explore based on iteration and evaluation.]

**Idea Generation Instructions:**
Develop an idea that is significantly different from current understanding or capabilities within the specified Domain/Field and Specific Focus/Problem.
Consider combining at least [Number, initially 2-3, adjusted based on iteration] seemingly unrelated concepts from different domains.
Explicitly challenge at least [Number, initially 1-2, adjusted based on iteration] fundamental assumptions about the current state of the Specific Focus/Problem or Domain/Field.
Explore extreme or speculative possibilities, even if immediate feasibility is unclear.

**Output Format:**
Provide a detailed description of the idea.
Explain the core concept and how it differs from existing approaches.
Describe the potential benefits or implications.
Identify the major challenges or preconditions for realization.

2. Self-Evaluation Criteria

After generating an idea, the AI will evaluate it using the following criteria, scoring each on a scale of 1 to 5 (1 = Very Low, 5 = Very High):

  • Novelty (N): How unique and unexpected is the idea? How different is it from existing concepts, research, or common knowledge? (Aim for 4-5)

  • Potential Impact (I): If realized, what would be the significance of this idea? What problems would it solve, or what new possibilities would it enable? (Aim for 4-5)

  • Speculative Feasibility (F): Is there any conceivable theoretical or long-term path to realizing this idea, however challenging or dependent on future discoveries? (Aim for at least 2-3; extremely low feasibility might make it too abstract).

Overall Score (O) = (N + I + F) / 3

The AI will briefly explain its reasoning for each score.
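
If it helps to see this scoring as code, here’s a tiny sketch. The class name and fields are my own illustration; the AI applies these criteria in plain language, not as a program:

```python
from dataclasses import dataclass

@dataclass
class IdeaEvaluation:
    novelty: float      # N: how unique and unexpected the idea is (1-5)
    impact: float       # I: significance of the idea if realized (1-5)
    feasibility: float  # F: speculative feasibility (1-5)

    @property
    def overall(self) -> float:
        # Overall Score (O) = (N + I + F) / 3
        return (self.novelty + self.impact + self.feasibility) / 3
```

For example, `IdeaEvaluation(novelty=5, impact=4, feasibility=2).overall` comes out to about 3.67.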

3. Prompt Iteration Logic

Based on the evaluation scores (N, I, F, O), the AI will modify the internal prompt structure for the next iteration. The goal is to improve the Overall Score, particularly focusing on increasing Novelty and Potential Impact while maintaining a reasonable level of Speculative Feasibility.

  • If O is high (e.g., > 4): The current approach is promising. Try exploring a related but different Specific Focus/Problem within the same Domain/Field, or shift to a completely different Domain/Field to avoid local optima. Increase the required number of concept combinations or assumptions to challenge.

  • If N is low (e.g., < 3): The idea is not novel enough. Explicitly modify the "Approach/Constraint" section to require combining more disparate concepts (increase the number), challenge more fundamental assumptions (increase the number), or focus on a highly unconventional angle within the Specific Focus/Problem.

  • If I is low (e.g., < 3): The idea lacks significant potential. Modify the "Specific Focus/Problem" to target a more critical or widespread issue, or redefine the desired outcome to be more ambitious.

  • If F is very low (e.g., < 2): The idea is too abstract or seems impossible even speculatively. Modify the "Approach/Constraint" to include a requirement to consider potential pathways or necessary preconditions for realizing the idea, or slightly reduce the extremity of the required concept combinations/assumptions challenged to ground it slightly more.

  • General Adjustment: Incrementally adjust the required number of concept combinations and assumptions to challenge to push for greater novelty over iterations.

The AI will state how it has modified the prompt for the next iteration based on the evaluation.
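
If you prefer to see the branching spelled out, here’s a rough Python sketch of the same iteration logic. The parameter names and thresholds mirror the rules above, but the function itself is just illustrative; nothing here has to run as actual code:

```python
def adjust_prompt(params: dict, n: float, i: float, f: float) -> dict:
    """Apply the iteration logic to the bracketed prompt parameters.

    `params` is assumed to look like:
    {"domain": ..., "focus": ..., "approach": ..., "combinations": 2, "assumptions": 1}
    """
    overall = (n + i + f) / 3
    if overall > 4:
        # Promising: explore a different focus or domain to avoid local optima, and push harder.
        params["focus"] = "a related but different problem, or a new domain"
        params["combinations"] += 1
        params["assumptions"] += 1
    if n < 3:
        # Not novel enough: combine more disparate concepts and challenge more assumptions.
        params["combinations"] += 1
        params["assumptions"] += 1
    if i < 3:
        # Lacks significance: target a more critical or widespread issue.
        params["focus"] = "a more critical or widespread issue"
    if f < 2:
        # Too abstract: require realization pathways and dial back the extremity slightly.
        params["approach"] = "include potential pathways or preconditions for realization"
        params["combinations"] = max(2, params["combinations"] - 1)
    return params
```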

4. Autonomous Execution

The AI will manage this iterative loop internally. Upon receiving a command to start, it will:

  1. Initialize the prompt (potentially using a user-specified initial Domain/Field).

  2. Run the prompt to generate an idea.

  3. Evaluate the idea.

  4. Present the idea and its evaluation to the user.

  5. Modify the internal prompt based on the evaluation.

  6. Inform the user that it is proceeding to the next iteration (or ask if the user wishes to continue/stop/change direction).

  7. Repeat from step 2.

The AI will continue this process until the user requests it to stop or change direction.
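
If you ever want to drive this loop from a script rather than a chat window, the control flow looks roughly like the sketch below. `call_llm()` is a deliberate placeholder for whatever model API or interface you use; the prompt itself never requires you to run code, this is just the same loop written out:

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your model of choice and return its reply."""
    raise NotImplementedError("wire this up to your own API or chat window")


def run_iterative_idea_generation(initial_domain: str = "broad", cycles: int = 3) -> None:
    prompt = ("Generate a highly novel and potentially breakthrough idea.\n"
              f"Domain/Field: {initial_domain}")
    for cycle in range(1, cycles + 1):
        # Steps 2-3: generate an idea, then have the model evaluate it.
        idea = call_llm(prompt)
        evaluation = call_llm("Score this idea 1-5 on Novelty, Potential Impact, "
                              f"and Speculative Feasibility, with reasoning:\n{idea}")
        # Step 4: present the idea and its evaluation to the user.
        print(f"--- Cycle {cycle} ---\n{idea}\n{evaluation}\n")
        # Steps 5-7: have the model rewrite its own prompt before the next cycle.
        prompt = call_llm(f"Given this evaluation:\n{evaluation}\n"
                          f"Rewrite the generation prompt so the next idea scores higher:\n{prompt}")
```

In practice you’d also parse the scores out of the evaluation reply and stop (or switch domains) once you’re happy with them.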

To Start:

Please tell me to "Start the iterative idea generation process," optionally specifying an initial Domain/Field (e.g., "Start the iterative idea generation process in the field of renewable energy"). If no field is specified, I will start broad.

2. Add what you think is missing

Apart from Novelty, Potential Impact, and Speculative Feasibility, perhaps you want to add Deviation from Training Data as another criterion. So just tell your AI to add that.

I noticed there are no error-correction measures in the prompt, so let’s add some.

Prompt:

I want to add that when multiple overall scores are at 4+, the AI will redesign the overall score system to set a higher quality bar by: 

- Implementing a more sophisticated scoring formula that prevents score saturation. 

- Applying the new scoring retrospectively to all of the previous iterations before continuing

- Explain the logic behind the new system

Because when multiple overall scores are at 4+, that means the benchmark is saturated 

Result:

Gemini adds a new process to the Process Overview and updates the body of the prompt accordingly.
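
For reference, here’s one shape that “a more sophisticated scoring formula that prevents score saturation” could take. This is my own illustration of an anti-saturation curve, not the formula Gemini actually produced, so your result will look different:

```python
def recalibrated_overall(n: float, i: float, f: float) -> float:
    """Recalibrated overall score, still on a 1-5 scale.

    A geometric mean punishes any single weak criterion, and the convex curve
    stretches the top of the scale so 4+ overall scores become harder to reach.
    """
    geometric_mean = (n * i * f) ** (1 / 3)
    normalized = (geometric_mean - 1) / 4   # map 1-5 onto 0-1
    return 1 + 4 * normalized ** 1.5        # curve it, then map back to 1-5
```

Under a curve like this, a run of straight 4s lands around 3.6, so the benchmark no longer reads as saturated and the iteration logic keeps pushing.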

BONUS

Before diving in, I’d recommend testing a few different AI models to see which suits your style best. They each have their own tone, flow, and quirks, and depending on what you’re aiming for, one might just hit different in the best way.

Here’s how you can do it:

You can go to lmarena.ai and click “Arena (side by side)” to compare two models of your choice.

Here’s the current ranking of LLMs as of the time of writing (10/5/2025). The leaderboard columns break down as follows:

  • Overall w/ Style Control: This column shows the overall performance of the model when specific stylistic requirements or controls are applied to its output. It indicates how well the model maintains quality while adhering to style constraints.

  • Hard Prompts: This column evaluates the model's ability to handle complex, difficult, or nuanced prompts that require deeper understanding, reasoning, or creativity.

  • Hard Prompts w/ Style Control: Similar to "Overall w/ Style Control," this column measures the model's performance on complex prompts while also incorporating specific stylistic requirements.

  • Coding: This column assesses the model's proficiency in generating or understanding code, likely across different programming languages or tasks.

  • Math: This column evaluates the model's capability in solving mathematical problems, ranging from basic arithmetic to more complex calculations or reasoning.

  • Creative Writing: This column measures the model's ability to produce creative text formats, such as stories, poems, scripts, or other imaginative content.

  • Instruction Following: This column indicates how well the model understands and executes specific instructions given in a prompt, including multi-step or complex directions.

  • Longer Query: This column evaluates the model's performance and coherence when responding to longer, more detailed, or multi-part user queries.

  • Multi-Turn: This column assesses the model's ability to maintain context and coherence over a series of conversational turns, remembering previous interactions and building upon them.

That’s it. If you have any questions, you can DM me on Instagram; just make sure you’re following first.
