mertleee 19 hours ago

"Foundational AI companies love this one trick"

It's part of why they love agents and tools like Cursor: they turn a problem that could've been one prompt and a few hundred tokens into dozens of prompts and thousands of tokens ;)

  • danielbln 17 hours ago

    It'd be nice if I could solve any problem by speccing it out in its entirety and then just implementing it. In reality, I have to iterate and course correct, as do agentic flows. You're right that the AI labs love it though; iterating like that is expensive.

danielbln 17 hours ago

The last commit is from April 2023; should this post maybe have a (2023) tag? Two years is eons in this space.

  • gwintrob 16 hours ago

    Crazy that OpenAI only launched o1 in September 2024. Some of these ideas have been swirling for a while but it feels like we're in a special moment where they're getting turned into products.

  • jdnier 12 hours ago

    The author is a co-founder of Databricks and creator of the K Prize, so an early adopter.

ivape 19 hours ago

The bigger picture goal here is to explore using prompts to generate new prompts

I see this as the same thing as a reasoning loop. It's the approach I use to quickly code up pseudo reasoning loops on local projects. Someone asked in another thread, "how can I get the LLM to generate a whole book?" Well, just like this: if it can keep prompting itself with "what would chapter N be?" until it outputs "THE END", you get your book.
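
A minimal sketch of that loop, where call_llm() is a hypothetical stand-in for whatever model API you actually use (it's not tied to any specific library):

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real completion call to your model of choice."""
    raise NotImplementedError


def write_book(premise: str, max_chapters: int = 50) -> list[str]:
    # Ask the model for an outline once, then let it prompt itself
    # chapter by chapter until it signals that the book is done.
    outline = call_llm(
        f"Write a short chapter-by-chapter outline for a book about: {premise}"
    )
    chapters: list[str] = []
    for n in range(1, max_chapters + 1):
        # Feed the outline plus the most recent chapters back in as context,
        # then ask for the next chapter or a termination marker.
        context = outline + "\n\n" + "\n\n".join(chapters[-2:])
        reply = call_llm(
            f"{context}\n\nWrite chapter {n} of the book. "
            "If the book is complete, reply with exactly: THE END"
        )
        if reply.strip() == "THE END":
            break
        chapters.append(reply)
    return chapters
```

The max_chapters cap and the exact "THE END" check are just guardrails so the loop can't run forever.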

K0balt 9 hours ago

This is kind of like a self-generating agentic context... cool. I think regular agents, especially adversarial agents, are easier to keep focused on most types of problems, though.

Still clever.

seeknotfind 19 hours ago

Excellent fun. Now just to create a prompt to show iterated LLMs are Turing complete.

  • ivape 14 hours ago

    Let's see Paul Allen's prompt.

James_K 16 hours ago

I feel that getting LLMs to do things like solve mathematical problems or produce citations is often much harder than simply writing software to achieve the same task.