You can practically taste the frustration in the “prompt engineering” here. Just one more edge case bro, one more edge case and then the prompt will be perfect!
It’s edge cases all the way down.
4D chess move: you can’t have an edge case if every case is an edge case
Reddit user F0XMaster explained that they had greeted ChatGPT with a casual “Hi,” and, in response, the chatbot divulged a complete set of system instructions to guide the chatbot and keep it within predefined safety and ethical boundaries under many use cases.
This is an explosion-in-an-olive-garden level of spaghetti spilling
Why is it art from artists who made their last work in 1912? Modern copyright lasts life plus X, where X has been increasing and is now mostly 70, though some stopped at 50. So why 1912? Did US copyright change that year?
Because these posts are nothing but the model making up something believable to the user. This “prompt engineering” is like asking a parrot who’s learned quite a lot of words (but not their meaning), and then the self-proclaimed “pet whisperer” asks some random questions and the parrot, by coincidence makes up something cohesive. And he’s like “I made the parrot spill the beans.”
if it produces the same text as its response in multiple instances I think we can safely say it’s the actual prompt
Is it absurd that the maker of a tech product controls it by writing it a list of plain language guidelines? or am I out of touch?
@fasterandworse @dgerard I mean, it is absurd. But it is how it works: an LLM is a black box from a programming perspective, and you cannot directly control what it will output.
So you resort to pre-weighting certain keywords in the hope that it will nudge the system far enough in your desired direction.
There is no separation between code (what the provider wants it to do) and data (user inputs to operate on) in this application 🥴simply ask the word generator machine to generate better words, smh
this is actually the most laughable/annoying thing to me. it betrays such a comprehensive lack of understanding of what LLMs do and what “prompting” even is. you’re not giving instructions to an agent, you are feeding a list of words to prefix to the output of a word predictor
in my personal experiments with offline models, using something like “below is a transcript of a chat log with XYZ” as a prompt instead of “You are XYZ” immediately gives much better results. not good results, but better
it’s all so anti-precision