ChatGPT's o3 Model Found Remote Zeroday in Linux Kernel Code
@Kissaki In another thread, people are mocking AI because the free language models they are using are bad at drawing accurate maps. "AI can't even do geography." Their takeaway: anything an AI says can't be trusted, and AI is vastly inferior to human ability.
These same people haven't figured out the difference between using a language model to draw a map and simply asking it a geography question.
Daniel Stenberg has banned AI-generated bug reports from cURL because they were exclusively nonsense and just wasted the maintainers' time. Just because it gets a hit once doesn't mean it's good at this either.
@2xsaiko That is a poorly made AI model, then. Whoever put that system in place didn't train the model properly. In fact, I'm going to guess that you chose a random model like ChatGPT or llama or Gemini.
Or you might not even realize that you need a model specifically trained to handle the kind of thing you are asking.
That isn't a limitation of AI; that is human error. Do you think people are just pretending it works or something?
@2xsaiko like, what is the thought process here? That it failed because AI sucks and everyone is self-deluded into thinking it works for them? That it was too stupid to understand your straightforward prompts?
Prompt engineering is a skill you need to build. You build it by rewording your prompt when you don't get the expected result, until eventually you do. Effective prompts can also be specific to a given model.
Some tasks can only be done by specialized models, which often require advanced skill to use.
That is the problem: these models get promoted as the one-size-fits-all solution for everything, and people are using them exactly as promoted.