This is a piece I've been pondering for a while, so I finally took a moment to write it up: What would a "good" AI product look like? https://www.anildash.com//2025/05/01/what-would-good-ai-look-like/
@anildash I'd add to that a recognition that AI, like all other tech tools, IS NOT a fix for social problems
But it can be a useful tool in grappling with such problems. It features as such in the conversation linked below https://mastodon.social/@urlyman/114246106424483045
@purserj agreed. The larger dilemma is that in an economy addicted to growth, even a ‘good’ AI is an accelerant.
And there are some extremely deleterious things that AI is accelerating https://mastodon.social/@urlyman/113934355227272646
Which puts us roughly here https://mastodon.social/@urlyman/111068171665440904
@anildash the community-based model is interesting. I'm imagining something like Folding@home with a browser plug-in that scrapes while you browse and trains the model.
@rexfuzzle @anildash "Scrapes while you browse" doesn't work with an affirmative-consent model for content inclusion, though.
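To make that tension concrete, here is a minimal sketch of what an opt-in version of the idea could look like: contribution happens only on an explicit user action, never as passive background scraping. Everything here (the TrainingPoolClient class, the /api/contribute endpoint, the button id) is hypothetical, not a description of any existing service.

```typescript
// Sketch of a consent-first contribution flow for a Folding@home-style
// community training pool. All names and endpoints are hypothetical.

interface Contribution {
  url: string;              // page the user chose to contribute
  text: string;             // extracted content
  consentGrantedAt: string; // ISO timestamp of the explicit opt-in
}

class TrainingPoolClient {
  constructor(private endpoint: string) {}

  // Only called after the user explicitly opts a page in; there is no
  // passive background scraping, which is the affirmative-consent point.
  async contribute(url: string, text: string): Promise<void> {
    const payload: Contribution = {
      url,
      text,
      consentGrantedAt: new Date().toISOString(),
    };
    const res = await fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });
    if (!res.ok) throw new Error(`contribution rejected: ${res.status}`);
  }
}

// Wired to an explicit "Contribute this page" button, never to page-load
// or navigation events.
const pool = new TrainingPoolClient("https://example.org/api/contribute");
document.querySelector("#contribute-btn")?.addEventListener("click", () => {
  pool.contribute(location.href, document.body.innerText).catch(console.error);
});
```

The design choice that matters is the trigger: binding contribution to a deliberate click makes consent affirmative per page, rather than inferred from the act of browsing.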
@anildash you’ve been arguing that critics shouldn’t say “LLMs don’t work”, yet here you say precisely that in your “hallucination-free” section. Is your disagreement just with how the point is presented?
Also, as I recall, many of your points, and additional ones, appear in the original Stochastic Parrots paper; perhaps you should cite it and give the authors credit?
@anildash I would add “Transparently Detectable” or similar as a requirement, meaning anything touched by the system is irrevocably fingerprinted as having been altered, derived, or generated (in whole or in part) by the system.
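As a rough illustration of what such fingerprinting could involve, here is a toy sketch in which the system attaches a keyed provenance record to anything it touches. All names here are hypothetical, and a real system would more likely build on a standard such as C2PA.

```typescript
// Toy sketch of the "Transparently Detectable" idea: every artifact the
// system generates, derives from, or alters carries a keyed provenance
// record. Hypothetical scheme, not an existing standard or API.
import { createHmac } from "node:crypto";

type Involvement = "generated" | "derived" | "altered";

interface ProvenanceRecord {
  involvement: Involvement;
  systemId: string; // which AI system touched the content
  timestamp: string;
  mac: string;      // keyed fingerprint binding the record to the content
}

// A keyed MAC means nobody without the system's key can forge a record
// or move a valid record onto different content.
function fingerprint(
  content: string,
  involvement: Involvement,
  systemId: string,
  key: string
): ProvenanceRecord {
  const timestamp = new Date().toISOString();
  const mac = createHmac("sha256", key)
    .update(`${systemId}|${involvement}|${timestamp}|${content}`)
    .digest("hex");
  return { involvement, systemId, timestamp, mac };
}

function verify(content: string, rec: ProvenanceRecord, key: string): boolean {
  const expected = createHmac("sha256", key)
    .update(`${rec.systemId}|${rec.involvement}|${rec.timestamp}|${content}`)
    .digest("hex");
  return expected === rec.mac;
}
```

One caveat worth noting: a metadata record like this can simply be stripped, so truly "irrevocable" marking would require a watermark embedded in the content itself, which is a much harder problem.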
@anildash I think of iNaturalist and Merlin as examples that fit the consent and hallucination-free measures (I don't know how, or whether, they fit the other criteria). OTOH, maybe they aren't AI at all, but match/close-match queries against a database of audiovisual data. When is a tool AI and when not?