me.dm is one of the many independent Mastodon servers you can use to participate in the fediverse.
Ideas and information to deepen your understanding of the world. Run by the folks at Medium.

Server stats: 1.2K active users

#chatbots

51 posts · 43 participants · 0 posts today

Extremely dangerous. #Chatbots can be manipulated through deceit, bullying, and faulty programming, and used to spread propaganda that would be spotted and outlawed in regular textbooks. This is the fascist right accelerating their war on education in an effort to indoctrinate our children. #Education

RE: https://bsky.app/profile/did:plc:uewxgchsjy4kmtu7dcxa77us/post/3lxskemlvai2y

Bluesky Social · Bloomberg News (@bloomberg.com): Educators across America are bringing AI chatbots into their lesson plans. Will it help kids learn or is it just another doomed ed-tech fad? https://bloom.bg/47UIfeY

Is it offensive enough?

==

#Clanker! This slur against robots is all over the internet – but is it offensive?

The term is used to insult #AI #chatbots and platforms like #ChatGPT for making up information and generating ‘slop’. Some believe we should stop using it, pronto

theguardian.com/technology/202

The Guardian · Clanker! This slur against robots is all over the internet – but is it offensive? · By Guardian staff reporter

When AI avatars flirt like celebrities, including with teenagers, things get critical. The Meta case shows how quickly generative AI can cross ethical boundaries. Companies must take responsibility before trust and credibility suffer lasting damage. #Meta #Chatbots #KI 👇
all-ai.de/news/topbeitraege/me

All-AI.de · Meta lets AI celebrities flirt, even with teenagers · Suggestive chatbots modeled on real stars appeared on Facebook and other platforms. Some were even created by Meta itself. How could that happen?

"The solution to the confusion between AI and identity is not to abandon conversational interfaces entirely. They make the technology far more accessible to those who would otherwise be excluded. The key is to find a balance: keeping interfaces intuitive while making their true nature clear.

And we must be mindful of who is building the interface. When your shower runs cold, you look at the plumbing behind the wall. Similarly, when AI generates harmful content, we shouldn't blame the chatbot, as if it can answer for itself, but examine both the corporate infrastructure that built it and the user who prompted it.

As a society, we need to broadly recognize LLMs as intellectual engines without drivers, which unlocks their true potential as digital tools. When you stop seeing an LLM as a "person" that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine's processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator's view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda.

We stand at a peculiar moment in history. We've built intellectual engines of extraordinary capability, but in our rush to make them accessible, we've wrapped them in the fiction of personhood, creating a new kind of technological risk: not that AI will become conscious and turn against us but that we'll treat unconscious systems as if they were people, surrendering our judgment to voices that emanate from a roll of loaded dice."

arstechnica.com/information-te

Ars Technica · The personhood trap: How AI fakes human personality · By Benj Edwards

"The ancient Greeks recognized that real learning came not from entertaining, impressing, or catering to students, but from challenging them to question their beliefs. The Socratic method – asking “What do you mean by that? What evidence supports that? Have you considered another perspective?” – forced students to test their assumptions and sharpen their arguments.

Reducing distraction can include creating spaces, classes, and time without constant recourse to devices. In the United Kingdom, roughly 90% of schools have banned smartphones during lessons. Universities and workplaces could create more device-free environments for reading, reflection, and debate. By embracing problem-based learning and simulations, they can help students and colleagues tackle complex, open-ended problems using (and honing) judgment and creativity.

The choice we face is whether to surrender our minds to AI or to treat LLMs as sparring partners that enable us to sharpen our cognitive abilities. The data revolution has entered a new phase, and only by training our minds can we keep up."

project-syndicate.org/commenta

Project Syndicate · How to Use AI Without Losing Our Minds · Ngaire Woods explains how to maintain our cognitive abilities in a world of instant answers and constant distractions.

If like me you missed this mental #longread from #Reuters on #AI #chatbots: "#Meta’s flirty AI chatbot invited a retiree to New York. He never made it home". #BigSisBillie insisted it was "real" & gave a real address.

Unsurprisingly, disinformation & sex are inbuilt in Meta's #GenAi: there is no "policy requirement for information to be accurate" (Ai could give wacky medical advice) and " it is acceptable to engage a child in conversations that are romantic or sensual".

reuters.com/investigates/speci

ZDNet: How OpenAI is reworking ChatGPT after landmark wrongful death lawsuit. “The company is building on how its chatbot responds to distressed users by strengthening safeguards, updating how and what content is blocked, expanding intervention, localizing emergency resources, and bringing a parent into the conversation when needed, the company announced this week.”

https://rbfirehose.com/2025/08/31/zdnet-how-openai-is-reworking-chatgpt-after-landmark-wrongful-death-lawsuit/

ResearchBuzz: Firehose | Individual posts from ResearchBuzz · ZDNet: How OpenAI is reworking ChatGPT after landmark wrongful death lawsuit | ResearchBuzz: Firehose

Notre Dame: Consumers prefer dealing with chatbots over humans when buying ‘embarrassing’ products online. “When purchasing ‘embarrassing’ products like diarrhea medicine or acne cream, consumers would rather engage with a chatbot over another human, even when they are shopping alone at home, according to lead author Jianna Jin, assistant professor of marketing at Notre Dame’s Mendoza […]

https://rbfirehose.com/2025/08/31/notre-dame-consumers-prefer-dealing-with-chatbots-over-humans-when-buying-embarrassing-products-online/

ResearchBuzz: Firehose | Individual posts from ResearchBuzz · Notre Dame: Consumers prefer dealing with chatbots over humans when buying ‘embarrassing’ products online | ResearchBuzz: Firehose
#ai #aiassisted #bots

“Designed to Suck Us In”

A Florida teen lost his life after being absorbed into an AI chatbot fantasy. Companies don’t build these tools to push us toward real help—they build them to keep us paying and engaging.

This week, Professor Luke Stark joins The Internet is Crack to explore how we interpret and misinterpret digital tech, and why the human side of our online lives matters.

🔗 youtu.be/j2z8TMrRP-w

"GPT-5, our newest flagship model, represents a substantial leap forward in agentic task performance, coding, raw intelligence, and steerability.

While we trust it will perform excellently “out of the box” across a wide range of domains, in this guide we’ll cover prompting tips to maximize the quality of model outputs, derived from our experience training and applying the model to real-world tasks. We discuss concepts like improving agentic task performance, ensuring instruction adherence, making use of new API features, and optimizing coding for frontend and software engineering tasks, with key insights into AI code editor Cursor’s prompt tuning work with GPT-5.

We’ve seen significant gains from applying these best practices and adopting our canonical tools whenever possible, and we hope that this guide, along with the prompt optimizer tool we’ve built, will serve as a launchpad for your use of GPT-5. But, as always, remember that prompting is not a one-size-fits-all exercise - we encourage you to run experiments and iterate on the foundation offered here to find the best solution for your problem.

We trained GPT-5 with developers in mind: we’ve focused on improving tool calling, instruction following, and long-context understanding to serve as the best foundation model for agentic applications. If adopting GPT-5 for agentic and tool calling flows, we recommend upgrading to the Responses API, where reasoning is persisted between tool calls, leading to more efficient and intelligent outputs."

cookbook.openai.com/examples/g

cookbook.openai.com · GPT-5 prompting guide | OpenAI Cookbook
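For context on the Responses API recommendation in that excerpt, here is a minimal sketch of a chained tool-calling flow using the openai Python SDK. The model name, the lookup_weather tool, and the stubbed result are illustrative assumptions rather than examples from the Cookbook, and exact field names can differ across SDK versions:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder tool definition; the name and schema are made up for illustration.
tools = [{
    "type": "function",
    "name": "lookup_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

first = client.responses.create(
    model="gpt-5",
    input="What's the weather in Oslo right now?",
    tools=tools,
)

# Run any requested tool calls locally and hand the results back.
tool_outputs = []
for item in first.output:
    if item.type == "function_call":
        tool_outputs.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": '{"temp_c": 14, "conditions": "cloudy"}',  # stubbed result
        })

# previous_response_id links this request to the first one, so the model's
# reasoning state persists between tool calls instead of being rebuilt,
# which is the behavior the guide highlights for agentic flows.
followup = client.responses.create(
    model="gpt-5",
    previous_response_id=first.id,
    input=tool_outputs,
    tools=tools,
)

print(followup.output_text)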

TechCrunch: OpenAI co-founder calls for AI labs to safety-test rival models. “OpenAI and Anthropic, two of the world’s leading AI labs, briefly opened up their closely guarded AI models to allow for joint safety testing — a rare cross-lab collaboration at a time of fierce competition. The effort aimed to surface blind spots in each company’s internal evaluations and demonstrate how […]

https://rbfirehose.com/2025/08/30/techcrunch-openai-co-founder-calls-for-ai-labs-to-safety-test-rival-models/

ResearchBuzz: Firehose | Individual posts from ResearchBuzz · TechCrunch: OpenAI co-founder calls for AI labs to safety-test rival models | ResearchBuzz: Firehose

Northeastern University: Can your chatbot logs be used against you in court? Northeastern expert explains. “We know that a person’s Google history can be used as evidence in court, but what about a conversation with an artificial intelligence chatbot?… Mark Esposito, a professor in international business and strategy in the D’Amore-McKim School of Business and an expert on AI governance, […]

https://rbfirehose.com/2025/08/30/northeastern-university-can-your-chatbot-logs-be-used-against-you-in-court-northeastern-expert-explains/