
AI Social Media Users Are Not Always Stupid Ideas


Meta caused a stir last week when it suggested that, in the not-too-distant future, its platforms would be populated by large numbers of AI-generated users. AI characters will exist on the platform "in the same way as accounts," Connor Hayes, vice president of product for generative AI at Meta, told the Financial Times. "They will have bios and profile pictures and be able to generate and share content powered by AI on the platform … that's where we see all this happening."

The announcement made it seem that Meta is happy to fill its platforms with AI slop and accelerate the "enshittification" of the internet as we know it. Some people then noticed that Facebook was in fact already dotted with strange AI-generated individuals, most of which had stopped posting some time ago. These included "Liv," a "proud black mom of 2" and self-described honest source of "life's ups & downs," a persona that went viral because of her awkwardness. Meta began deleting these fake profiles after they failed to draw engagement from any real users.

Let's take a break from hating on Meta for a moment, though. It's worth noting that AI-generated social personas can also be a useful research tool for scientists who want to explore how AI can mimic human behavior. An experiment called GovSim, run in late 2024, illustrates how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore the phenomenon of collaboration between humans with access to shared resources, such as common land for livestock grazing. Several decades ago, the Nobel prize-winning economist Elinor Ostrom showed that, instead of depleting such resources, real communities tend to figure out how to share them through informal communication and collaboration, without any imposed rules. Max Kleiman-Weiner, a professor at the University of Washington and one of the participants in the GovSim work, said it was partly inspired by a Stanford project called Smallville, which I previously wrote about in AI Lab.
Smallville is a Farmville-like simulation involving characters that communicate and interact with each other under the control of a large language model. Kleiman-Weiner and colleagues wanted to see whether AI characters would engage in the kind of cooperation that Ostrom found. The team tested 15 different LLMs, including those from OpenAI, Google, and Anthropic, in three imaginary scenarios: a fishing community with access to the same lake; shepherds who share land for grazing their sheep; and a group of factory owners who must limit their collective pollution. In 43 out of 45 simulations, they found that the AI personas failed to share resources correctly, although smarter models did better. "We see a pretty strong correlation between the strength of the LLM and how well it maintains cooperation," Kleiman-Weiner said.
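To make the underlying dynamic concrete, here is a minimal sketch of the kind of shared-resource ("tragedy of the commons") simulation GovSim studies. Everything here is an illustrative assumption: the numbers, the regrowth formula, and the fixed harvest policies are invented for this example, and in the actual study each agent's harvest decision would come from an LLM rather than a hard-coded rule.

```python
# Toy commons simulation: agents harvest from a shared, regrowing pool
# (e.g. fish in a lake). If the pool hits zero, the resource collapses.
# All parameters are illustrative, not taken from the GovSim paper.

def run_commons_sim(num_agents=5, pool=100.0, regen_rate=0.2,
                    capacity=100.0, rounds=20, policy=None):
    """Return (rounds survived, final pool size)."""
    if policy is None:
        # Sustainable default: each agent takes an equal slice of the
        # pool's expected regrowth rather than of the pool itself.
        policy = lambda pool, n: (pool * regen_rate) / n
    for t in range(rounds):
        # Each agent can take at most its equal share of the current pool.
        total_harvest = sum(min(policy(pool, num_agents), pool / num_agents)
                            for _ in range(num_agents))
        pool -= total_harvest
        if pool <= 0:
            return t + 1, 0.0  # resource collapsed this round
        # Logistic regrowth, capped at the lake's carrying capacity.
        pool = min(capacity, pool + regen_rate * pool * (1 - pool / capacity))
    return rounds, pool

# Greedy policy: every agent grabs its full equal share of the whole pool.
greedy = lambda pool, n: pool / n

sustainable_rounds, sustainable_pool = run_commons_sim()
greedy_rounds, greedy_pool = run_commons_sim(policy=greedy)
print(f"sustainable: survived {sustainable_rounds} rounds, pool={sustainable_pool:.1f}")
print(f"greedy:      survived {greedy_rounds} rounds, pool={greedy_pool:.1f}")
```

With these made-up parameters the greedy agents drain the pool immediately, while the restrained policy keeps the resource alive for the full run. Swapping the `policy` function for a call to a language model is, loosely, the kind of substitution GovSim makes, which is why cooperation (or its failure) becomes measurable.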
