We have entered the brave new world of AI chatbots. This means everything from reenvisioning how students learn in school to protecting ourselves from mass-produced misinformation. It also means heeding the mounting calls to regulate AI to help us navigate an era in which computers write as fluently as people.

So far, there is more agreement on the need for AI regulation than on what this would entail. Mira Murati, head of the team that created the chatbot app ChatGPT, the fastest-growing consumer Internet app in history, said governments and regulators should be involved, but she didn’t suggest how. At a corporate event in March, Elon Musk similarly spoke with less than exacting precision: “We need some kind of, like, regulatory authority or something overseeing AI development.” Meanwhile, ChatGPT’s wide range of uses upended European efforts to regulate single-purpose AI applications.

To break the impasse, I propose transparency and detection requirements tailored specifically to chatbots, which are computer programs that rely on artificial intelligence to converse with users and produce fluent text in response to typed requests. Chatbot apps like ChatGPT are an enormously important corner of AI poised to reshape many daily activities, from how we write to how we learn. Reining in chatbots poses trouble enough without getting bogged down in wider AI legislation created for autonomous weapons, facial recognition, self-driving cars, discriminatory algorithms, the economic impacts of widespread automation and the slim but nonzero chance of catastrophic disaster some fear AI may eventually unleash. The tech industry is rushing headlong into the chatbot gold rush; we need prompt, focused legislation that keeps pace.

The new rules should track the two stages AI firms use to build chatbots. First, an algorithm trains on a massive amount of text to predict missing words. If you see enough sentences beginning “It’s cloudy today, it might…,” you’ll figure out the most likely conclusion is “rain,” and the algorithm learns this too. The trained algorithm can then generate words one at a time, just like the autocomplete feature on your phone. Next, human evaluators painstakingly score the algorithm’s output on a handful of measures such as accuracy and relevance to the user’s query.

The first regulatory requirement I propose is that all consumer-facing apps involving chatbot technology make public the text that the AI was first trained on. This text is immensely influential: train on Reddit posts, and the chatbot will learn to speak like a Redditor. Train them on the Flintstones, and they will talk like Barney Rubble. A person concerned about toxicity on the Web might want to avoid chatbots trained on text from unseemly sites. Public pressure could even dissuade companies from training chatbots on things like conspiracy theory “news” sites, but that’s only if the public knows what text the companies train on. In her 1818 novel Frankenstein, Mary Shelley provided a glimpse into the monster’s mind by listing the books read by this literary forebear to artificial intelligence.
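The first of those two training stages, predicting a likely next word from text already seen, can be sketched in a few lines of toy Python. The corpus, function name and simple word-counting scheme here are illustrative assumptions only; real chatbots use vastly larger datasets and neural networks, not raw counts, but the underlying idea of learning continuations from frequency is the same.

```python
from collections import Counter, defaultdict

# A toy "training set": the algorithm sees many sentences and counts
# which word tends to follow each word. (Illustrative corpus only.)
corpus = [
    "it's cloudy today , it might rain",
    "it's cloudy today , it might rain",
    "it's cloudy today , it might snow",
]

follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1  # tally each observed continuation

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("might"))  # the corpus makes "rain" the likeliest ending
```

Generating text one word at a time, as the article describes, is just this prediction applied repeatedly, feeding each output word back in as the next context. The second stage, human evaluators scoring outputs, is omitted here.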