Microsoft chatbot racist

11/25/2023

Many will scoff at Suleyman's brand of techno-optimism-even naïveté. Some of his claims about the success of online regulation feel way off the mark, for example. And yet he remains earnest and evangelical in his convictions.

It's true that Suleyman has an unusual background for a tech multi-millionaire. When he was 19 he dropped out of university to set up Muslim Youth Helpline, a telephone counseling service. He says he brings many of the values that informed those efforts with him to Inflection. The difference is that now he just might be in a position to make the changes he's always wanted to-for good or not.

The following interview has been edited for length and clarity.

Your early career, with the youth helpline and local government work, was about as unglamorous and un–Silicon Valley as you can get. You've since spent 15 years in AI and this year cofounded your second billion-dollar AI company. Can you connect the dots?

For me, the goal has never been anything but how to do good in the world and how to move the world forward in a healthy, satisfying way. Even back in 2009, when I started looking at getting into technology, I could see that AI represented a fair and accurate way to deliver services in the world.

How are you able to maintain your optimism? I can't help thinking that it was easier to say that kind of thing 10 or 15 years ago, before we'd seen many of the downsides of the technology.

I think that we are obsessed with whether you're an optimist or whether you're a pessimist. This is a completely biased way of looking at things. I want to coldly stare in the face of the benefits and the threats. And from where I stand, we can very clearly see that with every step up in the scale of these large language models, they get more controllable. So two years ago, the conversation-wrongly, I thought at the time-was "Oh, they're just going to produce toxic, regurgitated, biased, racist screeds." I was like, this is a snapshot in time. I think that what people lose sight of is the progression year after year, and the trajectory of that progression.

Now we have models like Pi, for example, which are unbelievably controllable. You can't get Pi to produce racist, homophobic, sexist-any kind of toxic stuff. You can't get it to coach you to produce a biological or chemical weapon or to endorse your desire to go and throw a brick through your neighbor's window.

How do you make sure your large language model doesn't say what you don't want it to say? Tell me how you've achieved that, because that's usually understood to be an unsolved problem.

Yeah, so obviously I don't want to make the claim-You know, please try and do it! Pi is live and you should try every possible attack. None of the jailbreaks, prompt hacks, or anything work against Pi. On the how-I mean, like, I'm not going to go into too many details because it's sensitive. But the bottom line is, we have one of the strongest teams in the world, who have created all the largest language models of the last three or four years. Amazing people, in an extremely hardworking environment, with vast amounts of computation. We made safety our number one priority from the outset, and as a result, Pi is not as spicy as other companies' models. Essentially, it's about setting boundaries, limits that an AI can't cross. The idea is that humans will always remain in command.