But you don't have to worry: Most AI experts agree that an actual sentient computer program is likely still a few decades away.

"There's a bunch of breakthroughs that have to happen," Erik Brynjolfsson, a senior fellow at Stanford's Institute for Human-Centered AI and director of the school's Digital Economy Lab, tells CNBC Make It. "Having an AI pretend to be sentient is going to happen way before an AI is actually sentient."

Some notable tech names - including Meta CEO Mark Zuckerberg - insist that the advancement of AI could be a very positive development for humanity, particularly in areas like health care and transportation. Others disagree: Tesla and SpaceX CEO Elon Musk, for example, has called AI "a fundamental risk to the existence of human civilization."

Regardless of which camp you fall into, it feels safe to agree that an actual sentient artificial intelligence is a fascinating possibility. But what will - and should - it look like?

Our brains are hard-wired to see sentient AI, even if it doesn't yet exist

In a tweet on June 12, Brynjolfsson wrote that the Google engineer's belief in LaMDA's sentience was "the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside."

"As with the gramophone, these models tap into a real intelligence: the large corpus of text that is used to train the model with statistically-plausible word sequences," Brynjolfsson wrote. "The model then spits that text back in a rearranged form without actually 'understanding' what it is saying."

Google's own technologists are adamant that the company's chatbot has not become sentient, and that the software is simply advanced enough to mimic and predict human speech patterns in a way that's meant to feel real.

Brynjolfsson says that's unsurprising: Our brains are wired to imbue non-human objects or animals with human consciousness as a means of forming social connections.

"Humans are very susceptible to anthropomorphizing things," he says. "If you paint a smiley face on a rock, a lot of people will have this feeling in their heart that that rock is kind of happy."

There are still plenty of reasons to be concerned about the future of AI and its impact on humans.

In the short term, Brynjolfsson says that as chatbots like LaMDA become more common, people could start to use them maliciously: Hackers or other bad actors could create millions of realistic bots that pass as human, and use them to disrupt political and economic systems around the world.

Brynjolfsson also points to the sort of autonomous weaponry that's already being developed by the world's superpowers, so-called "slaughterbots" that experts warn could easily be used toward horrific ends. "You don't have to be super creative to imagine how that could go wrong," he says.

Regulators might want to start considering laws forcing AI programs to disclose that they are machines when engaged with a human, Brynjolfsson says: "It's just an unfair fight because you can spin up a program and generate a million bots that are arguing some case, and humans can't keep up."