ChatGPT maker OpenAI lays out plan for dealing with dangers of AI

OpenAI, the artificial intelligence company behind ChatGPT, laid out its plans for staying ahead of what it thinks could be serious dangers of the tech it develops, such as allowing bad actors to learn how to build chemical and biological weapons.

OpenAI’s “Preparedness” team, led by MIT AI professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts and policy professionals to monitor its tech, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous. The team sits between OpenAI’s “Safety Systems” team, which works on existing problems like racist biases being infused into AI, and the company’s “Superalignment” team, which researches how to make sure AI doesn’t harm humans in an imagined future where the tech has outstripped human intelligence completely.

The popularity of ChatGPT and the advance of generative AI technology have triggered a debate within the tech community about how dangerous the technology could become. Earlier this year, prominent AI leaders from OpenAI, Google and Microsoft warned the tech could pose an existential danger to humankind, on par with pandemics or nuclear weapons. Other AI researchers have said the focus on those big, frightening risks lets companies distract attention from the harmful impacts the tech is already having. A growing group of AI business leaders say the risks are overblown, and companies should charge ahead with developing the tech to help improve society — and make money doing it.

OpenAI has threaded a middle ground through this debate in its public posture. Chief executive Sam Altman said he believes there are serious longer-term risks inherent to the tech, but that people should also focus on fixing current problems. Regulation to try to prevent harmful impacts of AI shouldn’t make it harder for smaller companies to compete, Altman has said. At the same time, he has pushed the company to commercialize its technology and raised money to fund faster growth.

Madry, a veteran AI researcher who directs MIT’s Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, joined OpenAI earlier this year. He was one of a small group of OpenAI leaders who quit when Altman was fired by the company’s board in November. Madry returned to the company when Altman was reinstated five days later. OpenAI, which is governed by a nonprofit board whose mission is to advance AI and make it helpful for all humans, is in the midst of selecting new board members after three of the four board members who fired Altman stepped down as part of his return.

Despite the leadership “turbulence,” Madry said he believes OpenAI’s board takes seriously the risks of AI that he is researching. “I realized if I really want to shape how AI is impacting society, why not go to a company that is actually doing it?”

The preparedness team is hiring national security experts from outside the AI world who can help the company understand how to deal with big risks. OpenAI is beginning discussions with organizations including the National Nuclear Security Administration, which oversees nuclear technology in the United States, to ensure the company can appropriately study the risks of AI, Madry said.

The team will monitor how and when its AI can instruct people to hack computers or build dangerous chemical, biological and nuclear weapons, beyond what people can find online through regular research. Madry is looking for people who “really think, ‘How can I mess with this set of rules? How can I be most ingenious in my evilness?’”

The company will also allow “qualified, independent third-parties” from outside OpenAI to test its technology, it said in a Monday blog post.

Madry said he rejects the framing of the debate between AI “doomers,” who fear the tech has already attained the ability to outstrip human intelligence, and “accelerationists,” who want to remove all barriers to AI development.

“I really see this framing of acceleration and deceleration as extremely simplistic,” he said. “AI has a ton of upsides, but we also need to do the work to make sure the upsides are actually realized and the downsides aren’t.”

