
President Sally Kornbluth and OpenAI CEO Sam Altman discuss the future of AI

How is the field of artificial intelligence developing, and what does it mean for the future of work, education, and humanity? MIT President Sally Kornbluth and OpenAI CEO Sam Altman discussed all this and more in a wide-ranging conversation on the MIT campus on May 2.

The success of OpenAI's large language models, most notably ChatGPT, has helped spur a wave of investment and innovation in the field of artificial intelligence. ChatGPT became the fastest-growing consumer software application in history following its release in late 2022, with hundreds of millions of people using the tool. Since then, OpenAI has also introduced AI-driven image, audio, and video generation products and partnered with Microsoft.

The event, held in a packed Kresge Auditorium, captured the current excitement around AI, with an eye toward the future.

“I think most of us remember the first time we saw ChatGPT and thought, 'Oh my God, that is so cool!'” Kornbluth said. “Now we're trying to figure out what the next generation of all this is going to be.”

For his part, Altman welcomes the high expectations for his company and the field of artificial intelligence more broadly.

“I think it's great that for two weeks everyone was freaking out about GPT-4, and then by the third week everyone was like, 'Come on, where's GPT-5?'” Altman said. “I think that says something really big about people's expectations and aspirations and why we all need to work to make things better.”

The problems with AI

To begin, Kornbluth and Altman discussed the many ethical dilemmas that AI presents.

“I think we've made surprisingly good progress in aligning a system around a set of values,” Altman said. “Although people like to say, 'You can't use these things because they keep spewing toxic waste,' GPT-4 behaves the way you want it to, and we are able to get it to follow certain values, not perfectly, but better than I expected at this point.”

Altman also pointed out that people don't agree on exactly how an AI system should behave in many situations, which complicates efforts to create a universal code of conduct.

“How do we decide what values a system should have?” Altman asked. “How do we decide what a system should do? How far does society go in setting limits versus trusting users with these tools? Not everyone will use them the way we would like, but that's just the case with tools. I think it's important to give people a lot of control ... but there are some things a system just shouldn't do, and we will have to negotiate together what those are.”

Kornbluth agreed that it will be difficult to do things like eliminate bias in AI systems.

“It's interesting to think about whether we can make our models less biased than we humans are,” she said.

Kornbluth also raised privacy concerns related to the vast amounts of data required to train today's large language models. Altman said society has been grappling with these concerns since the early days of the Internet, but AI makes such considerations more complex and higher-stakes. He also sees entirely new questions raised by the prospect of powerful AI systems.

“How do we navigate the trade-off between privacy, utility, and safety?” Altman asked. “It's a new thing for society to work through: deciding what trade-offs each of us makes individually, and what benefits come from letting someone train a system on your whole life. I don't know what the answers will turn out to be.”

As for concerns about privacy and energy consumption related to AI, Altman said he believes advances in future versions of AI models will help.

“What we want from GPT-5 or 6 or whatever is for it to be the best possible reasoning engine,” Altman said. “It's true that the only way we can do that right now is through training. That process does teach it something like very, very limited reasoning, or whatever you want to call it. But the fact that it can memorize data, or that it stores data at all in its parameter space, I think we'll look back and say, 'That was a weird waste of resources.' I expect at some point we'll figure out how to separate the reasoning engine from the need for tons of data, or the storage of data in (the model), and be able to treat them as separate things.”

Kornbluth also asked how AI could lead to job displacement.

“One of the things that annoys me most about people who work on AI is when they stand up with a straight face and say, 'This will never lead to job cuts. This is just an additive thing. It's all going to be great,'” Altman said. “This is going to eliminate a lot of current jobs, and it's going to change the way a lot of current jobs function, and it's going to create entirely new jobs. That always happens with technology.”

The promise of AI

Altman believes the gains that advances in AI will bring make it worthwhile to grapple with the field's current problems.

“If we spent 1 percent of the world's electricity training a powerful AI, and that AI helped us figure out how to get to non-carbon energy or make deep carbon capture better, that would be a massive win,” Altman said.

He also said the application of AI that excites him most is scientific discovery.

“I believe that (scientific discovery) is the core driver of human progress and that it is the only way we can drive sustainable economic growth,” Altman said. “People aren't content with GPT-4. They want things to get better. Everyone wants life to be more and better and faster, and science is how we get there.”

Kornbluth also asked Altman for his advice for students thinking about their careers. He urged students not to limit themselves.

“The most important lesson to learn as you start your career is that you can kind of figure anything out, and no one has all of the answers at the beginning,” Altman said. “You just stumble your way through things, keep a fast iteration speed, and try to gravitate toward the problems that are most interesting to you, be around the most impressive people, and trust that through that iteration you'll successfully get to the right thing. ... You can do more than you think, faster than you think.”

The advice was part of a broader message Altman had about staying optimistic and working to create a better future.

“The way we are teaching our young people that the world is totally screwed up, that it's hopeless to try to solve problems, and that all we can do is sit in our bedrooms in the dark and think about how awful we are, is a really deeply unproductive streak,” Altman said. “I hope MIT is different than a lot of other college campuses. I assume it is. But you all need to make it part of your life's mission to fight against this. Prosperity, abundance, a better life next year, a better life for our children. That is the only path forward. That is the only way to have a functioning society ... and I hope you all fight against the anti-progress streak, the anti-'people deserve a great life' streak.”
