VP Harris meets with CEOs about artificial intelligence risks

ChatGPT is spreading fast. Language models assist in day-to-day tasks, from writing HTML code to providing recipes or giving relationship advice. Observers call it exciting and terrifying. So are language models another digital distraction, or are they the next evolution? This is Clarified.

On the topic of information overload, the scientist Conrad Gessner described the modern world as confusing and harmful. It's common to see this sentiment used in reference to social media, streaming services and digital attachment. Yet Gessner never felt the anxiety of checking likes on a selfie. He died of the plague in 1565; the information overload he feared came from the printing press. Throughout history, Gessner's worries have been applied to the newest invention of the day, from the radio to the VHS player, and now ChatGPT.

ChatGPT is one of the language models that arose in the 2010s. In short, language models are a type of machine learning program that predicts the likelihood of a sequence of words: for example, the auto-suggestions that appear when you're typing or texting. When artificial intelligence is paired with some other software, like a chatbot, it can mimic human writing. These systems work together and use complex parameters to react to human inputs. Parameters can be thought of as controls: the more controls the AI is given, the better it can understand the nuances and complexities of human language. The developers of GPT-3, OpenAI, gave their chatbot ChatGPT 175 billion parameters. The launch of GPT-4 reportedly saw the parameters increase from 175 billion to 100 trillion, a 57,000% increase, showing that this tech is advancing exponentially.

Technology as sophisticated as this is certain to have some utility in our everyday lives. But in what ways can we use it? As a PhD student at Carnegie Mellon University, Victor Rodriguez has found opportunities to experiment with ChatGPT in his work.
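The idea that a language model "predicts the likelihood of a sequence of words" can be made concrete with a toy sketch. The snippet below is purely illustrative (the tiny corpus and the bigram approach are hypothetical stand-ins, not how ChatGPT actually works): it counts which word follows which in a sample text and turns those counts into next-word probabilities. Real systems replace these counts with billions of learned parameters, but the task, scoring likely continuations, is the same.

```python
# Illustrative toy example: a bigram "language model" that estimates
# the probability of the next word from simple follow counts.
# Real models like ChatGPT learn billions of parameters instead.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return (word, probability) pairs, most likely first."""
    counts = follows[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common()]

print(predict_next("the"))  # "cat" is the most likely word after "the"
```

Auto-suggest on a phone works on this same principle: rank the candidate next words by probability and offer the top few.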
"My research is on studying people's risk perceptions and how those perceptions lead to the decisions they make. What I use it for is when I have to write emails. Emails can be a little tedious; you might spend a little too much time thinking, how should I say this? What words should I use? How should I present this information? So I just tell it to write me an email: this is the topic, these are the people, this is what I want to get across. Then it gives me an email, and I'll take that, edit it and put it out there. Instead of taking 10 or 20 minutes to write an email, it can take me like five minutes."

Victor is not alone in his quest to increase productivity. ChatGPT has popped up in workplaces, schools, study sessions and even producers' meetings. Recently, the Writers Guild of America proposed that writers should be able to use AI without worrying about credits or residuals. Writers can use AI-generated text and polish it themselves, the same way Victor uses it to write his emails. Usage in this way makes ChatGPT more of a tool that refines writing than a database to copy and paste from. The utility doesn't stop there.

"90% of coding is googling. We spend a lot of time googling. There are whole databases, like Stack Exchange, GitHub and other ones like that, where they share code, and we do that already. We already spend a lot of time on these forums, Googling how to do stuff, or even YouTube. So what this does is it can actually show you the code. It can tell you, OK, here's one way to do this, and it'll break down the code for you, add the numbers in and tell you how you should organize it. It's pretty accurate in that regard. Actually, it's been helping me a lot with my data analysis."

Just as calculators aided our ability to solve equations, ChatGPT is doing the same in the area of writing.
Of course, not everyone is so enthusiastic, and the advancement has some problematic applications, like academic integrity. NYU has already banned ChatGPT in its syllabuses, and professors mentioned it specifically during their first week of classes. The Stanford Daily conducted a poll and found that 17% of students admitted to using ChatGPT in some capacity. New York City's Education Department has also banned ChatGPT, and a student at Princeton invented software that can detect if an essay was written by AI. Academics are only one area of caution; misinformation, unintended bias and copyright are all ethical gray areas.

Yet while some educators are wrestling with the use of AI, others are requiring it. Ethan Mollick, an innovation and entrepreneurship professor at the University of Pennsylvania, calls ChatGPT an emerging skill, going so far as to hold students responsible for any inaccuracies created by the AI. In spite of the challenges, he saw positive results in the classroom. For instance, non-native English speakers said it alleviated the stress of writing assignments and said they were even taken more seriously as a result. It's easy to focus on the negatives, but at the same time, there's a lot of positives that will come out of it.

Just like the Google searching that came before it, the due diligence to fact-check and not abuse these technologies relies on the user. However, as new tech emerges, it will take conversations from diverse groups to ensure we maximize its benefit and rein in any potential harm it can do. If Gessner were alive in modern times, he would see that the printing press did not overwhelm our minds. In fact, it only opened them. Can ChatGPT do the same?
VP Harris meets with CEOs about artificial intelligence risks
Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.

The popularity of AI chatbot ChatGPT — even President Joe Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.

But the ease with which it can mimic humans has propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.

In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There is also an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress.

“We’re at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms,” Conner said.

The meeting was pitched as a way for Harris and administration officials to discuss the risks in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT.

Harris said in a statement after the closed-door meeting that she told the executives that “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products.” The message was also that they can work together with the government.

Biden, who stopped by Thursday's event, “has been extensively briefed on ChatGPT and knows how it works,” White House press secretary Karine Jean-Pierre told reporters.

ChatGPT has led a flurry of new “generative AI” tools adding to ethical and societal concerns about automated systems trained on vast pools of data.

Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That's made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it’s stealing from copyrighted works.

Companies worried about being liable for something in their training data might also not have incentives to rigorously track it in a way that would be useful “in terms of some of the concerns around consent and privacy and licensing,” said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.

“From what I know of tech culture, that just isn’t done,” she said.

Some have called for disclosure laws to force AI providers to open their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won’t be easy to provide greater transparency after the fact.

“It’s really going to be up to the governments to decide whether this means that you have to trash all the work you’ve done or not," Mitchell said. "Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it’s already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over.”

While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing heightened scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.

The companies also face potentially tighter rules in the European Union, where negotiators are putting finishing touches on AI regulations that could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.

When the EU first drew up its proposal for AI rules in 2021, the focus was on reining in high-risk applications that threaten people’s safety or rights such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.

But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems such as those built by OpenAI. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.

Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and Britain’s competition watchdog said Thursday it’s opening a review of the AI market.

In the U.S., putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to test risks, though not likely as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.

Along with Google, Microsoft, OpenAI and Anthropic, companies that the White House says have agreed to participate include Hugging Face, chipmaker Nvidia and Stability AI, known for its image-generator Stable Diffusion.

“This would be a way for very skilled and creative people to do it in one kind of big burst,” Frase said.

___

O'Brien reported from Cambridge, Massachusetts. AP writers Seung Min Kim in Washington and Kelvin Chan in London contributed to this report.
