
How to Hit Pause on AI Before It’s Too Late

16 May 2024 at 15:22
Demonstrator holding "No AI" placard

Only 18 months have passed, but the release of ChatGPT back in November 2022 already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are pouring into AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses these large language models. Our world, and in particular the world of AI, has decidedly changed.

But the real prize of human-level AI—or artificial general intelligence (AGI)—has yet to be achieved. Such a breakthrough would mean an AI that can carry out most economically productive work, engage with others, do science, build and maintain social networks, conduct politics, and carry out modern warfare. The main constraint for all these tasks today is cognition. Removing this constraint would be world-changing. Yet many across the globe’s leading AI labs believe this technology could be a reality before the end of this decade.

That could be an enormous boon for humanity. But AI could also be extremely dangerous, especially if we cannot control it. Uncontrolled AI could hack its way into online systems that power so much of the world, and use them to achieve its goals. It could gain access to our social media accounts and create tailor-made manipulations for large numbers of people. Even worse, military personnel in charge of nuclear weapons could be manipulated by an AI to share their credentials, posing a huge threat to humanity.

It would be a constructive step to make it as hard as possible for any of that to happen by strengthening the world’s defenses against adverse online actors. But when it comes to persuading humans, AI is already better than we are, and against that there is no known defense.

For these reasons, many AI safety researchers at AI labs such as OpenAI, Google DeepMind and Anthropic, and at safety-minded nonprofits, have given up on trying to limit the actions future AI can take. They are instead focusing on creating “aligned” or inherently safe AI. Aligned AI might get powerful enough to be able to exterminate humanity, but it should not want to do this.

There are big question marks about aligned AI. First, the technical part of alignment is an unsolved scientific problem. Recently, some of the best researchers working on aligning superhuman AI left OpenAI in dissatisfaction, a move that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned to. If it were an academic value system, such as utilitarianism, we might quickly find out that most humans’ values actually do not match these aloof ideas, after which the unstoppable superintelligence could go on to act against most people’s will forever. If the alignment were to people’s actual intentions, we would need some way to aggregate these very different intentions. While idealistic solutions such as a U.N. council or AI-powered decision aggregation algorithms are in the realm of possibility, there is a worry that superintelligence’s absolute power would be concentrated in the hands of very few politicians or CEOs. This would of course be unacceptable for—and a direct danger to—all other human beings.

Read More: The Only Way to Deal With the Threat From AI? Shut It Down

Dismantling the time bomb

If we cannot find a way to at the very least keep humanity safe from extinction, and preferably also from an alignment dystopia, then AI that could become uncontrollable must not be created in the first place. This solution, postponing human-level or superintelligent AI until safety concerns are solved, has the downside that AI’s grand promises, ranging from curing disease to creating massive economic growth, will have to wait.

Pausing AI might seem like a radical idea to some, but it will be necessary if AI continues to improve without us reaching a satisfactory alignment plan. When AI’s capabilities reach near-takeover levels, the only realistic option is that labs are firmly required by governments to pause development. Doing otherwise would be suicidal.

And pausing AI may not be as difficult as some make it out to be. At the moment, only a relatively small number of large companies have the means to perform leading training runs, meaning enforcement of a pause is mostly limited by political will, at least in the short run. In the longer term, however, improvements in hardware and algorithms will make a pause harder to enforce. Enforcement between countries would be required, for example with a treaty, as would enforcement within countries, with steps like stringent hardware controls.

In the meantime, scientists need to better understand the risks. Although there is widely shared academic concern, no consensus exists yet. Scientists should formalize their points of agreement, and show where and why their views deviate, in the new International Scientific Report on Advanced AI Safety, which should develop into an “Intergovernmental Panel on Climate Change for AI risks.” Leading scientific journals should open up further to existential risk research, even if it seems speculative. The future does not provide data points, but looking ahead is as important for AI as it is for climate change.

For their part, governments have an enormous role to play in how AI unfolds. This starts with officially acknowledging AI’s existential risk, as the U.S., U.K., and E.U. have already done, and setting up AI safety institutes. Governments should also draft plans for the most important foreseeable scenarios, as well as for how to deal with AGI’s many non-existential issues such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public evaluation.

It is great progress that major AI countries are constructively discussing common policy at biannual AI safety summits, including one in Seoul from May 21 to 22. This process, however, needs to be protected and expanded. Working toward a shared ground truth on AI’s existential risks, and voicing shared concern among all 28 invited nations, would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking AI hardware, expanding liability for AI labs, and excluding copyrighted content from training. An international AI agency needs to be set up to oversee enforcement.

It is fundamentally difficult to predict scientific progress. Still, superhuman AI will likely impact our civilization more than anything else this century. Simply waiting for the time bomb to explode is not a feasible strategy. Let us use the time we have as wisely as possible.

Billionaire Frank McCourt Wants to Buy TikTok. Here’s Why He Thinks He Could Save It

16 May 2024 at 15:21

Billionaire Frank McCourt has long argued that the internet needs to be radically changed on an infrastructural level in order to reduce its toxicity, misinformation, and extractive nature. Now, he’s hoping to slide into a power vacuum in pursuit of that goal. McCourt is putting together a bid to buy TikTok from Chinese technology company ByteDance, which faces a U.S. ban at the end of this year unless it sells the wildly popular app.

McCourt’s central thesis lies in the belief that users should have more control over their personal data and online identities. His aim is to assemble a coalition to buy TikTok, so that its most valuable user data would be kept not by a single company, but on a decentralized protocol. McCourt has developed this idea in conjunction with technologists, academics, and policymakers via his nonprofit Project Liberty. His plan has received support from notable luminaries including the author Jonathan Haidt (The Anxious Generation) and Tim Berners-Lee, the inventor of the world wide web.
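
Neither the interview nor this article spells out the mechanics of such a protocol, so the following is only a conceptual sketch: a minimal, hypothetical TypeScript model of user-held, revocable data-access grants of the kind a decentralized social protocol is meant to enable. Every name in it (UserDataWallet, PermissionGrant, mayUse) is invented for illustration; this is not DSNP’s actual API.

```typescript
// Hypothetical sketch of user-controlled data permissions.
// All names are illustrative; this is not DSNP's real interface.

type Scope = "posts" | "contacts" | "watch-history";

interface PermissionGrant {
  app: string;   // application the user is granting access to
  scope: Scope;  // which slice of the user's data is covered
  expires: Date; // consent is time-bounded and revocable
}

class UserDataWallet {
  private grants: PermissionGrant[] = [];

  // The user grants an app access to one scope for a limited time.
  grant(app: string, scope: Scope, days: number): void {
    const expires = new Date(Date.now() + days * 86_400_000);
    this.grants.push({ app, scope, expires });
  }

  // The user can withdraw consent at any moment.
  revoke(app: string, scope: Scope): void {
    this.grants = this.grants.filter(
      (g) => !(g.app === app && g.scope === scope)
    );
  }

  // An app may read data only while an unexpired grant exists.
  mayUse(app: string, scope: Scope): boolean {
    const now = new Date();
    return this.grants.some(
      (g) => g.app === app && g.scope === scope && g.expires > now
    );
  }
}

// Usage: the user, not the platform, flips the switch.
const wallet = new UserDataWallet();
wallet.grant("tiktok-us", "watch-history", 30);
console.log(wallet.mayUse("tiktok-us", "watch-history")); // true
wallet.revoke("tiktok-us", "watch-history");
console.log(wallet.mayUse("tiktok-us", "watch-history")); // false
```

The design point the sketch tries to capture is that consent lives with the user and is checked at read time, instead of data being copied once into a platform’s silo and exploited indefinitely.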

McCourt did not say how much he thinks TikTok is worth. Other magnates who have expressed interest in bidding for TikTok include Kevin O’Leary and Steve Mnuchin.

But there is no indication that ByteDance plans to sell TikTok; the company is challenging the forced sale in the U.S. court system on free-speech grounds. And McCourt faces many obstacles in folding TikTok into his technological vision while ensuring the app’s profitability, especially because he says he’s not interested in buying the core algorithm that has hypercharged TikTok’s growth.

Read More: TikTok Vows to Fight Its Ban. Here’s How the Battle May Play Out

In an interview with TIME, McCourt explained his vision for the app and the larger internet ecosystem. Here are excerpts from the conversation.

TIME: A couple of years ago, you stepped down as CEO of McCourt Global in order to devote most of your time to Project Liberty, whose goal is fixing the internet. How pivotal could buying TikTok be to that mission?

Frank McCourt: I think it’s a fantastic opportunity to really accelerate things and catalyze an alternative version of the internet where individuals own and control their identity and data. The internet does not have to operate the way it does right now. It’s important to remember that the other big platforms in the U.S. operate with the same architecture as TikTok: of scraping people’s data and aggregating it and then exploiting it. 

When I say data, it sounds abstract. But it’s our personhood; it’s everything about us. And I think it’s well past time that we correct that fundamental flaw in the design of the internet and return agency to individuals.

Let’s say I’m a small business owner who uses TikTok to post content and sell goods. How would my experience improve under your new design?

The user experience wouldn’t change much. We want this to be a seamless thing. Part of our thinking is to keep TikTok U.S. alive, because China has said they’re not sharing the [core] algorithm under any circumstances. And without a viable bidder to move forward without the algorithm, they may shut it down. But we’re not looking for the algorithm.

Many people contend that the core algorithm is essential to TikTok’s value. Do you worry that TikTok wouldn’t be TikTok without it?

What makes TikTok, TikTok, to me, is the user base, the content created by the user base, the brand, and all the tech short of the algorithm. Of course, TikTok isn’t worth as much without the algorithm. I get that. That’s pretty plain. But we’re talking about a different design, which requires people to move on from the mindset and the paradigm we’re in now. 

It will be a version where everyone is deciding what pieces or portions of their data to share with whom. So you still have a user experience every bit as good, but with much better architecture overall. And not only will individuals have agency, but let’s have a broader group of people participating in who shares in the economic value of the platform itself. 

Read More: Why The Billionaire Frank McCourt is Stepping Down As CEO Of His Company To Focus on Rebuilding Social Media

How would that value sharing work? Are you talking about some sort of directed shares program, or a crypto token?

It’s a bit early to have that conversation. That’s why we’ve retained Kirkland & Ellis to advise us, along with Guggenheim Securities. They’re grappling with and thinking through those very issues right now.

So how would users control their data?

Imagine an internet where individuals set the terms and conditions of their data with use cases and applications. And you’ll still want to share your data, because you’ll want to get the benefits of the internet. But you’re sharing it on a trusted basis. The mere act of giving permission to use it is very different than having it be surveilled and scraped.

The blockchain-based decentralized infrastructure you plan to use for TikTok, DSNP, is already running, and the social media app MeWe is currently migrating its tech and data onto it. What have you learned from MeWe’s transition?

That it works. Like any other engineering challenge, you have to go through all the baby steps to get it right. But the migration started in earnest in Q4, and over 800,000 users have migrated. To me, that’s important that we’re not bringing forward a concept: We’re bringing forward a proven tech solution.

In order to finance this bid, you will seek money from foundations, endowments, pension funds, and philanthropies. Are you confident that if you get these big investors on board, you’ll be able to return value to them?

I am. This opens up and unlocks enormous value for investors and users. At the same time, it has a tremendous impact for society. I mentioned the pension funds and endowments and foundations as a category of investor that have a longer term horizon, and look at making investments not strictly on the basis of financial ROI. It’s important they be involved, because this is a societal project to fundamentally change how the internet works.  

We want a lot of people involved in this in different ways, shapes and forms, which is another distinguishing characteristic. We don’t need Saudi money to replace Chinese money. We’re trying to bring forward a solution to address the problem at its root cause, not at the symptomatic level.

You committed $150 million to Project Liberty in 2022. Are you prepared to spend in that ballpark again for TikTok?

Update that number: I’ve committed half a billion dollars to Project Liberty. That should be an indication of my level of seriousness about all this, and my level of seriousness about the bid for TikTok U.S.

2023 Was the Worst Year for Internet Shutdowns Globally, New Report Says

16 May 2024 at 10:00
Internet Cut in Manipur, India

Last year, an internet shutdown in the state of Manipur, India, lasted a staggering 212 days when the state government issued 44 consecutive orders to switch off access across all broadband and mobile networks. The shutdown affected a population of 3.2 million, and made it more difficult to document rampant atrocities committed against minorities during bloody violence between the Meitei and Kuki-Zo tribes, which included murder, rape, arson, and other gender-based violence, says Access Now, a digital rights watchdog that publishes an annual report on internet shutdowns around the world. 

Manipur was just one of hundreds of instances where authorities in India used the tactic as “a near-default response to crises, both proactively and reactively,” according to the group’s latest report published May 15. For the sixth consecutive year, India led the global list for imposing the highest number of internet shutdowns after removing access 116 times in 2023. 

What’s more, Access Now deemed 2023 the worst year for internet shutdowns globally, recording 283 shutdowns across 39 countries—the highest number of shutdowns in a single year since it first began monitoring in 2016. It’s a steep 41% increase from the previous year, which saw 201 shutdowns in 40 countries, and a 28% increase from 2019, which previously held the record for the highest number of shutdowns. 

“By nearly every measure, 2023 is the worst year of internet shutdowns ever recorded — highlighting an alarming and dangerous trend for human rights,” the report states.

Read More: How Internet Shutdowns Wreak Havoc in India

Of the shutdowns recorded in 2023, 173 occurred in conflict zones and corresponded to acts of violence. In the Gaza Strip, for example, the Israeli military “used a combination of direct attacks on civilian telecommunications infrastructure, restrictions on access to electricity, and technical disruptions to shut down the internet,” the report reads. (In a statement to TIME, the IDF said “As part of the IDF’s operations in the Gaza Strip, the IDF is facilitating the restoration of infrastructure in areas affected by the war and is coordinating with local teams to bring infrastructure repair to these locations.”)

And in the Amhara region of Ethiopia, security forces imposed a near-total communications blackout to cause terror and mass displacement through the destruction of property and indiscriminate bombing across the region, according to the report.

The watchdog also points out that while the increase of shutdowns associated with violence during armed conflict was high, in 74 instances across nine countries—including Palestine, Myanmar, Sudan, and Ukraine—warring political parties claimed to deploy shutdowns during protests and politically unstable events as a peacekeeping measure. In India alone, authorities ordered 65 shutdowns in 2023 in specific attempts to address communal violence. Similarly, Pakistan and Bangladesh imposed seven and three shutdowns, respectively, as a way to suppress political dissent during political rallies and election campaigning. 

Read More: Exclusive: Tech Companies Are Failing to Keep Elections Safe, Rights Groups Say

Some 93% of all cases recorded in 2023 occurred without any advance notice to the public of an impending shutdown, a practice that Access Now says only deepens fear and uncertainty, and puts more people in grave danger.

“We are at a tipping point, so take this as a wake-up call: all stakeholders across the globe — governments, civil society, and the private sector alike — must take urgent action to permanently end internet shutdowns,” Zach Rosson, a data analyst at Access Now, said in a statement.

OpenAI’s Co-Founder and Chief Scientist Ilya Sutskever Is Leaving the Company

15 May 2024 at 05:20

OpenAI Chief Scientist and co-founder Ilya Sutskever is leaving the artificial intelligence company, a departure that ends months of speculation in Silicon Valley about the future of a top AI researcher who played a key role in the brief ouster of Sam Altman last year.

Sutskever will be replaced by Research Director Jakub Pachocki, OpenAI said on its blog Tuesday. 

In a post on X, Sutskever called the trajectory of OpenAI “miraculous” and said he was confident the company will build AI that is “both safe and beneficial” under its current leadership.

The exit removes an executive and renowned researcher who has played a pivotal role in the company since its earliest days, helping guide discussions over the safety of AI technology and at times differing with Altman over strategy. When OpenAI was founded in 2015, he served as its research director after being recruited to join the company by Elon Musk. At that point, Sutskever was already well known in the field for his work on neural networks at the University of Toronto and at the Google Brain lab. Sutskever even officiated the wedding of OpenAI President Greg Brockman at the OpenAI offices.

Sutskever clashed with Altman over how rapidly to develop AI, a technology prominent scientists have warned could harm humanity if allowed to grow without built-in constraints, such as guardrails against misinformation. Jan Leike, another OpenAI veteran who co-led the so-called superalignment team with Sutskever, also resigned. Leike’s responsibilities included exploring ways to limit the potential harm of AI.

Last year, Sutskever was one of several OpenAI board members who moved to fire Chief Executive Officer Altman, a decision that touched off a whirlwind five days at the company: Brockman quit in protest. Investors revolted. And within days, nearly all of OpenAI’s roughly 770 employees signed a letter threatening to quit unless Altman was reinstated.

Adding to the chaos, Sutskever said he regretted his participation in Altman’s ouster. Soon after, the CEO was reinstated. 

After Altman returned to the company in late November, he said in a blog post that Sutskever wouldn’t go back to his former post as a board member, but that the company was “discussing how he can continue his work at OpenAI.”

In the subsequent months, Sutskever largely disappeared from public view, sparking speculation about his continued role at the company. Sutskever’s post on X Tuesday was the first time he shared anything on the social network since reposting a message from OpenAI in December.

Asked about Sutskever at a press conference in March, Altman said he loved him, and that he believed Sutskever loved OpenAI, adding: “I hope we work together for the rest of our careers.”

In a post on X on Tuesday, Altman wrote, “Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

On X, Sutskever posted that he is working on an as-yet-unnamed project that is “very personally meaningful” for him.

The company’s new chief scientist, Pachocki, has worked at OpenAI since 2017 and led the development of the company’s GPT-4 AI model, OpenAI said.

A Group of TikTok Creators Are Suing the U.S. to Block a Potential Ban on the App

TikTok Creators Hold Capitol Hill News Conference

A group of TikTok creators followed the company’s lead and filed their own lawsuit to block the U.S. law that would force Chinese parent ByteDance Ltd. to divest itself of the popular video app by January or face a ban.

Like the May 7 case filed by TikTok, eight creators behind Tuesday’s suit are challenging an ultimatum by the U.S. meant to address national security concerns that the Chinese government could access user data or influence what’s seen on the platform. The creators include a rancher from Texas, a college football coach in North Dakota, a founder of a skincare line in Atlanta and a Maryland book lover who promotes Black authors on the platform.

“Our clients rely on TikTok to express themselves, learn and find community,” Ambika Kumar, a lawyer for the creators, said in a statement. “They hope to vindicate not only their First Amendment rights, but the rights of the other approximately 170 million Americans who also use TikTok. The ban is a pernicious attack on free speech that is contrary to the nation’s founding principles.”

A Justice Department spokesperson said the government looks forward to defending the law in court.

“This legislation addresses critical national security concerns in a manner that is consistent with the First Amendment and other constitutional limitations,” the spokesperson said in a statement.

ByteDance has said it doesn’t have any intention of trying to find a buyer for TikTok as the January deadline approaches. Instead, ByteDance wants the law declared unconstitutional, saying it violates the First Amendment and represents an illegal punishment without due process or a presidential finding that the app is a national security threat.

Read More: What to Know About the Law That Could Get TikTok Banned in the U.S.

TikTok has argued the law will stifle free speech and hurt creators and small business owners who benefit economically from the platform. The company said that in response to data security concerns, it spent more than $2 billion to isolate its U.S. operations and agreed to oversight by American company Oracle Corp.

Professional content creators typically don’t make enough money to provide a living from TikTok itself. The social media company has a fund that pays certain creators based on performance, and it also shares revenue from products tagged and purchased through the app. Instead, creators use the app to gain an audience in the hopes of landing lucrative brand sponsorship deals where they make videos for or plug products of brands, much like on other social media platforms.

Read More: TikTok Vows to Fight Its Ban. Here’s How the Battle May Play Out

TikTok’s links to China have faced scrutiny under previous administrations. Former President Donald Trump used an executive order to try to force a sale of the app to an American company or face a ban. But his administration also faced multiple legal challenges—including from creators—and judges blocked the ban from taking place. When Joe Biden became president, he put Trump’s ban under fresh review.

A lobbying push against the law by TikTok Chief Executive Officer Shou Chew failed to convince U.S. lawmakers who worried about the national security threat of China potentially accessing user data and disseminating propaganda to about half the American population. Congress passed the law in April and Biden signed it.

Read More: The Grim Reality of Banning TikTok

Last year, Montana became the first U.S. state to enact a law that would ban residents from using the app. A federal judge sympathized with free-speech arguments by TikTok and creators in blocking the Montana measure while the legal challenges play out.

DOJ Says Boeing Violated Deal That Avoided Prosecution After 737 Max Crashes

(WASHINGTON) — Boeing has violated a settlement that allowed the company to avoid criminal prosecution after two deadly crashes involving its 737 Max aircraft, the Justice Department told a federal judge on Tuesday.

It is now up to the Justice Department to decide whether to file charges against the aircraft maker amid increasing scrutiny over the safety of its planes. Prosecutors will tell the court no later than July 7 how they plan to proceed, the Justice Department said.

Boeing reached a $2.5 billion settlement with the Justice Department in January 2021 to avoid prosecution on a single charge of fraud – misleading regulators who approved the 737 Max. Boeing blamed the deception on two relatively low-level employees.

The manufacturing giant has come under renewed scrutiny since a door-plug panel blew off a 737 Max jetliner during an Alaska Airlines flight in January. The company is under multiple investigations, and the FBI has told passengers from the flight that they might be victims of a crime.

Boeing didn’t immediately respond to a request for comment.

Glenn Leon, head of the Justice Department criminal division’s fraud section, said in the letter filed in Texas federal court that Boeing failed to make changes to prevent it from violating federal anti-fraud laws — a condition of the 2021 settlement.

The determination means that Boeing could be prosecuted “for any federal criminal violation of which the United States has knowledge,” including the charge of fraud that the company hoped to avoid with the $2.5 billion settlement, the Justice Department said.

However, it is not clear whether the government will prosecute the manufacturing giant.

“The Government is determining how it will proceed in this matter,” the Justice Department said in the court filing. Prosecutors said they will meet with families of the crash victims on May 31.

Paul Cassell, a lawyer who represents families of passengers who died in the Max crash in Ethiopia, called it a “positive first step, and for the families, a long time coming.”

“But we need to see further action from DOJ to hold Boeing accountable, and plan to use our meeting on May 31 to explain in more details what we believe would be a satisfactory remedy to Boeing’s ongoing criminal conduct,” Cassell said.

Investigations into the 2018 and 2019 crashes pointed to a flight-control system that Boeing added to the Max without telling pilots or airlines. Boeing downplayed the significance of the system, then didn’t overhaul it until after the second crash.

The Justice Department investigated Boeing and settled the case in January 2021. After secret negotiations, the government agreed not to prosecute Boeing on a charge of defrauding the United States by deceiving regulators who approved the plane.

In exchange, the company paid $2.5 billion — a $243.6 million fine, a $500 million fund for victim compensation, and nearly $1.8 billion to airlines whose Max jets were grounded.

Boeing has faced civil lawsuits, congressional investigations and massive damage to its business since the crashes in Indonesia and Ethiopia.

Dublin to New York City Portal Temporarily Shut Down Due to Inappropriate Behavior

14 May 2024 at 14:57
People interact with a livestream video "portal" in NYC

A portal linking New York City to Dublin via a livestream has been temporarily shut down after inappropriate behavior ensued, according to the Dublin City Council. 

Less than a week after the 24/7 visual art installation was put in place, officials opted to close it down temporarily after people began to flash each other, grind against the portal, and, in one instance, display images of the September 11 attacks on the Twin Towers to people in New York City. The portal had also been the site of reunions with old friends and even a marriage proposal, with many documenting their experiences with the installation online.

The Dublin City Council said that although those engaged in the inappropriate behavior were few and far between, videos of said behavior went viral online. 

“While we cannot control all of these actions, we are implementing some technical solutions to address this and these will go live in the next 24 hours,” the council said in a Monday statement. “We will continue to monitor the situation over the coming days with our partners in New York to ensure that portals continue to deliver a positive experience for both cities and the world.”

The New York City portal is next to the Flatiron Building, while Dublin’s is at the junction of North Earl Street and O’Connell Street.

What is the portal?

The portal was launched on May 8 as a way to bring people together via technology. 

“Portals are an invitation to meet people above borders and differences and to experience our world as it really is—united and one,” said Benediktas Gylys, the Lithuanian artist and founder of The Portal. “The livestream provides a window between distant locations, allowing people to meet outside of their social circles and cultures, transcend geographical boundaries, and embrace the beauty of global interconnectedness.”

The Dublin portal is set to connect with other cities and destinations in Poland, Brazil, and Lithuania, the Dublin City Council said in a May 8 press release. The connection with New York City is expected to remain through autumn, with additional cultural performances starting in mid-May.

Why Biden Is Taking a Hard Line on Chinese EVs

14 May 2024 at 11:21

The Biden Administration announced new tariffs on Tuesday for Chinese-made electric vehicles, quadrupling the effective rate from 27.5% to 102.5%, as well as new tariffs on solar cells, steel, and aluminum. (The headline rate roughly quadruples because the 25% Section 301 tariff rises to 100%; the remaining 2.5% is the standard import duty.)

The tariffs are expected to affect $18 billion worth of imports from China.

Currently, China exports very few electric vehicles to the U.S., so the tariffs are unlikely to have much of an impact in the short run. In the first quarter of 2024, only one Chinese carmaker, Geely, exported EVs to the U.S., and it accounted for less than 1% of the market.

Nevertheless, the Biden Administration says it worries that in the long run, China’s subsidies for its electric vehicle industry could allow it to claim a larger share of the market. “When the global market is flooded by artificially cheap Chinese products, the viability of American and other foreign firms is put into question,” Treasury Secretary Janet Yellen said in a speech during her visit to Beijing in April.

Since coming into office, President Joe Biden has left the tariffs Trump put in place on China intact, as part of a bid to encourage more American manufacturing. On a Monday call with reporters, Lael Brainard, director of the White House National Economic Council, said that the tariffs would help manufacturing workers in Pennsylvania and Michigan by ensuring that “historic investments in jobs spurred by President Biden’s actions are not undercut by a flood of unfairly underpriced exports from China.”

Some observers have suggested that the tariffs are an attempt to get ahead of Donald Trump, who has expressed support for an across-the-board levy of 60% or more on all Chinese goods.

The announcement also comes during an election year in which tensions between the U.S. and China are running high. Some 83% of Americans have an unfavorable view of China, according to a survey conducted by the Pew Research Center in 2023.

Beijing has responded by saying that the new tariffs violate the World Trade Organization’s rules. “Section 301 tariffs imposed by the former US administration on China have severely disrupted normal trade and economic exchanges between China and the US. The WTO has already ruled those tariffs against WTO rules,” said Lin Jian, a Chinese Foreign Ministry spokesperson, in a conversation with reporters on Friday.

Ahead of the announcement, senior U.S. officials denied the tariffs are related to the presidential election, the Financial Times reported. “This has nothing to do with politics,” one official said.

Why Protesters Around the World Are Demanding a Pause on AI Development 

13 May 2024 at 23:20
Pause AI protest in London

Just one week before the world’s second-ever global summit on artificial intelligence, protesters from a small but growing movement called “Pause AI” demanded that the world’s governments regulate AI companies and freeze the development of new cutting-edge artificial intelligence models. They say that the development of these models should only be allowed to continue if companies agree to have them thoroughly evaluated for safety first. Protests took place on Monday across thirteen countries, including the U.S., the U.K., Brazil, Germany, Australia, and Norway.

In London, a group of 20 or so protesters stood outside the U.K.’s Department of Science, Innovation and Technology chanting slogans like “stop the race, it’s not safe” and “whose future? our future” in the hopes of attracting the attention of policymakers. The protesters say their goal is to get governments to regulate the companies developing frontier AI models, including OpenAI’s ChatGPT. They say that companies are not taking enough precautions to make sure their AI models are safe enough to be released into the world.

“[AI companies] have proven time and time again… through the way that these companies’ workers are treated, with the way that they treat other people’s work by literally stealing it and throwing it into their models. They have proven that they cannot be trusted,” said Gideon Futerman, an Oxford undergraduate student who gave a speech at the protest.

One protester, Tara Steele, a freelance writer who works on blogs and SEO content, said that she had seen the technology impact her own livelihood. “I have noticed since ChatGPT came out, the demand for freelance work has reduced dramatically,” she says. “I love writing personally… I’ve really loved it. And it is kind of just sad, emotionally.”

Read More: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

She says her main reason for protesting is that she fears there could be even more dangerous consequences from frontier artificial intelligence models in the future. “We have a host of highly qualified, knowledgeable experts, Turing Award winners, highly cited AI researchers, and the CEOs of the AI companies themselves [saying that AI could be extremely dangerous].” (The Turing Award is an annual prize awarded to computer scientists for contributions of major importance to the field, and is sometimes referred to as the “Nobel Prize” of computing.)

She’s especially concerned about the growing number of experts who warn that improperly controlled AI could lead to catastrophic consequences. A report commissioned by the U.S. government and published in March warned that “the rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” Currently, the largest AI labs are attempting to build systems capable of outperforming humans on nearly every task, including long-term planning and critical thinking. If they succeed, ever more aspects of human activity could become automated, ranging from mundane things like online shopping to the introduction of autonomous weapons systems that could act in ways we cannot predict. This could lead to an “arms race” that increases the likelihood of “global- and WMD [weapons of mass destruction]-scale fatal accidents, interstate conflict, and escalation,” according to the report.

Read More: Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

Experts still don’t understand the inner workings of AI systems like ChatGPT, and they worry that in more sophisticated systems, our lack of knowledge could lead us to dramatically miscalculate how more powerful systems would act. Depending on how integrated AI systems become in human life, they could wreak havoc and gain control of dangerous weapons systems, leading many experts to worry about the possibility of human extinction. “Those warnings aren’t getting through to the general public, and they need to know,” she says.

As of now, machine learning experts are somewhat divided about exactly how risky further development of artificial intelligence technology is. Geoffrey Hinton and Yoshua Bengio, two of the three godfathers of deep learning (a type of machine learning that allows AI systems to better simulate the decision-making processes of the human brain), have publicly stated that they believe there is a risk the technology could lead to human extinction.

Read More: Eric Schmidt and Yoshua Bengio Debate How Much A.I. Should Scare Us

The third godfather, Yann LeCun, who is also the Chief AI Scientist at Meta, staunchly disagrees with the other two. He told Wired in December that “AI will bring a lot of benefits to the world. But people are exploiting the fear about the technology, and we’re running the risk of scaring people away from it.”

Anthony Bailey, another Pause AI protester, said that while he understands there are benefits that could come from new AI systems, he worries that tech companies will be incentivized to build technologies that humans could easily lose control over, because these technologies also have immense potential for profit. “That’s the economically valuable stuff. That’s the stuff that if people are not dissuaded that it’s dangerous, those are the kinds of modules which are naturally going to be built.” 
