Today — 16 May 2024

How to Hit Pause on AI Before It’s Too Late

16 May 2024 at 15:22
Demonstrator holding "No AI" placard

Only 18 months have passed, but the release of ChatGPT back in November 2022 already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are pouring into AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses these large language models. Our world, and in particular the world of AI, has decidedly changed.

But the real prize of human-level AI—or artificial general intelligence (AGI)—has yet to be achieved. Such a breakthrough would mean an AI that can carry out most economically productive work, engage with others, do science, build and maintain social networks, conduct politics, and carry out modern warfare. The main constraint for all these tasks today is cognition. Removing this constraint would be world-changing. Yet many across the globe’s leading AI labs believe this technology could be a reality before the end of this decade.

That could be an enormous boon for humanity. But AI could also be extremely dangerous, especially if we cannot control it. Uncontrolled AI could hack its way into online systems that power so much of the world, and use them to achieve its goals. It could gain access to our social media accounts and create tailor-made manipulations for large numbers of people. Even worse, military personnel in charge of nuclear weapons could be manipulated by an AI to share their credentials, posing a huge threat to humanity.

It would be a constructive step to make it as hard as possible for any of that to happen by strengthening the world’s defenses against adverse online actors. But when an AI can persuade humans, a skill at which it already outperforms us, there is no known defense.

For these reasons, many AI safety researchers at AI labs such as OpenAI, Google DeepMind, and Anthropic, and at safety-minded nonprofits, have given up on trying to limit the actions future AI can take. They are instead focusing on creating “aligned” or inherently safe AI. Aligned AI might become powerful enough to exterminate humanity, but it should not want to do so.

There are big question marks about aligned AI. First, the technical part of alignment is an unsolved scientific problem. Recently, some of the best researchers working on aligning superhuman AI left OpenAI in dissatisfaction, a move that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned to. If it were an academic value system, such as utilitarianism, we might quickly find out that most humans’ values actually do not match these aloof ideas, after which the unstoppable superintelligence could go on to act against most people’s will forever. If the alignment were to people’s actual intentions, we would need some way to aggregate these very different intentions. While idealistic solutions such as a U.N. council or AI-powered decision aggregation algorithms are in the realm of possibility, there is a worry that superintelligence’s absolute power would be concentrated in the hands of very few politicians or CEOs. This would of course be unacceptable for—and a direct danger to—all other human beings.

Read More: The Only Way to Deal With the Threat From AI? Shut It Down

Dismantling the time bomb

If we cannot find a way to at the very least keep humanity safe from extinction, and preferably also from an alignment dystopia, AI that could become uncontrollable must not be created in the first place. This solution, postponing human-level or superintelligent AI until safety concerns are solved, has the downside that AI’s grand promises—ranging from curing disease to creating massive economic growth—will need to wait.

Pausing AI might seem like a radical idea to some, but it will be necessary if AI continues to improve without us reaching a satisfactory alignment plan. When AI’s capabilities reach near-takeover levels, the only realistic option is that labs are firmly required by governments to pause development. Doing otherwise would be suicidal.

And pausing AI may not be as difficult as some make it out to be. At the moment, only a relatively small number of large companies have the means to perform leading training runs, meaning enforcement of a pause is mostly limited by political will, at least in the short run. In the longer term, however, hardware and algorithmic improvements mean a pause may become harder to enforce. Enforcement between countries would be required, for example with a treaty, as would enforcement within countries, with steps like stringent hardware controls.

In the meantime, scientists need to better understand the risks. Although there is widely shared academic concern, no consensus exists yet. Scientists should formalize their points of agreement, and show where and why their views deviate, in the new International Scientific Report on Advanced AI Safety, which should develop into an “Intergovernmental Panel on Climate Change for AI risks.” Leading scientific journals should open up further to existential risk research, even if it seems speculative. The future does not provide data points, but looking ahead is as important for AI as it is for climate change.

For their part, governments have an enormous part to play in how AI unfolds. This starts with officially acknowledging AI’s existential risk, as the U.S., U.K., and E.U. have already done, and setting up AI safety institutes. Governments should also draft plans for the most important conceivable scenarios, as well as for AGI’s many non-existential issues such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public evaluation.

It is great progress that major AI countries are constructively discussing common policy at biannual AI safety summits, including one in Seoul from May 21 to 22. This process, however, needs to be safeguarded and expanded. Working on a shared ground truth about AI’s existential risks and voicing shared concern with all 28 invited nations would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking AI hardware, expanding liability for AI labs, and excluding copyrighted content from training. An international AI agency needs to be set up to oversee enforcement.

It is fundamentally difficult to predict scientific progress. Still, superhuman AI will likely impact our civilization more than anything else this century. Simply waiting for the time bomb to explode is not a feasible strategy. Let us use the time we have as wisely as possible.

Billionaire Frank McCourt Wants to Buy TikTok. Here’s Why He Thinks He Could Save It

16 May 2024 at 15:21
McCourt

Billionaire Frank McCourt has long argued that the internet needs to be radically changed on an infrastructural level in order to reduce its toxicity, misinformation, and extractive nature. Now, he’s hoping to slide into a power vacuum in pursuit of that goal. McCourt is putting together a bid to buy TikTok from Chinese technology company ByteDance, which faces a U.S. ban at the end of this year unless it sells the wildly popular app.

McCourt’s central thesis is that users should have more control over their personal data and online identities. His aim is to assemble a coalition to buy TikTok, so that its most valuable user data would be kept not by a single company, but on a decentralized protocol. McCourt has developed this idea in conjunction with technologists, academics, and policymakers via his nonprofit Project Liberty. His plan has received support from luminaries including the author Jonathan Haidt (The Anxious Generation) and Tim Berners-Lee, the inventor of the World Wide Web.

McCourt did not say how much he thinks TikTok is worth. Other magnates who have expressed interest in bidding for TikTok include Kevin O’Leary and Steve Mnuchin.

But there is no indication that ByteDance plans to sell TikTok; the company is challenging the forced sale in the U.S. court system on free-speech grounds. And McCourt faces many obstacles in folding TikTok into his technological vision while ensuring the app’s profitability—especially because he says he’s not interested in buying the core algorithm that has hypercharged TikTok’s growth.

Read More: TikTok Vows to Fight Its Ban. Here’s How the Battle May Play Out

In an interview with TIME, McCourt explained his vision for the app and the larger internet ecosystem. Here are excerpts from the conversation.

TIME: A couple years ago, you stepped down as CEO from McCourt Global in order to devote most of your time to Project Liberty, whose goal is fixing the internet. How pivotal could buying TikTok be towards that mission?

Frank McCourt: I think it’s a fantastic opportunity to really accelerate things and catalyze an alternative version of the internet where individuals own and control their identity and data. The internet does not have to operate the way it does right now. It’s important to remember that the other big platforms in the U.S. operate with the same architecture as TikTok: of scraping people’s data and aggregating it and then exploiting it. 

When I say data, it sounds abstract. But it’s our personhood; it’s everything about us. And I think it’s well past time that we correct that fundamental flaw in the design of the internet and return agency to individuals.

Let’s say I’m a small business owner who uses TikTok to post content and sell goods. How would my experience improve under your new design?

The user experience wouldn’t change much. We want this to be a seamless thing. Part of our thinking is to keep TikTok U.S. alive, because China has said they’re not sharing the [core] algorithm under any circumstances. And without a viable bidder willing to move forward without the algorithm, they may shut it down. But we’re not looking for the algorithm.

Many people contend that the core algorithm is essential to TikTok’s value. Do you worry that TikTok wouldn’t be TikTok without it?

What makes TikTok, TikTok, to me, is the user base, the content created by the user base, the brand, and all the tech short of the algorithm. Of course, TikTok isn’t worth as much without the algorithm. I get that. That’s pretty plain. But we’re talking about a different design, which requires people to move on from the mindset and the paradigm we’re in now. 

It will be a version where everyone is deciding what pieces or portions of their data to share with whom. So you still have a user experience every bit as good, but with much better architecture overall. And not only will individuals have agency, but let’s have a broader group of people participating in who shares in the economic value of the platform itself. 

Read More: Why The Billionaire Frank McCourt is Stepping Down As CEO Of His Company To Focus on Rebuilding Social Media

How would that value sharing work? Are you talking about some sort of directed shares program, or a crypto token?

It’s a bit early to have that conversation. That’s why we’ve retained Kirkland & Ellis to advise us, along with Guggenheim Securities. They’re grappling with and thinking through those very issues right now.

So how would users control their data?

Imagine an internet where individuals set the terms and conditions of their data with use cases and applications. And you’ll still want to share your data, because you’ll want to get the benefits of the internet. But you’re sharing it on a trusted basis. The mere act of giving permission to use it is very different than having it be surveilled and scraped.

The blockchain-based decentralized infrastructure you plan to use for TikTok, DSNP, is already running, and the social media app MeWe is currently migrating its tech and data onto it. What have you learned from MeWe’s transition?

That it works. Like any other engineering challenge, you have to go through all the baby steps to get it right. But the migration started in earnest in Q4, and over 800,000 users have migrated. To me, it’s important that we’re not bringing forward a concept: we’re bringing forward a proven tech solution.

In order to finance this bid, you will seek money from foundations, endowments, pension funds, and philanthropies. Are you confident that if you get these big investors on board, you’ll be able to return value to them?

I am. This opens up and unlocks enormous value for investors and users. At the same time, it has a tremendous impact for society. I mentioned the pension funds and endowments and foundations as a category of investor that have a longer term horizon, and look at making investments not strictly on the basis of financial ROI. It’s important they be involved, because this is a societal project to fundamentally change how the internet works.  

We want a lot of people involved in this in different ways, shapes and forms, which is another distinguishing characteristic. We don’t need Saudi money to replace Chinese money. We’re trying to bring forward a solution to address the problem at its root cause, not at the symptomatic level.

You committed $150 million to Project Liberty in 2022. Are you prepared to spend in that ballpark again for TikTok?

Update that number: I’ve committed half a billion dollars to Project Liberty. That should be an indication of my level of seriousness about all this, and my level of seriousness about the bid for TikTok U.S.

2023 Was the Worst Year for Internet Shutdowns Globally, New Report Says

16 May 2024 at 10:00
Internet Cut in Manipur, India

Last year, an internet shutdown in the Indian state of Manipur lasted a staggering 212 days, as the state government issued 44 consecutive orders to switch off access across all broadband and mobile networks. The shutdown affected a population of 3.2 million and made it harder to document rampant atrocities committed against minorities during bloody violence between the Meitei and Kuki-Zo tribes, including murder, rape, arson, and gender-based violence, says Access Now, a digital rights watchdog that publishes an annual report on internet shutdowns around the world.

Manipur was just one of hundreds of instances where authorities in India used the tactic as “a near-default response to crises, both proactively and reactively,” according to the group’s latest report published May 15. For the sixth consecutive year, India led the global list for imposing the highest number of internet shutdowns after removing access 116 times in 2023. 

What’s more, Access Now deemed 2023 the worst year for internet shutdowns globally, recording 283 shutdowns across 39 countries—the highest number of shutdowns in a single year since it first began monitoring in 2016. It’s a steep 41% increase from the previous year, which saw 201 shutdowns in 40 countries, and a 28% increase from 2019, which previously held the record for the highest number of shutdowns. 
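
As a quick sanity check on the headline comparison, here is a minimal sketch using only the counts quoted above (the 2023 and 2022 figures from the report):

```python
# Verify the year-over-year jump cited above: 283 shutdowns in 2023 vs. 201 in 2022.
shutdowns_2023 = 283
shutdowns_2022 = 201

increase = (shutdowns_2023 - shutdowns_2022) / shutdowns_2022
print(f"Year-over-year increase: {increase:.0%}")  # prints ~41%, matching the report
```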

“By nearly every measure, 2023 is the worst year of internet shutdowns ever recorded — highlighting an alarming and dangerous trend for human rights,” the report states.

Read More: How Internet Shutdowns Wreak Havoc in India

Of the shutdowns in 2023, 173 occurred in conflict zones and corresponded to acts of violence. In the Gaza Strip, for example, the Israeli military “used a combination of direct attacks on civilian telecommunications infrastructure, restrictions on access to electricity, and technical disruptions to shut down the internet,” the report reads. (In a statement to TIME, the IDF said “As part of the IDF’s operations in the Gaza Strip, the IDF is facilitating the restoration of infrastructure in areas affected by the war and is coordinating with local teams to bring infrastructure repair to these locations.”)

And in the Amhara region of Ethiopia, security forces imposed a near-total communications blackout to cause terror and mass displacement through the destruction of property and indiscriminate bombing across the region, according to the report.

The watchdog also points out that while the number of shutdowns associated with violence during armed conflict was high, in 74 instances across nine countries—including Palestine, Myanmar, Sudan, and Ukraine—warring parties claimed to deploy shutdowns as a peacekeeping measure during protests and politically unstable events. In India alone, authorities ordered 65 shutdowns in 2023 in specific attempts to address communal violence. Similarly, Pakistan and Bangladesh imposed seven and three shutdowns, respectively, to suppress political dissent during rallies and election campaigning.

Read More: Exclusive: Tech Companies Are Failing to Keep Elections Safe, Rights Groups Say

93% of all cases recorded in 2023 occurred without any advance public notice of an impending shutdown, a practice that Access Now says only deepens fear and uncertainty and puts more people in grave danger.

“We are at a tipping point, so take this as a wake-up call: all stakeholders across the globe — governments, civil society, and the private sector alike — must take urgent action to permanently end internet shutdowns,” Zach Rosson, a data analyst at Access Now, said in a statement.

Yesterday — 15 May 2024

OpenAI’s Co-Founder and Chief Scientist Ilya Sutskever Is Leaving the Company

15 May 2024 at 05:20
ISRAEL-SCIENCE-TECHNOLOGY-AI

OpenAI Chief Scientist and co-founder Ilya Sutskever is leaving the artificial intelligence company, a departure that ends months of speculation in Silicon Valley about the future of a top AI researcher who played a key role in the brief ouster of Sam Altman last year.

Sutskever will be replaced by Research Director Jakub Pachocki, OpenAI said on its blog Tuesday. 

In a post on X, Sutskever called the trajectory of OpenAI “miraculous” and said that he was confident the company will build AI that is “both safe and beneficial” under its current leadership.

The exit removes an executive and renowned researcher who has played a pivotal role in the company since its earliest days, helping guide discussions over the safety of AI technology and at times differing with Altman over strategy. When OpenAI was founded in 2015, he served as its research director after being recruited to join the company by Elon Musk. At that point, Sutskever was already well known in the field for his work on neural networks at the University of Toronto and at the Google Brain lab. Sutskever even officiated the wedding of OpenAI President Greg Brockman at the company’s offices.

Sutskever clashed with Altman over how rapidly to develop AI, a technology that prominent scientists have warned could harm humanity if allowed to grow without built-in constraints, for instance against misinformation. Jan Leike, another OpenAI veteran who co-led the so-called superalignment team with Sutskever, also resigned. Leike’s responsibilities included exploring ways to limit the potential harm of AI.

Last year, Sutskever was one of several OpenAI board members who moved to fire Chief Executive Officer Altman, a decision that touched off a whirlwind five days at the company: Brockman quit in protest. Investors revolted. And within days, nearly all of OpenAI’s roughly 770 employees signed a letter threatening to quit unless Altman was reinstated.

Adding to the chaos, Sutskever said he regretted his participation in Altman’s ouster. Soon after, the CEO was reinstated. 

After Altman returned to the company in late November, he said in a blog post that Sutskever wouldn’t go back to his former post as a board member, but that the company was “discussing how he can continue his work at OpenAI.”

In the subsequent months, Sutskever largely disappeared from public view, sparking speculation about his continued role at the company. Sutskever’s post on X Tuesday was the first time he shared anything on the social network since reposting a message from OpenAI in December.

Asked about Sutskever at a press conference in March, Altman said he loved him, and that he believed Sutskever loved OpenAI, adding: “I hope we work together for the rest of our careers.”

In a post on X on Tuesday, Altman wrote, “Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

On X, Sutskever posted that he is working on an as-yet-unnamed project that is “very personally meaningful” for him.

The company’s new chief scientist, Pachocki, has worked at OpenAI since 2017 and led the development of the company’s GPT-4 AI model, OpenAI said.

A Group of TikTok Creators Are Suing the U.S. to Block a Potential Ban on the App

TikTok Creators Hold Capitol Hill News Conference

A group of TikTok creators followed the company’s lead and filed their own lawsuit to block the U.S. law that would force Chinese parent ByteDance Ltd. to divest itself of the popular video app by January or face a ban.

Echoing the May 7 case filed by TikTok itself, the eight creators behind Tuesday’s suit are challenging an ultimatum by the U.S. meant to address national security concerns that the Chinese government could access user data or influence what’s seen on the platform. The creators include a rancher from Texas, a college football coach in North Dakota, the founder of a skincare line in Atlanta, and a Maryland book lover who promotes Black authors on the platform.

“Our clients rely on TikTok to express themselves, learn and find community,” Ambika Kumar, a lawyer for the creators, said in a statement. “They hope to vindicate not only their First Amendment rights, but the rights of the other approximately 170 million Americans who also use TikTok. The ban is a pernicious attack on free speech that is contrary to the nation’s founding principles.”

A Justice Department spokesperson said the government looks forward to defending the law in court.

“This legislation addresses critical national security concerns in a manner that is consistent with the First Amendment and other constitutional limitations,” the spokesperson said in a statement.

ByteDance has said it doesn’t have any intention of trying to find a buyer for TikTok as the January deadline approaches. Instead, ByteDance wants the law declared unconstitutional, saying it violates the First Amendment and represents an illegal punishment without due process or a presidential finding that the app is a national security threat.

Read More: What to Know About the Law That Could Get TikTok Banned in the U.S.

TikTok has argued the law will stifle free speech and hurt creators and small business owners who benefit economically from the platform. The company said that in response to data security concerns, it spent more than $2 billion to isolate its U.S. operations and agreed to oversight by American company Oracle Corp.

Professional content creators typically don’t make enough money from TikTok itself to earn a living. The social media company has a fund that pays certain creators based on performance, and it also shares revenue from products tagged and purchased through the app. Instead, creators use the app to build an audience in the hopes of landing lucrative brand sponsorship deals, in which they make videos for or plug the products of brands, much as on other social media platforms.

Read More: TikTok Vows to Fight Its Ban. Here’s How the Battle May Play Out

TikTok’s links to China have faced scrutiny under previous administrations. Former President Donald Trump used an executive order to try to force a sale of the app to an American company or face a ban. But his administration also faced multiple legal challenges—including from creators—and judges blocked the ban from taking place. When Joe Biden became president, he put Trump’s ban under fresh review.

A lobbying push against the law by TikTok Chief Executive Officer Shou Chew failed to convince U.S. lawmakers who worried about the national security threat of China potentially accessing user data and disseminating propaganda to about half the American population. Congress passed the law in April and Biden signed it.

Read More: The Grim Reality of Banning TikTok

Last year, Montana became the first U.S. state to enact a law that would ban residents from using the app. A federal judge sympathized with free-speech arguments by TikTok and creators in blocking the Montana measure while the legal challenges play out.

The Justice Department had no immediate comment on Tuesday’s suit.

DOJ Says Boeing Violated Deal That Avoided Prosecution After 737 Max Crashes

Justice Department Boeing

(WASHINGTON) — Boeing has violated a settlement that allowed the company to avoid criminal prosecution after two deadly crashes involving its 737 Max aircraft, the Justice Department told a federal judge on Tuesday.

It is now up to the Justice Department to decide whether to file charges against the aircraft maker amid increasing scrutiny over the safety of its planes. Prosecutors will tell the court no later than July 7 how they plan to proceed, the Justice Department said.

Boeing reached a $2.5 billion settlement with the Justice Department in January 2021 to avoid prosecution on a single charge of fraud – misleading regulators who approved the 737 Max. Boeing blamed the deception on two relatively low-level employees.

The manufacturing giant has come under renewed scrutiny since a door-plug panel blew off a 737 Max jetliner during an Alaska Airlines flight in January. The company is under multiple investigations, and the FBI has told passengers from the flight that they might be victims of a crime.

Boeing didn’t immediately respond to a request for comment.

Glenn Leon, head of the Justice Department criminal division’s fraud section, said in the letter filed in Texas federal court that Boeing failed to make changes to prevent it from violating federal anti-fraud laws — a condition of the 2021 settlement.

The determination means that Boeing could be prosecuted “for any federal criminal violation of which the United States has knowledge,” including the charge of fraud that the company hoped to avoid with the $2.5 billion settlement, the Justice Department said.

However, it is not clear whether the government will prosecute the manufacturing giant.

“The Government is determining how it will proceed in this matter,” the Justice Department said in the court filing. Prosecutors said they will meet with families of the crash victims on May 31.

Paul Cassell, a lawyer who represents families of passengers who died in the Max crash in Ethiopia, called it a “positive first step, and for the families, a long time coming.”

“But we need to see further action from DOJ to hold Boeing accountable, and plan to use our meeting on May 31 to explain in more detail what we believe would be a satisfactory remedy to Boeing’s ongoing criminal conduct,” Cassell said.

Investigations into the 2018 and 2019 crashes pointed to a flight-control system that Boeing added to the Max without telling pilots or airlines. Boeing downplayed the significance of the system, then didn’t overhaul it until after the second crash.

The Justice Department investigated Boeing and settled the case in January 2021. After secret negotiations, the government agreed not to prosecute Boeing on a charge of defrauding the United States by deceiving regulators who approved the plane.

In exchange, the company paid $2.5 billion — a $243.6 million fine, a $500 million fund for victim compensation, and nearly $1.8 billion to airlines whose Max jets were grounded.

Boeing has faced civil lawsuits, congressional investigations and massive damage to its business since the crashes in Indonesia and Ethiopia.

Before yesterday

Dublin to New York City Portal Temporarily Shut Down Due to Inappropriate Behavior

14 May 2024 at 14:57
People interact with a livestream video "portal" in NYC

A portal linking New York City to Dublin via a livestream has been temporarily shut down after inappropriate behavior ensued, according to the Dublin City Council. 

Less than a week after the 24/7 visual art installation went live, officials opted to close it temporarily after people began to flash each other and grind on the portal, and one person even displayed images of the September 11 attacks to people in New York City. At the same time, the portal has also been the site of reunions with old friends and even a proposal, with many documenting their experiences with the installation online.

The Dublin City Council said that although those engaged in the inappropriate behavior were few and far between, videos of said behavior went viral online. 

“While we cannot control all of these actions, we are implementing some technical solutions to address this and these will go live in the next 24 hours,” the council said in a Monday statement. “We will continue to monitor the situation over the coming days with our partners in New York to ensure that portals continue to deliver a positive experience for both cities and the world.”

The New York City portal is next to the Flatiron Building, while Dublin’s is at the junction of North Earl Street and O’Connell Street.

What is the portal?

The portal was launched on May 8 as a way to bring people together via technology. 

“Portals are an invitation to meet people above borders and differences and to experience our world as it really is—united and one,” said Benediktas Gylys, the Lithuanian artist and founder of The Portal. “The livestream provides a window between distant locations, allowing people to meet outside of their social circles and cultures, transcend geographical boundaries, and embrace the beauty of global interconnectedness.”

The Dublin portal is set to connect with other cities and destinations in Poland, Brazil, and Lithuania, the Dublin City Council said in a May 8 press release. The connection with New York City is expected to remain through autumn, with additional cultural performances starting in mid-May.

Why Biden Is Taking a Hard Line on Chinese EVs

14 May 2024 at 11:21
Biden China EV

The Biden Administration on Tuesday announced new tariffs on Chinese-made electric vehicles, roughly quadrupling the rate from 27.5% to 102.5%, as well as new tariffs on solar cells, steel, and aluminum.

The tariffs are expected to cover some $18 billion worth of imports from China.

Currently, China exports very few electric vehicles to the U.S., so the tariffs are unlikely to have much impact in the short run. In the first quarter of 2024, only one Chinese carmaker, Geely, exported EVs to the U.S., and it held less than 1% of the market.

Nevertheless, the Biden Administration says it worries that in the long run, China’s subsidies for its electric vehicle industry could allow it to claim a larger share of the market. “When the global market is flooded by artificially cheap Chinese products, the viability of American and other foreign firms is put into question,” Treasury Secretary Janet Yellen said in a speech during her visit to Beijing in April.

Since coming into office, President Joe Biden has left the tariffs Trump put in place on China intact, as part of a bid to encourage more American manufacturing. On a Monday call with reporters, Lael Brainard, director of the White House National Economic Council, said that the tariffs would help manufacturing workers in Pennsylvania and Michigan by ensuring that “historic investments in jobs spurred by President Biden’s actions are not undercut by a flood of unfairly underpriced exports from China.”

Some observers have suggested that the tariffs are an attempt to get ahead of Donald Trump, who has expressed support for an across-the-board levy of 60% or more on all Chinese goods.

The announcement also comes in an election year in which tensions between the U.S. and China are running high. Some 83% of Americans hold an unfavorable view of China, according to a survey conducted by the Pew Research Center in 2023.

Beijing has responded by saying the new tariffs violate World Trade Organization rules. “Section 301 tariffs imposed by the former US administration on China have severely disrupted normal trade and economic exchanges between China and the US. The WTO has already ruled those tariffs against WTO rules,” Lin Jian, a Chinese Foreign Ministry spokesperson, told reporters on Friday.

Ahead of the announcement, senior U.S. officials denied the tariffs are related to the presidential election, the Financial Times reported. “This has nothing to do with politics,” one official said.

Why Protesters Around the World Are Demanding a Pause on AI Development 

13 May 2024 at 23:20
Pause AI protest in London

Just one week before the world’s second-ever global summit on artificial intelligence, protesters from a small but growing movement called “Pause AI” demanded that the world’s governments regulate AI companies and freeze the development of new cutting-edge artificial intelligence models. They say development of these models should only be allowed to continue if companies agree to have them thoroughly evaluated for safety first. Protests took place on Monday across 13 countries, including the U.S., the U.K., Brazil, Germany, Australia, and Norway.

In London, a group of 20 or so protesters stood outside the U.K.’s Department for Science, Innovation and Technology chanting slogans like “stop the race, it’s not safe” and “whose future? our future” in the hopes of attracting the attention of policymakers. The protesters say their goal is to get governments to regulate the companies developing frontier AI models, including OpenAI, the maker of ChatGPT. They say companies are not taking enough precautions to make sure their AI models are safe enough to be released into the world.

“[AI companies] have proven time and time again… through the way that these companies’ workers are treated, with the way that they treat other people’s work by literally stealing it and throwing it into their models, they have proven that they cannot be trusted,” said Gideon Futerman, an Oxford undergraduate student who gave a speech at the protest.

One protester, Tara Steele, a freelance writer who works on blogs and SEO content, said that she had seen the technology impact her own livelihood. “I have noticed since ChatGPT came out, the demand for freelance work has reduced dramatically,” she says. “I love writing personally… I’ve really loved it. And it is kind of just sad, emotionally.”

Read More: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

She says that her main reason for protesting is because she fears that there could be even more dangerous consequences that come from frontier artificial intelligence models in the future. “We have a host of highly qualified knowledgeable experts, Turing Award winners, highly cited AI researchers, and the CEOs of the AI companies themselves [saying that AI could be extremely dangerous].” (The Turing Award is an annual prize awarded to computer scientists for contributions of major importance to the subject, and is sometimes referred to as the “Nobel Prize” of computing.) 

She’s especially concerned about the growing number of experts who warn that improperly controlled AI could lead to catastrophic consequences. A report commissioned by the U.S. government, published in March, warned that “the rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” Currently, the largest AI labs are attempting to build systems that are capable of outperforming humans on nearly every task, including long-term planning and critical thinking. If they succeed, increasing aspects of human activity could become automated, ranging from mundane things like online shopping, to the introduction of autonomous weapons systems that could act in ways that we cannot predict. This could lead to an “arms race” that increases the likelihood of “global- and WMD [weapons of mass destruction]-scale fatal accidents, interstate conflict, and escalation,” according to the report.

Read More: Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

Experts still don’t understand the inner workings of AI systems like ChatGPT, and they worry that in more sophisticated systems, our lack of knowledge could lead us to dramatically miscalculate how more powerful systems would act. Depending on how integrated AI systems are into human life, they could wreak havoc and gain control of dangerous weapons systems, leading many experts to worry about the possibility of human extinction. “Those warnings aren’t getting through to the general public, and they need to know,” she says.

As of now, machine learning experts are somewhat divided about exactly how risky further development of artificial intelligence technology is. Geoffrey Hinton and Yoshua Bengio, two of the three godfathers of deep learning (a type of machine learning that allows AI systems to better simulate the decision-making process of the human brain), have publicly stated that they believe there is a risk the technology could lead to human extinction.

Read More: Eric Schmidt and Yoshua Bengio Debate How Much A.I. Should Scare Us

The third godfather, Yann LeCun, who is also the Chief AI Scientist at Meta, staunchly disagrees with the other two. He told Wired in December that “AI will bring a lot of benefits to the world. But people are exploiting the fear about the technology, and we’re running the risk of scaring people away from it.”

Anthony Bailey, another Pause AI protester, said that while he understands there are benefits that could come from new AI systems, he worries that tech companies will be incentivized to build technologies that humans could easily lose control over, because these technologies also have immense potential for profit. “That’s the economically valuable stuff. That’s the stuff that if people are not dissuaded that it’s dangerous, those are the kinds of modules which are naturally going to be built.” 

Why GameStop’s Resurgence Could Signal Another Meme Stock Frenzy

13 May 2024 at 17:00
Keith Gill

A single JPEG has catalyzed yet another rabid surge in the stock price of the video game store GameStop: its price jumped by more than 70% on Monday morning.  

On the evening of Sunday, May 12, a man named Keith Gill posted an illustration on X of a man bolting upright in his chair. Gill, who goes by the handle Roaring Kitty, is something of a digital folk hero to many amateur investors—he was one of the major catalysts of the WallStreetBets craze of 2021. His reappearance and the subsequent enthusiasm—coupled with other rising “meme stocks”—suggest that the U.S. is fully in the midst of another period of meme stock frenzy, in which small-scale investors rally together to push the stock market in unpredictable ways. On Monday, GameStop’s shares briefly passed $36—their highest price since August 2022—and were halted multiple times for volatility. GameStop also topped the trending lists on both Google and X.

In late 2020, Gill became renowned for his stock market advice on YouTube and the subreddit Wall Street Bets. In particular, he advised his fellow investors to buy GameStop shares, believing that they were undervalued. Some major Wall Street institutions, conversely, were betting on GameStop to fail, as a declining number of people went to physical stores to buy video games, instead purchasing them online.

But a sprawling online community soon rallied around Gill’s thesis, hyping up GameStop with memes and other posts on social media. Millions of everyday people soon bought shares, pushing its price to unprecedented heights and punishing the hedge funds that had bet against it. GameStop soon became the textbook definition of a “meme stock”: a stock whose value is driven more by social media enthusiasm than by any underlying financial metrics. The GameStop saga showed Wall Street traditionalists that coordinated small-scale retail investors could have an outsize impact on the stock market.

Read More: Dumb Money and the Complicated Legacy of GameStop

Interest in meme stocks waned after a few months, and Gill mostly disappeared from public life. In September 2023, his story was canonized in the Hollywood film Dumb Money, in which he was played by Paul Dano. The film portrays Gill’s unwavering belief in his investments, and his refusal to sell shares even when they were worth millions of dollars—because many other investors looked to him as the leader of a movement and would only sell if he did first. 

Gill’s X account lay dormant for nearly three years. But on Sunday, the cartoon of a man sitting upright seemed to signal that he was once again ready to jump into the investing fray and rally fellow traders into another mission. The image accrued 14 million views and 77,000 likes in 15 hours. The next morning, he posted several more memes from pop culture, including of a resurgent Wolverine (Hugh Jackman) and Breaking Bad’s Walter White (Bryan Cranston) growling, “We’re done when I say we’re done.”

GameStop the company, meanwhile, hasn’t been performing particularly well. In March, it slashed its workforce and reported lower year-over-year fourth-quarter revenue, as it faced continued competition from online retailers and weak consumer spending.

But its stock’s resurgence comes in the midst of a larger spike of activity in meme stocks. Crypto meme coins have seen significant trading volume over recent months, and jumped once again following Gill’s post. Other meme stocks also jumped, including AMC, which increased 22%, and Reddit, which increased 13%.

Speculative stocks typically see increased activity when the economy is strong, and people feel like they have money to gamble with. Many participants in the WallStreetBets craze also felt like they were waging a symbolic war against Wall Street and its control of the financial system. Many individuals certainly made a lot of money. But whether the larger collective mission was successful has been hotly debated. “The whole GameStop thing: they lost,” Kyla Scanlon, an economics analyst and content creator, told TIME last year. “It’s very hard to beat the stock market.”

Big Tech Companies Were Investors in Smaller AI Labs. Now They’re Rivals

13 May 2024 at 14:29

Amazon and Microsoft have, so far, stood slightly apart from the artificial intelligence arms race. While Google and Meta made developing their own AI models a top priority, Microsoft and Amazon have invested in smaller technology companies, in return receiving access to those companies’ AI models that they then incorporated into their products and services.

Microsoft has invested at least $13 billion in OpenAI, the company behind ChatGPT. As part of this agreement, OpenAI gives Microsoft exclusive access to the AI systems it develops, while Microsoft provides OpenAI with the computational power it needs. Anthropic has deals with both Amazon and Google, receiving $4 billion from Amazon and up to $2 billion from Google in exchange for making its models available through the two companies’ cloud services platforms. (Investors in Anthropic also include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)

Now, there are signs that the two technology giants are wading deeper into the fray. In March, The Verge reported that Amazon has tasked its AGI team with building a model that outperforms Anthropic’s most capable AI model, Claude 3, by the middle of this year. Earlier this month, The Information reported that Microsoft is training a foundation model large enough to compete with frontier model developers such as OpenAI.

While there are many types of AI systems that are used in a multitude of ways, the big trend of the last couple of years is language models—the AI systems that can generate coherent prose and usable code, and that power chatbots such as ChatGPT. While younger companies OpenAI and Anthropic, alongside the more established Google DeepMind, are in the lead for now, their new big tech rivals have advantages that will be hard to offset. And if the tech giants come to dominate the AI market, the implications—for corporate concentration of power and for whether the most powerful AI systems are being developed safely—could be troubling.

A change in strategy

Over the course of the 2010s, AI researchers began to realize that training their AI systems with more computational power would reliably make them more capable. Over the same period, the computational power used to train AI models increased rapidly, doubling every six months according to researchers at Epoch, an AI-focused research institute.
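
As a back-of-the-envelope illustration of what a six-month doubling time implies, here is a minimal sketch; the growth rate is the Epoch estimate cited above, while the time horizons are arbitrary examples:

```python
# Rough illustration of compute growth under a 6-month doubling time.
# The doubling rate is the Epoch estimate cited above; the horizons are examples.

def compute_multiplier(years: float, doubling_months: float = 6.0) -> float:
    """Factor by which training compute grows over `years` at the given doubling time."""
    doublings = years * 12.0 / doubling_months
    return 2.0 ** doublings

for years in (1, 5, 10):
    print(f"{years:>2} years -> ~{compute_multiplier(years):,.0f}x more compute")
# Ten years of 6-month doublings is 2**20: roughly a million-fold increase.
```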

The specialized semiconductor chips required to do that much computational work are expensive, as is employing the engineers who know how to make use of them. OpenAI CEO Sam Altman has stated that GPT-4 cost over $100 million to train. Needing more and more capital is why OpenAI, which was founded in 2015 as a nonprofit, changed its structure and went on to ink multibillion-dollar deals with Microsoft, and why Anthropic has signed similar agreements with Amazon and Google. Google DeepMind—the AI team within Google that develops Google’s most powerful AI systems—was formed last year when Google merged its elite AI group, Google Brain, with DeepMind. Much like OpenAI and Anthropic, DeepMind started out as a startup before it was acquired by Google in 2014.

Read More: Amazon’s Partnership With Anthropic Shows Size Matters in the AI Industry

These partnerships have paid off for all parties involved. OpenAI and Anthropic have been able to access the computational power they need to train state-of-the-art AI models—most commentators agree that OpenAI’s GPT-4 and Anthropic’s Claude 3 Opus, along with Google DeepMind’s Gemini Ultra, are the three most capable models currently available. Companies behind the frontier have so far tried alternative business strategies. For example, Meta gives more thorough access to its AI models in order to benefit from developers outside the company tuning them up, and to attract talented researchers who prefer to be able to openly publish their work.

In their April quarterly earnings reports, Microsoft and Amazon reported bumper results, which they both partly credited to AI. Both companies also benefit from the agreements in that a large proportion of the money flows back to them, as it is used to purchase computational power from their cloud computing units.

However, as the technical feasibility and commercial utility of training larger models has become apparent, it has become more attractive for Microsoft and Amazon to build their own large models, says Neil Thompson, who researches the economics of AI as the director of the FutureTech research project at the Massachusetts Institute of Technology. Building their own models should, if successful, be cheaper than licensing the models from their smaller partners and give the big tech companies more control over how they use the models, he says.

It’s not only the big tech companies that are making advances. OpenAI’s Altman has pitched his company’s products to a range of large firms that include Microsoft customers.

Who will win out?

The good news for OpenAI and Anthropic is that they have a head start. GPT-4 and Claude 3 Opus, alongside Google’s Gemini Ultra, are still in a different class from other language models such as Meta’s Llama 3, according to a popular chatbot ranking site. OpenAI notably finished training GPT-4 back in August 2022.

But maintaining this lead will be “a constant struggle,” writes Nathan Benaich, founder and general partner at venture capital firm Air Street Capital, in an email to TIME. “Labs are in the challenging position of being in constant fundraising mode to pay for talent and hardware, while lacking a plan to translate this model release arms race into a sustainable long-term business. As the sums of money involved become too high for US investors, they’ll also start having to navigate tricky questions around foreign sovereign wealth.” In February, the Wall Street Journal reported that Altman was in talks with investors including the U.A.E. government to raise up to $7 trillion for AI chip manufacturing projects.

Read More: The UAE Is on a Mission to Become an AI Power

Big technology companies, on the other hand, have ready access to computational resources—Amazon, Microsoft, and Google account for 31%, 24%, and 11% of the global cloud infrastructure market, respectively, according to data from market intelligence firm Synergy Research Group. This makes it cheaper for them to train large models. It also means that, even if further development of language models doesn’t pay off commercially for any company, the tech companies selling access to computational power via the cloud can still profit.

“The cloud providers are the shovel salesmen during the gold rush. Whether frontier model builders make money or lose it, cloud providers win,” writes Benaich. “Companies like Microsoft and Amazon sit in an enviable position in the value chain, combining both the resources to build their own powerful models with the scale that makes them an essential distribution partner for newer entrants.”

But while the big technology companies may have certain advantages, the smaller companies have their own strengths, such as greater experience training the largest models, and the ability to attract the most talented researchers, says Thompson.

Anthropic is betting that its talent density and proprietary algorithms will allow it to stay at the frontier while using less computational resources than many of its competitors, says Jack Clark, one of the company’s co-founders and head of policy. “We’re going to be on the frontier surprisingly efficiently relative to others,” he says. “For the next few years, I don’t have concerns about this.”

If Big Tech wins

It is still very much an open question whether big technology companies will manage to outcompete their smaller investees. But if they were to, there could be implications for market competition and for efforts to ensure the development of powerful AI systems benefits society. 

While it could be argued that more companies entering the foundation model market would increase competition, it is more likely that the vertical integration will serve to increase the power of already powerful technology companies, argues Amba Kak, co-executive director of the AI Now Institute, a research institute that studies the social implications of artificial intelligence.

“Viewing this as ‘more competition’ would be the most inventive corporate spin that obscures the reality that all the versions of this world serve to consolidate the concentration of power in tech,” she writes to TIME. “We need to be wary of this kind of spin especially in the context of heightened antitrust scrutiny from the UK CMA, the FTC and European Commission.”

Read More: U.K. Competition Watchdog Signals Cautious Approach to AI Regulation

Larger companies coming to dominate could also be troubling because the smaller companies that currently lead were explicitly founded in order to ensure that the building of powerful AI systems goes well for humanity, says Anton Korinek, an economics professor at the University of Virginia. OpenAI’s founding goal was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole,” and Anthropic’s founding goal was “to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people.” 

“In some sense, you can say, the AGI labs—OpenAI, Anthropic, DeepMind—were all founded on the basis of idealism,” he says. “Large shareholder owned and controlled corporations, they just can’t follow that strategy—they have to produce value for the shareholder ultimately.”

Even so, companies like OpenAI and Anthropic cannot act entirely in the public interest, because they’re also exposed to commercial incentives through the need to raise funds, says Korinek. “It’s part of that broader movement, that capital in the form of [computational power] is becoming the most important input,” he says. “If your training runs are in the millions, it is much easier to raise philanthropic funding for this. But if your training runs are in the billions, you do need financial returns, in the way that our economy is currently organized.”

With reporting by Billy Perrigo/San Francisco

Why This Chinese EV Poses a Big Threat to the U.S. Auto Industry

45th Bangkok International Motor Show.

LIVONIA, Mich. — A tiny, low-priced electric car called the Seagull has American automakers and politicians trembling.

The car, launched last year by Chinese automaker BYD, sells for around $12,000 in China, but drives well and is put together with craftsmanship that rivals U.S. electric vehicles that cost three times as much. A shorter-range version costs under $10,000.

Tariffs on imported Chinese vehicles will keep the Seagull out of America for now, and it likely would sell for more than $12,000 if imported.
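
For a ballpark sense of the arithmetic, here is a minimal sketch combining the roughly $12,000 price with the 27.5% tariff described below and the new 102.5% total rate reported above; it applies the duty to the retail price for simplicity, whereas real duties are assessed on declared customs value, so treat the outputs as illustrative only:

```python
# Ballpark tariff arithmetic for a hypothetical $12,000 import.
# Simplification: the duty is applied to the retail price; real duties are
# assessed on declared customs value, so these are illustrative figures only.
base_price = 12_000  # approximate Seagull price in China, per the story

for label, rate in [("current 27.5% tariff", 0.275),
                    ("announced 102.5% tariff", 1.025)]:
    print(f"{label}: ~${base_price * (1 + rate):,.0f}")
# current 27.5% tariff: ~$15,300
# announced 102.5% tariff: ~$24,300
```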

Read More: The Lesson From BYD’s EV Takeover: Don’t Discount China

But the rapid emergence of low-priced EVs from China could shake up the global auto industry in ways not seen since Japanese makers arrived during the oil crises of the 1970s. BYD, which stands for “Build Your Dreams,” could be a nightmare for the U.S. auto industry.

“Any car company that’s not paying attention to them as a competitor is going to be lost when they hit their market,” said Sam Fiorani, a vice president at AutoForecast Solutions near Philadelphia. “BYD’s entry into the U.S. market isn’t an if. It’s a when.”

U.S. politicians and manufacturers already see Chinese EVs as a serious threat. The Biden administration on Tuesday is expected to announce 100% tariffs on electric vehicles imported from China, saying they pose a threat to U.S. jobs and national security.

The Alliance for American Manufacturing says in a paper that government subsidized Chinese EVs “could end up being an extinction-level event for the U.S. auto sector.”

Earlier this year, Tesla CEO Elon Musk said Chinese EVs are so good that without trade barriers, “they will pretty much demolish most other car companies in the world.”

Outside of China, EVs are often pricey, aimed at higher-income buyers. But Chinese brands offer affordable options for the masses — just as many governments are encouraging a shift away from gasoline vehicles to fight climate change.

Inside a huge garage near Detroit, a company called Caresoft Global tore apart and reassembled a bright green Seagull that its China office purchased and shipped to the U.S.

Company President Terry Woychowski, a former chief engineer on General Motors’ pickup trucks, said the car is a “clarion call” for the U.S. industry, which is years behind China in designing low-cost EVs.

After the teardown, Woychowski said he was left wondering if U.S. automakers can adjust. “Things will have to change in some radical ways in order to be able to compete,” he said.

There’s no single miracle that explains how BYD can manufacture the Seagull for so little. Instead, Woychowski said the entire car, which can go 252 miles (405 kilometers) per charge, is “an exercise in efficiency.”

Higher U.S. labor costs are part of the equation. BYD also keeps costs down through its battery-making expertise: it largely uses lithium iron phosphate chemistry, also found in consumer products. Those batteries cost less but deliver lower range than most other current lithium-ion batteries.

Americans are still learning to make cheaper batteries, Woychowski said.

BYD also makes many of its own parts, including electric motors, dashboards, and bodies, using its huge scale — 3 million vehicles sold worldwide last year — for cost savings.

It designs vehicles with cost and efficiency in mind, he said. For instance, the Seagull has only one windshield wiper, eliminating one motor and one arm, saving on weight, cost and labor to install.

U.S. automakers don’t often design vehicles this way and incur excess engineering costs, Woychowski said.

The efficiency means weight savings that add up, allowing the Seagull to travel farther per charge on a smaller battery.

So Detroit needs to quickly re-learn a lot of design and engineering to keep up while shedding practices from a century of building vehicles, Woychowski said.

The Seagull still has a quality feel. Doors close solidly. The gray synthetic leather seats have stitching that matches the body color, a feature usually found in more expensive cars. The Seagull tested by Caresoft has six air bags and electronic stability control.

A brief drive through some connected parking lots by a reporter showed that it runs quietly and handles curves and bumps as well as more costly EVs.

While acceleration isn’t head-snapping like other EVs, the Seagull is peppy and would have no problems entering a freeway.

BYD would have to modify its cars to meet U.S. safety standards, which are more stringent than in China. Woychowski said Caresoft hasn’t done crash tests, but he estimated that meeting those standards would add $2,000 to the cost.

BYD sells the Seagull, also called the Dolphin Mini, in four Latin American countries for about $21,000. The higher price includes transportation and reflects higher profits possible in less cutthroat markets than China.

BYD told the AP last year it is “still in the process” of deciding whether to sell autos in the U.S. It is weighing factory sites in Mexico for the Mexican market.

The company isn’t selling cars in the U.S. largely due to 27.5% tariffs on the sale price of Chinese vehicles when they arrive. Donald Trump slapped on the bulk of the tariff, 25%, when he was president, and it was kept in place under Joe Biden. Trump contends that the rise of EVs backed by Biden will cost U.S. factory jobs, sending the work to China.

The Biden administration has backed legislation and policies to build a U.S. EV manufacturing base.

Some members of Congress are urging Biden to ban imports of Chinese vehicles altogether, including those made in Mexico by Chinese companies that now would come in largely without tariffs.

Ford CEO Jim Farley has seen Caresoft’s work on the Seagull and BYD’s rapid growth, especially in Europe. He’s moving to change his company. A small “skunkworks” team is designing a new, small EV to keep costs down and quality high, he said earlier this year.

Chinese makers, Farley said, sold almost no EVs in Europe two years ago, but now have 10% of the EV market. It’s likely they’ll export around the globe and possibly sell in the U.S.

Ford is preparing to counter that. “Don’t take anything for granted,” Farley said. “This CEO doesn’t.”

____

Associated Press writers Paul Wiseman and Didi Tang in Washington contributed to this report. Moritsugu reported from Beijing.

Illness Took Away Her Voice. AI Created a Replica She Carries in Her Phone

13 May 2024 at 10:55
PROVIDENCE, R.I. — The voice Alexis “Lexi” Bogan had before last summer was exuberant.

She loved to belt out Taylor Swift and Zach Bryan ballads in the car. She laughed all the time — even while corralling misbehaving preschoolers or debating politics with friends over a backyard fire pit. In high school, she was a soprano in the chorus.

Then that voice was gone.

Doctors in August removed a life-threatening tumor lodged near the back of her brain. When the breathing tube came out a month later, Bogan had trouble swallowing and strained to say “hi” to her parents. Months of rehabilitation aided her recovery, but her speech is still impaired. Friends, strangers and her own family members struggle to understand what she is trying to tell them.

In April, the 21-year-old got her old voice back. Not the real one, but a voice clone generated by artificial intelligence that she can summon from a phone app. Trained on a 15-second time capsule of her teenage voice — sourced from a cooking demonstration video she recorded for a high school project — her synthetic but remarkably real-sounding AI voice can now say almost anything she wants.

She types a few words or sentences into her phone and the app instantly reads it aloud.

“Hi, can I please get a grande iced brown sugar oat milk shaken espresso,” said Bogan’s AI voice as she held the phone out her car’s window at a Starbucks drive-thru.

Experts have warned that rapidly improving AI voice-cloning technology can amplify phone scams, disrupt democratic elections and violate the dignity of people — living or dead — who never consented to having their voice recreated to say things they never spoke.

It’s been used to produce deepfake robocalls to New Hampshire voters mimicking President Joe Biden. In Maryland, authorities recently charged a high school athletic director with using AI to generate a fake audio clip of the school’s principal making racist remarks.

Read More: To Make a Real Difference in Health Care, AI Will Need to Learn Like We Do

But Bogan and a team of doctors at Rhode Island’s Lifespan hospital group believe they’ve found a use that justifies the risks. Bogan is one of the first people — the only one with her condition — who have been able to recreate a lost voice with OpenAI’s new Voice Engine. Some other AI providers, such as the startup ElevenLabs, have tested similar technology for people with speech impediments and loss — including a lawyer who now uses her voice clone in the courtroom.

“We’re hoping Lexi’s a trailblazer as the technology develops,” said Dr. Rohaid Ali, a neurosurgery resident at Brown University’s medical school and Rhode Island Hospital. Millions of people with debilitating strokes, throat cancer or neurodegenerative diseases could benefit, he said.

“We should be conscious of the risks, but we can’t forget about the patient and the social good,” said Dr. Fatima Mirza, another resident working on the pilot. “We’re able to help give Lexi back her true voice and she’s able to speak in terms that are the most true to herself.”

Mirza and Ali, who are married, caught the attention of ChatGPT-maker OpenAI because of their previous research project at Lifespan using the AI chatbot to simplify medical consent forms for patients. The San Francisco company reached out while on the hunt earlier this year for promising medical applications for its new AI voice generator.

Bogan was still slowly recovering from surgery. The illness started last summer with headaches, blurry vision and a droopy face, alarming doctors at Hasbro Children’s Hospital in Providence. They discovered a vascular tumor the size of a golf ball pressing on her brain stem and entangled in blood vessels and cranial nerves.

“It was a battle to get control of the bleeding and get the tumor out,” said pediatric neurosurgeon Dr. Konstantina Svokos.

The 10-hour surgery, coupled with the tumor’s location and severity, damaged Bogan’s tongue muscles and vocal cords, impeding her ability to eat and talk, Svokos said.

“It’s almost like a part of my identity was taken when I lost my voice,” Bogan said.

The feeding tube came out this year. Speech therapy continues, enabling her to speak intelligibly in a quiet room, but there is no sign she will recover the full clarity of her natural voice.

“At some point, I was starting to forget what I sounded like,” Bogan said. “I’ve been getting so used to how I sound now.”

Whenever the phone rang at the family’s home in the Providence suburb of North Smithfield, she would push it over to her mother to take her calls. She felt she was burdening her friends whenever they went to a noisy restaurant. Her dad, who has hearing loss, struggled to understand her.

Back at the hospital, doctors were looking for a pilot patient to experiment with OpenAI’s technology.

“The first person that came to Dr. Svokos’ mind was Lexi,” Ali said. “We reached out to Lexi to see if she would be interested, not knowing what her response would be. She was game to try it out and see how it would work.”

Bogan had to go back a few years to find a suitable recording of her voice to “train” the AI system on how she spoke. It was a video in which she explained how to make a pasta salad.

Her doctors intentionally fed the AI system just a 15-second clip; cooking sounds made other parts of the video unusable. That short sample was also all OpenAI needed — an improvement over previous technology, which required much lengthier samples.

They also knew that getting something useful out of 15 seconds could be vital for any future patients who have no trace of their voice on the internet. A brief voicemail left for a relative might have to suffice.

When they tested it for the first time, everyone was stunned by the quality of the voice clone. Occasional glitches — a mispronounced word, a missing intonation — were mostly imperceptible. In April, doctors equipped Bogan with a custom-built phone app that only she can use.

“I get so emotional every time I hear her voice,” said her mother, Pamela Bogan, tears in her eyes.

“I think it’s awesome that I can have that sound again,” added Lexi Bogan, saying it helped “boost my confidence to somewhat where it was before all this happened.”

She now uses the app about 40 times a day and sends feedback she hopes will help future patients. One of her first experiments was to speak to the kids at the preschool where she works as a teaching assistant. She typed in “ha ha ha ha” expecting a robotic response. To her surprise, it sounded like her old laugh.

She’s used it at Target and Marshall’s to ask where to find items. It’s helped her reconnect with her dad. And it’s made it easier for her to order fast food.

Bogan’s doctors have started cloning the voices of other willing Rhode Island patients and hope to bring the technology to hospitals around the world. OpenAI said it is treading cautiously in expanding the use of Voice Engine, which is not yet publicly available.

A number of smaller AI startups already sell voice-cloning services to entertainment studios or make them more widely available. Most voice-generation vendors say they prohibit impersonation or abuse, but they vary in how they enforce their terms of use.

“We want to make sure that everyone whose voice is used in the service is consenting on an ongoing basis,” said Jeff Harris, OpenAI’s lead on the product. “We want to make sure that it’s not used in political contexts. So we’ve taken an approach of being very limited in who we’re giving the technology to.”

Harris said OpenAI’s next step involves developing a secure “voice authentication” tool so that users can replicate only their own voice. That might be “limiting for a patient like Lexi, who had sudden loss of her speech capabilities,” he said. “So we do think that we’ll need to have high-trust relationships, especially with medical providers, to give a little bit more unfettered access to the technology.”

Bogan has impressed her doctors with her focus on thinking about how the technology could help others with similar or more severe speech impediments.

“Part of what she has done throughout this entire process is think about ways to tweak and change this,” Mirza said. “She’s been a great inspiration for us.”

While for now she must fiddle with her phone to get the voice engine to talk, Bogan imagines an AI voice engine that improves upon older remedies for speech recovery — such as the robotic-sounding electrolarynx or a voice prosthesis — in melding with the human body or translating words in real time.

She’s less sure about what will happen as she grows older and her AI voice continues to sound like she did as a teenager. Maybe the technology could “age” her AI voice, she said.

For now, “even though I don’t have my voice fully back, I have something that helps me find my voice again,” she said.

___

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

Gen AI Has Already Taken the World by Storm. Just Wait Until It Gets a Quantum Boost

13 May 2024 at 05:00
When Lawrence Gasman was looking for a PhD topic back in the 1970s, computing labs were already abuzz with smart people proposing clever studies in artificial intelligence. “But the problem was we had nothing to run them on,” he says. “The processors needed just didn’t exist.”

It took half a century for computing power to catch up with AI’s potential. Today, thanks to high-powered chips such as GPUs from California-based Nvidia, generative artificial intelligence, or gen AI, is revolutionizing the way we work, study, and consume entertainment, empowering people to create bespoke articles, images, videos, and music in the blink of an eye. The technology has spawned a bevy of competing consumer apps offering enhanced voice recognition, graphic design, and even coding.

Now AI stands poised to get another boost from a radical new form of computing: quantum. “Quantum could potentially do some really remarkable things with AI,” says Gasman, founder of Inside Quantum Technology.

Rather than relying on traditional computing’s binary “bits”—switches denoted as 1s and 0s—quantum computers use “qubits,” which exist partly in both states at once, akin to a coin spinning in midair. The result is exponentially greater computing power for certain problems, as well as an enhanced ability to mimic natural processes that rarely conform to a binary form.
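
For readers who want the idea in concrete terms, here is a minimal sketch (ours, for illustration, not any vendor’s code) of the bookkeeping behind a single qubit, simulated with Python and NumPy. Real quantum hardware is not programmed this way; this only shows what “both states at once” means on paper.

```python
# A classical bit is 0 or 1. A qubit is a pair of complex "amplitudes"
# (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
import numpy as np

zero = np.array([1, 0], dtype=complex)  # the |0> state

# The Hadamard gate puts the qubit into an equal superposition --
# the coin spinning in midair.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
qubit = H @ zero

# Measuring collapses the state; each outcome's probability is the
# squared magnitude of its amplitude.
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of reading 0 or 1
```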

Whereas gen AI’s consumer-targeted applications have made its impact more widespread and immediate, quantum is more geared towards industry, meaning several recent milestones have slipped under the radar. However, they stand to potentially turbocharge the AI revolution.

“Generative AI is one of the best things that has happened to quantum computing,” says Raj Hazra, CEO of Colorado-based quantum start-up Quantinuum. “And quantum computing is one of the best things to happen to the advance of generative AI. They are two perfect partners.”

Ultimately, AI relies on the ability to crunch huge stacks of information, which is where quantum excels. In December, IBM unveiled its latest processor, dubbed Heron, which boasts 133 qubits, the firm’s best ever error reduction and the ability to be linked together within its first modular quantum computer, System Two. In addition, IBM unveiled another chip, Condor, which has 1,121 superconducting qubits arranged in a honeycomb pattern. They’re advances that mean “now we’re entering what I like to call ‘quantum utility,’ where quantum is getting used as a tool,” Jay Gambetta, vice-president of IBM Quantum, tells TIME.

Since qubits are incredibly delicate quantum objects, they don’t always behave the same way, so quantum computing relies both on increasing the overall number of qubits so that calculations can be “checked” and on boosting the fidelity of each individual qubit. Different technologies used to create a quantum effect prioritize different sides of this equation, making direct comparisons tricky and enhancing the arcane nature of the technology.
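
Actual quantum error correction is far more involved, but a classical repetition code gives the flavor of why adding redundant qubits to “check” a calculation helps. The toy Python below is our analogy, not how any quantum machine actually works:

```python
# Toy classical analogy (ours): redundancy lets occasional errors be
# outvoted, which is the intuition behind "checking" with more qubits.
import random

def noisy_copy(bit, error_rate=0.1):
    # Flip the bit with probability error_rate.
    return bit ^ (random.random() < error_rate)

def majority_vote(bit, copies=5):
    votes = [noisy_copy(bit) for _ in range(copies)]
    return int(sum(votes) > copies / 2)

# With a 10% error rate per copy, a 5-way majority vote is wrong
# well under 1% of the time.
trials = 10_000
errors = sum(majority_vote(0) != 0 for _ in range(trials))
print(errors / trials)  # typically around 0.008
```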

IBM uses superconducting qubits, which require cooling to almost absolute zero to mitigate thermal noise, preserve quantum coherence, and minimize environmental interactions. Quantinuum, by contrast, uses alternative “trapped-ion” technology that holds ions (charged atoms) in a vacuum using magnetic fields. This technology doesn’t require cryogenic cooling, though it is thought to be more difficult to scale. In April, however, Quantinuum claimed it had achieved 99.9% fidelity for its qubits.

“The trapped ion approach is miles ahead of everybody else,” says Hazra. Gambetta, in turn, argues the superconducting approach has advantages in scaling, in the speed of quantum interactions, and in leveraging existing semiconductor and microwave technology to advance more quickly.

For impartial observers, the jury is still out since the raft of competing, non-linear metrics render it impossible to tell who’s actually ahead in this race. “They are very different approaches, both are showing promise,” says Scott Likens, global AI and innovation technology lead for the PwC business consultancy. “We still don’t see a clear winner, but it’s exciting.”

Where Gambetta and Hazra agree is that quantum has the potential to mesh with AI to produce truly awesome hybrid results. “I would love to see quantum for AI and AI for quantum,” says Gambetta. “The synergies between them, and the advancement in general in technology, makes a lot of sense.”

Hazra concurs, saying “generative AI needs the power of quantum computing to make fundamental advances.” For Hazra, the Fourth Industrial Revolution will be led by generative AI but underpinned by the power of quantum computing. “The workload of AI and the computing infrastructure of quantum computing are both necessary.”

It’s a view shared across the Pacific in China, where investments in quantum are estimated at around $25 billion, dwarfing those of the rest of the world. China’s top quantum expert, Prof. Pan Jianwei, has developed the Jiuzhang quantum computer, which he claims can perform certain kinds of AI-related calculations some 180 million times faster than the world’s top supercomputer.

In a paper published in the peer-reviewed journal Physical Review Letters last May, Pan’s team reported that Jiuzhang processed in under a second more than 2,000 samples for two common AI-related algorithms—Monte Carlo and simulated annealing—a workload that would take the world’s fastest classical supercomputer five years. In October, Pan unveiled Jiuzhang 3.0, which he claims solves certain problems 10 quadrillion times faster than a classical supercomputer.

Jiuzhang utilizes yet a third form of quantum technology—light, or photons—and Pan is widely lauded as China’s king of quantum. A physics professor at the University of Science and Technology of China, Pan launched Micius, the world’s first quantum communication satellite, in 2016; a year later it beamed entangled photons to ground stations for the world’s first quantum-secured video call.

Micius is considered quantum’s “Sputnik” moment, prompting American policymakers to funnel hundreds of millions of dollars into quantum information science via the National Quantum Initiative. Bills such as the Innovation and Competition Act of 2021 have provided $1.5 billion for communications research, including quantum technology. The Biden Administration’s proposed 2024 budget includes $25 billion for “emerging technologies” including AI and quantum. Ultimately, quantum’s computing power could render much of today’s encryption obsolete, presenting a security migraine for governments and corporations everywhere.

Quantum’s potential to turbocharge AI also applies to the simmering technology competition between the world’s superpowers. In 2021, the U.S. Commerce Department added eight Chinese quantum computing organizations to its Entity List, claiming they “support the military modernization of the People’s Liberation Army” and adopt American technologies to develop “counter-stealth and counter-submarine applications, and the ability to break encryption.”

These restrictions dovetail with a raft of measures targeting China’s AI ambitions, including a move last year to block Nvidia from selling AI chips to Chinese firms. The question is whether competition between the world’s top two economies stymies overall progress on AI and quantum—or pushes each nation to accelerate these technologies. The answer could have far-reaching consequences.

“AI can accelerate quantum computing, and quantum computing can accelerate AI,” Google CEO Sundar Pichai told the MIT Technology Review in 2019. “And collectively, I think it’s what we would need to, down the line, solve some of the most intractable problems we face, like climate change.”

Still, both the U.S. and China must overcome the same hurdle: talent. While quantum physics and mechanics are taught at universities around the world, dedicated courses on quantum computing are far rarer, let alone expertise in the various specialties within. “Typically, the most valuable and scarcest resource becomes the basis of your competitive advantage,” says Hazra. “And right now in quantum it’s people with that knowledge.”

Guns Are Stolen From Cars at Triple the Rate They Were 10 Years Ago, Report Finds

(WASHINGTON) — The rate of guns stolen from cars in the U.S. has tripled over the last decade, making them the largest source of stolen guns in the country, an analysis of FBI data by the gun safety group Everytown found.

The rate of stolen guns from cars climbed nearly every year and spiked during the coronavirus pandemic along with a major surge in weapons purchases in the U.S., according to the report, which analyzes FBI data from 337 cities in 44 states and was provided to The Associated Press.

The stolen weapons have, in some cases, turned up at crime scenes. In July 2021, a gun taken from an unlocked car in Riverside, Florida, was used to kill a 27-year-old Coast Guard member as she tried to stop a car burglary in her neighborhood.

The alarming trend underscores the need for Americans to safely secure their firearms to prevent them from getting into the hands of dangerous people, said Bureau of Alcohol, Tobacco, Firearms and Explosives Director Steve Dettelbach, whose agency has separately found links between stolen guns and violent crimes.

“People don’t go to a mall and steal a firearm from a locked car to go hunting. Those guns are going straight to the street,” said Dettelbach, whose agency was not involved in the report. “They’re going to violent people who can’t pass a background check. They’re going to gangs. They’re going to drug dealers, and they’re going to hurt and kill the people who live in the next town, the next county or the next state.”

Nearly 112,000 guns were reported stolen in 2022, and just over half of those were from cars — most often when they were parked in driveways or outside people’s homes, the Everytown report found. That’s up from about one-quarter of all thefts in 2013, when homes were the leading spot for firearm thefts, the report says.

Stolen guns have also been linked to tragic accidents, such as when a 14-year-old boy in St. Petersburg, Florida, killed his 11-year-old brother after finding a gun in an alley; the weapon had been stolen from an unlocked car a few days earlier.

At least one firearm was stolen from a car every nine minutes on average in 2022, the most recent year for which data was available. That’s almost certainly an undercount, though, since there’s no federal law requiring people to report stolen guns and only one-third of states require a report.
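
The every-nine-minutes rate follows from the report’s own counts; here is a quick back-of-the-envelope check (our arithmetic, not Everytown’s):

```python
# Rough check of the "every nine minutes" figure using numbers from
# this article: ~112,000 guns reported stolen in 2022, just over half
# of them from cars.
guns_stolen_from_cars = 112_000 * 0.5    # roughly 56,000
minutes_in_2022 = 365 * 24 * 60          # 525,600 minutes
print(minutes_in_2022 / guns_stolen_from_cars)  # ~9.4 minutes per theft
```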

“Every gun stolen from a car increases the chances it’ll be used in a violent crime,” said Sarah Burd-Sharp, senior director of research at Everytown, which advocates for gun control policies. It’s unclear what’s driving the trend. The report found higher theft rates in states with looser gun laws, which also tend to have higher rates of gun ownership.

The report analyzed crime data from the FBI’s National Incident-Based Reporting System, which includes details about what was stolen and where it came from. Guns stolen from cars bucked car theft trends overall — the rate of other things stolen from cars has dropped 11% over the last 10 years, even as the rate of gun thefts from cars grew 200%, Everytown found in its analysis of FBI data.

In Savannah, Georgia, city leaders last month passed an ordinance requiring people to secure firearms left inside cars after seeing more than 200 guns stolen from unlocked cars in a year. The measure is facing pushback from the state’s attorney general.

The ATF has separately said that theft is a significant source of guns that end up in the hands of criminals. More than 1 million guns were reported stolen between 2017 and 2021, the agency found in a sweeping report on crime guns released last year. And the vast majority of gun thefts are from individuals.

The agency is prohibited by law from publicly releasing detailed information about where stolen guns end up. The information can, however, be shared with police investigating a crime.

Strong Solar Storm Could Disrupt Communications and Produce Northern Lights in U.S.

10 May 2024 at 18:31
(CAPE CANAVERAL, Fla.) — An unusually strong solar storm headed toward Earth could produce northern lights in the U.S. this weekend and potentially disrupt power and communications.

The National Oceanic and Atmospheric Administration issued a rare geomagnetic storm watch — the first in nearly 20 years. That was expected to become a warning Friday night, when the effects of the solar outburst were due to reach Earth.

NOAA already has alerted operators of power plants and spacecraft in orbit to take precautions.

“As far as the worst situation expected here at Earth, that’s tough to say and I wouldn’t want to speculate on that,” said NOAA space weather forecaster Shawn Dahl. “However, severe level is pretty extraordinary. It’s a very rare event to happen.”

NOAA said the sun produced strong solar flares beginning Wednesday, resulting in five outbursts of plasma capable of disrupting satellites in orbit and power grids here on Earth. Each eruption — known as a coronal mass ejection — can contain billions of tons of plasma and magnetic field from the sun’s outer atmosphere, or corona.

The flares seem to be associated with a sunspot that’s 16 times the diameter of Earth, according to NOAA. An extreme geomagnetic storm in 2003 took out power in Sweden and damaged power transformers in South Africa.

The latest storm could produce northern lights as far south in the U.S. as Alabama and Northern California, according to NOAA.

The most intense solar storm in recorded history, in 1859, prompted auroras in Central America and possibly even Hawaii. “That’s an extreme-level event,” Dahl said. “We are not anticipating that,” but he said this storm could come close.

Asteroids, Myst, Resident Evil, SimCity and Ultima Inducted Into World Video Game Hall of Fame

10 May 2024 at 00:06
(ROCHESTER, N.Y.) — The World Video Game Hall of Fame inducted its 10th class of honorees Thursday, recognizing Asteroids, Myst, Resident Evil, SimCity and Ultima for their impacts on the video game industry and popular culture.

The inductees debuted across decades, advancing technologies along the way and expanding not only the number of players, but the ages and interests of those at the controls, Hall of Fame authorities said in revealing the winners. The Hall of Fame recognizes electronic games of all types — arcade, console, computer, handheld, and mobile.

The Class of 2024 was selected by experts from among a field of 12 finalists that also included Elite, Guitar Hero, Metroid, Neopets, Tokimeki Memorial, Tony Hawk’s Pro Skater and You Don’t Know Jack.

The honor for Atari’s Asteroids comes 45 years after its 1979 debut in arcades, where it was Atari’s bestselling coin-operated game. The game’s glowing space-themed graphics and sound effects made their way from more than 70,000 arcade units into millions of living rooms when a home version of Asteroids was made available on the Atari 2600.

“Through endless variants and remakes across dozens of arcade, home, handheld, and mobile platforms, Asteroids made a simple, yet challenging game about blasting rocks into one of the most widely played and influential video games of all time,” said Jeremy Saucier, assistant vice president for interpretation and electronic games at The Strong museum, where the World Video Game Hall of Fame is located.

The next inductee to debut was Ultima, not necessarily a household name but a force in the development of the computer role-playing genre, digital preservation director Andrew Borman said in the news release. Designed by Richard Garriott and released in 1981, Ultima: The First Age of Darkness spawned eight sequels and is credited with inspiring later role-playing games such as Dragon Quest and Final Fantasy.

The urban design-inspired SimCity was released by Maxis in 1989 and found an audience among adults as well as children who were challenged to build their own city and respond to problems. Among the sequels and offshoots it inspired was 2016 World Video Game Hall of Fame inductee The Sims.

“At a time when many people thought of video games in terms of arcade shooters or console platformers, SimCity appealed to players who wanted intellectually stimulating fun on their newly bought personal computers,” Aryol Prater, research specialist for Black play and culture, said.

The adventure game Myst sold more than 6 million copies, making it a best-selling computer game in the 1990s. The 1993 Broderbund release used early CD-ROM technology and allowed for a level of player immersion that until then had not been available in computer games, the Hall of Fame said.

“Few other games can match Myst’s ability to open imaginative worlds,” collections manager Kristy Hisert said. “It was a work of artistic genius that captured the imagination of an entire generation of computer game players, and its influence can be seen in many of today’s open-world games.”

The final honoree, Resident Evil’s “cheesy B-movie dialogue, engrossing gameplay, and chilling suspense” helped popularize the “survival horror” genre following its release by Capcom in 1996 and offered mature entertainment for older teenagers and adults, video game curator Lindsey Kurano said. Created by game director Shinji Mikami, it also inspired an action horror film series that as of 2022 had grossed more than $1.2 billion, according to the Hall of Fame.

Anyone can nominate a game to the World Video Game Hall of Fame. Members of an international selection advisory committee submit their top three choices from the list of finalists. Fans also are invited to weigh in online. The public as a whole is treated as a single committee member.

GM to Retire the Chevrolet Malibu as Electric Vehicles Become Focus

9 May 2024 at 23:12
(DETROIT) — The Chevrolet Malibu, the last midsize car made by a Detroit automaker, is heading for the junkyard.

General Motors confirmed Thursday that it will stop making the car introduced in 1964 as the company focuses more on electric vehicles.

Midsize sedans were once the top-selling segment in the U.S., a stalwart of family garages nationwide. But their sales started to decline in the early 2000s as SUVs became more prominent and pickup truck sales grew.

Now the U.S. auto market is dominated by SUVs and trucks. Full-size pickups from Ford, Chevrolet and Ram are the top-selling vehicles in America, and the top-selling non-pickup is Toyota’s RAV4 small SUV.

Last year midsize cars made up only 8% of U.S. new vehicle sales, down from 22% as recently as 2007, according to Motorintelligence.com. Still, Americans bought 1.3 million of the cars last year, in a segment dominated by the Toyota Camry and Honda Accord.

GM sold just over 130,000 Malibus last year, 8.5% fewer than in 2022. Sales rose to nearly 230,000 after a redesign for the 2016 model year, but many of those were low-profit sales to rental car companies.

But the midsize car segment made a bit of a comeback last year with sales up almost 5%.

GM said it has sold more than 10 million Malibus across nine generations since the car’s debut.

GM’s factory in Kansas City, Kansas, which now makes the Malibu and the Cadillac XT4 small SUV, will stop making the Malibu in November and the XT4 in January. The plant will get a $390 million retooling to make a new version of the Chevrolet Bolt small electric car.

The plant will begin producing the Bolt and XT4 on the same assembly line in late 2025, giving the plant the flexibility to respond to customer demands, the company said.

The Wall Street Journal reported the demise of the Malibu on Wednesday.

TikTok Will Start Labeling AI-Generated Content to Combat Misinformation

9 May 2024 at 18:40
TikTok will begin labeling content created using artificial intelligence when it’s been uploaded from outside its own platform in an attempt to combat misinformation.

“AI enables incredible creative opportunities, but can confuse or mislead viewers if they don’t know content was AI-generated,” the company said in a prepared statement Thursday. “Labeling helps make that context clear—which is why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year.”

TikTok’s shift in policy is part of a broader attempt in the technology industry to provide more safeguards for AI usage. In February, Meta announced that it was working with industry partners on technical standards that will make it easier to identify images, and eventually video and audio, generated by artificial intelligence tools. Users on Facebook and Instagram would see labels on AI-generated images.

Google said last year that AI labels are coming to YouTube and its other platforms.

A push for digital watermarking and labeling of AI-generated content was also part of an executive order that U.S. President Joe Biden signed in October.

TikTok is teaming up with the Coalition for Content Provenance and Authenticity and will use its Content Credentials technology.

The company said the technology can attach metadata to content, which TikTok can then use to instantly recognize and label AI-generated material. TikTok said it began deploying the technology Thursday on images and videos, with audio-only content coming soon.

In the coming months, Content Credentials will be attached to content made on TikTok and will remain with it when downloaded. This will help identify AI-generated material that’s made on TikTok and help people learn when, where and how the content was made or edited. Other platforms that adopt Content Credentials will be able to label such content automatically.
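
As a rough illustration of the labeling mechanism (ours, not TikTok’s or Adobe’s code; real Content Credentials are cryptographically signed C2PA manifests embedded in the file, and the helper and field name below are hypothetical), a platform’s decision might look like this in Python:

```python
# Illustrative sketch only: a toy stand-in for how a platform might act
# on Content Credentials metadata carried with an upload.
def read_manifest(upload):
    # Hypothetical helper: a real implementation would parse and verify
    # a signed C2PA manifest rather than read a plain dictionary.
    return upload.get("c2pa_manifest")

def should_label_as_ai(upload):
    manifest = read_manifest(upload)
    if manifest is None:
        # No credentials attached: fall back to creator self-labeling rules.
        return False
    # A manifest can record how the content was produced; flag generative tools.
    return manifest.get("digital_source_type") == "trainedAlgorithmicMedia"

# Toy example: an upload carrying a manifest that marks it as AI-generated.
video = {"c2pa_manifest": {"digital_source_type": "trainedAlgorithmicMedia"}}
print(should_label_as_ai(video))  # True -> the platform applies an AI label
```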

“Using Content Credentials as a way to identify and convey synthetic media to audiences directly is a meaningful step towards AI transparency, even more so than typical watermarking techniques,” Claire Leibowicz, head of the AI and Media Integrity Program at the Partnership on AI, said in a prepared statement. “At the same time we need to better understand how users react to these labels and hope that TikTok reports on the response so that we may better understand how the public navigates an increasingly AI-augmented world.”

TikTok said it’s the first video-sharing platform to put the credentials into practice and will join the Adobe-led Content Authenticity Initiative to help push the adoption of the credentials within the industry.

“TikTok is the first social media platform to support Content Credentials, and with over 170 million users in the United States alone, their platform and their vast community of creators and users are an essential piece of that chain of trust needed to increase transparency online,” Dana Rao, Adobe’s executive vice president, general counsel and chief trust officer, said in a blog post.

TikTok’s policy in the past has been to encourage users to label content that has been generated or significantly edited by AI. It also requires users to label all AI-generated content that contains realistic images, audio, or video.

“Our users and our creators are so excited about AI and what it can do for their creativity and their ability to connect with audiences,” Adam Presser, TikTok’s head of operations and trust and safety, told ABC News. “And at the same time, we want to make sure that people have that ability to understand what fact is and what is fiction.”

The announcement initially came on ABC’s “Good Morning America” on Thursday.

TikTok’s AI actions come just two days after TikTok said that it and its Chinese parent company, ByteDance, had filed a lawsuit challenging a new American law that would ban the video-sharing app in the U.S. unless it’s sold to an approved buyer, saying it unfairly singles out the platform and is an unprecedented attack on free speech.

The lawsuit is the latest turn in what’s shaping up to be a protracted legal fight over TikTok’s future in the United States — and one that could end up before the Supreme Court. If TikTok loses, it says it would be forced to shut down next year.
