
Scarlett Johansson ‘Angered’ By ChatGPT Voice That Sounded ‘Eerily’ Like Her

21 May 2024 at 01:21

Scarlett Johansson said Monday that she was “shocked, angered and in disbelief” when she heard that OpenAI used a voice “eerily similar” to hers for its new GPT-4o version of ChatGPT, even after she had declined to provide her voice.

Earlier on Monday, OpenAI announced on X that it would pause the AI voice, known as “Sky,” while it addresses “questions about how we chose the voices in ChatGPT.” The company said in a blog post that the “Sky” voice was “not an imitation” of Johansson’s voice, but that it was recorded by a different professional actor, whose identity the company would not reveal to protect her privacy.

But Johansson said in a statement to NPR on Monday that OpenAI’s Chief Executive Officer Sam Altman had asked her in September to voice the GPT-4o system because he thought her “voice would be comforting to people.” She declined, but nine months later, her friends, family and the public noticed how the “Sky” voice resembled hers.

“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” the actress said in her statement. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”

Johansson said that she was “forced to hire legal counsel” because of the situation, and that her counsel wrote two letters to Altman and OpenAI asking them to explain the process for creating the “Sky” voice. Afterward, OpenAI “reluctantly agreed” to pull the voice from the platform, she said.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson said in her statement. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

OpenAI first revealed voice functions for ChatGPT in September. In November, the company announced that the feature would be free for all users on the mobile app. GPT-4o isn’t fully available to the public yet—it will be rolled out in the coming weeks and months, according to the Associated Press.

Trump Media and Technology Group Posts Over $300M Net Loss in First Public Quarter

21 May 2024 at 00:48

(SARASOTA, Fla.) — Trump Media and Technology Group, the owner of former President Donald Trump’s social networking site Truth Social, lost more than $300 million last quarter, according to its first earnings report as a publicly traded company.

For the three-month period that ended March 31, the company posted a loss of $327.6 million, which it said included $311 million in non-cash expenses related to its merger with a company called Digital World Acquisition Corp., which was essentially a pile of cash looking for a target to merge with. It’s an example of what’s called a special purpose acquisition company, or SPAC, which can give young companies quicker and easier routes to getting their shares trading publicly.
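
For readers who want to see how those figures fit together, here is a quick back-of-the-envelope check in Python. The only inputs are the two numbers reported above; the remainder it prints is an inference from them, not a figure from the filing.

```python
# Back-of-the-envelope check of Trump Media's reported Q1 2024 figures.
# Inputs are the two numbers cited in the earnings report above; the
# remainder (loss excluding merger-related non-cash charges) is inferred.

total_net_loss = 327.6e6   # reported Q1 2024 net loss, in dollars
merger_noncash = 311.0e6   # non-cash expenses tied to the SPAC merger

remaining_loss = total_net_loss - merger_noncash
print(f"Loss excluding merger-related non-cash charges: ${remaining_loss / 1e6:.1f}M")
# -> Loss excluding merger-related non-cash charges: $16.6M
```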

A year earlier, Trump Media posted a loss of $210,300.

Trump Media said it collected $770,500 in revenue in the first quarter, largely from its “nascent advertising initiative.” That was down from $1.1 million a year earlier.

“At this early stage in the Company’s development, TMTG remains focused on long-term product development, rather than quarterly revenue,” Trump Media said in its earnings news release.

Earlier this month, the company fired an auditor that federal regulators recently charged with “massive fraud.” The former president’s media company dismissed BF Borgers as its independent public accounting firm on May 3, delaying the filing of the quarterly earnings report, according to a securities filing.

Trump Media had previously cycled through at least two other auditors — one that resigned in July 2023, and another that its board terminated in March, just as the company was re-hiring BF Borgers.

Shares of Trump Media climbed 36 cents to $48.74 in after-hours trading. The stock, which trades under the ticker symbol “DJT,” began trading on Nasdaq in March and peaked at nearly $80 in late March.

Taiwan’s Digital Minister Has an Ambitious Plan to Align Tech With Democracy

20 May 2024 at 13:00

Audrey Tang, Taiwan’s 43-year-old minister of digital affairs, has a powerful effect on people. At a panel discussion at Northeastern University in Boston, 20-year-old student Diane Grant is visibly moved, describing Tang’s talk as the best she’s been to in her undergraduate career. Later that day, a German tourist recognizes Tang leaving the Boston Museum of Science and requests a photo, saying she’s “starstruck.” At the Massachusetts Institute of Technology, a trio of world-leading economists bashfully ask Tang to don a baseball cap emblazoned with the name of their research center and pose for a group photo. Political scientist and former gubernatorial candidate Danielle Allen confesses to Tang that, although others often tell her that she is a source of inspiration to them, she rarely feels inspired by others. But she has found an exception: Tang inspires her.

Few visiting dignitaries elicit such reactions. But to some, Tang symbolizes hope. 

In an era when digital technologies—social media, artificial intelligence, blockchains—are increasingly seen as a threat to democracy, Taiwan seems to offer an alternative path. In Taiwan, civil society groups and the government work together to harness technology, giving people more say in how their country is run, and tackling problems like tracing the spread of COVID-19 and combating electoral disinformation campaigns.

Tang, the world’s first openly transgender minister, played a pivotal role in all of this, first as an activist hacker and then from within the government. Now, she is stepping back from her ministerial duties to embark upon a world tour to promote the ideas that have flourished in Taiwan. These are ideas captured in Plurality, a book Tang has co-authored with E. Glen Weyl, a 39-year-old American economist at Microsoft, and more than 100 online collaborators.

Tang aims to be a global ambassador, demonstrating how technology and democracy can coexist harmoniously. “In Taiwan, for the past decade, this is the dominant worldview,” she says. “Just to see how that narrative—how that overarching, intertwined feeling of tech and democracy—can grow in non-Taiwan places. I’m most looking forward to that.”

The tour’s objective is not only to disseminate the book’s ideas but also to expose people to Tang herself. “It would change the world if every major world leader gets to spend 30 minutes with Audrey,” says Weyl, the primary orchestrator of the plan. “It’s about the experience of being with her. It changed my life.”


Tang’s unique charisma was shaped by a rare set of circumstances. At the age of 4, Tang—who was born with a serious heart condition—was given just a 50% chance of surviving long enough to undergo life-saving surgery. If she ever became upset, or angry, or excited, she would lose consciousness and wake up in an intensive care unit. She soon learned to keep her composure, and though an operation corrected her condition when she was 12, her equanimity remained.

“If you’ve been living with that condition for 12 years of your life, that’s your core personality,” she says. “I convinced myself to go on a roller coaster once or twice, rationally knowing I would not die. But it wasn’t very pleasant.”

Tang grew up alongside democracy and digital technologies in Taiwan. Aged 8, she taught herself to program by sketching a keyboard on a piece of paper, feigning typing, and then writing the output on another piece of paper. (After a few weeks of this, her parents relented and bought her a computer.) By 14, Tang had left formal education to pursue programming full-time; she spent the next two decades contributing to open-source projects both in Taiwan and abroad.

“The idea of personal computing, to people in Taiwan, is inherently democratic,” Tang says. Computers and internet access meant the ability to publish books without state sponsorship, and communicate without state surveillance, a stark contrast to the martial law era that only ended in 1987, six years after Tang was born. 

All of this fueled the rise of the g0v (gov zero) movement in 2012, led by civic hackers who wanted to increase transparency and participation in public affairs. The movement started by creating superior versions of government websites, which they hosted on .g0v.tw domains instead of the official .gov.tw, often attracting more traffic than their governmental counterparts. The g0v movement has since launched more initiatives that seek to use technology to empower Taiwanese citizens, such as vTaiwan, a platform that facilitates public discussion and collaborative policymaking between citizens, experts, and government officials.

In 2014, the movement’s influence became clear when protestors, many affiliated with g0v, occupied Taiwan’s legislative chamber to oppose a trade deal with China. “Democracy needs me,” Tang wrote to her colleagues at California-based software company Socialtext, before leaving to support the protesters for the duration of their 24-day occupation by helping them to peacefully broadcast their message.

The protests marked a turning point in Taiwan. The government made efforts to engage with young activists and in 2016, Tang, then 35, was appointed as digital minister without portfolio. In 2022, Tang was named Taiwan’s first minister for digital affairs, and in 2023 she was made chairperson of the board of Taiwan’s National Institute of Cyber Security.

In many regards, Taiwan leads the world in digital democracy, thanks to initiatives led by Tang and others. Taiwan’s agile response to COVID-19, including a widely-praised contact tracing system, exemplifies this success. (At one point, the island nation went 200 days without a locally transmitted coronavirus case.) Such achievements, Plurality argues, are partly responsible for Taiwan’s remarkable economic, social, and political success over the last decade.

However, it’s important not to overstate the impact of Taiwan’s digital democracy initiatives, cautions Sara Newland, an assistant professor at Smith College, Massachusetts, who researches Chinese and Taiwanese politics. While Taiwan is a well-governed country and it’s plausible that the various examples of digital democracy contribute to this success, it’s also possible that these initiatives came about because Taiwan is well-governed, she says. The vision outlined in Plurality borders on utopian, and Taiwan’s case may not provide enough evidence to prove its feasibility.

Still, while Plurality might draw heavily on Taiwan’s experience, its scope is global. The book’s core lays out the fundamental rights that societies must promote, how digital technologies can aid in promoting them, and the collaboration-enhancing technologies that could strengthen democracy. For each technology, examples are drawn from outside Taiwan. For example, “immersive shared reality technologies,” futuristic cousins of virtual reality headsets like Apple’s Vision Pro and Meta’s Quest, could foster empathy at a distance and allow people to step into another’s shoes. The book cites Becoming Homeless, a seven-minute virtual reality experience designed by researchers at Stanford to help the user understand how it feels to lose your home, as a primitive example of an immersive shared reality technology.

Plurality aims to offer a roadmap for a future in which technology and democracy not only co-exist but thrive together; in writing the book, Tang and Weyl put this collaborative ethos into practice. The book, which is free to download, began life as a blog post authored by Weyl; although Weyl and Tang conceived of the project and Weyl was the primary author, anyone could contribute to the book’s development. More than 100 people contributed—some copy-edited, some designed graphics, some wrote entire chapters, says Tang. While juggling ministerial duties, Tang spent hours each week working on the book, contributing ideas and building the website. “At the end of the day,” she quips, “I was still a coder for some reason.”


The fledgling plurality movement faces a daunting challenge: countering the threat from the two dominant digital technologies of our time—artificial intelligence and blockchains—and their effects on society. Plurality argues that both of these are undermining democracy in different, but equally pernicious ways. AI systems facilitate top-down control, empowering authoritarian regimes and unresponsive technocratic governments in ostensibly democratic countries. Meanwhile, blockchain-based technologies atomize societies and accelerate financial capitalism, eroding democracy from below. As Peter Thiel, billionaire entrepreneur and investor, put it in 2018: “crypto is libertarian and AI is communist.”

Weyl sees echoes of the 1930s, when fascism and communism battled for ideological supremacy. “But there was another option,” he says—liberal democracy. Now, Weyl and Tang are striving to articulate a new alternative to AI-powered authoritarianism and blockchain-fueled libertarianism: “plurality.” They hope this idea—of a symbiotic relationship between democracy and collaborative technology—can profoundly influence the century ahead. 

Plurality concludes with a call to action, setting bold targets for the movement it hopes to inspire. By 2030, the authors want the idea of plurality to be as widely recognized in the tech world as AI and blockchain, and as prominent in political discourse as environmentalism. To get there, the pair aim to cultivate a core group of 1,000 deeply engaged advocates, distribute 1 million copies of the book, and build sympathy among 1 billion people. “Frankly, I’m starting to feel like these [goals] maybe are actually under ambitious,” Weyl says.

This isn’t his first attempt at movement-building. Weyl’s first book, Radical Markets, generated huge buzz when it was published in 2018, prompting him to channel that enthusiasm into launching the RadicalxChange Foundation, a nonprofit that seeks to advance the book’s ideas. (Tang and Weyl are both members of the Foundation’s board, along with Vitalik Buterin, the “prince of cryptocurrency” who introduced the pair in 2018.) However, while the Foundation has had some success, it fell far short of the targets Weyl has set for Plurality’s impact on the world. And history is littered with extinct political movements, from Occupy Wall Street to the Arab Spring, that failed to meet their goals. If Weyl thinks his targets are under ambitious, many might think them delusional. 

Weyl is unperturbed. Last time, he didn’t have a plan. With Plurality, he says, he’s taking a more ambitious approach—one that hinges on Tang’s star power. Weyl has enlisted Oscar-winning director Cynthia Wade to shoot a short documentary about Tang’s life and Taiwan’s democratic evolution, with the goal of premiering it at film festivals later this year.

With Hollywood shut down during last year’s strikes, working through footage of Tang has been soothing, says Wade. “When you’re editing a film, you’re living with somebody. So [Tang has] been living in our household for the last quarter,” she says. “There’s a way in which she encourages you to stop and reflect that feels very different, and maybe even more participatory.”

A feature-length biopic is also in the early stages, with Weyl floating the idea of casting trans actor Ian Alexander in the lead role. Tang, characteristically deadpan, offers an alternative: “Sora, maybe,” referring to an unreleased AI system developed by OpenAI that generates videos from text prompts.

This playful exchange captures the duo’s dynamic. Over the course of four hours at Weyl’s house in Arlington, Mass., Weyl earnestly expounds on the book’s ideas and aspirations, while Tang interjects with droll asides. The evangelizing, the ideological battle of the 21st century, the numerical targets in the millions and billions—these all come from Weyl, they say. Tang would never think in those terms, Weyl says, “without me constantly badgering her.”

Tang nods in agreement, seemingly unfazed by the weight of his expectations. Despite embarking on a journey that could—if Weyl’s goals are met—change the course of history, she remains remarkably laid-back. When a friend asked her last year why she was devoting so much time to the book, she replied simply: “Just to make Glen feel better and sleep better.” 

Such serenity is not the most natural quality in a representative for what Weyl hopes might be a century-defining ideology, but it is, perhaps, the reason for the strong reactions Tang provokes. In fact, it may be Tang’s poise, as much as Weyl’s zeal, that gives the plurality movement some hope of achieving its lofty goals.

A New Lawsuit Accuses Spotify of Cheating Songwriters Out of Royalties

Spotify Technology SA used a legalistic word change to justify slicing royalties to musicians and publishers, reducing the revenue on which royalties are based by almost 50%, according to a lawsuit filed by the group that collects their payments.

The change came in March when Spotify added the word “bundled” to its description of its $10.99-a-month music streaming service, the Mechanical Licensing Collective said in its complaint. Nothing else “about the Premium service has actually changed,” according to the suit filed Thursday in federal court in Manhattan.

The collective is legally barred from disclosing how much Spotify’s royalty payments have declined since March but cited a Billboard story estimating the loss would amount to about $150 million next year.

Spotify said it looks forward to “swift resolution” of the lawsuit, which it said concerns terms that publishers and streaming services “agreed to and celebrated years ago.”

“Bundles were a critical component of that settlement, and multiple DSPs include bundles as part of their mix of subscription offerings,” a Spotify spokesperson said in a statement. “Spotify paid a record amount to publishers and societies in 2023 and is on track to pay out an even larger amount in 2024.”

The fight over bundling between the streaming service and publishers has spilled into a dispute over other issues.

The National Music Publishers’ Association on Wednesday sent a cease-and-desist letter to Spotify over products it claims are infringing on songwriters’ copyrights. The NMPA alleges that music videos, lyrics and podcasts on the platform are all using copyrighted music without the proper permissions.

“Before Spotify’s ‘bundling’ betrayal, we may have been able to work together to fix this problem, but they have chosen the hard road by coming after songwriters once again,” David Israelite, chief executive officer at the NMPA, said in a statement.

In response, a Spotify spokesperson called the letter a “press stunt filled with false and misleading claims.” 

Music and audiobook streaming companies like Spotify pay musicians and music publishers under a complex system set out by the Music Modernization Act of 2018. Under the system, streaming services pay less per stream—in other words, less to creators and publishers—when their services are classified as bundles.

Spotify’s Premium service, which was not classified as a bundle before March 1, includes unlimited music downloads and 15 hours of audiobooks. It added the audiobook offering in November in the U.S. without changing the $10.99 price.
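
To illustrate why the classification matters, here is a simplified sketch of the economics. The actual Phonorecords IV formulas are considerably more complex (competing revenue and per-subscriber prongs, total-content-cost floors), so the headline rate and the 50% allocation below are assumptions for illustration, the latter mirroring the complaint’s claim that the royalty-bearing revenue base fell by almost half.

```python
# Simplified illustration of how a "bundle" classification shrinks the
# revenue base on which mechanical royalties are computed. The rate and
# allocation share below are assumptions, not figures from the lawsuit.

subscription_price = 10.99   # Spotify Premium, per month
headline_rate = 0.152        # assumed ~15% headline mechanical rate

standalone_base = subscription_price       # full price counts as music revenue
bundle_base = subscription_price * 0.5     # assume half allocated to audiobooks

for label, base in [("standalone", standalone_base), ("bundle", bundle_base)]:
    royalty = base * headline_rate
    print(f"{label:10s} base ${base:5.2f} -> est. mechanicals ${royalty:.3f}/subscriber/month")
```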

The licensing collective is asking the court to order Spotify to stop classifying Premium as a bundled service and to pay it for lost revenue.

Israelite praised the Mechanical Licensing Collective for “not letting Spotify get away with its latest trick to underpay creators.”

Reddit Partners With OpenAI to Bring Content to ChatGPT and AI Tools to Reddit

Reddit Inc. forged a partnership with OpenAI that will bring its content to the chatbot ChatGPT and other products, while also helping the social media company add new artificial intelligence features to its forums.

Shares of Reddit, which had their initial public offering in March, jumped as much as 15% in late trading following the announcement.

The agreement “will enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics,” the companies said Thursday in a joint statement. The deal allows OpenAI to display Reddit’s content and train AI systems on its partner’s data.

Reddit will also offer its users new AI-based tools built on models created by OpenAI, which in turn will place ads on Reddit. Financial terms of the deal weren’t disclosed.

Reddit content has long been a popular source of training data for making AI models—including those of OpenAI. Last week, Reddit released new policies governing the use of its data, part of an effort to increase revenue through licensing agreements with artificial intelligence developers and other companies.

“Our data is extremely valuable,” Chief Executive Officer Steve Huffman said at the Bloomberg Technology Summit earlier this month. “We’re seeing a ton of interest in it.”

Finding new moneymaking opportunities was part of Reddit’s pitch in the lead-up to its IPO. The company also signed an accord in January with Alphabet Inc.’s Google worth $60 million to help train large language models, the technology underpinning generative AI.

Huffman previously declined to discuss the specifics of the Google deal but said typical terms can govern how long a Reddit summary can show up in a Google search or whether a licensee has to display Reddit branding in AI-generated results. The San Francisco-based social network has signed licensing deals worth $203 million in total, with terms ranging from two to three years, and has been in talks to strike additional licensing agreements. 

OpenAI, for its part, is increasingly forging partnerships with media companies to help train its AI systems and show more real-time content within its chatbot. The ChatGPT maker also penned deals with Dotdash Meredith earlier this month and the Financial Times in April.

Read More: OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive

Backed by Microsoft Corp., the startup has emerged as a driving force in the development of AI. Sam Altman, CEO of OpenAI, has a long history with Reddit. He was one of the company’s largest shareholders at the time of its IPO earlier this year and briefly served as Reddit’s interim CEO in 2014.

The companies noted in the statement that their partnership was led by OpenAI Chief Operating Officer Brad Lightcap and was approved by OpenAI’s independent directors.

The shares of Reddit, which had declined 5.5% to $56.38 in regular New York trading Thursday, soared as high as $64.75 after the partnership was announced. The stock has gained 66% since its IPO.
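
As a sanity check on the price moves quoted in this story, the following sketch uses only the figures above plus Reddit’s widely reported $34 IPO price:

```python
# Sanity check of the share-price moves described above.
close_price = 56.38        # Thursday's regular-session close
after_hours_high = 64.75   # high after the partnership was announced

jump = after_hours_high / close_price - 1
print(f"After-hours jump: {jump:.1%}")          # -> 14.8%, the "as much as 15%"

ipo_price = 34.00          # Reddit's March IPO price
gain_since_ipo = close_price / ipo_price - 1
print(f"Gain since IPO: {gain_since_ipo:.0%}")  # -> 66%, matching the story
```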

How to Hit Pause on AI Before It’s Too Late

16 May 2024 at 15:22

Only 16 months have passed, but the release of ChatGPT back in November 2022 already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are pouring into AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses these large language models. Our world, and in particular the world of AI, has decidedly changed.

But the real prize of human-level AI—or artificial general intelligence (AGI)—has yet to be achieved. Such a breakthrough would mean an AI that can carry out most economically productive work, engage with others, do science, build and maintain social networks, conduct politics, and carry out modern warfare. The main constraint for all these tasks today is cognition. Removing this constraint would be world-changing. Yet many researchers at the world’s leading AI labs believe this technology could be a reality before the end of this decade.

That could be an enormous boon for humanity. But AI could also be extremely dangerous, especially if we cannot control it. Uncontrolled AI could hack its way into online systems that power so much of the world, and use them to achieve its goals. It could gain access to our social media accounts and create tailor-made manipulations for large numbers of people. Even worse, military personnel in charge of nuclear weapons could be manipulated by an AI to share their credentials, posing a huge threat to humanity.

It would be a constructive step to make it as hard as possible for any of that to happen by strengthening the world’s defenses against adverse online actors. But when AI can persuade humans, something it is already better at than we are, there is no known defense.

For these reasons, many AI safety researchers at AI labs such as OpenAI, Google DeepMind and Anthropic, and at safety-minded nonprofits, have given up on trying to limit the actions future AI can take. They are instead focusing on creating “aligned” or inherently safe AI. Aligned AI might get powerful enough to be able to exterminate humanity, but it should not want to do this.

There are big question marks about aligned AI. First, the technical part of alignment is an unsolved scientific problem. Recently, some of the best researchers working on aligning superhuman AI left OpenAI in dissatisfaction, a move that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned to. If it was an academic value system, such as utilitarianism, we might quickly find out that most humans’ values actually do not match these aloof ideas, after which the unstoppable superintelligence could go on to act against most people’s will forever. If the alignment was to people’s actual intentions, we would need some way to aggregate these very different intentions. While idealistic solutions such as a U.N. council or AI-powered decision aggregation algorithms are in the realm of possibility, there is a worry that superintelligence’s absolute power would be concentrated in the hands of very few politicians or CEOs. This would of course be unacceptable for—and a direct danger to—all other human beings.

Read More: The Only Way to Deal With the Threat From AI? Shut It Down

Dismantling the time bomb

If we cannot find a way to at the very least keep humanity safe from extinction, and preferably also from an alignment dystopia, AI that could become uncontrollable must not be created in the first place. This solution, postponing human-level or superintelligent AI until safety concerns are solved, has the downside that AI’s grand promises—ranging from curing disease to creating massive economic growth—will need to wait.

Pausing AI might seem like a radical idea to some, but it will be necessary if AI continues to improve without us reaching a satisfactory alignment plan. When AI’s capabilities reach near-takeover levels, the only realistic option is that labs are firmly required by governments to pause development. Doing otherwise would be suicidal.

And pausing AI may not be as difficult as some make it out to be. At the moment, only a relatively small number of large companies have the means to perform leading training runs, meaning enforcement of a pause is mostly limited by political will, at least in the short run. In the longer term, however, hardware proliferation and algorithmic improvements would make a pause harder to enforce. Enforcement between countries would be required, for example with a treaty, as would enforcement within countries, with steps like stringent hardware controls.

In the meantime, scientists need to better understand the risks. Although there is widely shared academic concern, no consensus exists yet. Scientists should formalize their points of agreement, and show where and why their views deviate, in the new International Scientific Report on Advanced AI Safety, which should develop into an “Intergovernmental Panel on Climate Change for AI risks.” Leading scientific journals should open up further to existential risk research, even if it seems speculative. The future does not provide data points, but looking ahead is as important for AI as it is for climate change.

For their part, governments have an enormous part to play in how AI unfolds. This starts with officially acknowledging AI’s existential risk, as has already been done by the U.S., U.K., and E.U., and setting up AI safety institutes. Governments should also draft plans for what to do in the most important conceivable scenarios, as well as how to deal with AGI’s many non-existential issues such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public evaluation.

It is great progress that major AI countries are constructively discussing common policy at biannual AI safety summits, including one in Seoul from May 21 to 22. This process, however, needs to be guarded and expanded. Working on a shared ground truth on AI’s existential risks and voicing shared concern with all 28 invited nations would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking AI hardware, expanding liability for AI labs, and excluding copyrighted content from training. An international AI agency needs to be set up to guard execution.

It is fundamentally difficult to predict scientific progress. Still, superhuman AI will likely impact our civilization more than anything else this century. Simply waiting for the time bomb to explode is not a feasible strategy. Let us use the time we have as wisely as possible.

Billionaire Frank McCourt Wants to Buy TikTok. Here’s Why He Thinks He Could Save It

16 May 2024 at 15:21

Billionaire Frank McCourt has long argued that the internet needs to be radically changed on an infrastructural level in order to reduce its toxicity, misinformation, and extractive nature. Now, he’s hoping to slide into a power vacuum in pursuit of that goal. McCourt is putting together a bid to buy TikTok from Chinese technology company ByteDance, which faces a U.S. ban at the end of this year unless it sells the wildly popular app.

McCourt’s central thesis lies in the belief that users should have more control over their personal data and online identities. His aim is to assemble a coalition to buy TikTok, so that its most valuable user data would be kept not by a single company, but on a decentralized protocol. McCourt has developed this idea in conjunction with technologists, academics, and policymakers via his nonprofit Project Liberty. His plan has received support from notable luminaries including the author Jonathan Haidt (The Anxious Generation) and Tim Berners-Lee, the inventor of the world wide web.

McCourt did not say how much he thinks TikTok is worth. Other magnates who have expressed interest in bidding for TikTok include Kevin O’Leary and Steve Mnuchin.

But there is no indication that ByteDance plans to sell TikTok; it is challenging the forced sale in the U.S. court system on the grounds of freedom of speech. And McCourt faces many obstacles in folding TikTok into his technological vision while ensuring the app’s profitability—especially because he says he’s not interested in buying the core algorithm that has hypercharged TikTok’s growth.

Read More: TikTok Vows to Fight Its Ban. Here’s How the Battle May Play Out

In an interview with TIME, McCourt explained his vision for the app and the larger internet ecosystem. Here are excerpts from the conversation.

TIME: A couple years ago, you stepped down as CEO from McCourt Global in order to devote most of your time to Project Liberty, whose goal is fixing the internet. How pivotal could buying TikTok be towards that mission?

Frank McCourt: I think it’s a fantastic opportunity to really accelerate things and catalyze an alternative version of the internet where individuals own and control their identity and data. The internet does not have to operate the way it does right now. It’s important to remember that the other big platforms in the U.S. operate with the same architecture as TikTok: scraping people’s data, aggregating it, and then exploiting it.

When I say data, it sounds abstract. But it’s our personhood; it’s everything about us. And I think it’s well past time that we correct that fundamental flaw in the design of the internet and return agency to individuals.

Let’s say I’m a small business owner who uses TikTok to post content and sell goods. How would my experience improve under your new design?

The user experience wouldn’t change much. We want this to be a seamless thing. Part of our thinking is to keep TikTok U.S. alive, because China has said they’re not sharing the [core] algorithm under any circumstances. And without a viable bidder to move forward without the algorithm, they may shut it down. But we’re not looking for the algorithm.

Many people contend that the core algorithm is essential to TikTok’s value. Do you worry that TikTok wouldn’t be TikTok without it?

What makes TikTok, TikTok, to me, is the user base, the content created by the user base, the brand, and all the tech short of the algorithm. Of course, TikTok isn’t worth as much without the algorithm. I get that. That’s pretty plain. But we’re talking about a different design, which requires people to move on from the mindset and the paradigm we’re in now. 

It will be a version where everyone is deciding what pieces or portions of their data to share with whom. So you still have a user experience every bit as good, but with much better architecture overall. And not only will individuals have agency, but let’s have a broader group of people participating in who shares in the economic value of the platform itself. 

Read More: Why The Billionaire Frank McCourt is Stepping Down As CEO Of His Company To Focus on Rebuilding Social Media

How would that value sharing work? Are you talking about some sort of directed shares program, or a crypto token?

It’s a bit early to have that conversation. That’s why we’ve retained Kirkland & Ellis to advise us, along with Guggenheim Securities. They’re grappling with and thinking through those very issues right now.

So how would users control their data?

Imagine an internet where individuals set the terms and conditions of their data with use cases and applications. And you’ll still want to share your data, because you’ll want to get the benefits of the internet. But you’re sharing it on a trusted basis. The mere act of giving permission to use it is very different than having it be surveilled and scraped.
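
As a toy illustration of the permission model McCourt describes, here is a minimal sketch. This is a generic illustration, not DSNP’s actual data model or API; all names in it are hypothetical.

```python
# Toy sketch of permissioned data sharing: a user grants scoped, revocable
# access instead of being scraped. Generic illustration only; not DSNP.
from dataclasses import dataclass, field

@dataclass
class Grant:
    app: str       # who may use the data
    scope: str     # which slice of the user's data
    purpose: str   # the agreed use case

@dataclass
class UserDataStore:
    owner: str
    grants: list = field(default_factory=list)

    def permit(self, app: str, scope: str, purpose: str) -> None:
        self.grants.append(Grant(app, scope, purpose))

    def revoke(self, app: str) -> None:
        self.grants = [g for g in self.grants if g.app != app]

    def may_read(self, app: str, scope: str) -> bool:
        return any(g.app == app and g.scope == scope for g in self.grants)

# A user grants scoped access, the app can read only that slice,
# and the grant can be revoked at any time.
store = UserDataStore(owner="alice")
store.permit("tiktok-us", scope="watch-history", purpose="feed ranking")
print(store.may_read("tiktok-us", "watch-history"))   # True
store.revoke("tiktok-us")
print(store.may_read("tiktok-us", "watch-history"))   # False
```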

The blockchain-based decentralized infrastructure you plan to use for TikTok, DSNP, is already running, and the social media app MeWe is currently migrating its tech and data onto it. What have you learned from MeWe’s transition?

That it works. Like any other engineering challenge, you have to go through all the baby steps to get it right. But the migration started in earnest in Q4, and over 800,000 users have migrated. To me, it’s important that we’re not bringing forward a concept: we’re bringing forward a proven tech solution.

In order to finance this bid, you will seek money from foundations, endowments, pension funds, and philanthropies. Are you confident that if you get these big investors on board, you’ll be able to return value to them?

I am. This opens up and unlocks enormous value for investors and users. At the same time, it has a tremendous impact for society. I mentioned the pension funds and endowments and foundations as a category of investor that have a longer term horizon, and look at making investments not strictly on the basis of financial ROI. It’s important they be involved, because this is a societal project to fundamentally change how the internet works.  

We want a lot of people involved in this in different ways, shapes and forms, which is another distinguishing characteristic. We don’t need Saudi money to replace Chinese money. We’re trying to bring forward a solution to address the problem at its root cause, not at the symptomatic level.

You committed $150 million to Project Liberty in 2022. Are you prepared to spend in that ballpark again for TikTok?

Update that number: I’ve committed half a billion dollars to Project Liberty. That should be an indication of my level of seriousness about all this, and my level of seriousness about the bid for TikTok U.S.

2023 Was the Worst Year for Internet Shutdowns Globally, New Report Says

16 May 2024 at 10:00

Last year, an internet shutdown in the state of Manipur, India, lasted a staggering 212 days when the state government issued 44 consecutive orders to switch off access across all broadband and mobile networks. The shutdown affected a population of 3.2 million, and made it more difficult to document rampant atrocities committed against minorities during bloody violence between the Meitei and Kuki-Zo tribes, which included murder, rape, arson, and other gender-based violence, says Access Now, a digital rights watchdog that publishes an annual report on internet shutdowns around the world. 

Manipur was just one of hundreds of instances where authorities in India used the tactic as “a near-default response to crises, both proactively and reactively,” according to the group’s latest report published May 15. For the sixth consecutive year, India led the global list for imposing the highest number of internet shutdowns after removing access 116 times in 2023. 

What’s more, Access Now deemed 2023 the worst year for internet shutdowns globally, recording 283 shutdowns across 39 countries—the highest number of shutdowns in a single year since it first began monitoring in 2016. It’s a steep 41% increase from the previous year, which saw 201 shutdowns in 40 countries, and a 28% increase from 2019, which previously held the record for the highest number of shutdowns. 
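
The percentages track the raw counts. A quick check, using only the figures reported here (the 2019 count is implied by the report’s 28% claim, not stated in this story):

```python
# Verifying the year-over-year percentages quoted from the Access Now report.
shutdowns_2023 = 283
shutdowns_2022 = 201

increase_vs_2022 = (shutdowns_2023 - shutdowns_2022) / shutdowns_2022
print(f"Increase vs. 2022: {increase_vs_2022:.0%}")   # -> 41%

# A 28% rise over 2019 (the previous record year) implies roughly:
implied_2019 = shutdowns_2023 / 1.28
print(f"Implied 2019 count: ~{implied_2019:.0f}")     # -> ~221
```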

“By nearly every measure, 2023 is the worst year of internet shutdowns ever recorded — highlighting an alarming and dangerous trend for human rights,” the report states.

Read More: How Internet Shutdowns Wreak Havoc in India

Of the shutdowns recorded in 2023, 173 occurred in conflict zones and corresponded to acts of violence. In the Gaza Strip, for example, the Israeli military “used a combination of direct attacks on civilian telecommunications infrastructure, restrictions on access to electricity, and technical disruptions to shut down the internet,” the report reads. (In a statement to TIME, the IDF said “As part of the IDF’s operations in the Gaza Strip, the IDF is facilitating the restoration of infrastructure in areas affected by the war and is coordinating with local teams to bring infrastructure repair to these locations.”)

And in the Amhara region of Ethiopia, security forces imposed a near-total communications blackout to cause terror and mass displacement through the destruction of property and indiscriminate bombing across the region, according to the report.

The watchdog also points out that while the increase of shutdowns associated with violence during armed conflict was high, in 74 instances across nine countries—including Palestine, Myanmar, Sudan, and Ukraine—warring political parties claimed to deploy shutdowns during protests and politically unstable events as a peacekeeping measure. In India alone, authorities ordered 65 shutdowns in 2023 in specific attempts to address communal violence. Similarly, Pakistan and Bangladesh imposed seven and three shutdowns, respectively, as a way to suppress political dissent during political rallies and election campaigning. 

Read More: Exclusive: Tech Companies Are Failing to Keep Elections Safe, Rights Groups Say

Some 93% of all cases recorded in 2023 occurred without giving the public any advance notice of an impending shutdown, a practice that Access Now says only deepens fear and uncertainty, and puts more people in grave danger.

“We are at a tipping point, so take this as a wake-up call: all stakeholders across the globe — governments, civil society, and the private sector alike — must take urgent action to permanently end internet shutdowns,” Zach Rosson, a data analyst at Access Now, said in a statement.

OpenAI’s Co-Founder and Chief Scientist Ilya Sutskever Is Leaving the Company

15 May 2024 at 05:20

OpenAI Chief Scientist and co-founder Ilya Sutskever is leaving the artificial intelligence company, a departure that ends months of speculation in Silicon Valley about the future of a top AI researcher who played a key role in the brief ouster of Sam Altman last year.

Sutskever will be replaced by Research Director Jakub Pachocki, OpenAI said on its blog Tuesday. 

In a post on X, Sutskever called the trajectory of OpenAI “miraculous” and said that he was confident the company will build AI that is “both safe and beneficial” under its current leadership.

The exit removes an executive and renowned researcher who has played a pivotal role in the company since its earliest days, helping guide discussions over the safety of AI technology and at times differing with Altman over strategy. When OpenAI was founded in 2015, he served as its research director after being recruited to join the company by Elon Musk. At that point, Sutskever was already well known in the field for his work on neural networks at the University of Toronto and his work at the Google Brain lab. Sutskever even officiated the wedding of OpenAI President Greg Brockman at the company’s offices.

Sutskever clashed with Altman over how rapidly to develop AI, a technology prominent scientists have warned could harm humanity if allowed to grow without built-in constraints, for instance on misinformation. Jan Leike, another OpenAI veteran who co-led the so-called superalignment team with Sutskever, also resigned. Leike’s responsibilities included exploring ways to limit the potential harm of AI.

Last year, Sutskever was one of several OpenAI board members who moved to fire Chief Executive Officer Altman, a decision that touched off a whirlwind five days at the company: Brockman quit in protest. Investors revolted. And within days, nearly all of OpenAI’s roughly 770 employees signed a letter threatening to quit unless Altman was reinstated.

Adding to the chaos, Sutskever said he regretted his participation in Altman’s ouster. Soon after, the CEO was reinstated. 

After Altman returned to the company in late November, he said in a blog post that Sutskever wouldn’t go back to his former post as a board member, but that the company was “discussing how he can continue his work at OpenAI.”

In the subsequent months, Sutskever largely disappeared from public view, sparking speculation about his continued role at the company. Sutskever’s post on X Tuesday was the first time he shared anything on the social network since reposting a message from OpenAI in December.

Asked about Sutskever at a press conference in March, Altman said he loved him, and that he believed Sutskever loved OpenAI, adding: “I hope we work together for the rest of our careers.”

In a post on X on Tuesday, Altman wrote, “Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

On X, Sutskever posted that he is working on an as-yet-unnamed project that is “very personally meaningful” for him.

Pachocki, the company’s new chief scientist, has worked at OpenAI since 2017 and led the development of the company’s GPT-4 AI model, OpenAI said.

A Group of TikTok Creators Are Suing the U.S. to Block a Potential Ban on the App

A group of TikTok creators followed the company’s lead and filed their own lawsuit to block the U.S. law that would force Chinese parent ByteDance Ltd. to divest itself of the popular video app by January or face a ban.

Like TikTok in its May 7 case, the eight creators behind Tuesday’s suit are challenging an ultimatum by the U.S. meant to address national security concerns that the Chinese government could access user data or influence what’s seen on the platform. The creators include a rancher from Texas, a college football coach in North Dakota, the founder of a skincare line in Atlanta, and a Maryland book lover who promotes Black authors on the platform.

“Our clients rely on TikTok to express themselves, learn and find community,” Ambika Kumar, a lawyer for the creators, said in a statement. “They hope to vindicate not only their First Amendment rights, but the rights of the other approximately 170 million Americans who also use TikTok. The ban is a pernicious attack on free speech that is contrary to the nation’s founding principles.”

A Justice Department spokesperson said the government looks forward to defending the law in court.

“This legislation addresses critical national security concerns in a manner that is consistent with the First Amendment and other constitutional limitations,” the spokesperson said in a statement.

ByteDance has said it doesn’t have any intention of trying to find a buyer for TikTok as the January deadline approaches. Instead, ByteDance wants the law declared unconstitutional, saying it violates the First Amendment and represents an illegal punishment without due process or a presidential finding that the app is a national security threat.

Read More: What to Know About the Law That Could Get TikTok Banned in the U.S.

TikTok has argued the law will stifle free speech and hurt creators and small business owners who benefit economically from the platform. The company said that in response to data security concerns, it spent more than $2 billion to isolate its U.S. operations and agreed to oversight by American company Oracle Corp.

Professional content creators typically don’t make enough from TikTok itself to earn a living. The social media company has a fund that pays certain creators based on performance, and it also shares revenue from products tagged and purchased through the app. Instead, creators use the app to build an audience in hopes of landing lucrative brand sponsorship deals, where they make videos for or plug the products of brands, much like on other social media platforms.

Read More: TikTok Vows to Fight Its Ban. Here’s How the Battle May Play Out

TikTok’s links to China have faced scrutiny under previous administrations. Former President Donald Trump used an executive order to try to force a sale of the app to an American company or face a ban. But his administration also faced multiple legal challenges—including from creators—and judges blocked the ban from taking place. When Joe Biden became president, he put Trump’s ban under fresh review.

A lobbying push against the law by TikTok Chief Executive Officer Shou Chew failed to convince U.S. lawmakers who worried about the national security threat of China potentially accessing user data and disseminating propaganda to about half the American population. Congress passed the law in April and Biden signed it.

Read More: The Grim Reality of Banning TikTok

Last year, Montana became the first U.S. state to enact a law that would ban residents from using the app. A federal judge sympathized with free-speech arguments by TikTok and creators in blocking the Montana measure while the legal challenges play out.

The Justice Department had no immediate comment on Tuesday’s suit.

DOJ Says Boeing Violated Deal That Avoided Prosecution After 737 Max Crashes

(WASHINGTON) — Boeing has violated a settlement that allowed the company to avoid criminal prosecution after two deadly crashes involving its 737 Max aircraft, the Justice Department told a federal judge on Tuesday.

It is now up to the Justice Department to decide whether to file charges against the aircraft maker amid increasing scrutiny over the safety of its planes. Prosecutors will tell the court no later than July 7 how they plan to proceed, the Justice Department said.

Boeing reached a $2.5 billion settlement with the Justice Department in January 2021 to avoid prosecution on a single charge of fraud – misleading regulators who approved the 737 Max. Boeing blamed the deception on two relatively low-level employees.

The manufacturing giant has come under renewed scrutiny since a door-plug panel blew off a 737 Max jetliner during an Alaska Airlines flight in January. The company is under multiple investigations, and the FBI has told passengers from the flight that they might be victims of a crime.

Boeing didn’t immediately respond to a request for comment.

Glenn Leon, head of the Justice Department criminal division’s fraud section, said in the letter filed in Texas federal court that Boeing failed to make changes to prevent it from violating federal anti-fraud laws — a condition of the 2021 settlement.

The determination means that Boeing could be prosecuted “for any federal criminal violation of which the United States has knowledge,” including the charge of fraud that the company hoped to avoid with the $2.5 billion settlement, the Justice Department said.

However, it is not clear whether the government will prosecute the manufacturing giant.

“The Government is determining how it will proceed in this matter,” the Justice Department said in the court filing. Prosecutors said they will meet with families of the crash victims on May 31.

Paul Cassell, a lawyer who represents families of passengers who died in the Max crash in Ethiopia, called it a “positive first step, and for the families, a long time coming.”

“But we need to see further action from DOJ to hold Boeing accountable, and plan to use our meeting on May 31 to explain in more details what we believe would be a satisfactory remedy to Boeing’s ongoing criminal conduct,” Cassell said.

Investigations into the 2018 and 2019 crashes pointed to a flight-control system that Boeing added to the Max without telling pilots or airlines. Boeing downplayed the significance of the system, then didn’t overhaul it until after the second crash.

The Justice Department investigated Boeing and settled the case in January 2021. After secret negotiations, the government agreed not to prosecute Boeing on a charge of defrauding the United States by deceiving regulators who approved the plane.

In exchange, the company paid $2.5 billion — a $243.6 million fine, a $500 million fund for victim compensation, and nearly $1.8 billion to airlines whose Max jets were grounded.

Boeing has faced civil lawsuits, congressional investigations and massive damage to its business since the crashes in Indonesia and Ethiopia.

Dublin to New York City Portal Temporarily Shut Down Due to Inappropriate Behavior

14 May 2024 at 14:57

A portal linking New York City to Dublin via a livestream has been temporarily shut down after inappropriate behavior ensued, according to the Dublin City Council. 

Less than a week after the 24/7 visual art installation was put in place, officials opted to close it down temporarily after people began to flash each other and grind on the portal, and one person even displayed images of the September 11 attacks to people in New York City. At the same time, the portal had also been the site of reunions with old friends and even a proposal, with many documenting their experiences with the installation online.

The Dublin City Council said that although those engaged in the inappropriate behavior were few and far between, videos of said behavior went viral online. 

“While we cannot control all of these actions, we are implementing some technical solutions to address this and these will go live in the next 24 hours,” the council said in a Monday statement. “We will continue to monitor the situation over the coming days with our partners in New York to ensure that portals continue to deliver a positive experience for both cities and the world.”

The New York City portal is next to the Flatiron Building, while Dublin’s is at the junction of North Earl Street and O’Connell Street.

What is the portal?

The portal was launched on May 8 as a way to bring people together via technology. 

“Portals are an invitation to meet people above borders and differences and to experience our world as it really is—united and one,” said Benediktas Gylys, the Lithuanian artist and founder of The Portal. “The livestream provides a window between distant locations, allowing people to meet outside of their social circles and cultures, transcend geographical boundaries, and embrace the beauty of global interconnectedness.”

The Dublin portal is set to connect with other cities and destinations in Poland, Brazil, and Lithuania, the Dublin City Council said in a May 8 press release. The connection with New York City is expected to remain through autumn, with additional cultural performances starting in mid-May.

Why Biden Is Taking a Hard Line on Chinese EVs

14 May 2024 at 11:21

The Biden Administration announced new tariffs Tuesday on Chinese-made electric vehicles, roughly quadrupling the levy from 27.5% to 102.5%, as well as new tariffs on solar cells, steel, and aluminum.

These tariffs are expected to cover some $18 billion worth of imports from China.
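
The arithmetic behind the headline numbers: the 102.5% total combines the Section 301 rate, which is what actually quadruples (25% to 100%), with the standing 2.5% base duty on imported passenger cars. A sketch using a hypothetical $30,000 vehicle:

```python
# Duty on a hypothetical $30,000 Chinese-made EV, before and after the change.
vehicle_value = 30_000

old_rate = 0.25 + 0.025   # 25% Section 301 tariff + 2.5% base duty = 27.5%
new_rate = 1.00 + 0.025   # 100% Section 301 tariff + 2.5% base duty = 102.5%

print(f"Duty before: ${vehicle_value * old_rate:>9,.0f}")  # -> $8,250
print(f"Duty after:  ${vehicle_value * new_rate:>9,.0f}")  # -> $30,750
```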

Currently, China exports very few electric vehicles to the U.S., so it is unlikely that the tariffs will have much of an impact in the short run. In the first quarter of 2024, only one Chinese carmaker, Geely, exported EVs to the U.S., and it represented less than 1% of the market.

Nevertheless, the Biden Administration says that it worries that in the long run, China’s subsidies of its electric vehicle industry could lead it to claim a larger share of the market. “When the global market is flooded by artificially cheap Chinese products, the viability of American and other foreign firms is put into question,” Treasury Secretary Janet Yellen said in a speech during a visit to Beijing in April.

Since coming into office, President Joe Biden has left the tariffs Trump put in place on China intact, as part of a bid to encourage more American manufacturing. On a Monday call with reporters, Lael Brainard, director of the White House National Economic Council, said that the tariffs would help manufacturing workers in Pennsylvania and Michigan by ensuring that “historic investments in jobs spurred by President Biden’s actions are not undercut by a flood of unfairly underpriced exports from China.”

Some observers have suggested that the tariffs are an attempt to get ahead of Donald Trump, who has expressed support for an across-the-board levy of 60% or more on all Chinese goods.

The announcement also comes during an election year in which tensions between the U.S. and China are running high. Some 83% of Americans have an unfavorable view of China, according to a survey conducted by the Pew Research Center in 2023.

Beijing has responded by saying that the new tariffs violate the World Trade Organization’s rules. “Section 301 tariffs imposed by the former US administration on China have severely disrupted normal trade and economic exchanges between China and the US. The WTO has already ruled those tariffs against WTO rules,” said Chinese Foreign Ministry spokesperson Lin Jian in a conversation with reporters on Friday.

Ahead of the announcement, senior U.S. officials denied the tariffs are related to the presidential election, the Financial Times reported. “This has nothing to do with politics,” one official said.

Why Protesters Around the World Are Demanding a Pause on AI Development 

13 May 2024 at 23:20
Pause AI protest in London

Just one week before the world’s second-ever global summit on artificial intelligence, protesters from a small but growing movement called “Pause AI” demanded that the world’s governments regulate AI companies and freeze the development of new cutting-edge artificial intelligence models. They say that development of these models should be allowed to continue only if companies agree to have them thoroughly evaluated for safety first. Protests took place on Monday in 13 countries, including the U.S., the U.K., Brazil, Germany, Australia, and Norway.


In London, a group of 20 or so protesters stood outside the U.K.’s Department for Science, Innovation and Technology, chanting slogans like “stop the race, it’s not safe” and “whose future? our future” in hopes of attracting the attention of policymakers. The protesters say their goal is to get governments to regulate the companies developing frontier AI models, including OpenAI’s ChatGPT. They say that companies are not taking enough precautions to make sure their AI models are safe enough to be released into the world.

“[AI companies] have proven time and time again… through the way that these companies’ workers are treated, with the way that they treat other people’s work by literally stealing it and throwing it into their models. They have proven that they cannot be trusted,” said Gideon Futerman, an Oxford undergraduate student who gave a speech at the protest.

One protester, Tara Steele, a freelance writer who works on blogs and SEO content, said that she had seen the technology impact her own livelihood. “I have noticed since ChatGPT came out, the demand for freelance work has reduced dramatically,” she says. “I love writing personally… I’ve really loved it. And it is kind of just sad, emotionally.”

Read More: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

She says that her main reason for protesting is that she fears even more dangerous consequences could come from frontier artificial intelligence models in the future. “We have a host of highly qualified, knowledgeable experts, Turing Award winners, highly cited AI researchers, and the CEOs of the AI companies themselves [saying that AI could be extremely dangerous].” (The Turing Award is an annual prize awarded to computer scientists for contributions of major importance to the field, and is sometimes referred to as the “Nobel Prize” of computing.)

She’s especially concerned about the growing number of experts who warn that improperly controlled AI could lead to catastrophic consequences. A report commissioned by the U.S. government, published in March, warned that “the rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” Currently, the largest AI labs are attempting to build systems capable of outperforming humans on nearly every task, including long-term planning and critical thinking. If they succeed, ever more aspects of human activity could become automated, from mundane tasks like online shopping to the operation of autonomous weapons systems that could act in ways we cannot predict. This could lead to an “arms race” that increases the likelihood of “global- and WMD [weapons of mass destruction]-scale fatal accidents, interstate conflict, and escalation,” according to the report.

Read More: Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

Experts still don’t understand the inner workings of AI systems like ChatGPT, and they worry that with more sophisticated systems, our lack of knowledge could lead us to dramatically miscalculate how more powerful systems would act. Depending on how integrated AI systems become in human life, they could wreak havoc and gain control of dangerous weapons systems, leading many experts to worry about the possibility of human extinction. “Those warnings aren’t getting through to the general public, and they need to know,” she says.

As of now, machine learning experts are somewhat divided about exactly how risky further development of artificial intelligence technology is. Two of the three “godfathers” of deep learning, Geoffrey Hinton and Yoshua Bengio, have publicly stated that they believe there is a risk the technology could lead to human extinction. (Deep learning is the type of machine learning that allows AI systems to better simulate the decision-making of the human brain.)

Read More: Eric Schmidt and Yoshua Bengio Debate How Much A.I. Should Scare Us

The third godfather, Yann LeCun, who is also the Chief AI Scientist at Meta, staunchly disagrees with the other two. He told Wired in December that “AI will bring a lot of benefits to the world. But people are exploiting the fear about the technology, and we’re running the risk of scaring people away from it.”

Anthony Bailey, another Pause AI protester, said that while he understands there are benefits that could come from new AI systems, he worries that tech companies will be incentivized to build technologies that humans could easily lose control over, because these technologies also have immense potential for profit. “That’s the economically valuable stuff. That’s the stuff that if people are not dissuaded that it’s dangerous, those are the kinds of modules which are naturally going to be built.” 

Why GameStop’s Resurgence Could Signal Another Meme Stock Frenzy

13 May 2024 at 17:00
Keith Gill

A single JPEG has catalyzed yet another rabid surge in the stock price of the video game store GameStop: its price jumped by more than 70% on Monday morning.  

On the evening of Sunday, May 12, a man named Keith Gill posted an illustration on X of a man bolting upright in his chair. Gill, who goes by the handle Roaring Kitty, is something of a digital folk hero to many amateur investors—he was one of the major catalysts of the WallStreetBets craze of 2021. His reappearance and the subsequent enthusiasm—coupled with other rising “meme stocks”—suggest that the U.S. is fully in the midst of another meme stock frenzy, in which small-scale investors rally together to push the stock market in unpredictable ways. On Monday, GameStop’s shares briefly passed $36—their highest price since August 2022—and were halted multiple times for volatility. GameStop also topped the trending lists on both Google and X.


In late 2020, Gill became renowned for his stock market advice on YouTube and the subreddit WallStreetBets. In particular, he advised fellow investors to buy GameStop shares, believing they were undervalued. Some major Wall Street institutions, conversely, were betting on GameStop to fail, as a declining number of people went to physical stores to buy video games, instead purchasing them online.

But a sprawling online community soon rallied around Gill’s thesis, hyping up GameStop with memes and other posts on social media. Millions of everyday people bought shares, pushing the price to unprecedented heights and punishing the hedge funds that had bet against it. GameStop became the textbook definition of a “meme stock”: a stock whose value is driven more by social media enthusiasm than by any underlying financial metrics. The saga showed Wall Street traditionalists that coordinated small-scale retail investors could have an outsize impact on the stock market.

Read More: Dumb Money and the Complicated Legacy of GameStop

Interest in meme stocks waned after a few months, and Gill mostly disappeared from public life. In September 2023, his story was canonized in the Hollywood film Dumb Money, in which he was played by Paul Dano. The film portrays Gill’s unwavering belief in his investments, and his refusal to sell shares even when they were worth millions of dollars—because many other investors looked to him as the leader of a movement and would only sell if he did first. 

Gill’s X account lay dormant for nearly three years. But on Sunday, the cartoon of a man sitting upright seemed to signal that he was once again ready to jump into the investing fray and rally fellow traders into another mission. The image accrued 14 million views and 77,000 likes in 15 hours. The next morning, he posted several more pop culture memes, including a resurgent Wolverine (Hugh Jackman) and Breaking Bad’s Walter White (Bryan Cranston) growling, “We’re done when I say we’re done.”

GameStop the company, meanwhile, hasn’t been performing particularly well. In March, it slashed its workforce and reported lower year-over-year fourth-quarter revenue, as it faced continued competition from online retailers and weak consumer spending.

But its stock’s resurgence comes in the midst of a larger spike of activity in meme stocks. Crypto meme coins have seen significant trading volume over recent months, and jumped once again following Gill’s post. Other meme stocks also jumped, including AMC, which increased 22%, and Reddit, which increased 13%.

Speculative stocks typically see increased activity when the economy is strong, and people feel like they have money to gamble with. Many participants in the WallStreetBets craze also felt like they were waging a symbolic war against Wall Street and its control of the financial system. Many individuals certainly made a lot of money. But whether the larger collective mission was successful has been hotly debated. “The whole GameStop thing: they lost,” Kyla Scanlon, an economics analyst and content creator, told TIME last year. “It’s very hard to beat the stock market.”


Big Tech Companies Were Investors in Smaller AI Labs. Now They’re Rivals

13 May 2024 at 14:29

Amazon and Microsoft have, so far, stood slightly apart from the artificial intelligence arms race. While Google and Meta made developing their own AI models a top priority, Microsoft and Amazon have invested in smaller technology companies, in return receiving access to those companies’ AI models that they then incorporated into their products and services.

Microsoft has invested at least $13 billion in OpenAI, the company behind ChatGPT. As part of this agreement, OpenAI gives Microsoft exclusive access to the AI systems it develops, while Microsoft provides OpenAI with the computational power it needs. Anthropic has deals with both Amazon and Google, receiving $4 billion from the former and up to $2 billion from the latter, in exchange for making its models available through Amazon and Google’s cloud services platforms. (Investors in Anthropic also include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)


Now, there are signs that the two technology giants are wading deeper into the fray. In March, The Verge reported that Amazon has tasked its AGI team with building a model that outperforms Anthropic’s most capable AI model, Claude 3, by the middle of this year. Earlier this month, The Information reported that Microsoft is training a foundation model large enough to compete with frontier model developers such as OpenAI.

While there are many types of AI systems that are used in a multitude of ways, the big trend of the last couple of years is language models—the AI systems that can generate coherent prose and usable code, and that power chatbots such as ChatGPT. While younger companies OpenAI and Anthropic, alongside the more established Google DeepMind, are in the lead for now, their new big tech rivals have advantages that will be hard to offset. And if the tech giants come to dominate the AI market, the implications—for corporate concentration of power and for whether the most powerful AI systems are being developed safely—could be troubling.

A change in strategy

Over the course of the 2010s, AI researchers began to realize that training their AI systems with more computational power would reliably make them more capable. Over the same period, the computational power used to train AI models increased rapidly, doubling every six months, according to researchers at Epoch, an AI-focused research institute.
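
To put that doubling rate in perspective, here is a back-of-the-envelope calculation of ours (not a figure from Epoch): compute that doubles every six months grows by a factor of

$$2^{2t} \text{ after } t \text{ years}, \quad \text{so } 2^{10} \approx 1{,}000\times \text{ in five years and } 2^{20} \approx 1{,}000{,}000\times \text{ in ten.}$$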

The specialized semiconductor chips required to do that much computational work are expensive, as is employing the engineers who know how to make use of them. OpenAI CEO Sam Altman has stated that GPT-4 cost over $100 million to train. Needing more and more capital is why OpenAI, which was founded in 2015 as a nonprofit, changed its structure and went on to ink multibillion-dollar deals with Microsoft, and why Anthropic has signed similar agreements with Amazon and Google. Google DeepMind—the AI team within Google that develops Google’s most powerful AI systems—was formed last year when Google merged its elite AI group, Google Brain, with DeepMind. Much like OpenAI and Anthropic, DeepMind started out as a startup before it was acquired by Google in 2014.

Read More: Amazon’s Partnership With Anthropic Shows Size Matters in the AI Industry

These partnerships have paid off for all parties involved. OpenAI and Anthropic have been able to access the computational power they need to train state-of-the-art AI models—most commentators agree that OpenAI’s GPT-4 and Anthropic’s Claude 3 Opus, along with Google DeepMind’s Gemini Ultra, are the three most capable models currently available. Companies further behind the frontier have so far tried alternative business strategies. For example, Meta gives outsiders more thorough access to its AI models in order to benefit from external developers improving them, and to attract talented researchers who prefer to be able to publish their work openly.

In their quarterly earnings reports in April, Microsoft and Amazon reported bumper quarters, which they both partly credited to AI. Both companies also benefit from the agreements in that a large proportion of the money flows back to them, as it is used to purchase computational power from their cloud computing units.

However, as the technical feasibility and commercial utility of training larger models has become apparent, it has become more attractive for Microsoft and Amazon to build their own large models, says Neil Thompson, who researches the economics of AI as the director of the FutureTech research project at the Massachusetts Institute of Technology. Building their own models should, if successful, be cheaper than licensing the models from their smaller partners and give the big tech companies more control over how they use the models, he says.

It’s not only the big tech companies that are making advances. OpenAI’s Altman has pitched his company’s products to a range of large firms that include Microsoft customers.

Who will win out?

The good news for OpenAI and Anthropic is that they have a head start. GPT-4 and Claude 3 Opus, alongside Google’s Gemini Ultra, are still in a different class from other language models such as Meta’s Llama 3, according to a popular chatbot ranking site. OpenAI notably finished training GPT-4 back in August 2022.

But maintaining this lead will be “a constant struggle,” writes Nathan Benaich, founder and general partner at venture capital firm Air Street Capital, in an email to TIME. “Labs are in the challenging position of being in constant fundraising mode to pay for talent and hardware, while lacking a plan to translate this model release arms race into a sustainable long-term business. As the sums of money involved become too high for US investors, they’ll also start having to navigate tricky questions around foreign sovereign wealth.” In February, the Wall Street Journal reported that Altman was in talks with investors, including the U.A.E. government, to raise up to $7 trillion for AI chip manufacturing projects.

Read More: The UAE Is on a Mission to Become an AI Power

Big technology companies, on the other hand, have ready access to computational resources—Amazon, Microsoft, and Google account for 31%, 24%, and 11% of the global cloud infrastructure market, respectively, according to data from market intelligence firm Synergy Research Group. This makes it cheaper for them to train large models. It also means that, even if further development of language models doesn’t pay off commercially for any company, the tech companies selling access to computational power via the cloud can still profit.

“The cloud providers are the shovel salesmen during the gold rush. Whether frontier model builders make money or lose it, cloud providers win,” writes Benaich. “Companies like Microsoft and Amazon sit in an enviable position in the value chain, combining both the resources to build their own powerful models with the scale that makes them an essential distribution partner for newer entrants.”

But while the big technology companies may have certain advantages, the smaller companies have their own strengths, such as greater experience training the largest models, and the ability to attract the most talented researchers, says Thompson.

Anthropic is betting that its talent density and proprietary algorithms will allow it to stay at the frontier while using less computational resources than many of its competitors, says Jack Clark, one of the company’s co-founders and head of policy. “We’re going to be on the frontier surprisingly efficiently relative to others,” he says. “For the next few years, I don’t have concerns about this.”

If Big Tech wins

It is still very much an open question whether big technology companies will manage to outcompete their smaller investees. But if they were to, there could be implications for market competition and for efforts to ensure the development of powerful AI systems benefits society. 

While it could be argued that more companies entering the foundation model market would increase competition, it is more likely that the vertical integration will serve to increase the power of already powerful technology companies, argues Amba Kak, co-executive director of the AI Now Institute, a research institute that studies the social implications of artificial intelligence.

“Viewing this as ‘more competition’ would be the most inventive corporate spin that obscures the reality that all the versions of this world serve to consolidate the concentration of power in tech,” she writes to TIME. “We need to be wary of this kind of spin especially in the context of heightened antitrust scrutiny from the UK CMA, the FTC and European Commission.”

Read More: U.K. Competition Watchdog Signals Cautious Approach to AI Regulation

Larger companies coming to dominate could also be troubling because the smaller companies that currently lead were explicitly founded in order to ensure that the building of powerful AI systems goes well for humanity, says Anton Korinek, an economics professor at the University of Virginia. OpenAI’s founding goal was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole,” and Anthropic’s founding goal was “to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people.” 

“In some sense, you can say, the AGI labs—OpenAI, Anthropic, DeepMind—were all founded on the basis of idealism,” he says. “Large shareholder owned and controlled corporations, they just can’t follow that strategy—they have to produce value for the shareholder ultimately.”

Even so, companies like OpenAI and Anthropic cannot act entirely in the public interest, because they’re also exposed to commercial incentives through the need to raise funds, says Korinek. “It’s part of that broader movement, that capital in the form of [computational power] is becoming the most important input,” he says. “If your training runs are in the millions, it is much easier to raise philanthropic funding for this. But if your training runs are in the billions, you do need financial returns, in the way that our economy is currently organized.”

With reporting by Billy Perrigo/San Francisco

Why This Chinese EV Poses a Big Threat to the U.S. Auto Industry

45th Bangkok International Motor Show.

LIVONIA, Mich. — A tiny, low-priced electric car called the Seagull has American automakers and politicians trembling.

The car, launched last year by Chinese automaker BYD, sells for around $12,000 in China, but drives well and is put together with craftsmanship that rivals U.S. electric vehicles that cost three times as much. A shorter-range version costs under $10,000.


Tariffs on imported Chinese vehicles will keep the Seagull out of America for now, and it would likely sell for more than $12,000 if imported.

Read More: The Lesson From BYD’s EV Takeover: Don’t Discount China

But the rapid emergence of low-priced EVs from China could shake up the global auto industry in ways not seen since Japanese makers arrived during the oil crises of the 1970s. BYD, which stands for “Build Your Dreams,” could be a nightmare for the U.S. auto industry.

“Any car company that’s not paying attention to them as a competitor is going to be lost when they hit their market,” said Sam Fiorani, a vice president at AutoForecast Solutions near Philadelphia. “BYD’s entry into the U.S. market isn’t an if. It’s a when.”

U.S. politicians and manufacturers already see Chinese EVs as a serious threat. The Biden administration on Tuesday is expected to announce 100% tariffs on electric vehicles imported from China, saying they pose a threat to U.S. jobs and national security.

The Alliance for American Manufacturing says in a paper that government-subsidized Chinese EVs “could end up being an extinction-level event for the U.S. auto sector.”

Earlier this year, Tesla CEO Elon Musk said Chinese EVs are so good that without trade barriers, “they will pretty much demolish most other car companies in the world.”

Outside of China, EVs are often pricey, aimed at higher-income buyers. But Chinese brands offer affordable options for the masses — just as many governments are encouraging a shift away from gasoline vehicles to fight climate change.

Inside a huge garage near Detroit, a company called Caresoft Global tore apart and reassembled a bright green Seagull that its China office purchased and shipped to the U.S.

Company President Terry Woychowski, a former chief engineer on General Motors’ pickup trucks, said the car is a “clarion call” for the U.S. industry, which is years behind China in designing low-cost EVs.

After the teardown, Woychowski said he was left wondering if U.S. automakers can adjust. “Things will have to change in some radical ways in order to be able to compete,” he said.

There’s no single miracle that explains how BYD can manufacture the Seagull for so little. Instead, Woychowski said the entire car, which can go 252 miles (405 kilometers) per charge, is “an exercise in efficiency.”

Higher U.S. labor costs are part of the equation. BYD also keeps costs down through its battery-making expertise: its cars largely use lithium iron phosphate chemistry, common in consumer products. Those batteries cost less but offer less range than most other lithium-ion batteries.

Americans are still learning to make cheaper batteries, Woychowski said.

BYD also makes many of its own parts, including electric motors, dashboards, and bodies, using its huge scale — 3 million vehicles sold worldwide last year — for cost savings.

It designs vehicles with cost and efficiency in mind, he said. For instance, the Seagull has only one windshield wiper, eliminating one motor and one arm, saving on weight, cost and labor to install.

U.S. automakers don’t often design vehicles this way and incur excess engineering costs, Woychowski said.

The efficiency means weight savings that add up, allowing the Seagull to travel farther per charge on a smaller battery.

So Detroit needs to quickly re-learn a lot of design and engineering to keep up while shedding practices from a century of building vehicles, Woychowski said.

The Seagull still has a quality feel. Doors close solidly. The gray synthetic leather seats have stitching that matches the body color, a feature usually found in more expensive cars. The Seagull tested by Caresoft has six air bags and electronic stability control.

A brief drive through some connected parking lots by a reporter showed that it runs quietly and handles curves and bumps as well as more costly EVs.

While acceleration isn’t head-snapping like other EVs, the Seagull is peppy and would have no problems entering a freeway.

BYD would have to modify its cars to meet U.S. safety standards, which are more stringent than China’s. Woychowski says Caresoft hasn’t done crash tests, but he estimated that meeting those standards would add about $2,000 to the cost.

BYD sells the Seagull, also called the Dolphin Mini, in four Latin American countries for about $21,000. The higher price includes transportation and reflects higher profits possible in less cutthroat markets than China.

BYD told the AP last year it is “still in the process” of deciding whether to sell autos in the U.S. It is weighing factory sites in Mexico for the Mexican market.

The company isn’t selling cars in the U.S. largely due to 27.5% tariffs on the sale price of Chinese vehicles when they arrive. Donald Trump slapped on the bulk of the tariff, 25%, when he was president, and it was kept in place under Joe Biden. Trump contends that the rise of EVs backed by Biden will cost U.S. factory jobs, sending the work to China.

The Biden administration has backed legislation and policies to build a U.S. EV manufacturing base.

Some members of Congress are urging Biden to ban imports of Chinese vehicles altogether, including those made in Mexico by Chinese companies that now would come in largely without tariffs.

Ford CEO Jim Farley has seen Caresoft’s work on the Seagull and BYD’s rapid growth, especially in Europe, and he’s moving to change his company. A small “skunkworks” team is designing a new, small EV to keep costs down and quality high, he said earlier this year.

Chinese makers, Farley said, sold almost no EVs in Europe two years ago, but now have 10% of the EV market. It’s likely they’ll export around the globe and possibly sell in the U.S.

Ford is preparing to counter that. “Don’t take anything for granted,” Farley said. “This CEO doesn’t.”

____

Associated Press writers Paul Wiseman and Didi Tang in Washington contributed to this report. Moritsugu reported from Beijing.

Illness Took Away Her Voice. AI Created a Replica She Carries in Her Phone

13 May 2024 at 10:55
AI Recreating Lost Voice

PROVIDENCE, R.I. — The voice Alexis “Lexi” Bogan had before last summer was exuberant.

She loved to belt out Taylor Swift and Zach Bryan ballads in the car. She laughed all the time — even while corralling misbehaving preschoolers or debating politics with friends over a backyard fire pit. In high school, she was a soprano in the chorus.


Then that voice was gone.

Doctors in August removed a life-threatening tumor lodged near the back of her brain. When the breathing tube came out a month later, Bogan had trouble swallowing and strained to say “hi” to her parents. Months of rehabilitation aided her recovery, but her speech is still impaired. Friends, strangers and her own family members struggle to understand what she is trying to tell them.

In April, the 21-year-old got her old voice back. Not the real one, but a voice clone generated by artificial intelligence that she can summon from a phone app. Trained on a 15-second time capsule of her teenage voice — sourced from a cooking demonstration video she recorded for a high school project — her synthetic but remarkably real-sounding AI voice can now say almost anything she wants.

She types a few words or sentences into her phone, and the app instantly reads them aloud.

“Hi, can I please get a grande iced brown sugar oat milk shaken espresso,” said Bogan’s AI voice as she held the phone out her car’s window at a Starbucks drive-thru.

Experts have warned that rapidly improving AI voice-cloning technology can amplify phone scams, disrupt democratic elections and violate the dignity of people — living or dead — who never consented to having their voice recreated to say things they never spoke.

It’s been used to produce deepfake robocalls to New Hampshire voters mimicking President Joe Biden. In Maryland, authorities recently charged a high school athletic director with using AI to generate a fake audio clip of the school’s principal making racist remarks.

Read More: To Make a Real Difference in Health Care, AI Will Need to Learn Like We Do

But Bogan and a team of doctors at Rhode Island’s Lifespan hospital group believe they’ve found a use that justifies the risks. Bogan is one of the first people — and the only one with her condition — who has been able to recreate a lost voice with OpenAI’s new Voice Engine. Other AI providers, such as the startup ElevenLabs, have tested similar technology for people with speech impediments and loss — including a lawyer who now uses her voice clone in the courtroom.

“We’re hoping Lexi’s a trailblazer as the technology develops,” said Dr. Rohaid Ali, a neurosurgery resident at Brown University’s medical school and Rhode Island Hospital. Millions of people with debilitating strokes, throat cancer or neurodegenerative diseases could benefit, he said.

“We should be conscious of the risks, but we can’t forget about the patient and the social good,” said Dr. Fatima Mirza, another resident working on the pilot. “We’re able to help give Lexi back her true voice and she’s able to speak in terms that are the most true to herself.”

Mirza and Ali, who are married, caught the attention of ChatGPT-maker OpenAI because of their previous research project at Lifespan using the AI chatbot to simplify medical consent forms for patients. The San Francisco company reached out while on the hunt earlier this year for promising medical applications for its new AI voice generator.

Bogan was still slowly recovering from surgery. The illness started last summer with headaches, blurry vision and a droopy face, alarming doctors at Hasbro Children’s Hospital in Providence. They discovered a vascular tumor the size of a golf ball pressing on her brain stem and entangled in blood vessels and cranial nerves.

“It was a battle to get control of the bleeding and get the tumor out,” said pediatric neurosurgeon Dr. Konstantina Svokos.

The 10-hour length of the surgery coupled with the tumor’s location and severity damaged Bogan’s tongue muscles and vocal cords, impeding her ability to eat and talk, Svokos said.

“It’s almost like a part of my identity was taken when I lost my voice,” Bogan said.

The feeding tube came out this year. Speech therapy continues, enabling her to speak intelligibly in a quiet room but with no sign she will recover the full lucidity of her natural voice.

“At some point, I was starting to forget what I sounded like,” Bogan said. “I’ve been getting so used to how I sound now.”

Whenever the phone rang at the family’s home in the Providence suburb of North Smithfield, she would push it over to her mother to take her calls. She felt she was burdening her friends whenever they went to a noisy restaurant. Her dad, who has hearing loss, struggled to understand her.

Back at the hospital, doctors were looking for a pilot patient to experiment with OpenAI’s technology.

“The first person that came to Dr. Svokos’ mind was Lexi,” Ali said. “We reached out to Lexi to see if she would be interested, not knowing what her response would be. She was game to try it out and see how it would work.”

Bogan had to go back a few years to find a suitable recording of her voice to “train” the AI system on how she spoke. It was a video in which she explained how to make a pasta salad.

Her doctors intentionally fed the AI system just a 15-second clip; cooking sounds made other parts of the video unusable. That was also all OpenAI’s system needed — an improvement over previous technology, which required much lengthier samples.

They also knew that getting something useful out of 15 seconds could be vital for any future patients who have no trace of their voice on the internet. A brief voicemail left for a relative might have to suffice.

When they tested it for the first time, everyone was stunned by the quality of the voice clone. Occasional glitches — a mispronounced word, a missing intonation — were mostly imperceptible. In April, doctors equipped Bogan with a custom-built phone app that only she can use.

“I get so emotional every time I hear her voice,” said her mother, Pamela Bogan, tears in her eyes.

“I think it’s awesome that I can have that sound again,” added Lexi Bogan, saying it helped “boost my confidence to somewhat where it was before all this happened.”

She now uses the app about 40 times a day and sends feedback she hopes will help future patients. One of her first experiments was to speak to the kids at the preschool where she works as a teaching assistant. She typed in “ha ha ha ha” expecting a robotic response. To her surprise, it sounded like her old laugh.

She’s used it at Target and Marshall’s to ask where to find items. It’s helped her reconnect with her dad. And it’s made it easier for her to order fast food.

Bogan’s doctors have started cloning the voices of other willing Rhode Island patients and hope to bring the technology to hospitals around the world. OpenAI said it is treading cautiously in expanding the use of Voice Engine, which is not yet publicly available.

A number of smaller AI startups already sell voice-cloning services to entertainment studios or make them more widely available. Most voice-generation vendors say they prohibit impersonation or abuse, but they vary in how they enforce their terms of use.

“We want to make sure that everyone whose voice is used in the service is consenting on an ongoing basis,” said Jeff Harris, OpenAI’s lead on the product. “We want to make sure that it’s not used in political contexts. So we’ve taken an approach of being very limited in who we’re giving the technology to.”

Harris said OpenAI’s next step involves developing a secure “voice authentication” tool so that users can replicate only their own voice. That might be “limiting for a patient like Lexi, who had sudden loss of her speech capabilities,” he said. “So we do think that we’ll need to have high-trust relationships, especially with medical providers, to give a little bit more unfettered access to the technology.”

Bogan has impressed her doctors with her focus on thinking about how the technology could help others with similar or more severe speech impediments.

“Part of what she has done throughout this entire process is think about ways to tweak and change this,” Mirza said. “She’s been a great inspiration for us.”

While for now she must fiddle with her phone to get the voice engine to talk, Bogan imagines an AI voice engine that improves upon older remedies for speech recovery — such as the robotic-sounding electrolarynx or a voice prosthesis — by melding with the human body or translating words in real time.

She’s less sure about what will happen as she grows older and her AI voice continues to sound like she did as a teenager. Maybe the technology could “age” her AI voice, she said.

For now, “even though I don’t have my voice fully back, I have something that helps me find my voice again,” she said.

___

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

Gen AI Has Already Taken the World by Storm. Just Wait Until It Gets a Quantum Boost

13 May 2024 at 05:00
Quantum computer. Conceptual computer artwork of electronic circuitry as part of a quantum computer structure.

When Lawrence Gasman was looking for a PhD topic back in the 1970s, computing labs were already abuzz with smart people proposing clever studies in artificial intelligence. “But the problem was we had nothing to run them on,” he says. “The processors needed just didn’t exist.”

It took half a century for computing power to catch up with AI’s potential. Today, thanks to high-powered chips such as GPUs from California-based Nvidia, generative artificial intelligence, or gen AI, is revolutionizing the way we work, study, and consume entertainment, empowering people to create bespoke articles, images, videos, and music in the blink of an eye. The technology has spawned a bevy of competing consumer apps offering enhanced voice recognition, graphic design, and even coding.


Now AI stands poised to get another boost from a radical new form of computing: quantum. “Quantum could potentially do some really remarkable things with AI,” says Gasman, founder of Inside Quantum Technology.

Rather than relying on traditional computing’s binary “bits”—switches denoted as 1s and 0s—quantum computers use “qubits” that exist in some proportion of both states simultaneously, akin to a coin spinning in midair. The result is exponentially greater computing power, as well as an enhanced ability to mimic natural processes, which rarely conform to a binary form.
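
For readers who want the textbook version, the standard notation (ours, not the article’s) writes a single qubit’s state as

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

where $|\alpha|^2$ and $|\beta|^2$ are the probabilities of measuring 0 or 1. A register of $n$ qubits carries $2^n$ such amplitudes at once, which is the source of the exponential scaling quantum hardware promises.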

Whereas gen AI’s consumer-targeted applications have made its impact more widespread and immediate, quantum is more geared towards industry, meaning several recent milestones have slipped under the radar. However, they stand to potentially turbocharge the AI revolution.

“Generative AI is one of the best things that has happened to quantum computing,” says Raj Hazra, CEO of Colorado-based quantum start-up Quantinuum. “And quantum computing is one of the best things to happen to the advance of generative AI. They are two perfect partners.”

Ultimately, AI relies on the ability to crunch huge stacks of information, which is where quantum excels. In December, IBM unveiled its latest processor, dubbed Heron, which boasts 133 qubits, the firm’s best-ever error reduction, and the ability to be linked together within its first modular quantum computer, System Two. In addition, IBM unveiled another chip, Condor, which has 1,121 superconducting qubits arranged in a honeycomb pattern. They’re advances that mean “now we’re entering what I like to call ‘quantum utility,’ where quantum is getting used as a tool,” Jay Gambetta, vice president of IBM Quantum, tells TIME.

Because qubits are incredibly delicate quantum systems, they don’t always behave in the same way, so quantum computing relies both on increasing the overall number of qubits to “check” calculations and on boosting the fidelity of each individual qubit. Different technologies used to create a quantum effect prioritize different sides of this equation, making direct comparisons tricky and adding to the arcane nature of the technology.

IBM uses superconducting qubits, which require cooling to almost absolute zero to mitigate thermal noise, preserve quantum coherence, and minimize environmental interactions. Quantinuum, by contrast, uses alternative “trapped-ion” technology that holds ions (charged atoms) in a vacuum using electromagnetic fields. This approach doesn’t require cryogenic cooling, though it is thought to be harder to scale. In April, Quantinuum claimed it had achieved 99.9% fidelity for its qubits.
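
A rough worked example shows why those fidelity figures matter so much (our arithmetic, not a vendor benchmark): gate errors compound multiplicatively, so a circuit of $n$ sequential gates succeeds with probability of roughly $f^n$ for per-gate fidelity $f$. Even at $f = 0.999$,

$$0.999^{1000} \approx 0.37,$$

so a 1,000-gate circuit runs without error only about a third of the time. That is why vendors chase each additional “9” of fidelity and layer error correction on top.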

“The trapped ion approach is miles ahead of everybody else,” says Hazra. Gambetta, in turn, argues the superconducting quantum has advantages for scaling, speed of quantum interactions, and leveraging existing semiconductor and microwave technology to make advances quicker.

For impartial observers, the jury is still out since the raft of competing, non-linear metrics render it impossible to tell who’s actually ahead in this race. “They are very different approaches, both are showing promise,” says Scott Likens, global AI and innovation technology lead for the PwC business consultancy. “We still don’t see a clear winner, but it’s exciting.”

Where Gambetta and Hazra agree is that quantum has the potential to mesh with AI to produce truly awesome hybrid results. “I would love to see quantum for AI and AI for quantum,” says Gambetta. “The synergies between them, and the advancement in general in technology, makes a lot of sense.”

Hazra concurs, saying “generative AI needs the power of quantum computing to make fundamental advances.” For Hazra, the Fourth Industrial Revolution will be led by generative AI but underpinned by the power of quantum computing. “The workload of AI and the computing infrastructure of quantum computing are both necessary.”

It’s a view shared across the Pacific in China, where investments in quantum are estimated at around $25 billion, dwarfing the rest of the world. China’s top quantum expert, Prof. Pan Jianwei, has developed a Jiuzhang quantum computer that he claims can perform certain kinds of AI-related calculations some 180 million times faster than the world’s top supercomputer.

In a paper published in the peer-reviewed journal Physical Review Letters last May, Pan’s team reported that Jiuzhang processed, in under a second, more than 2,000 samples for two common AI-related algorithms—Monte Carlo sampling and simulated annealing—a workload they said would take the world’s fastest classical supercomputer five years. In October, Pan unveiled Jiuzhang 3.0, which he claims is 10 quadrillion times faster than a classical supercomputer at solving certain problems.
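
For readers unfamiliar with the algorithms named above, the sketch below shows what classical simulated annealing looks like in Python. It is a toy illustration of the general technique, not the benchmark Pan’s team ran; the cost function, neighbor rule, and parameters are invented for the example.

import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
    # Minimize `cost` by random local search under a slowly decreasing "temperature".
    x, fx, t = x0, cost(x0), t0
    for _ in range(steps):
        y = neighbor(x)  # propose a nearby candidate solution
        fy = cost(y)
        # Accept improvements outright; accept worse moves with Boltzmann
        # probability, which lets the search escape local minima early on.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
        t *= cooling  # geometric cooling schedule
    return x, fx

# Toy usage: find the minimum of a bumpy one-dimensional function.
bumpy = lambda x: x * x + 10 * math.sin(x)
step = lambda x: x + random.uniform(-0.5, 0.5)
best_x, best_cost = simulated_annealing(bumpy, step, x0=5.0)
print(best_x, best_cost)

The claimed quantum advantage lies in generating the random samples such algorithms consume far faster than classical iteration can.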

Jiuzhang utilizes yet a third form of quantum technology—light, or photons—and Pan is widely lauded as China’s king of quantum. A physics professor at the University of Science and Technology of China, Pan launched Micius in 2016, the world’s first quantum communication satellite, which a year later beamed entangled photons back to Earth for the world’s first quantum-secured video call.

Micius is considered quantum’s “Sputnik” moment, prompting American policymakers to funnel hundreds of millions of dollars into quantum information science via the National Quantum Initiative. Bills such as the Innovation and Competition Act of 2021 have provided $1.5 billion for communications research, including quantum technology. The Biden Administration’s proposed 2024 budget includes $25 billion for “emerging technologies,” including AI and quantum. Ultimately, quantum’s awesome computing power threatens to render much of today’s cryptography obsolete, presenting a security migraine for governments and corporations everywhere.

Quantum’s potential to turbocharge AI also applies to the simmering technology competition between the world’s superpowers. In 2021, the U.S. Commerce Department added eight Chinese quantum computing organizations to its Entity List, claiming they “support the military modernization of the People’s Liberation Army” and adopt American technologies to develop “counter-stealth and counter-submarine applications, and the ability to break encryption.”

These restrictions dovetail with a raft of measures targeting China’s AI ambitions, including last year blocking Nvidia from selling AI chips to Chinese firms. The question is whether competition between the world’s top two economies stymies overall progress on AI and quantum—or pushes each nation to accelerate these technologies. The answer could have far-reaching consequences.

“AI can accelerate quantum computing, and quantum computing can accelerate AI,” Google CEO Sundar Pichai told the MIT Technology Review in 2019. “And collectively, I think it’s what we would need to, down the line, solve some of the most intractable problems we face, like climate change.”

Still, both the U.S. and China must overcome the same hurdle: talent. Few universities around the world offer dedicated courses on quantum computing, let alone cover the various specialties within it. “Typically, the most valuable and scarcest resource becomes the basis of your competitive advantage,” says Hazra. “And right now in quantum it’s people with that knowledge.”
