Today — 21 May 2024 — Technology

Scarlett Johansson ‘Angered’ By ChatGPT Voice That Sounded ‘Eerily’ Like Her

21 May 2024 at 01:21

Scarlett Johansson said Monday that she was “shocked, angered and in disbelief” when she heard that OpenAI used a voice “eerily similar” to hers for its new ChatGPT 4.0 chatbot, even after she had declined to provide her voice.

Earlier on Monday, OpenAI announced on X that it would pause the AI voice, known as “Sky,” while it addresses “questions about how we chose the voices in ChatGPT.” The company said in a blog post that the “Sky” voice was “not an imitation” of Johansson’s voice, but that it was recorded by a different professional actor, whose identity the company would not reveal to protect her privacy.


But Johansson said in a statement to NPR on Monday that OpenAI’s Chief Executive Officer Sam Altman had asked her in September to voice the ChatGPT 4.0 system because he thought her “voice would be comforting to people.” She declined, but nine months later, her friends, family and the public noticed how the “Sky” voice resembled hers.

“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” the actress said in her statement. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”

Johansson said that she was “forced to hire legal counsel” because of the situation, and that her counsel wrote two letters to Altman and OpenAI asking them to explain the process for creating the “Sky” voice. After, OpenAI “reluctantly agreed” to pull the voice from the platform, she said.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson said in her statement. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

OpenAI first revealed voice functions for ChatGPT in September. In November, the company announced that the feature would be free for all users on the mobile app. ChatGPT 4.0 isn’t publicly available yet—it will be rolled out in the coming weeks and months, according to the Associated Press.

Trump Media and Technology Group Posts Over $300M Net Loss in First Public Quarter

21 May 2024 at 00:48

(SARASOTA, Fla.) — Trump Media and Technology Group, the owner of former President Donald Trump’s social networking site Truth Social, lost more than $300 million last quarter, according to its first earnings report as a publicly traded company.

For the three-month period that ended March 31, the company posted a loss of $327.6 million, which it said included $311 million in non-cash expenses related to its merger with a company called Digital World Acquisition Corp., which was essentially a pile of cash looking for a target to merge with. It’s an example of what’s called a special purpose acquisition company, or SPAC, which can give young companies quicker and easier routes to getting their shares trading publicly.


A year earlier, Trump Media posted a loss of $210,300.

Trump Media said it collected $770,500 in revenue in the first quarter, largely from its “nascent advertising initiative.” That was down from $1.1 million a year earlier.

“At this early stage in the Company’s development, TMTG remains focused on long-term product development, rather than quarterly revenue,” Trump Media said in its earnings news release.

Earlier this month, the company fired an auditor that federal regulators recently charged with “massive fraud.” The former president’s media company dismissed BF Borgers as its independent public accounting firm on May 3, delaying the filing of the quarterly earnings report, according to securities filings.

Trump Media had previously cycled through at least two other auditors — one that resigned in July 2023, and another that its board terminated in March, just as it was re-hiring BF Borgers.

Shares of Trump Media climbed 36 cents to $48.74 in after-hours trading. The stock, which trades under the ticker symbol “DJT,” began trading on Nasdaq in March and peaked at nearly $80 in late March.

Yesterday — 20 May 2024 — Technology

Taiwan’s Digital Minister Has an Ambitious Plan to Align Tech With Democracy

20 May 2024 at 13:00

Audrey Tang, Taiwan’s 43-year-old minister of digital affairs, has a powerful effect on people. At a panel discussion at Northeastern University in Boston, 20-year-old student Diane Grant is visibly moved, describing Tang’s talk as the best she’s been to in her undergraduate career. Later that day, a German tourist recognizes Tang leaving the Boston Museum of Science and requests a photo, saying she’s “starstruck.” At the Massachusetts Institute of Technology, a trio of world-leading economists bashfully ask Tang to don a baseball cap emblazoned with the name of their research center and pose for a group photo. Political scientist and former gubernatorial candidate Danielle Allen confesses to Tang that, although others often tell her that she is a source of inspiration to them, she rarely feels inspired by others. But she has found an exception: Tang inspires her.


Few visiting dignitaries elicit such reactions. But to some, Tang symbolizes hope. 

In an era when digital technologies—social media, artificial intelligence, blockchains—are increasingly seen as a threat to democracy, Taiwan seems to offer an alternative path. In Taiwan, civil society groups and the government work together to harness technology, giving people more say in how their country is run, and tackling problems like tracing the spread of the COVID-19 pandemic and combatting electoral disinformation campaigns.

Tang, the world’s first openly transgender minister, played a pivotal role in all of this, first as an activist hacker and then from within the government. Now, she is stepping back from her ministerial duties to embark upon a world tour to promote the ideas that have flourished in Taiwan. These are ideas captured in Plurality, a book Tang has co-authored with E. Glen Weyl, a 39-year-old American economist at Microsoft, and more than 100 online collaborators.

Tang aims to be a global ambassador, demonstrating how technology and democracy can coexist harmoniously. “In Taiwan, for the past decade, this is the dominant worldview,” she says. “Just to see how that narrative—how that overarching, intertwined feeling of tech and democracy—can grow in non-Taiwan places. I’m most looking forward to that.”

The tour’s objective is not only to disseminate the book’s ideas but also to expose people to Tang herself. “It would change the world if every major world leader gets to spend 30 minutes with Audrey,” says Weyl, the primary orchestrator of the plan. “It’s about the experience of being with her. It changed my life.”


Tang’s unique charisma was shaped by a rare set of circumstances. At the age of 4, Tang—who was born with a serious heart condition—was given just a 50% chance of surviving long enough to undergo life-saving surgery. If she ever became upset, or angry, or excited, she would lose consciousness and wake up in an intensive care unit. She soon learned to keep her composure, and though an operation corrected her condition when she was 12, her equanimity remained.

“If you’ve been living with that condition for 12 years of your life, that’s your core personality,” she says. “I convinced myself to go on a roller coaster once or twice, rationally knowing I would not die. But it wasn’t very pleasant.”

Tang grew up alongside democracy and digital technologies in Taiwan. Aged 8, she taught herself to program by sketching a keyboard on a piece of paper, feigning typing, and then writing the output on another piece of paper. (After a few weeks of this, her parents relented and bought her a computer). By 14, Tang had left formal education to pursue programming full-time; she spent the next two decades contributing to open-source projects both in Taiwan and abroad.

“The idea of personal computing, to people in Taiwan, is inherently democratic,” Tang says. Computers and internet access meant the ability to publish books without state sponsorship, and communicate without state surveillance, a stark contrast to the martial law era that only ended in 1987, six years after Tang was born. 

All of this fueled the rise of the g0v (gov zero) movement in 2012, led by civic hackers who wanted to increase transparency and participation in public affairs. The movement started by creating superior versions of government websites, which they hosted on .g0v.tw domains instead of the official .gov.tw, often attracting more traffic than their governmental counterparts. The g0v movement has since launched more initiatives that seek to use technology to empower Taiwanese citizens, such as vTaiwan, a platform that facilitates public discussion and collaborative policymaking between citizens, experts, and government officials.
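The naming trick behind g0v is literal: a mirror site’s address swaps the “o” in an official .gov.tw domain for a zero. A minimal sketch of the substitution (the budget.gov.tw address below is just a hypothetical example):

```python
def g0v_mirror(domain: str) -> str:
    """Map an official .gov.tw address to its civic-hacker .g0v.tw mirror."""
    return domain.replace(".gov.tw", ".g0v.tw")

# A hypothetical agency site and its shadow version:
print(g0v_mirror("budget.gov.tw"))  # budget.g0v.tw
```

The one-character change meant citizens could reach the rebuilt version of any participating government site simply by retyping its address with a zero.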

In 2014, the movement’s influence became clear when protestors, many affiliated with g0v, occupied Taiwan’s legislative chamber to oppose a trade deal with China. “Democracy needs me,” Tang wrote to her colleagues at California-based software company Socialtext, before leaving to support the protesters for the duration of their 24-day occupation by helping them to peacefully broadcast their message.

The protests marked a turning point in Taiwan. The government made efforts to engage with young activists and in 2016, Tang, then 35, was appointed as digital minister without portfolio. In 2022, Tang was named Taiwan’s first minister for digital affairs, and in 2023 she was made chairperson of the board of Taiwan’s National Institute of Cyber Security.

In many regards, Taiwan leads the world in digital democracy, thanks to initiatives led by Tang and others. Taiwan’s agile response to COVID-19, including a widely praised contact-tracing system, exemplifies this success. (At one point, the island nation went 200 days without a locally transmitted coronavirus case.) Such achievements, Plurality argues, are partly responsible for Taiwan’s remarkable economic, social, and political success over the last decade.

However, it’s important not to overstate the impact of Taiwan’s digital democracy initiatives, cautions Sara Newland, an assistant professor at Smith College, Massachusetts, who researches Chinese and Taiwanese politics. While Taiwan is a well-governed country and it’s plausible that the various examples of digital democracy contribute to this success, it’s also possible that these initiatives came about because Taiwan is well-governed, she says. The vision outlined in Plurality borders on utopian, and Taiwan’s case may not provide enough evidence to prove its feasibility.

Still, while Plurality might draw heavily on Taiwan’s experience, its scope is global. The book’s core lays out the fundamental rights that societies must promote, how digital technologies can aid in promoting them, and the collaboration-enhancing technologies that could strengthen democracy. For each technology, examples are drawn from outside Taiwan. For example, “immersive shared reality technologies,” futuristic cousins of virtual reality headsets like Apple’s Vision Pro and Meta’s Quest, could foster empathy at a distance and allow people to step into another’s shoes. The book cites Becoming Homeless, a seven-minute virtual reality experience designed by researchers at Stanford to help the user understand how it feels to lose your home, as a primitive example of an immersive shared reality technology.

Plurality aims to offer a roadmap for a future in which technology and democracy not only co-exist but thrive together; in writing the book, Tang and Weyl put this collaborative ethos into practice. The book, which is free to download, began life as a blog post authored by Weyl; although Weyl and Tang conceived of the project and Weyl was the primary author, anyone could contribute to the book’s development. More than 100 people contributed—some copy-edited, some designed graphics, some wrote entire chapters, says Tang. While juggling ministerial duties, Tang spent hours each week working on the book, contributing ideas and building the website. “At the end of the day,” she quips, “I was still a coder for some reason.”


The fledgling plurality movement faces a daunting challenge: countering the threat from the two dominant digital technologies of our time—artificial intelligence and blockchains—and their effects on society. Plurality argues that both of these are undermining democracy in different, but equally pernicious ways. AI systems facilitate top-down control, empowering authoritarian regimes and unresponsive technocratic governments in ostensibly democratic countries. Meanwhile, blockchain-based technologies atomize societies and accelerate financial capitalism, eroding democracy from below. As Peter Thiel, billionaire entrepreneur and investor, put it in 2018: “crypto is libertarian and AI is communist.”

Weyl sees echoes of the 1930s, when fascism and communism battled for ideological supremacy. “But there was another option,” he says—liberal democracy. Now, Weyl and Tang are striving to articulate a new alternative to AI-powered authoritarianism and blockchain-fueled libertarianism: “plurality.” They hope this idea—of a symbiotic relationship between democracy and collaborative technology—can profoundly influence the century ahead. 

Plurality concludes with a call to action, setting bold targets for the movement it hopes to inspire. By 2030, the authors want the idea of plurality to be as widely recognized in the tech world as AI and blockchain, and as prominent in political discourse as environmentalism. To get there, the pair aim to cultivate a core group of 1,000 deeply engaged advocates, distribute 1 million copies of the book, and build sympathy among 1 billion people. “Frankly, I’m starting to feel like these [goals] maybe are actually under ambitious,” Weyl says.

This isn’t his first attempt at movement-building. Weyl’s first book, Radical Markets, generated huge buzz when it was published in 2018, prompting him to channel that enthusiasm into launching the RadicalxChange Foundation, a nonprofit that seeks to advance the book’s ideas. (Tang and Weyl are both members of the Foundation’s board, along with Vitalik Buterin, the “prince of cryptocurrency” who introduced the pair in 2018.) However, while the Foundation has had some success, it fell far short of the targets Weyl has set for Plurality’s impact on the world. And history is littered with extinct political movements, from Occupy Wall Street to the Arab Spring, that failed to meet their goals. If Weyl thinks his targets are under ambitious, many might think them delusional. 

Weyl is unperturbed. Last time, he didn’t have a plan. With Plurality, he says, he’s taking a more ambitious approach—one that hinges on Tang’s star power. Weyl has enlisted Oscar-winning director Cynthia Wade to shoot a short documentary about Tang’s life and Taiwan’s democratic evolution, with the goal of premiering it at film festivals later this year.

As Hollywood shut down during last year’s strikes, working through footage of Tang has been soothing, says Wade. “When you’re editing a film, you’re living with somebody. So [Tang has] been living in our household for the last quarter,” she says. “There’s a way in which she encourages you to stop and reflect that feels very different, and maybe even more participatory.”

A feature-length biopic is also in the early stages, with Weyl floating the idea of casting trans actor Ian Alexander in the lead role. Tang, characteristically deadpan, offers an alternative: “Sora, maybe,” referring to an unreleased AI system developed by OpenAI that generates videos from text prompts.

This playful exchange captures the duo’s dynamic. Over the course of four hours at Weyl’s house in Arlington, Mass., Weyl earnestly expounds on the book’s ideas and aspirations, while Tang interjects with droll asides. The evangelizing, the ideological battle of the 21st century, the numerical targets in the millions and billions—these all come from Weyl, they say. Tang would never think in those terms, Weyl says, “without me constantly badgering her.”

Tang nods in agreement, seemingly unfazed by the weight of his expectations. Despite embarking on a journey that could—if Weyl’s goals are met—change the course of history, she remains remarkably laid-back. When a friend asked her last year why she was devoting so much time to the book, she replied simply: “Just to make Glen feel better and sleep better.” 

Such serenity is not the most natural quality in a representative for what Weyl hopes might be a century-defining ideology, but it is, perhaps, the reason for the strong reactions Tang provokes. In fact, it may be Tang’s poise, as much as Weyl’s zeal, that gives the plurality movement some hope of achieving its lofty goals.

Before yesterday — Technology

A New Lawsuit Accuses Spotify of Cheating Songwriters Out of Royalties


Spotify Technology SA used a legalistic word change to justify slicing royalties to musicians and publishers, reducing the revenue on which royalties are based by almost 50%, according to a lawsuit filed by the group that collects their payments.

The change came in March when Spotify added the word “bundled” to its description of its $10.99-a-month music streaming service, the Mechanical Licensing Collective said in its complaint. Nothing else “about the Premium service has actually changed,” according to the suit filed Thursday in federal court in Manhattan.


The collective is legally barred from disclosing how much Spotify’s royalty payments have declined since March but cited a Billboard story that estimated the loss would amount to about $150 million next year.

Spotify said it looks forward to “swift resolution” of the lawsuit, which it said concerns terms that publishers and streaming services “agreed to and celebrated years ago.”

“Bundles were a critical component of that settlement, and multiple DSPs include bundles as part of their mix of subscription offerings,” a Spotify spokesperson said in a statement. “Spotify paid a record amount to publishers and societies in 2023 and is on track to pay out an even larger amount in 2024.”

The fight over bundling between the streaming service and publishers has spilled into a dispute over other issues.

The National Music Publishers’ Association on Wednesday sent a cease-and-desist letter to Spotify over products it claims are infringing on songwriters’ copyrights. The NMPA alleges that music videos, lyrics and podcasts on the platform are all using copyrighted music without the proper permissions.

“Before Spotify’s ‘bundling’ betrayal, we may have been able to work together to fix this problem, but they have chosen the hard road by coming after songwriters once again,” David Israelite, chief executive officer at the NMPA, said in a statement.

In response, a Spotify spokesperson called the letter a “press stunt filled with false and misleading claims.” 

Music and audiobook streaming companies like Spotify pay musicians and music publishers under a complex system set out by the Music Modernization Act of 2018. Under the system, streaming services pay less per stream—in other words, less to creators and publishers—when their services are classified as bundles.
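The royalty mechanics can be sketched with purely illustrative numbers (the 15% headline rate and the 50% music share below are assumptions for the example, not actual Copyright Royalty Board figures): under a bundle classification, only part of the subscription price is attributed to music, shrinking the base on which royalties are computed.

```python
def royalty_base(price: float, headline_rate: float, music_share: float = 1.0) -> float:
    """Royalty pool per subscriber: revenue attributed to music times the statutory rate."""
    return price * music_share * headline_rate

standalone = royalty_base(10.99, 0.15)                # whole price counts toward royalties
bundled = royalty_base(10.99, 0.15, music_share=0.5)  # half attributed to audiobooks instead
print(f"standalone ${standalone:.2f} vs bundled ${bundled:.2f} per subscriber")
```

With these assumed numbers the pool is roughly halved, which is the shape of the dispute: reclassification alone, with no price change, reduces what flows to songwriters.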

Spotify’s Premium service, which was not classified as a bundle before March 1, includes unlimited music downloads and 15 hours of audiobooks. It added the audiobook offering in November in the U.S. without changing the $10.99 price.

The licensing collective is asking the court to order Spotify to stop classifying Premium as a bundled service and to pay it for lost revenue.

Israelite praised the Mechanical Licensing Collective for “not letting Spotify get away with its latest trick to underpay creators.”

Reddit Partners With OpenAI to Bring Content to ChatGPT and AI Tools to Reddit


Reddit Inc. forged a partnership with OpenAI that will bring its content to the chatbot ChatGPT and other products, while also helping the social media company add new artificial intelligence features to its forums.

Shares of Reddit, which had their initial public offering in March, jumped as much as 15% in late trading following the announcement.

The agreement “will enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics,” the companies said Thursday in a joint statement. The deal allows OpenAI to display Reddit’s content and train AI systems on its partner’s data.


Reddit will also offer its users new AI-based tools built on models created by OpenAI, which in turn will place ads on Reddit. Financial terms of the deal weren’t disclosed.

Reddit content has long been a popular source of training data for making AI models—including those of OpenAI. Last week, Reddit released new policies governing the use of its data, part of an effort to increase revenue through licensing agreements with artificial intelligence developers and other companies.

“Our data is extremely valuable,” Chief Executive Officer Steve Huffman said at the Bloomberg Technology Summit earlier this month. “We’re seeing a ton of interest in it.”

Finding new moneymaking opportunities was part of Reddit’s pitch in the lead-up to its IPO. The company also signed an accord in January with Alphabet Inc.’s Google worth $60 million to help train large language models, the technology underpinning generative AI.

Huffman previously declined to discuss the specifics of the Google deal but said typical terms can govern how long a Reddit summary can show up in a Google search or whether a licensee has to display Reddit branding in AI-generated results. The San Francisco-based social network has signed licensing deals worth $203 million in total, with terms ranging from two to three years, and has been in talks to strike additional licensing agreements. 

OpenAI, for its part, is increasingly forging partnerships with media companies to help train its AI systems and show more real-time content within its chatbot. The ChatGPT maker also penned deals with Dotdash Meredith earlier this month and the Financial Times in April.

Read More: OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive

Backed by Microsoft Corp., the startup has emerged as a driving force in the development of AI. Sam Altman, CEO of OpenAI, has a long history with Reddit. He was one of the company’s largest shareholders at the time of its IPO earlier this year and briefly served as Reddit’s interim CEO in 2014.

The companies noted in the statement that their partnership was led by OpenAI Chief Operating Officer Brad Lightcap and was approved by its independent directors.

The shares of Reddit, which had declined 5.5% to $56.38 in regular New York trading Thursday, soared as high as $64.75 after the partnership was announced. The stock has gained 66% since its IPO.

How to Hit Pause on AI Before It’s Too Late

16 May 2024 at 15:22

Only 16 months have passed, but the release of ChatGPT back in November 2022 already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are pouring into AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses these large language models. Our world, and in particular the world of AI, has decidedly changed.

But the real prize of human-level AI—or artificial general intelligence (AGI)—has yet to be achieved. Such a breakthrough would mean an AI that can carry out most economically productive work, engage with others, do science, build and maintain social networks, conduct politics, and carry out modern warfare. The main constraint for all these tasks today is cognition. Removing this constraint would be world-changing. Yet many across the globe’s leading AI labs believe this technology could be a reality before the end of this decade.


That could be an enormous boon for humanity. But AI could also be extremely dangerous, especially if we cannot control it. Uncontrolled AI could hack its way into online systems that power so much of the world, and use them to achieve its goals. It could gain access to our social media accounts and create tailor-made manipulations for large numbers of people. Even worse, military personnel in charge of nuclear weapons could be manipulated by an AI to share their credentials, posing a huge threat to humanity.

It would be a constructive step to make it as hard as possible for any of that to happen by strengthening the world’s defenses against adverse online actors. But when an AI can persuade humans, a skill at which it already outperforms us, there is no known defense.

For these reasons, many AI safety researchers at AI labs such as OpenAI, Google DeepMind and Anthropic, and at safety-minded nonprofits, have given up on trying to limit the actions future AI can take. They are instead focusing on creating “aligned” or inherently safe AI. Aligned AI might get powerful enough to be able to exterminate humanity, but it should not want to do this.

There are big question marks about aligned AI. First, the technical part of alignment is an unsolved scientific problem. Recently, some of the best researchers working on aligning superhuman AI left OpenAI in dissatisfaction, a move that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned to. If it were an academic value system, such as utilitarianism, we might quickly find out that most humans’ values do not actually match these aloof ideas, after which the unstoppable superintelligence could go on to act against most people’s will forever. If the alignment were to people’s actual intentions, we would need some way to aggregate these very different intentions. While idealistic solutions such as a U.N. council or AI-powered decision aggregation algorithms are in the realm of possibility, there is a worry that superintelligence’s absolute power would be concentrated in the hands of very few politicians or CEOs. This would of course be unacceptable for—and a direct danger to—all other human beings.

Read More: The Only Way to Deal With the Threat From AI? Shut It Down

Dismantling the time bomb

If we cannot find a way to at the very least keep humanity safe from extinction, and preferably also from an alignment dystopia, AI that could become uncontrollable must not be created in the first place. This solution, postponing human-level or superintelligent AI for as long as safety concerns remain unsolved, has the downside that AI’s grand promises—ranging from curing disease to creating massive economic growth—will need to wait.

Pausing AI might seem like a radical idea to some, but it will be necessary if AI continues to improve without us reaching a satisfactory alignment plan. When AI’s capabilities reach near-takeover levels, the only realistic option is that labs are firmly required by governments to pause development. Doing otherwise would be suicidal.

And pausing AI may not be as difficult as some make it out to be. At the moment, only a relatively small number of large companies have the means to perform leading training runs, meaning enforcement of a pause is mostly limited by political will, at least in the short run. In the longer term, however, hardware and algorithmic improvements mean a pause may become harder to enforce. Enforcement between countries would be required, for example with a treaty, as would enforcement within countries, with steps like stringent hardware controls.

In the meantime, scientists need to better understand the risks. Although there is widely-shared academic concern, no consensus exists yet. Scientists should formalize their points of agreement, and show where and why their views deviate, in the new International Scientific Report on Advanced AI Safety, which should develop into an “Intergovernmental Panel on Climate Change for AI risks.” Leading scientific journals should open up further to existential risk research, even if it seems speculative. The future does not provide data points, but looking ahead is as important for AI as it is for climate change.

For their part, governments have an enormous part to play in how AI unfolds. This starts with officially acknowledging AI’s existential risk, as has already been done by the U.S., U.K., and E.U., and setting up AI safety institutes. Governments should also draft plans for what to do in the most important, thinkable scenarios, as well as how to deal with AGI’s many non-existential issues such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public evaluation.

It is great progress that major AI countries are constructively discussing common policy at biannual AI safety summits, including one in Seoul from May 21 to 22. This process, however, needs to be guarded and expanded. Working on a shared ground truth on AI’s existential risks and voicing shared concern with all 28 invited nations would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking AI hardware, expanding liability for AI labs, and excluding copyrighted content from training. An international AI agency needs to be set up to guard execution.

It is fundamentally difficult to predict scientific progress. Still, superhuman AI will likely impact our civilization more than anything else this century. Simply waiting for the time bomb to explode is not a feasible strategy. Let us use the time we have as wisely as possible.

Billionaire Frank McCourt Wants to Buy TikTok. Here’s Why He Thinks He Could Save It

16 May 2024 at 15:21

Billionaire Frank McCourt has long argued that the internet needs to be radically changed on an infrastructural level in order to reduce its toxicity, misinformation, and extractive nature. Now, he’s hoping to slide into a power vacuum in pursuit of that goal. McCourt is putting together a bid to buy TikTok from Chinese technology company ByteDance, which faces a U.S. ban at the end of this year unless it sells the wildly popular app.


McCourt’s central thesis lies in the belief that users should have more control over their personal data and online identities. His aim is to assemble a coalition to buy TikTok, so that its most valuable user data would be kept not by a single company, but on a decentralized protocol. McCourt has developed this idea in conjunction with technologists, academics, and policymakers via his nonprofit Project Liberty. His plan has received support from notable luminaries including the author Jonathan Haidt (The Anxious Generation) and Tim Berners-Lee, the inventor of the world wide web.

McCourt did not say how much he thinks TikTok is worth. Other magnates who have expressed interest in bidding for TikTok include Kevin O’Leary and Steve Mnuchin.

But there is no indication that ByteDance plans to sell TikTok; the company is challenging the forced sale in the U.S. court system on free-speech grounds. And McCourt faces many obstacles in folding TikTok into his technological vision while ensuring the app’s profitability—especially because he says he’s not interested in buying the core algorithm that has hypercharged TikTok’s growth.

Read More: TikTok Vows to Fight Its Ban. Here’s How the Battle May Play Out

In an interview with TIME, McCourt explained his vision for the app and the larger internet ecosystem. Here are excerpts from the conversation.

TIME: A couple years ago, you stepped down as CEO from McCourt Global in order to devote most of your time to Project Liberty, whose goal is fixing the internet. How pivotal could buying TikTok be towards that mission?

Frank McCourt: I think it’s a fantastic opportunity to really accelerate things and catalyze an alternative version of the internet where individuals own and control their identity and data. The internet does not have to operate the way it does right now. It’s important to remember that the other big platforms in the U.S. operate with the same architecture as TikTok: of scraping people’s data and aggregating it and then exploiting it. 

When I say data, it sounds abstract. But it’s our personhood; it’s everything about us. And I think it’s well past time that we correct that fundamental flaw in the design of the internet and return agency to individuals.

Let’s say I’m a small business owner who uses TikTok to post content and sell goods. How would my experience improve under your new design?

The user experience wouldn’t change much. We want this to be a seamless thing. Part of our thinking is to keep TikTok U.S. alive, because China has said they’re not sharing the [core] algorithm under any circumstances. And without a viable bidder to move forward without the algorithm, they may shut it down. But we’re not looking for the algorithm.

Many people contend that the core algorithm is essential to TikTok’s value. Do you worry that TikTok wouldn’t be TikTok without it?

What makes TikTok, TikTok, to me, is the user base, the content created by the user base, the brand, and all the tech short of the algorithm. Of course, TikTok isn’t worth as much without the algorithm. I get that. That’s pretty plain. But we’re talking about a different design, which requires people to move on from the mindset and the paradigm we’re in now. 

It will be a version where everyone is deciding what pieces or portions of their data to share with whom. So you still have a user experience every bit as good, but with much better architecture overall. And not only will individuals have agency, but let’s have a broader group of people participating in who shares in the economic value of the platform itself. 

Read More: Why The Billionaire Frank McCourt is Stepping Down As CEO Of His Company To Focus on Rebuilding Social Media

How would that value sharing work? Are you talking about some sort of directed shares program, or a crypto token?

It’s a bit early to have that conversation. That’s why we’ve retained Kirkland & Ellis to advise us, along with Guggenheim Securities. They’re grappling with and thinking through those very issues right now.

So how would users control their data?

Imagine an internet where individuals set the terms and conditions of their data with use cases and applications. And you’ll still want to share your data, because you’ll want to get the benefits of the internet. But you’re sharing it on a trusted basis. The mere act of giving permission to use it is very different than having it be surveilled and scraped.

The blockchain-based decentralized infrastructure you plan to use for TikTok, DSNP, is already running, and the social media app MeWe is currently migrating its tech and data onto it. What have you learned from MeWe’s transition?

That it works. Like any other engineering challenge, you have to go through all the baby steps to get it right. But the migration started in earnest in Q4, and over 800,000 users have migrated. To me, that’s important that we’re not bringing forward a concept: We’re bringing forward a proven tech solution.

In order to finance this bid, you will seek money from foundations, endowments, pension funds, and philanthropies. Are you confident that if you get these big investors on board, you'll be able to return value to them?

I am. This opens up and unlocks enormous value for investors and users. At the same time, it has a tremendous impact on society. I mentioned the pension funds and endowments and foundations as a category of investor that has a longer-term horizon, and looks at making investments not strictly on the basis of financial ROI. It's important they be involved, because this is a societal project to fundamentally change how the internet works.

We want a lot of people involved in this in different ways, shapes and forms, which is another distinguishing characteristic. We don’t need Saudi money to replace Chinese money. We’re trying to bring forward a solution to address the problem at its root cause, not at the symptomatic level.

You committed $150 million to Project Liberty in 2022. Are you prepared to spend in that ballpark again for TikTok?

Update that number: I’ve committed half a billion dollars to Project Liberty. That should be an indication of my level of seriousness about all this, and my level of seriousness about the bid for TikTok U.S.
