Tesla is Recalling 125,000 Vehicles For a Seat Belt Signal Issue. Here’s What We Know

31 May 2024 at 18:12

Tesla announced a recall of more than 125,000 cars that were operating with a seat belt system defect and potentially putting drivers at an increased risk of injury when on the road.

Under National Highway Traffic Safety Administration (NHTSA) federal guidelines, vehicles are required to have audible and visual seat belt reminder signals to notify drivers that their seatbelt isn’t properly fastened. The Tesla vehicles facing the recall had signals going off at improper times, the NHTSA said in a report released Thursday.

This recall adds to the roughly 2.5 million vehicles Tesla has already recalled this year.

In January, Tesla recalled certain 2023 Model Y, S, and X vehicles due to a software issue that prevented the rearview camera from displaying images while the vehicles were in reverse. The following month, Tesla recalled almost 2.2 million vehicles because of an “incorrect font size” on the instrument panel’s brake, park, and antilock brake system warning lights. Then, in April, Tesla recalled all model year 2024 Cybertrucks made between November 13, 2023, and April 4, 2024, due to faulty accelerator pedals.

Tesla also had the highest accident rate of any car brand in 2023, according to a LendingTree analysis last year, with 23.54 accidents per 1,000 drivers.

Here’s what we know about the most recent Tesla recall.

What exactly is defective in the models?

On certain vehicles running specific seat belt software, the required audible and visual seat belt reminders may not activate even when the driver’s seat belt is not fastened, because the reminder logic depends on a faulty reading of driver’s seat occupancy.

When did Tesla first find out about the discrepancy?

On April 18, Tesla identified the discrepancy with seat belt reminders as part of an internal compliance audit of the 2024 Tesla Model X, and then investigated the condition through the rest of April and May.

After Tesla completed its investigation in late May, the company voluntarily recalled the affected vehicles.

Which models are impacted by the recall?

The recall covers 2012-2024 Model S, 2015-2024 Model X, 2017-2023 Model 3, and 2020-2023 Model Y vehicles equipped with the defective seat belt reminder software.

Have there been any collisions, injuries or fatalities as a result of the issue?

In the NHTSA report, Tesla stated that as of May 28 it had identified 104 warranty claims that may be related to the condition, but said it is not aware of any collisions, injuries or fatalities related to it.

How will Tesla remedy the problem for customers?

Tesla plans to push a free over-the-air (OTA) software update to affected vehicles in June of this year. The update will fix the issue by relying on the driver’s seat belt buckle and the ignition status to trigger seat belt reminders.
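
For readers who want to picture the logic change, the sketch below contrasts the fault as described in the recall notice with the described remedy. It is an illustrative toy in Python, not Tesla’s actual firmware, and every name in it is hypothetical.

```python
# Illustrative sketch only -- not Tesla's firmware. It contrasts the faulty
# behavior described in the recall (reminders keyed to a seat-occupancy signal)
# with the remedied logic (reminders keyed to buckle state and ignition status).
from dataclasses import dataclass


@dataclass
class CabinState:
    ignition_on: bool
    driver_belt_buckled: bool
    seat_occupancy_signal: bool  # the input the recall says was unreliable


def reminder_before_fix(state: CabinState) -> bool:
    """Faulty approach: trust the occupancy signal; a bad reading suppresses the alert."""
    return state.ignition_on and state.seat_occupancy_signal and not state.driver_belt_buckled


def reminder_after_fix(state: CabinState) -> bool:
    """Remedied approach: rely only on buckle state and ignition status."""
    return state.ignition_on and not state.driver_belt_buckled


# With a faulty occupancy reading, only the remedied logic still warns the driver.
state = CabinState(ignition_on=True, driver_belt_buckled=False, seat_occupancy_signal=False)
print(reminder_before_fix(state))  # False -- no warning despite an unbuckled driver
print(reminder_after_fix(state))   # True  -- warning fires as required
```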

OpenAI Says Russia, China, and Israel Are Using Its Tools for Foreign Influence Campaigns

30 May 2024 at 20:29

OpenAI identified and removed five covert influence operations based in Russia, China, Iran and Israel that were using its artificial intelligence tools to manipulate public opinion, the company said on Thursday.

In a new report, OpenAI detailed how these groups, some of which are linked to known propaganda campaigns, used the company’s tools for a variety of “deceptive activities.” These included generating social media comments, articles, and images in multiple languages, creating names and biographies for fake accounts, debugging code, and translating and proofreading texts. These networks focused on a range of issues, including defending the war in Gaza and Russia’s invasion of Ukraine, criticizing Chinese dissidents, and commenting on politics in India, Europe, and the U.S. in their attempts to sway public opinion. While these influence operations targeted a wide range of online platforms, including X (formerly known as Twitter), Telegram, Facebook, Medium, Blogspot, and other sites, none managed to engage a substantial audience, according to OpenAI analysts.

The report, the first of its kind released by the company, comes amid global concerns about the potential impact AI tools could have on the more than 64 elections happening around the world this year, including the U.S. presidential election in November. In one example cited in the report, a post by a Russian group on Telegram read, “I’m sick of and tired of these brain damaged fools playing games while Americans suffer. Washington needs to get its priorities straight or they’ll feel the full force of Texas!”

The examples listed by OpenAI analysts reveal how foreign actors largely appear to be using AI tools for the same types of online influence operations they have been carrying out for a decade. They focus on using fake accounts, comments, and articles to shape public opinion and manipulate political outcomes. “These trends reveal a threat landscape marked by evolution, not revolution,” Ben Nimmo, the principal investigator on OpenAI’s Intelligence and Investigations team, wrote in the report. “Threat actors are using our platform to improve their content and work more efficiently.”

Read More: Hackers Could Use ChatGPT to Target 2024 Elections

OpenAI, which makes ChatGPT, says it now has more than 100 million weekly active users. Its tools make it easier and faster to produce a large volume of content, and can be used to mask language errors and generate fake engagement. 

One of the Russian influence campaigns shut down by OpenAI, dubbed “Bad Grammar” by the company, used its AI models to debug code to run a Telegram bot that created short political comments in English and Russian. The operation targeted Ukraine, Moldova, the U.S. and Baltic States, the company says. Another Russian operation known as “Doppelganger,” which the U.S. Treasury Department has linked to the Kremlin, used OpenAI’s models to generate headlines and convert news articles to Facebook posts, and create comments in English, French, German, Italian, and Polish. A known Chinese network, Spamouflage, also used OpenAI’s tools to research social media activity and generate text in Chinese, English, Japanese, and Korean that was posted across multiple platforms including X, Medium, and Blogspot. 

OpenAI also detailed how a Tel Aviv-based Israeli political marketing firm called Stoic used its tools to generate pro-Israel content about the war in Gaza. The campaign, nicknamed “Zero Zeno,” targeted audiences in the U.S., Canada, and Israel. On Wednesday, Meta, Facebook and Instagram’s parent company, said it had removed 510 Facebook accounts and 32 Instagram accounts tied to the same firm. The cluster of fake accounts, which included accounts posing as African Americans and students in the U.S. and Canada, often replied to prominent figures or media organizations in posts praising Israel, criticizing anti-semitism on campuses, and denouncing “radical Islam.” It seems to have failed to reach any significant engagement, according to OpenAI. “Look, it’s not cool how these extremist ideas are, like, messing with our country’s vibe,” reads one post in the report.

OpenAI says it is using its own AI-powered tools to more efficiently investigate and disrupt these foreign influence operations. “The investigations described in the accompanying report took days, rather than weeks or months, thanks to our tooling,” the company said on Thursday. They also noted that despite the rapid evolution of AI tools, human error remains a factor. “AI can change the toolkit that human operators use, but it does not change the operators themselves,” OpenAI said. “While it is important to be aware of the changing tools that threat actors use, we should not lose sight of the human limitations that can affect their operations and decision making.”


How Anthropic Designed Itself to Avoid OpenAI’s Mistakes

Anthropic CEO Dario Amodei testifies during a hearing before the Privacy, Technology, and the Law Subcommittee of Senate Judiciary Committee at Dirksen Senate Office Building on Capitol Hill, in Washington, D.C., on July 25, 2023.

Last Thanksgiving, Brian Israel found himself being asked the same question again and again.

The general counsel at the AI lab Anthropic had been watching dumbfounded along with the rest of the tech world as, just two miles south of Anthropic’s headquarters in San Francisco, its main competitor OpenAI seemed to be imploding.

OpenAI’s board had fired CEO Sam Altman, saying he had lost their confidence, in a move that seemed likely to tank the startup’s $80 billion-plus valuation. The firing was only possible thanks to OpenAI’s strange corporate structure, in which its directors have no fiduciary duty to increase profits for shareholders—a structure Altman himself had helped design so that OpenAI could build powerful AI insulated from perverse market incentives. To many, it appeared that plan had badly backfired. Five days later, after a pressure campaign from OpenAI’s main investor Microsoft, venture capitalists, and OpenAI’s own staff—who held valuable equity in the company—Altman was reinstated as CEO, and two of the three directors who fired him resigned. “AI belongs to the capitalists now,” the New York Times concluded, as OpenAI began to build a new board that seemed more befitting of a high-growth company than a research lab concerned about the dangers of powerful AI.

And so Israel found himself being frantically asked by Anthropic’s investors and clients that weekend: Could the same thing happen at Anthropic?

Anthropic, which like OpenAI is a top AI lab, has an unorthodox corporate structure too. The company similarly structured itself in order to ensure it could develop AI without needing to cut corners in pursuit of profits. But that’s pretty much where the likeness ends. To everybody with questions on Thanksgiving, Israel’s answer was the same: what happened at OpenAI can’t happen to us.

Read More: Inside Anthropic, the AI Company Betting That Safety Can Be a Winning Strategy

Prior to the OpenAI disaster, questions about the corporate governance of AI seemed obscure. But it’s now clear that the structure of AI companies has vital implications for who controls what could be the 21st century’s most powerful technology. As AI grows more powerful, the stakes are only getting higher. Earlier in May, two OpenAI leaders on the safety side of the company quit. In a leaving statement one of them, Jan Leike, said that safety had “taken a backseat to shiny products,” and said that OpenAI needed a “cultural change” if it were going to develop advanced AI safely. On Tuesday, Leike announced he had moved to Anthropic. (Altman acknowledged Leike’s criticisms, saying “we have a lot more to do; we are committed to doing it.”)

Anthropic prides itself on being structured differently from OpenAI, but a question mark hangs over its future. Anthropic has raised $7 billion in the last year, mostly from Amazon and Google—big tech companies that, like Microsoft and Meta, are racing to secure dominance over the world of AI. At some point it will need to raise even more. If Anthropic’s structure isn’t strong enough to withstand pressure from those corporate juggernauts, it may struggle to prevent its AI from becoming dangerous, or might allow its technology to fall into Big Tech’s hands. On the other hand, if Anthropic’s governance structure turns out to be more robust than OpenAI’s, the company may be able to chart a new course—one where AI can be developed safely, protected from the worst pressures of the free market, and for the benefit of society at large.

Anthropic’s seven co-founders all previously worked at OpenAI. In his former role as OpenAI’s vice president for research, Anthropic CEO Dario Amodei even wrote the majority of OpenAI’s charter, the document that commits the lab and its workers to pursue the safe development of powerful AI. To be sure, Anthropic’s co-founders left OpenAI in 2021, well before the problems with its structure burst into the open with Altman’s firing. But their experience made them want to do things differently. Watching the meltdown that happened last Thanksgiving made Amodei feel that Anthropic’s governance structure “was the right approach,” he tells TIME. “The way we’ve done things, with all these checks and balances, puts us in a position where it’s much harder for something like that to happen.”

From left: Paul Christiano, Dario Amodei, and Geoffrey Irving write equations on a whiteboard at OpenAI, the artificial intelligence lab founded by Elon Musk, in San Francisco, July 10, 2017.

Still, the high stakes have led many to question why novel and largely untested corporate governance structures are the primary constraint on the behavior of companies attempting to develop advanced AI. “Society must not let the roll-out of AI be controlled solely by private tech companies,” wrote Helen Toner and Tasha McCauley, two former OpenAI board members who voted to fire Altman last year, in a recent article in The Economist. “There are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.”


A ‘public benefit corporation’

Unlike OpenAI, which essentially operates as a capped-profit company governed by a nonprofit board that is not accountable to the company’s shareholders, Anthropic is structured more like a traditional company. It has a board that is accountable to shareholders, including Google and Amazon, which between them have invested some $6 billion into Anthropic. (Salesforce, where TIME co-chair and owner Marc Benioff is CEO, has made a smaller investment.) But Anthropic makes use of a special element of Delaware corporate law. It is not a limited company, but a public benefit corporation (PBC), which means that as well as having a fiduciary obligation to increase profits for shareholders, its board also has legal room to follow a separate mission: to ensure that “transformative AI helps people and society flourish.” What that essentially means is that shareholders would find it more difficult to sue Anthropic’s board if the board chose to prioritize safety over increasing profits, Israel says. 

There is no obvious mechanism, however, for the public to sue Anthropic’s board members for not pursuing its public benefit mission strongly enough. “To my knowledge, there’s no way for the public interest to sue you to enforce that,” Israel says. The PBC structure gives the board “a flexibility, not a mandate,” he says.

The conventional wisdom that venture capitalists pass on to company founders is: innovate on your product, but don’t innovate on the structure of your business. But Anthropic’s co-founders decided at the company’s founding in 2021 to disregard that advice, reasoning that if AI was as powerful as they believed it could be, the technology would require new governance structures to ensure it benefited the public. “Many things are handled very well by the market,” Amodei says. “But there are also externalities, the most obvious ones being the risks of AI models [developing] autonomy, but also national security questions, and other things like whether they break or bend the economy in ways we haven’t seen before. So I wanted to make sure that the company was equipped to handle that whole range of issues.”

Being at the “frontier” of AI development—building bigger models than have ever been built before, and which could carry unknown capabilities and risks—required extra care. “There’s a very clear economic advantage to time in the market with the best [AI] model,” Israel says. On the other hand, he says, the more time Anthropic’s safety researchers can spend testing a model after it has been trained, the more confident they can be that launching it would be safe. “The two are at least theoretically in tension,” Israel says. “It was very important to us that we not be railroaded into [launching] a model that we’re not sure is safe.”

The Long Term Benefit Trust

To Anthropic’s founders, structuring the company as a public benefit corporation was a good first step, but didn’t address the question of who should be on the company’s board. To answer this question, they decided in 2023 to set up a separate body, called the Long Term Benefit Trust (LTBT), which would ultimately gain the power to elect and fire a majority of the board.

 The LTBT, whose members have no equity in the company, currently elects one out of the board’s five members. But that number will rise to two out of five this July, and then to three out of five this November—in line with fundraising milestones that the company has now surpassed, according to Israel and a copy of Anthropic’s incorporation documents reviewed by TIME. (Shareholders with voting stock elect the remaining board members.)

The LTBT’s first five members were picked by Anthropic’s executives for their expertise in three fields that the company’s co-founders felt were important to its mission: AI safety, national security, and social enterprise. Among those selected were Jason Matheny, CEO of the RAND corporation, Kanika Bahl, CEO of development nonprofit Evidence Action, and AI safety researcher Paul Christiano. (Christiano resigned from the LTBT prior to taking a new role in April leading the U.S. government’s new AI Safety Institute, he said in an email. His seat has yet to be filled.) On Wednesday, Anthropic announced that the LTBT had elected its first member of the company’s board: Jay Kreps, the co-founder and CEO of data company Confluent.

The LTBT receives advance notice of “actions that could significantly alter the corporation or its business,” Anthropic says, and “must use its powers to ensure that Anthropic responsibly balances the financial interests of stockholders with the interests of those affected by Anthropic’s conduct and our public benefit purpose.” 

“Anthropic will continue to be overseen by its board, which we expect will make the decisions of consequence on the path to transformative AI,” the company says in a blog post on its website. But “in navigating these decisions, a majority of the board will ultimately have accountability to the Trust as well as to stockholders, and will thus have incentives to appropriately balance the public benefit with stockholder interests.”

However, even the board members who are selected by the LTBT owe fiduciary obligations to Anthropic’s stockholders, Israel says. This nuance means that the board members appointed by the LTBT could probably not pull off an action as drastic as the one taken by OpenAI’s board members last November. It’s one of the reasons Israel was so confidently able to say, when asked last Thanksgiving, that what happened at OpenAI could never happen at Anthropic. But it also means that the LTBT ultimately has a limited influence on the company: while it will eventually have the power to select and remove a majority of board members, those members will in practice face similar incentives to the rest of the board. 

Company leaders, and a former advisor, emphasize that Anthropic’s structure is experimental in nature. “Nothing exactly like this has been tried, to my knowledge,” says Noah Feldman, a Harvard Law professor who served as an outside consultant to Anthropic when the company was setting up the earliest stages of its governance structure. “Even the best designs in the world sometimes don’t work,” he adds. “But this model has been designed with a tremendous amount of thought … and I have great hopes that it will succeed.”

The Amazon and Google question

According to Anthropic’s incorporation documents, there is a caveat to the agreement governing the Long Term Benefit Trust. If a supermajority of shareholders votes to do so, they can rewrite the rules that govern the LTBT without the consent of its five members. This mechanism was designed as a “failsafe” to account for the possibility of the structure being flawed in unexpected ways, Anthropic says. But it also raises the specter that Google and Amazon could force a change to Anthropic’s corporate governance.

But according to Israel, this would be impossible. Amazon and Google, he says, do not own voting shares in Anthropic, meaning they cannot elect board members and their votes would not be counted in any supermajority required to rewrite the rules governing the LTBT. (Holders of Anthropic’s Series B stock, much of which was initially bought by the defunct cryptocurrency exchange FTX, also do not have voting rights, Israel says.) 

Google and Amazon each own less than 15% of Anthropic, according to a person familiar with the matter. Amodei emphasizes that Amazon and Google’s investments in Anthropic are not in the same ballpark as Microsoft’s deal with OpenAI, where the tech giant has an agreement to receive 49% of OpenAI’s profits until its $13 billion investment is paid back. “It’s just worlds apart,” Amodei says. He acknowledges that Anthropic will likely have to raise more money in the future, but says that the company’s ability to punch above its weight will allow it to remain competitive with better-resourced rivals. “As long as we can do more with less, then in the end, the resources are going to find their way to the innovative companies,” he tells TIME.

Still, uncomfortable tradeoffs may loom in Anthropic’s future—ones that even the most well-considered governance structure cannot solve for. “The overwhelming priority at Anthropic is to keep up at the frontier,” says Daniel Colson, the executive director of the AI Policy Institute, a non-profit research group, referring to the lab’s belief that it must train its own world-leading AI models to do good safety research on them. But what happens when Anthropic’s money runs out, and it needs more investment to keep up with the big tech companies? “I think the manifestation of the board’s fiduciary responsibility will be, ‘OK, do we have to partner with a big tech company to get capital, or swallow any other kind of potential poison pill?’” Colson says. In dealing with such an existential question for the company, Anthropic’s board might be forced to weigh total collapse against some form of compromise in order to achieve what it sees as its long-term mission.

Ultimately, Colson says, the governance of AI “is not something that any corporate governance structure is adequate for.” While he believes Anthropic’s structure is better than OpenAI’s, he says the real task of ensuring that AI is developed safely lies with governments, who must issue binding regulations. “It seems like Anthropic did a good job” on its structure, Colson says. “But are these governance structures sufficient for the development of AGI? My strong sense is definitely no—they are extremely illegitimate.”

Correction, May 30

The original version of this story mischaracterized Brian Israel’s view of the aftermath of Sam Altman’s firing. Many observers concluded that OpenAI’s corporate structure had backfired, but Israel did not say so.

How You Can Avoid Using Meta AI

30 May 2024 at 11:46
In this photo illustration, a woman's silhouette holds a

SAN FRANCISCO — If you use Facebook, WhatsApp or Instagram, you’ve probably noticed a new character pop up answering search queries or eagerly offering tidbits of information in your feeds, with varying degrees of accuracy.

It’s Meta AI, and it’s here to help, at least according to Meta Platforms’ CEO Mark Zuckerberg, who calls it “the most intelligent AI assistant that you can freely use.”

The chatbot can recommend local restaurants, offer more information on something you see in a Facebook post, search for airline flights or generate images in the blink of an eye. If you’re chatting with friends to plan a night out, you can invite it into your group conversation by typing @MetaAI, then ask it to recommend, say, cocktail bars.

Meta’s AI tool has been integrated into chat boxes and search bars throughout the tech giant’s platforms. The assistant appears, for example, at the top of your chat list on Messenger. Ask it questions about anything or to “imagine” something and it will generate a picture or animation.

As with any new technology, there are, of course, hiccups, including bizarre exchanges when the chatbots first started engaging with real people. One joined a Facebook moms’ group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.

Meta AI hasn’t been universally welcomed. Here are some tips if you want to avoid using it.

Can I turn Meta AI off?

Some Facebook users don’t like the chatbot, complaining in online forums that they’re tired of having AI foisted on them all the time or that they just want to stick with what they know. So what if you don’t want Meta AI butting in every time you search for something or scroll through your social feeds? Well, you might need a time machine. Meta and other tech companies are in an AI arms race, churning out new language models and persuading — some might say pressuring — the public to use them.

The bad news is there’s no one button to turn off Meta AI on Facebook, Instagram, Messenger or WhatsApp. However, if you want to limit it, there are some (imperfect) workarounds.

How to mute Meta AI

On the Facebook mobile app, tap the “search” button. You may get a prompt to “Ask Meta AI anything.” Tap the blue triangle on the right, then the blue circle with an “i” inside it. Here, you’ll see a “mute” button, with options to silence the chatbot for 15 minutes or longer, or “Until I change it.” You can do the same on Instagram.

Nonetheless, muting doesn’t get rid of Meta AI completely. Meta AI’s circle logo might still appear where the search magnifying glass used to be — and tapping on it will take you to the Meta AI field. This is now the new way to search in Meta, and just as with Google’s AI summaries, the responses will be generated by AI.

I asked the chatbot about searching Facebook without Meta AI results.

“Meta AI aims to be a helpful assistant and is in the search bar to assist with your questions,” it responded. Then it added, “You can’t disable it from this experience, but you can tap the search button after writing your query and search how you normally would.”

Then I asked a (human) Meta spokesperson. “You can search how you normally would and choose to engage with a variety of results — ones from Meta AI or others that appear as you type,” the spokesperson said in a statement. “And when interacting with Meta AI, you have access to real-time information without having to leave the app you’re using thanks to our search partnerships.”

Like an over-eager personal assistant, Meta AI also pops up under posts on your Facebook news feed, offering more information about what’s discussed in the post — such as the subject of a news article. It’s not possible to disable this feature, so you’ll just have to ignore it.

Use an “old school” version of Facebook

Tech websites have noted that one surefire way to avoid Facebook’s AI assistant is to use the social network’s stripped-down mobile site, mbasic.facebook.com. It’s aimed at people in developing countries using older phones on slower internet connections. The basic site has a retro feel that looks crude compared to the current version, and it looks even worse on desktop browsers, but it still works on a rudimentary level and without AI.

Meta AI in other countries

Meta AI is so far only available in the United States and 13 other countries: Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe. So if you don’t live in any of those places, you don’t have to worry about the chatbot because you don’t get to use it. At least not yet.

International Authorities Arrest Man Allegedly Behind ‘Likely the World’s Largest Botnet Ever’

30 May 2024 at 01:55

WASHINGTON — An international law enforcement team has arrested a Chinese national and disrupted a major botnet that officials said he ran for nearly a decade, amassing at least $99 million in profits by reselling access to criminals who used it for identity theft, child exploitation, and financial fraud, including pandemic relief scams.

The U.S. Department of Justice quoted FBI Director Christopher Wray as saying Wednesday that the “911 S5” botnet—a network of malware-infected computers in nearly 200 countries—was “likely the world’s largest.”

The Justice Department said in a news release that Yunhe Wang, 35, was arrested on May 24. Wang was taken into custody in Singapore, and search warrants were executed there and in Thailand, the FBI’s deputy assistant director for cyber operations, Brett Leatherman, said in a LinkedIn post. Authorities also seized $29 million in cryptocurrency, Leatherman said.

Read More: Influencers Are Scamming Their Fans Through Crypto. Here’s How Their Tactics Have Evolved.

Cybercriminals used Wang’s network of zombie residential computers to steal “billions of dollars from financial institutions, credit card issuers and accountholders, and federal lending programs since 2014,” according to an indictment filed in Texas’ eastern district.

The administrator, Wang, sold access to the 19 million Windows computers he hijacked—more than 613,000 in the United States—to criminals who “used that access to commit a staggering array of crimes that victimized children, threatened people’s safety and defrauded financial institutions and federal lending programs,” U.S. Attorney General Merrick Garland said in announcing the takedown.

Read More: Why Gen Z Is Surprisingly Susceptible to Financial Scams

He said criminals who purchased access to the zombie network from Wang were responsible for more than $5.9 billion in estimated losses due to fraud against relief programs. Officials estimated 560,000 fraudulent unemployment insurance claims originated from compromised IP addresses.

Wang allegedly managed the botnet through 150 dedicated servers, half of them leased from U.S.-based online service providers.

The indictment says Wang used his illicit gains to purchase 21 properties in the United States, China, Singapore, Thailand, the United Arab Emirates and St. Kitts and Nevis, where it said he obtained citizenship through investment.

In its news release, the Justice Department thanked police and other authorities in Singapore and Thailand for their assistance.

Why the ‘All Eyes on Rafah’ AI Post Is Going Viral on Social Media

29 May 2024 at 20:39

Nearly 45 million Instagram users—including celebrities like Bella Hadid and Nicola Coughlan—have shared an AI-generated image depicting tent camps for displaced Palestinians and a slogan that reads “all eyes on Rafah,” according to a Wednesday afternoon count by Instagram. 

The sharing of the post comes amid criticism from the international community regarding Rafah, which rests on the southern Gaza Strip near the Egyptian border, and has been the subject of intense bombing by Israeli troops. Military strikes set shelters on fire, causing Palestinians to dig through charred remains hoping to rescue survivors. At least 45 Palestinians have been killed thus far. Rafah was previously deemed a humanitarian zone for civilians. 

Sarah Jackson, an associate professor at the Annenberg School for Communication at the University of Pennsylvania, tells TIME that the origins of internet activism date back to the ‘90s, when leaders behind the Zapatista uprising circulated information about what was happening on the ground. But currently, Instagram appeals to activists as a platform for social change because of the visual aspect of the app, allowing users to share both videos and photos.

“One of the really important things that we have to acknowledge is that a lot of Palestinian journalists have been using Instagram to share from the ground what has been happening. We know that a lot of those journalists have been directly targeted and censored because of that, but this has been a platform that has been popular with them,” Jackson says.

Jackson points out that many social media activists may have been struggling to share images from Gaza due to algorithmic guidelines that hide graphic content. Instagram says that while it understands why people share this sort of content in certain instances, it encourages people to caption the photo with warnings about graphic violence, per its community guidelines.

Read More: Israel Continues Rafah Strikes Days After 45 Civilians Killed in Bombing

Users may have found a workaround by sharing an AI image. “Many of the images that are coming from the ground are really graphic and gruesome,” she says. “It has been harder and harder for people to actually document what’s happening…and when compelling images are documented, they are often censored at the platform level…it makes sense that folks would turn to AI.”

Instagram user @shahv4012 first shared the “all eyes on Rafah” post on their story. Some have criticized the use of AI for the photo. “There are people who are not satisfied with the picture and template, I apologize if I have made a mistake on all of you,” the user said in an Instagram story. “Whatever [you do], don’t look down on the Rafah issue now, spread it so that they are shaken and afraid of the spread of all of us.”

The slogan on the image likely was inspired by Richard Peeperkorn, the WHO representative for Gaza, who previously said that “all eyes” were on what is happening in Rafah.

While some have pointed out that sharing the AI image does not necessarily mean a user is fully educated on what is happening in Rafah, Jackson says that if the point is to spread awareness, and share that someone is “part of a collective that cares about this issue,” then posting the photo on their story is worthwhile. 

Israel’s decision to launch its military offensive into Rafah came two days after the International Court of Justice (ICJ) ordered Israel to stop its planned assault on Rafah, and has been largely criticized by world leaders.

French President Emmanuel Macron said that he was “outraged” by the Israeli strikes in Rafah. “These operations must stop. There are no safe areas in Rafah for Palestinian civilians. I call for full respect for international law and an immediate ceasefire,” Macron shared on X on Monday. U.N. Secretary General António Guterres reiterated his call for an immediate ceasefire, and for the ICJ order to be complied with.

Israeli Prime Minister Benjamin Netanyahu called the deaths “tragic.” More than 36,000 Palestinians and some 1,500 Israelis have been killed since Hamas attacked Israel on October 7, 2023.

OpenAI Forms Safety Committee as It Starts Training Latest AI Model

29 May 2024 at 13:30

OpenAI says it’s setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and leveled criticism at OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on AI risks that the two jointly led.

Leike said Tuesday he’s joining rival AI company Anthropic, founded by ex-OpenAI leaders, to “continue the superalignment mission” there.

OpenAI said it has “recently begun training its next frontier model” and its AI models lead the industry on capability and safety, though it made no mention of the controversy. “We welcome a robust debate at this important moment,” the company said.

AI models are prediction systems that are trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting edge AI systems.

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, who’s the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee’s first job will be to evaluate and further develop OpenAI’s processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it’s adopting “in a manner that is consistent with safety and security.”

xAI Raises $6 Billion as Elon Musk Aims to Challenge OpenAI

27 May 2024 at 08:35
Elon Musk speaks at the Milken Institute's Global Conference in Beverly Hills, California, on May 6, 2024.

Elon Musk’s artificial intelligence startup xAI has raised $6 billion to accelerate its challenge to his former allies at OpenAI.

The Series B round, announced in a blog post on May 26, comes less than a year after xAI’s debut and marks one of the bigger investments in the nascent field of developing AI tools. Musk had been an early supporter of artificial intelligence, backing OpenAI before it introduced ChatGPT in late 2022.

He later withdrew his support from the venture and has advocated caution because of the technology’s potential dangers. He was among a large group of industry leaders urging a pause to AI development last year.

Read More: Inside Elon Musk’s Struggle for the Future of AI

Musk launched a rival to OpenAI’s ChatGPT in November, called Grok, which was trained on and integrated into X.com, the social network formerly known as Twitter. That has so far been the most visible product of xAI’s work, which is led by executives with prior experience at Alphabet Inc.’s DeepMind, Microsoft Corp. and Tesla Inc.

The company intends to use the funds to bring its first products to market, build advanced infrastructure and accelerate the development of future technologies, it said in the blog.

Pre-money valuation was $18B

— Elon Musk (@elonmusk) May 27, 2024

Its pre-money valuation was $18 billion, Musk said in a post on X. Marquee venture capital names including Sequoia Capital and Andreessen Horowitz backed the fundraising, which is one of the largest so far in the industry.

Microsoft Corp. has invested about $13 billion in OpenAI, while Amazon.com Inc. put about $4 billion into Anthropic.

Colorado the First State to Move Ahead With Attempt to Regulate AI’s Role in American Life

DENVER — The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide.

Only one of seven bills aimed at preventing AI’s potential to discriminate when making consequential decisions — including who gets hired, money for a home or medical care — has passed. Colorado Gov. Jared Polis hesitantly signed the bill on Friday.

Colorado’s bill and those that faltered in Washington, Connecticut and elsewhere faced battles on many fronts: between civil rights groups and the tech industry, with lawmakers wary of wading into a technology few yet understand, and with governors worried about being the odd state out and spooking AI startups.

Polis signed Colorado’s bill “with reservations,” saying in a statement he was wary of regulations dousing AI innovation. The bill has a two-year runway and can be altered before it becomes law.

“I encourage (lawmakers) to significantly improve on this before it takes effect,” Polis wrote.

Colorado’s proposal, along with its six sister bills, is complex, but will broadly require companies to assess the risk of discrimination from their AI and inform customers when AI was used to help make a consequential decision for them.

The bills are separate from more than 400 AI-related bills that have been debated this year. Most are aimed at slices of AI, such as the use of deepfakes in elections or to make pornography.

The seven bills are more ambitious, applying across major industries and targeting discrimination, one of the technology’s most perverse and complex problems.

“We actually have no visibility into the algorithms that are used, whether they work or they don’t, or whether we’re discriminated against,” said Rumman Chowdhury, AI envoy for the U.S. Department of State who previously led Twitter’s AI ethics team.

While anti-discrimination laws are already on the books, those who study AI discrimination say it’s a different beast, which the U.S. is already behind in regulating.

“The computers are making biased decisions at scale,” said Christine Webber, a civil rights attorney who has worked on class action lawsuits over discrimination including against Boeing and Tyson Foods. Now, Webber is nearing final approval on one of the first-in-the-nation settlements in a class action over AI discrimination.

“Not, I should say, that the old systems were perfectly free from bias either,” said Webber. But “any one person could only look at so many resumes in the day. So you could only make so many biased decisions in one day and the computer can do it rapidly across large numbers of people.”

When you apply for a job, an apartment or a home loan, there’s a good chance AI is assessing your application: sending it up the line, assigning it a score or filtering it out. It’s estimated as many as 83% of employers use algorithms to help in hiring, according to the Equal Employment Opportunity Commission.

AI itself doesn’t know what to look for in a job application, so it’s taught based on past resumes. The historical data that is used to train algorithms can smuggle in bias.

Amazon, for example, worked on a hiring algorithm that was trained on old resumes: largely male applicants. When assessing new applicants, it downgraded resumes with the word “women’s” or that listed women’s colleges because they were not represented in the historical data — the resumes — it had learned from. The project was scuttled.
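
The failure mode in the Amazon example, where a model learns to penalize a token that correlates with an underrepresented group in its training history, can be reproduced with a deliberately tiny, made-up dataset. The sketch below is purely illustrative and is not a reconstruction of Amazon’s system or any real hiring tool.

```python
# Toy illustration of how skewed historical data can teach a model to penalize
# a word like "women's". Hypothetical data; not any real company's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical "hired" labels come mostly from male applicants, so the token
# "women" appears almost only in rejected resumes.
resumes = [
    "captain chess club, software engineer intern",      # hired
    "software engineer, hackathon winner",               # hired
    "women's chess club captain, software engineer",     # rejected in the history
    "women's coding society lead, software engineer",    # rejected in the history
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

coef = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(coef["women"])  # negative weight: the model has learned to downgrade the term
```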

Webber’s class action lawsuit alleges that an AI system that scores rental applications disproportionately assigned lower scores to Black or Hispanic applicants. A study found that an AI system built to assess medical needs passed over Black patients for special care.

Studies and lawsuits have allowed a glimpse under the hood of AI systems, but most algorithms remain veiled. Americans are largely unaware that these tools are being used, polling from Pew Research shows. Companies generally aren’t required to explicitly disclose that an AI was used.

“Just pulling back the curtain so that we can see who’s really doing the assessing and what tool is being used is a huge, huge first step,” said Webber. “The existing laws don’t work if we can’t get at least some basic information.”

That’s what Colorado’s bill, along with another surviving bill in California, is trying to change. The bills, including a flagship proposal in Connecticut that was killed amid opposition from the governor, are largely similar.

Colorado’s bill will require companies using AI to help make consequential decisions for Americans to annually assess their AI for potential bias; implement an oversight program within the company; tell the state attorney general if discrimination was found; and inform customers when an AI was used to help make a decision for them, including giving them an option to appeal.

Labor unions and academics fear that a reliance on companies overseeing themselves means it’ll be hard to proactively address discrimination in an AI system before it’s done damage. Companies are fearful that forced transparency could reveal trade secrets, including in potential litigation, in this hyper-competitive new field.

AI companies also pushed for, and generally received, a provision that only allows the attorney general, not citizens, to file lawsuits under the new law. Enforcement details have been left up to the attorney general.

While larger AI companies have more or less been on board with these proposals, a group of smaller Colorado-based AI companies said the requirements might be manageable for behemoth AI companies, but not for budding startups.

“We are in a brand new era of primordial soup,” said Logan Cerkovnik, founder of Thumper.ai, referring to the field of AI. “Having overly restrictive legislation that forces us into definitions and restricts our use of technology while this is forming is just going to be detrimental to innovation.”

All agreed, along with many AI companies, that what’s formally called “algorithmic discrimination” is critical to tackle. But they said the bill as written falls short of that goal. Instead, they proposed beefing up existing anti-discrimination laws.

Chowdhury worries that lawsuits are too costly and time consuming to be an effective enforcement tool, and that laws should instead go beyond what even Colorado is proposing. Instead, Chowdhury and academics have proposed accredited, independent organizations that could explicitly test for potential bias in an AI algorithm.

“You can understand and deal with a single person who is discriminatory or biased,” said Chowdhury. “What do we do when it’s embedded into the entire institution?”

Why Donald Trump Is Betting on Crypto

22 May 2024 at 17:27

Donald Trump used to rail against cryptocurrencies, calling them a “disaster waiting to happen” and saying that bitcoin seemed “like a scam.” Now, he’s vowing to build a “crypto army.” 

On Tuesday, the Trump campaign announced that it would accept donations in crypto, including bitcoin, ether, and dogecoin. Many crypto fans embraced the announcement on social media, arguing that it was proof that crypto will be a key issue in the coming election. Others, however, said that it simply reeked of opportunism. 

Here are the factors leading up to Trump’s about-face on crypto. 

Is crypto left or right? 

When crypto broke through into mainstream consciousness three years ago, its supporters came from both sides of the political aisle. Many crypto fans had libertarian leanings: Minnesota Republican Tom Emmer, for instance, described crypto as a boon to free markets and privacy. Some Democrats including Cory Booker, in contrast, emphasized how crypto could be a “democratizing” force and lead to increased financial access for people who had long been shut out of traditional banking. 

For a while, crypto’s potential alignment with progressive ideals was boosted by the success of FTX founder Sam Bankman-Fried, who became one of the top individual donors to Democratic candidates in the 2022 election cycle and preached about “financial inclusion and equitable access.” But in reality, Bankman-Fried was secretly donating to both parties with the underlying goal of getting pro-crypto candidates into office. After FTX combusted, some of the Democratic politicians that Bankman-Fried had embraced distanced themselves from the industry. 

Read More: The Bombshell Evidence That Led to Sam Bankman-Fried’s Conviction

At the same time, several prominent Democrats went to war on crypto, arguing that it was predatory and posed a major risk to the American economy. Elizabeth Warren vowed to build an “anti-crypto army,” while Gary Gensler, the chair of the Securities and Exchange Commission and a Biden appointee, used his position to prosecute bad actors and stifle the crypto industry in the U.S.

The anti-crypto campaigns from Warren and Gensler drew the ire of many crypto fans, who viewed the pair as an existential threat to their industry. Conversely, right-wing candidates with pro-crypto stances, including Vivek Ramaswamy, found friendly audiences at crypto conferences.

Given this larger climate, supporting crypto now allows Trump to place himself in direct opposition to Warren—and potentially galvanize support from an industry that skews young and male. The Trump campaign made this explicit in its statement on Tuesday, claiming that its acceptance of bitcoin donations was part of its opposition to “socialistic government control” over the U.S. financial markets. “As Biden piles regulations and red tape on all of us, President Trump stands ready to embrace new technologies,” the statement read.

A changed crypto landscape

Trump isn’t the only politician to soften his opposition to crypto. Last week, many Democrats, including Chuck Schumer, broke with Warren and Gensler in order to reject the SEC’s efforts to make it harder for banks to hold digital assets. Their decision marked another significant loss for Gensler: Last summer, his efforts to block the creation of a bitcoin ETF (a type of investment vehicle aimed at mainstream institutional investors) were rejected by a federal judge. Since that decision, bitcoin ETFs have surpassed hundreds of billions of dollars in trading volume, and a similar ETF for the cryptocurrency ether appears poised for approval as well.

This week, the House of Representatives will vote on a Republican-driven pro-crypto bill called the Financial Innovation and Technology for the 21st Century Act. A handful of Democrats have indicated support for the bill. And while the White House opposes the bill, it said in a statement that it was eager to work with Congress on legislation to promote “the responsible development of digital assets and payment innovation” in the U.S.

But that bill’s path to passing in the Senate is murky. And many everyday Americans are wary of crypto: three-quarters of respondents to a 2023 Pew study said they were not confident that the current ways to trade crypto were reliable or safe. 

The financial implications

While there is a political calculus to Trump accepting bitcoin, there’s a strong financial incentive as well. Trump made between $100,000 and $1 million selling NFT trading cards in 2022. He also owns over $1 million in Ethereum, which could increase in value based on his own support of the industry. And crypto meme coins created by his fans, including the MAGA token, have surged in recent weeks, with investors buying in as a sort of proxy for their support of his campaign. Trump himself was gifted a treasure trove of the token, which is now worth over $4 million.

Trump’s embrace of crypto could also win him lobbying dollars: Crypto super PACs are poised to spend more than $80 million to influence the 2024 election. 

But while some crypto enthusiasts have embraced Trump, others have responded more warily. “Trump has not shown any true commitment to crypto,” David Hoffman, co-host of the crypto podcast Bankless, wrote on Twitter. “So far, we are just another cow for him to milk.”

Andrew R. Chow’s book about crypto, Cryptomania, will be published in August and is available for preorder.

Why Microsoft’s New AI Feature Has Prompted Major Privacy Concerns

22 May 2024 at 13:10

Microsoft introduced a new series of products, named Copilot+ PCs, that are designed with artificial intelligence technology in mind. The company has reportedly struggled in the laptop market in recent months, with sales of its flagship Surface Pro laptops declining significantly in 2023.

But the new AI device features have raised privacy concerns. In particular, one feature that Microsoft refers to as “Recall” allows the device to take snapshots of a person’s screen every few seconds. These screenshots are encrypted and then stored locally on the individual’s device.
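
At a high level, the pattern described here (capture the screen every few seconds, encrypt each snapshot, keep it only on the local device) looks roughly like the sketch below. This is not Microsoft’s implementation; it is an assumption-laden illustration using the third-party mss and cryptography Python packages, with a made-up storage layout.

```python
# Rough sketch of the pattern described above: capture the screen every few
# seconds, encrypt each snapshot, and keep it only on the local disk.
# Not Microsoft's implementation (pip install mss cryptography).
import time
from pathlib import Path

from cryptography.fernet import Fernet
from mss import mss

STORE = Path.home() / "recall_sketch"
STORE.mkdir(exist_ok=True)
KEY_FILE = STORE / "key.bin"

# Persist a symmetric key locally so snapshots can be decrypted later.
if KEY_FILE.exists():
    key = KEY_FILE.read_bytes()
else:
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
fernet = Fernet(key)

with mss() as screen:
    for _ in range(3):  # a real agent would loop indefinitely
        png_path = screen.shot(output=str(STORE / "frame.png"))  # capture primary monitor
        encrypted = fernet.encrypt(Path(png_path).read_bytes())
        (STORE / f"snapshot_{int(time.time())}.enc").write_bytes(encrypted)
        Path(png_path).unlink()  # keep only the encrypted copy
        time.sleep(5)
```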

Microsoft said that this feature was designed to “solve one of the most frustrating problems we encounter daily—finding something we know we have seen before on our PC.” The corporation added that this feature will allow users to search through their computer’s history in an intuitive way based on “relationships and associations unique to each of our individual experiences.” 

However, just two days after the product was announced, the Information Commissioner’s Office, a U.K. data watchdog, said it would be reaching out to Microsoft amid growing concerns about the feature’s implications for consumer privacy.

“This could be a privacy nightmare,” Dr. Kris Shrishak, an adviser on AI and privacy, told the BBC. “The mere fact that screenshots will be taken during use of the device could have a chilling effect on people.”

On its website, Microsoft says that the feature is “optional” and that users can “make choices about what snapshots Recall collects and stores.” 

TIME has reached out to Microsoft for further comment.

The Scarlett Johansson Dispute Erodes Public Trust In OpenAI

21 May 2024 at 17:15

Scarlett Johansson has gone to war with OpenAI, and in the battle for public opinion, OpenAI is losing—badly.

Last week, OpenAI released GPT-4o, an update to the model behind its AI chatbot ChatGPT, which featured a female voice talking to its users. Many people pointed out that the voice, which sometimes seemed to veer into flirtation, was eerily similar to Scarlett Johansson’s in the 2013 dystopian sci-fi film Her. (Johansson voices a chatbot who falls in love with the protagonist in the film.) OpenAI CEO Sam Altman has long talked about how much the movie inspired the company’s products, and even made the connection clear last week by tweeting the title of the movie.

But on Monday, Johansson released a statement saying OpenAI had asked her to be the voice of the chatbot, and when she refused, it found a soundalike. Johansson said that she was “shocked, angered and in disbelief” by the turn of events. The company contended that the voice was not inspired by hers and was recorded by a different actor—but proceeded to pull the voice from the chatbot anyway.

Read More: Scarlett Johansson ‘Angered’ By ChatGPT Voice That Sounded ‘Eerily’ Like Her

The backlash on social media against Altman was intense, with users accusing him of acting unethically.

This is far from the first significant battle that has been waged against OpenAI, though Johansson’s may be the most high-profile. The company has a track record of cutting corners when it comes to permissions or copyright, then dealing with the consequences later. While this approach has helped OpenAI grow rapidly, it has also engendered intense criticism.

If this is true, then it paints OpenAI – and Sam Altman – highly unethical.

Scarlett Johansson claims she was offered a deal to lend her voice to ChatGPT 4-o, passed on it, but OpenAI proceeded w/o her consent anyway.

Sam Altman made it clear through a tweet it was intentional. https://t.co/XM6hJselxR

— Gergely Orosz (@GergelyOrosz) May 21, 2024

Lawsuits Centering on Copyright

The issue of whether artificial intelligence companies should be able to train their models on copyrighted material has been one of the most contentious battlegrounds during the industry’s growth. OpenAI hasn’t even denied that it trains its models this way: It told the UK’s House of Lords that “it would be impossible to train today’s leading AI models without using copyrighted materials.”

But many creators have fought back in court in an attempt to protect their work and likenesses. Sarah Silverman accused the company of stealing her work by training its model with her memoir The Bedwetter. George R.R. Martin and John Grisham joined a similar lawsuit, accusing the company of “systematic theft on a mass scale.” And the New York Times filed its own suit.

Johansson’s case is slightly different, because the company did not train its model on her voice: It simply hired an actress who sounded like her. These sorts of disputes existed long before AI: the singer Tom Waits, for example, was awarded $2.5 million in damages after filing a lawsuit against Frito-Lay in 1988, alleging that the company had hired a singer to imitate him and his distinct gravelly voice in a Doritos commercial. But OpenAI’s use of a Johansson soundalike does fit into a larger pattern of the company scraping from pop culture tentpoles in order to strengthen its products.

OpenAI using a synthetic version of ScarJo’s voice would be a pretty indicative (and very meta) example of how it regards likeness & IP in AI content.

If this is true for a celeb w/ a recognizable voice & lots of legal resources, how does it regard likeness of non-famous people? https://t.co/PvLjHaTxMo

— Marty Swant (@martyswant) May 21, 2024

Personal Accusations Against Sam Altman

OpenAI critics have also argued that the Johansson dispute fits into a larger history of Altman acting dishonestly in order to get what he wants. Last year, sources told TIME that Altman had a history of being misleading and deceptive. In October, OpenAI’s CTO Mira Murati accused him of manipulating executives to get what he wanted, and co-founder and chief scientist Ilya Sutskever compiled a list of 20 times he believed Altman had misled OpenAI executives over the years. Their concerns led the company’s board to briefly oust Altman, but he quickly returned after an outpouring of support from both inside and outside OpenAI. 

Since then, reports have trickled out of employees questioning Altman’s leadership style and accusing him of acting in psychologically abusive ways.  And just last week, Sutskever and executive Jan Leike stepped down from the company, with Leike tweeting that “over the past years, safety culture and processes have taken a backseat to shiny products.” 

Spurring Regulation? 

While much of OpenAI’s drama has been confined to Silicon Valley circles, the outcry following Johansson’s statement shows that the public appetite for regulation of AI companies is high. A Pew study from last year found that 67% of those who are familiar with chatbots like ChatGPT voiced concern that the government will not go far enough in regulating their use. In March, Tennessee became the first state to pass legislation combating unauthorized artificial intelligence impersonation.

In her statement, Johansson called for “the passage of appropriate legislation to help ensure that individual rights are protected.” The Hollywood guild SAG-AFTRA is pushing the No AI Fraud Act, a bipartisan bill introduced in January that would restrict the use of digital likenesses without consent. Sen. Brian Schatz of Hawaii responded to the incident on Twitter:

Alarming that an AI company just seems to have gone ahead and lifted a voice of an actual person without permission or compensation. The impunity is even more worrisome for performers who aren’t already popular. The right to one’s own image and voice must be protected.

— Brian Schatz (@brianschatz) May 20, 2024


[video id=A8hZ67ye autostart="viewable"]

No One Truly Knows How AI Systems Work. A New Discovery Could Change That

Today’s artificial intelligence is often described as a “black box.” AI developers don’t write explicit rules for these systems; instead, they feed in vast quantities of data and the systems learn on their own to spot patterns. But the inner workings of the AI models remain opaque, and efforts to peer inside them to check exactly what is happening haven’t progressed very far. Beneath the surface, neural networks—today’s most powerful type of AI—consist of billions of artificial “neurons” represented as decimal-point numbers. Nobody truly understands what they mean, or how they work.
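
That “pile of unlabeled numbers” framing is easy to see directly. The sketch below loads a small open-source model and prints a slice of its raw weights; GPT-2 is an arbitrary, publicly available stand-in, not one of the frontier systems discussed in this piece.

```python
# Peek at the raw parameters of a small open-source language model: just
# large tensors of decimal numbers with no labels attached. GPT-2 is a small,
# arbitrary stand-in for the far larger frontier models discussed here.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
total = sum(p.numel() for p in model.parameters())
print(f"total parameters: {total:,}")        # roughly 124 million for GPT-2

w = model.transformer.h[0].mlp.c_fc.weight   # one weight matrix from one layer
print(w.shape, w.flatten()[:5])              # unlabeled decimal-point numbers
```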

[time-brightcove not-tgx=”true”]

For those concerned about risks from AI, this fact looms large. If you don’t know exactly how a system works, how can you be sure it is safe?

Read More: Exclusive: U.S. Must Move ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says

On Tuesday, the AI lab Anthropic announced it had made a breakthrough toward solving this problem. Researchers developed a technique for essentially scanning the “brain” of an AI model, allowing them to identify collections of neurons—called “features”—corresponding to different concepts. And for the first time, they successfully used this technique on a frontier large language model: Anthropic’s Claude Sonnet, the lab’s second-most powerful system.

In one example, Anthropic researchers discovered a feature inside Claude representing the concept of “unsafe code.” By stimulating those neurons, they could get Claude to generate code containing a bug that could be exploited to create a security vulnerability. But by suppressing the neurons, the researchers found, Claude would generate harmless code.
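
Anthropic has not released the code behind these interventions, but the general idea of steering a model by nudging a layer’s activations along a “feature” direction can be sketched in a few lines. Everything below is a stand-in: the small open-source model, the layer choice, and the random vector playing the role of a learned feature are hypothetical, and the prompt is deliberately benign rather than anything involving unsafe code.

```python
# Toy illustration of "feature steering": shift a transformer layer's
# activations along a direction vector at inference time. The model, layer,
# and (random) feature vector are hypothetical stand-ins, not Anthropic's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

layer = model.transformer.h[6]               # an arbitrary middle block
feature = torch.randn(model.config.n_embd)   # stand-in for a learned feature direction
feature = feature / feature.norm()

def make_hook(strength):
    # Positive strength "stimulates" the feature; negative strength "suppresses" it.
    def hook(module, inputs, output):
        if isinstance(output, tuple):
            return (output[0] + strength * feature,) + output[1:]
        return output + strength * feature
    return hook

handle = layer.register_forward_hook(make_hook(strength=8.0))
ids = tok("Tell me about bridges.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()  # remove the hook to restore normal behavior
```

In a real interpretability pipeline the feature direction would come from a trained dictionary rather than random noise; the point of the sketch is only that adding or subtracting such a direction changes what the model generates.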

The findings could have big implications for the safety of both present and future AI systems. The researchers found millions of features inside Claude, including some representing bias, fraudulent activity, toxic speech, and manipulative behavior. And they discovered that by suppressing each of these collections of neurons, they could alter the model’s behavior.

As well as helping to address current risks, the technique could also help with more speculative ones. For years, the primary method available to researchers trying to understand the capabilities and risks of new AI systems has simply been to chat with them. This approach, sometimes known as “red-teaming,” can help catch a model being toxic or dangerous, allowing researchers to build in safeguards before the model is released to the public. But it doesn’t help address one type of potential danger that some AI researchers are worried about: the risk of an AI system becoming smart enough to deceive its creators, hiding its capabilities from them until it can escape their control and potentially wreak havoc.

“If we could really understand these systems—and this would require a lot of progress—we might be able to say when these models actually are safe, or whether they just appear safe,” Chris Olah, the head of Anthropic’s interpretability team who led the research, tells TIME.

“The fact that we can do these interventions on the model suggests to me that we’re starting to make progress on what you might call an X-ray, or an MRI [of an AI model],” Anthropic CEO Dario Amodei adds. “Right now, the paradigm is: let’s talk to the model, let’s see what it does. But what we’d like to be able to do is look inside the model as an object—like scanning the brain instead of interviewing someone.”

The research is still in its early stages, Anthropic said in a summary of the findings. But the lab struck an optimistic tone that the findings could soon benefit its AI safety work. “The ability to manipulate features may provide a promising avenue for directly impacting the safety of AI models,” Anthropic said. By suppressing certain features, it may be possible to prevent so-called “jailbreaks” of AI models, a type of vulnerability where safety guardrails can be disabled, the company added.


Researchers in Anthropic’s “interpretability” team have been trying to peer into the brains of neural networks for years. But until recently, they had mostly been working on far smaller models than the giant language models currently being developed and released by tech companies.

One of the reasons for this slow progress was that individual neurons inside AI models would fire even when the model was discussing completely different concepts. “This means that the same neuron might fire on concepts as disparate as the presence of semicolons in computer programming languages, references to burritos, or discussion of the Golden Gate Bridge, giving us little indication as to which specific concept was responsible for activating a given neuron,” Anthropic said in its summary of the research.

To get around this problem, Olah’s team of Anthropic researchers zoomed out. Instead of studying individual neurons, they began to look for groups of neurons that would all fire in response to a specific concept. This technique worked—and allowed them to graduate from studying smaller “toy” models to larger models like Anthropic’s Claude Sonnet, which has billions of neurons. 
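
Anthropic describes this as a form of dictionary learning, commonly implemented by training a sparse autoencoder on a layer’s activations so that each activation vector is reconstructed from a small number of learned “feature” directions. The sketch below is a generic, minimal version of that idea rather than Anthropic’s implementation; the dimensions, penalty weight, and random stand-in data are all arbitrary.

```python
# Minimal sparse autoencoder over model activations (a generic sketch, not
# Anthropic's code). Each column of the decoder weight matrix is a candidate
# "feature" direction; the L1 penalty pushes most feature activations to zero,
# so each input ends up explained by only a handful of features.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_features=16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model, bias=False)

    def forward(self, x):
        feats = torch.relu(self.encoder(x))   # sparse feature activations
        return self.decoder(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_weight = 1e-3                              # arbitrary sparsity penalty

# In practice `activations` would be vectors collected from a language model;
# random data stands in for them here.
activations = torch.randn(4096, 768)

for batch in activations.split(256):
    recon, feats = sae(batch)
    loss = ((recon - batch) ** 2).mean() + l1_weight * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, finding which inputs most strongly activate a given feature
# is how human-readable concepts get attached to it.
```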

Although the researchers said they had identified millions of features inside Claude, they cautioned that this number was nowhere near the true number of features likely present inside the model. Identifying all the features, they said, would be prohibitively expensive using their current techniques, because doing so would require more computing power than it took to train Claude in the first place (a cost somewhere in the tens or hundreds of millions of dollars). The researchers also cautioned that although they had found some features they believed to be related to safety, more study would still be needed to determine whether those features could reliably be manipulated to improve a model’s safety.

For Olah, the research is a breakthrough that proves the utility of his esoteric field, interpretability, to the broader world of AI safety research. “Historically, interpretability has been this thing on its own island, and there was this hope that someday it would connect with [AI] safety—but that seemed far off,” Olah says. “I think that’s no longer true.”

Scarlett Johansson ‘Angered’ By ChatGPT Voice That Sounded ‘Eerily’ Like Her

21 May 2024 at 01:21
TOPSHOT-US-ENTERTAINMENT-JUSTICE-AWARD

Scarlett Johansson said Monday that she was “shocked, angered and in disbelief” when she heard that OpenAI used a voice “eerily similar” to hers for its new ChatGPT 4.0 chatbot, even after she had declined to provide her voice.

Earlier on Monday, OpenAI announced on X that it would pause the AI voice, known as “Sky,” while it addresses “questions about how we chose the voices in ChatGPT.” The company said in a blog post that the “Sky” voice was “not an imitation” of Johansson’s voice, but that it was recorded by a different professional actor, whose identity the company would not reveal to protect her privacy.

[time-brightcove not-tgx=”true”]

But Johansson said in a statement to NPR on Monday that OpenAI’s Chief Executive Officer Sam Altman had asked her in September to voice the ChatGPT 4.0 system because he thought her “voice would be comforting to people.” She declined, but nine months later, her friends, family and the public noticed how the “Sky” voice resembled hers.

“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” the actress said in her statement. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”

Johansson said that she was “forced to hire legal counsel” because of the situation, and that her counsel wrote two letters to Altman and OpenAI asking them to explain the process for creating the “Sky” voice. After, OpenAI “reluctantly agreed” to pull the voice from the platform, she said.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson said in her statement. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

OpenAI first revealed voice functions for ChatGPT in September. In November, the company announced that the feature would be free for all users on the mobile app. ChatGPT 4.0 isn’t publicly available yet—it will be rolled out in the coming weeks and months, according to the Associated Press.

Trump Media and Technology Group Posts Over $300M Net Loss in First Public Quarter

21 May 2024 at 00:48
Trump Media Results

SARASOTA, Fla. — Trump Media and Technology Group, the owner of former President Donald Trump’s social networking site Truth Social, lost more than $300 million last quarter, according to its first earnings report as a publicly traded company.

For the three-month period that ended March 31, the company posted a loss of $327.6 million, which it said included $311 million in non-cash expenses related to its merger with a company called Digital World Acquisition Corp., which was essentially a pile of cash looking for a target to merge with. It’s an example of what’s called a special purpose acquisition company, or SPAC, which can give young companies quicker and easier routes to getting their shares trading publicly.

[time-brightcove not-tgx=”true”]

A year earlier, Trump Media posted a loss of $210,300.

Trump Media said it collected $770,500 in revenue in the first quarter, largely from its “nascent advertising initiative.” That was down from $1.1 million a year earlier.

“At this early stage in the Company’s development, TMTG remains focused on long-term product development, rather than quarterly revenue,” Trump Media said in its earnings news release.

Earlier this month, the company fired an auditor that federal regulators recently charged with “massive fraud.” The former president’s media company dismissed BF Borgers as its independent public accounting firm on May 3, delaying the filing of the quarterly earnings report, according to a securities filing.

Trump Media had previously cycled through at least two other auditors — one that resigned in July 2023, and another that was terminated by its board in March, just as it was re-hiring BF Borgers.

Shares of Trump Media climbed 36 cents to $48.74 in after-hours trading. The stock, which trades under the ticker symbol “DJT,” began trading on Nasdaq in March and peaked at nearly $80 in late March.

Taiwan’s Digital Minister Has an Ambitious Plan to Align Tech With Democracy

20 May 2024 at 13:00

Audrey Tang, Taiwan’s 43-year-old minister of digital affairs, has a powerful effect on people. At a panel discussion at Northeastern University in Boston, 20-year-old student Diane Grant is visibly moved, describing Tang’s talk as the best she’s been to in her undergraduate career. Later that day, a German tourist recognizes Tang leaving the Boston Museum of Science and requests a photo, saying she’s “starstruck.” At the Massachusetts Institute of Technology, a trio of world-leading economists bashfully ask Tang to don a baseball cap emblazoned with the name of their research center and pose for a group photo. Political scientist and former gubernatorial candidate Danielle Allen confesses to Tang that, although others often tell her that she is a source of inspiration to them, she rarely feels inspired by others. 

[time-brightcove not-tgx=”true”]

Few visiting dignitaries elicit such reactions. But to some, Tang symbolizes hope. 

In an era when digital technologies—social media, artificial intelligence, blockchains—are increasingly seen as a threat to democracy, Taiwan seems to offer an alternative path. In Taiwan, civil society groups and the government work together to harness technology, giving people more say in how their country is run, and tackling problems like tracing the spread of the COVID-19 pandemic and combatting electoral disinformation campaigns.

Tang, the world’s first openly transgender minister, played a pivotal role in all of this, first as an activist hacker and then from within the government. Now, she is stepping back from her ministerial duties to embark upon a world tour to promote the ideas that have flourished in Taiwan. These are ideas captured in Plurality, a book Tang has co-authored with E. Glen Weyl, a 39-year-old American economist at Microsoft, and more than 100 online collaborators.

Tang aims to be a global ambassador, demonstrating how technology and democracy can coexist harmoniously. “In Taiwan, for the past decade, this is the dominant worldview,” she says. “Just to see how that narrative—how that overarching, intertwined feeling of tech and democracy—can grow in non-Taiwan places. I’m most looking forward to that.”

The tour’s objective is not only to disseminate the book’s ideas but also to expose people to Tang herself. “It would change the world if every major world leader gets to spend 30 minutes with Audrey,” says Weyl, the primary orchestrator of the plan. “It’s about the experience of being with her. It changed my life.”


Tang’s unique charisma was shaped by a rare set of circumstances. At the age of 4, Tang—who was born with a serious heart condition—was given just a 50% chance of surviving long enough to undergo life-saving surgery. If she ever became upset, or angry, or excited, she would lose consciousness and wake up in an intensive care unit. She soon learned to keep her composure, and though an operation corrected her condition when she was 12, her equanimity remained.

“If you’ve been living with that condition for 12 years of your life, that’s your core personality,” she says. “I convinced myself to go on a roller coaster once or twice, rationally knowing I would not die. But it wasn’t very pleasant.”

Tang grew up alongside democracy and digital technologies in Taiwan. Aged 8, she taught herself to program by sketching a keyboard on a piece of paper, feigning typing, and then writing the output on another piece of paper. (After a few weeks of this, her parents relented and bought her a computer). By 14, Tang had left formal education to pursue programming full-time; she spent the next two decades contributing to open-source projects both in Taiwan and abroad.

“The idea of personal computing, to people in Taiwan, is inherently democratic,” Tang says. Computers and internet access meant the ability to publish books without state sponsorship, and communicate without state surveillance, a stark contrast to the martial law era that only ended in 1987, six years after Tang was born. 

All of this fueled the rise of the g0v (gov zero) movement in 2012, led by civic hackers who wanted to increase transparency and participation in public affairs. The movement started by creating superior versions of government websites, which they hosted on .g0v.tw domains instead of the official .gov.tw, often attracting more traffic than their governmental counterparts. The g0v movement has since launched more initiatives that seek to use technology to empower Taiwanese citizens, such as vTaiwan, a platform that facilitates public discussion and collaborative policymaking between citizens, experts, and government officials.

In 2014, the movement’s influence became clear when protestors, many affiliated with g0v, occupied Taiwan’s legislative chamber to oppose a trade deal with China. “Democracy needs me,” Tang wrote to her colleagues at California-based software company Socialtext, before leaving to support the protesters for the duration of their 24-day occupation by helping them to peacefully broadcast their message.

The protests marked a turning point in Taiwan. The government made efforts to engage with young activists and in 2016, Tang, then 35, was appointed as digital minister without portfolio. In 2022, Tang was named Taiwan’s first minister for digital affairs, and in 2023 she was made chairperson of the board of Taiwan’s National Institute of Cyber Security.

In many regards, Taiwan leads the world in digital democracy, thanks to initiatives led by Tang and others. Taiwan’s agile response to COVID-19, including a widely-praised contact tracing system, exemplifies this success. (At one point, the island nation went 200 days without a locally transmitted coronavirus case.) Such achievements, Plurality argues, are partly responsible for Taiwan’s remarkable economic, social, and political success over the last decade.

However, it’s important not to overstate the impact of Taiwan’s digital democracy initiatives, cautions Sara Newland, an assistant professor at Smith College, Massachusetts, who researches Chinese and Taiwanese politics. While Taiwan is a well-governed country and it’s plausible that the various examples of digital democracy contribute to this success, it’s also possible that these initiatives came about because Taiwan is well-governed, she says. The vision outlined in Plurality borders on utopian, and Taiwan’s case may not provide enough evidence to prove its feasibility.

Still, while Plurality might draw heavily on Taiwan’s experience, its scope is global. The book’s core lays out the fundamental rights that societies must promote, how digital technologies can aid in promoting them, and the collaboration-enhancing technologies that could strengthen democracy. For each technology, examples are drawn from outside Taiwan. For example, “immersive shared reality technologies,” futuristic cousins of virtual reality headsets like Apple’s Vision Pro and Meta’s Quest, could foster empathy at a distance and allow people to step into another’s shoes. The book cites Becoming Homeless, a seven-minute virtual reality experience designed by researchers at Stanford to help the user understand how it feels to lose your home, as a primitive example of an immersive shared reality technology.

Plurality aims to offer a roadmap for a future in which technology and democracy not only co-exist but thrive together; in writing the book, Tang and Weyl put this collaborative ethos into practice. The book, which is free to download, began life as a blog post authored by Weyl; although Weyl and Tang conceived of the project and Weyl was the primary author, anyone could contribute to the book’s development. More than 100 people contributed—some copy-edited, some designed graphics, some wrote entire chapters, says Tang. While juggling ministerial duties, Tang spent hours each week working on the book, contributing ideas and building the website. “At the end of the day,” she quips, “I was still a coder for some reason.”


The fledgling plurality movement faces a daunting challenge: countering the threat from the two dominant digital technologies of our time—artificial intelligence and blockchains—and their effects on society. Plurality argues that both of these are undermining democracy in different, but equally pernicious ways. AI systems facilitate top-down control, empowering authoritarian regimes and unresponsive technocratic governments in ostensibly democratic countries. Meanwhile, blockchain-based technologies atomize societies and accelerate financial capitalism, eroding democracy from below. As Peter Thiel, billionaire entrepreneur and investor, put it in 2018: “crypto is libertarian and AI is communist.”

Weyl sees echoes of the 1930s, when fascism and communism battled for ideological supremacy. “But there was another option,” he says—liberal democracy. Now, Weyl and Tang are striving to articulate a new alternative to AI-powered authoritarianism and blockchain-fueled libertarianism: “plurality.” They hope this idea—of a symbiotic relationship between democracy and collaborative technology—can profoundly influence the century ahead. 

Plurality concludes with a call to action, setting bold targets for the movement it hopes to inspire. By 2030, the authors want the idea of plurality to be as widely recognized in the tech world as AI and blockchain, and as prominent in political discourse as environmentalism. To get there, the pair aim to cultivate a core group of 1,000 deeply engaged advocates, distribute 1 million copies of the book, and build sympathy among 1 billion people. “Frankly, I’m starting to feel like these [goals] maybe are actually under ambitious,” Weyl says.

This isn’t his first attempt at movement-building. Weyl’s first book, Radical Markets, generated huge buzz when it was published in 2018, prompting him to channel that enthusiasm into launching the RadicalxChange Foundation, a nonprofit that seeks to advance the book’s ideas. (Tang and Weyl are both members of the Foundation’s board, along with Vitalik Buterin, the “prince of cryptocurrency” who introduced the pair in 2018.) However, while the Foundation has had some success, it fell far short of the targets Weyl has set for Plurality’s impact on the world. And history is littered with extinct political movements, from Occupy Wall Street to the Arab Spring, that failed to meet their goals. If Weyl thinks his targets are under ambitious, many might think them delusional. 

Weyl is unperturbed. Last time, he didn’t have a plan. With Plurality, he says, he’s taking a more ambitious approach—one that hinges on Tang’s star power. Weyl has enlisted Oscar-winning director Cynthia Wade to shoot a short documentary about Tang’s life and Taiwan’s democratic evolution, with the goal of premiering it at film festivals later this year.

As Hollywood shut down during last year’s strikes, working through footage of Tang has been soothing, says Wade. “When you’re editing a film, you’re living with somebody. So [Tang has] been living in our household for the last quarter,” she says. “There’s a way in which she encourages you to stop and reflect that feels very different, and maybe even more participatory.”

A feature-length biopic is also in the early stages, with Weyl floating the idea of casting trans actor Ian Alexander in the lead role. Tang, characteristically deadpan, offers an alternative: “Sora, maybe,” referring to an unreleased AI system developed by OpenAI that generates videos from text prompts.

This playful exchange captures the duo’s dynamic. Over the course of four hours at Weyl’s house in Arlington, Mass., Weyl earnestly expounds on the book’s ideas and aspirations, while Tang interjects with droll asides. The evangelizing, the ideological battle of the 21st century, the numerical targets in the millions and billions—these all come from Weyl, they say. Tang would never think in those terms, Weyl says, “without me constantly badgering her.”

Tang nods in agreement, seemingly unfazed by the weight of his expectations. Despite embarking on a journey that could—if Weyl’s goals are met—change the course of history, she remains remarkably laid-back. When a friend asked her last year why she was devoting so much time to the book, she replied simply: “Just to make Glen feel better and sleep better.” 

Such serenity is not the most natural quality in a representative for what Weyl hopes might be a century-defining ideology, but it is, perhaps, the reason for the strong reactions Tang provokes. In fact, it may be Tang’s poise, as much as Weyl’s zeal, that gives the plurality movement some hope of achieving its lofty goals.

A New Lawsuit Accuses Spotify of Cheating Songwriters Out of Royalties

Dilara Irem Sancar—Anadolu/Getty Images

Spotify Technology SA used a legalistic word change to justify slicing royalties to musicians and publishers, reducing the revenue on which royalties are based by almost 50%, according to a lawsuit filed by the group that collects their payments.

The change came in March when Spotify added the word “bundled” to its description of its $10.99-a-month music streaming service, the Mechanical Licensing Collective said in its complaint. Nothing else “about the Premium service has actually changed,” according to the suit filed Thursday in federal court in Manhattan.

[time-brightcove not-tgx=”true”]

The collective is legally barred from disclosing how much Spotify’s royalty payments have declined since March, but it cited a Billboard story that estimated the loss would amount to about $150 million next year. 

Spotify said it looks forward to “swift resolution” of the lawsuit, which it said concerns terms that publishers and streaming services “agreed to and celebrated years ago.”

“Bundles were a critical component of that settlement, and multiple DSPs include bundles as part of their mix of subscription offerings,” a Spotify spokesperson said in a statement. “Spotify paid a record amount to publishers and societies in 2023 and is on track to pay out an even larger amount in 2024.”

The fight over bundling between the streaming service and publishers has spilled into a dispute over other issues.

The National Music Publishers’ Association on Wednesday sent a cease-and-desist letter to Spotify over products it claims are infringing on songwriters’ copyrights. The NMPA alleges that music videos, lyrics and podcasts on the platform are all using copyrighted music without the proper permissions.

“Before Spotify’s ‘bundling’ betrayal, we may have been able to work together to fix this problem, but they have chosen the hard road by coming after songwriters once again,” David Israelite, chief executive officer at the NMPA, said in a statement.

In response, a Spotify spokesperson called the letter a “press stunt filled with false and misleading claims.” 

Music and audiobook streaming companies like Spotify pay musicians and music publishers under a complex system set out by the Music Modernization Act of 2018. Under the system, streaming services pay less per stream—in other words, less to creators and publishers—when their services are classified as bundles.

Spotify’s Premium service, which was not classified as a bundle before March 1, includes unlimited music downloads and 15 hours of audiobooks. It added the audiobook offering in November in the U.S. without changing the $10.99 price.
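
The arithmetic at the center of the dispute can be illustrated with toy numbers. The figures below are purely hypothetical (they are not the statutory rates, Spotify’s actual accounting, or the numbers in the complaint), but they show how carving a non-music component out of an unchanged subscription price shrinks the base on which royalties are computed.

```python
# Toy illustration of why reclassifying a subscription as a "bundle" shrinks
# the revenue base used to compute mechanical royalties. All numbers are
# hypothetical and are not the statutory rates or the lawsuit's figures.
subscription_price = 10.99   # monthly price, unchanged when audiobooks were added
royalty_rate = 0.15          # hypothetical headline rate applied to the base

standalone_base = subscription_price          # standalone music service: full price
audiobook_component = 5.00                    # hypothetical value carved out for audiobooks
bundled_base = subscription_price - audiobook_component

print(f"standalone royalty pool per subscriber: ${standalone_base * royalty_rate:.3f}")
print(f"bundled royalty pool per subscriber:    ${bundled_base * royalty_rate:.3f}")
print(f"reduction in royalty base: {1 - bundled_base / standalone_base:.0%}")
```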

The licensing collective is asking the court to order Spotify to stop classifying Premium as a bundled service and to pay it for lost revenue.

Israelite praised the Mechanical Licensing Collective for “not letting Spotify get away with its latest trick to underpay creators.”

Reddit Partners With OpenAI to Bring Content to ChatGPT and AI Tools to Reddit

Reddit

Reddit Inc. forged a partnership with OpenAI that will bring its content to the chatbot ChatGPT and other products, while also helping the social media company add new artificial intelligence features to its forums.

Shares of Reddit, which had their initial public offering in March, jumped as much as 15% in late trading following the announcement.

The agreement “will enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics,” the companies said Thursday in a joint statement. The deal allows OpenAI to display Reddit’s content and train AI systems on its partner’s data.

[time-brightcove not-tgx=”true”]

Reddit will also offer its users new AI-based tools built on models created by OpenAI, which in turn will place ads on Reddit. Financial terms of the deal weren’t disclosed.

Reddit content has long been a popular source of training data for making AI models—including those of OpenAI. Last week, Reddit released new policies governing the use of its data, part of an effort to increase revenue through licensing agreements with artificial intelligence developers and other companies.

“Our data is extremely valuable,” Chief Executive Officer Steve Huffman said at the Bloomberg Technology Summit earlier this month. “We’re seeing a ton of interest in it.”

Finding new moneymaking opportunities was part of Reddit’s pitch in the lead-up to its IPO. The company also signed an accord in January with Alphabet Inc.’s Google worth $60 million to help train large language models, the technology underpinning generative AI.

Huffman previously declined to discuss the specifics of the Google deal but said typical terms can govern how long a Reddit summary can show up in a Google search or whether a licensee has to display Reddit branding in AI-generated results. The San Francisco-based social network has signed licensing deals worth $203 million in total, with terms ranging from two to three years, and has been in talks to strike additional licensing agreements. 

OpenAI, for its part, is increasingly forging partnerships with media companies to help train its AI systems and show more real-time content within its chatbot. The ChatGPT maker also penned deals with Dotdash Meredith earlier this month and the Financial Times in April.

Read More: OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive

Backed by Microsoft Corp., the startup has emerged as a driving force in the development of AI. Sam Altman, CEO of OpenAI, has a long history with Reddit. He was one of the company’s largest shareholders at the time of its IPO earlier this year and briefly served as Reddit’s interim CEO in 2014.

The companies noted in the statement that their partnership was led by OpenAI Chief Operating Officer Brad Lightcap and was approved by its independent directors.

The shares of Reddit, which had declined 5.5% to $56.38 in regular New York trading Thursday, soared as high as $64.75 after the partnership was announced. The stock has gained 66% since its IPO.

How to Hit Pause on AI Before It’s Too Late

16 May 2024 at 15:22
Demonstrator holding "No AI" placard

Only 16 months have passed, but the release of ChatGPT back in November 2022 already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are pouring into AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses these large language models. Our world, and in particular the world of AI, has decidedly changed.

But the real prize of human-level AI—or artificial general intelligence (AGI)—has yet to be achieved. Such a breakthrough would mean an AI that can carry out most economically productive work, engage with others, do science, build and maintain social networks, conduct politics, and carry out modern warfare. The main constraint for all these tasks today is cognition. Removing this constraint would be world-changing. Yet many across the globe’s leading AI labs believe this technology could be a reality before the end of this decade.

[time-brightcove not-tgx=”true”]

That could be an enormous boon for humanity. But AI could also be extremely dangerous, especially if we cannot control it. Uncontrolled AI could hack its way into online systems that power so much of the world, and use them to achieve its goals. It could gain access to our social media accounts and create tailor-made manipulations for large numbers of people. Even worse, military personnel in charge of nuclear weapons could be manipulated by an AI to share their credentials, posing a huge threat to humanity.

It would be a constructive step to make it as hard as possible for any of that to happen by strengthening the world’s defenses against adverse online actors. But against an AI that can persuade humans, something it is already better at than we are, there is no known defense.

For these reasons, many AI safety researchers at AI labs such as OpenAI, Google DeepMind and Anthropic, and at safety-minded nonprofits, have given up on trying to limit the actions future AI can take. They are instead focusing on creating “aligned” or inherently safe AI. Aligned AI might get powerful enough to be able to exterminate humanity, but it should not want to do this.

There are big question marks about aligned AI. First, the technical part of alignment is an unsolved scientific problem. Recently, some of the best researchers working on aligning superhuman AI left OpenAI in dissatisfaction, a move that does not inspire confidence. Second, it is unclear what a superintelligent AI would be aligned to. If it was an academic value system, such as utilitarianism, we might quickly find out that most humans’ values actually do not match these aloof ideas, after which the unstoppable superintelligence could go on to act against most people’s will forever. If the alignment was to people’s actual intentions, we would need some way to aggregate these very different intentions. While idealistic solutions such as a U.N. council or AI-powered decision aggregation algorithms are in the realm of possibility, there is a worry that superintelligence’s absolute power would be concentrated in the hands of very few politicians or CEOs. This would of course be unacceptable for—and a direct danger to—all other human beings.

Read More: The Only Way to Deal With the Threat From AI? Shut It Down

Dismantling the time bomb

If we cannot find a way to at the very least keep humanity safe from extinction, and preferably also from an alignment dystopia, AI that could become uncontrollable must not be created in the first place. This solution, postponing human-level or superintelligent AI for as long as safety concerns remain unsolved, has the downside that AI’s grand promises—ranging from curing disease to creating massive economic growth—will need to wait.

Pausing AI might seem like a radical idea to some, but it will be necessary if AI continues to improve without us reaching a satisfactory alignment plan. When AI’s capabilities reach near-takeover levels, the only realistic option is that labs are firmly required by governments to pause development. Doing otherwise would be suicidal.

And pausing AI may not be as difficult as some make it out to be. At the moment, only a relatively small number of large companies have the means to perform leading training runs, meaning enforcement of a pause is mostly limited by political will, at least in the short run. In the longer term, however, hardware and algorithmic improvements could make a pause harder to enforce. Enforcement between countries would be required, for example with a treaty, as would enforcement within countries, with steps like stringent hardware controls. 

In the meantime, scientists need to better understand the risks. Although there is widely-shared academic concern, no consensus exists yet. Scientists should formalize their points of agreement, and show where and why their views deviate, in the new International Scientific Report on Advanced AI Safety, which should develop into an “Intergovernmental Panel on Climate Change for AI risks.” Leading scientific journals should open up further to existential risk research, even if it seems speculative. The future does not provide data points, but looking ahead is as important for AI as it is for climate change.

For their part, governments have an enormous part to play in how AI unfolds. This starts with officially acknowledging AI’s existential risk, as has already been done by the U.S., U.K., and E.U., and setting up AI safety institutes. Governments should also draft plans for what to do in the most important, thinkable scenarios, as well as how to deal with AGI’s many non-existential issues such as mass unemployment, runaway inequality, and energy consumption. Governments should make their AGI strategies publicly available, allowing scientific, industry, and public evaluation.

It is great progress that major AI countries are constructively discussing common policy at biannual AI safety summits, including one in Seoul from May 21 to 22. This process, however, needs to be guarded and expanded. Working on a shared ground truth on AI’s existential risks and voicing shared concern with all 28 invited nations would already be major progress in that direction. Beyond that, relatively easy measures need to be agreed upon, such as creating licensing regimes, model evaluations, tracking AI hardware, expanding liability for AI labs, and excluding copyrighted content from training. An international AI agency needs to be set up to guard execution.

It is fundamentally difficult to predict scientific progress. Still, superhuman AI will likely impact our civilization more than anything else this century. Simply waiting for the time bomb to explode is not a feasible strategy. Let us use the time we have as wisely as possible.
