OpenAI has been hit with more than a dozen high-profile lawsuits and government investigations since Silverman’s complaint. Top authors including Jodi Picoult and media companies including the New York Times have also alleged that the company violates copyright law by training the algorithms that power popular services like ChatGPT on their work. Billionaire Elon Musk sued OpenAI for diverging from its original nonprofit mission. And government agencies in the United States and Europe have opened multiple regulatory probes into whether the company ran afoul of competition, securities and consumer protection laws.
“It might be a good thing that ChatGPT could be a lawyer because a lot of people are taking its a** to court,” Silverman said during a November segment on Comedy Central’s “The Daily Show.”
Under siege, OpenAI is turning to some of the world’s top legal and political human minds. It has hired about two dozen in-house lawyers since March 2023 to work on issues including copyright, according to a Washington Post analysis of LinkedIn. The company has posted a job for an antitrust lawyer — with a salary of up to $300,000 — to handle the increasing scrutiny in the United States and Europe of its partnership with Microsoft. It has also retained some of the top U.S. law firms, including Cooley and Morrison Foerster, to represent it in key cases.
OpenAI is in advanced talks to hire Chris Lehane, a former press secretary for Al Gore’s presidential campaign and the architect of Airbnb’s public policy efforts, according to a person familiar with the matter, who spoke on the condition of anonymity to describe sensitive talks. OpenAI plans in the coming months to lean heavily into the idea that U.S. AI companies are a bulwark against China, supporting American economic and national security interests against an increasingly aggressive foreign power — a strategy once deployed by Facebook parent Meta in an effort to align more closely with the Trump White House.
Lehane positioned Airbnb as supporting the aspirations of everyday entrepreneurs, amid heated regulatory disputes with cities across the country. In another sign of OpenAI’s maturing political strategy, the company joined the industry trade group TechNet this year.
The rapid expansion underscores a new reality: OpenAI is at war.
The company is playing defense amid a rush of lawsuits, investigations and potential legislation that threaten its goal of building the world’s most powerful AI. The posture is a dramatic shift from just a year ago, when Washington lawmakers were enamored with the potential of ChatGPT and the political acumen of the company’s CEO, Sam Altman.
“Everyone thinks of us as Big Tech,” said Che Chang, OpenAI’s general counsel. But Chang argues the company isn’t far from start-up mode, adding that in 2022, it had just 200 employees.
Now OpenAI has about 1,000 employees total, he said, and the legal team has been part of that rapid growth. He jokes that he’s aged a few years in the months since ChatGPT was released but calls the increased legal challenges “relatively commensurate to the impact we have had on the world.”
“I am empathetic to the point that a lot of people say, ‘Look, I was just minding my own business and this AI revolution happened,’” Chang said. “Naturally, there’s going to be some negativity coming out of that.”
Such an evolution is part of a pattern in Silicon Valley, where companies initially celebrated for their technological achievements ultimately face legal and political backlash for the perilous downsides of their products.
“Congratulations, you’re in the big leagues,” said Bradley Tusk, Uber’s first political adviser and a fixer for start-ups in heavily regulated industries. “They are the market leaders in this completely revolutionary thing, which is very exciting but also means it’s going to be controversial for a really long time.”
But even for the fast-moving tech world, OpenAI’s evolution happened quickly. Other companies’ products were available for many years or even decades before they attracted the eye of Washington regulators or legal challenges from celebrities and legacy companies. It has been less than 18 months since the release of ChatGPT.
Apple’s iPhone empire expanded with little intervention for almost 17 years until last month, when the Justice Department brought a lawsuit alleging it wielded an illegal monopoly over phones. Google was 22 years old when the agency hit the company with its first landmark antitrust case in 2020. Even Facebook — with a notoriously fraught relationship with Washington lawmakers — launched on college campuses 13 years before its Cambridge Analytica scandal and fallout from the 2016 election sullied its reputation.
OpenAI has had mixed success so far in the copyright suits. A judge dismissed many of the claims in Silverman’s lawsuit, but she allowed some key allegations over whether OpenAI copied the comedian’s and other authors’ work to stand. Silverman and the authors refiled their complaint last month.
As the copyright cases proceed, OpenAI is also embroiled in litigation with its co-founder and now competitor, Musk. He sued the company this year, alleging it has diverged from its nonprofit mission. He sought a court order requiring OpenAI to follow its “long-standing practice of making AI research and technology developed at OpenAI available to the public” rather than keeping it proprietary.
The company’s gloves are off. OpenAI responded by publishing old emails it said show that Musk sought control over the start-up and attempted to merge it with his car company, Tesla. In a court filing last week, OpenAI asked a judge to dismiss the billionaire’s claims, calling his lawsuit “150 paragraphs of self-congratulation and revisionist history.”
OpenAI is also at the center of several regulatory investigations, which have forced the company to spend even more on legal support. The Securities and Exchange Commission is looking into whether investors were misled during the chaotic period when Altman briefly left the company. The Federal Trade Commission is probing whether it ran afoul of consumer protection laws in a number of areas, including a data leak and ChatGPT’s inaccurate claims. And the commission has had talks with the Justice Department about which agency should probe its multibillion-dollar partnership with Microsoft, amid concerns that such deals are dampening competition in the quickly evolving AI market.
Anna Makanju, the company’s global affairs chief, said in a Washington Post Live interview that the growing regulatory scrutiny of the company should be in some ways “reassuring” because it shows governments already have a number of mechanisms to address the challenges presented by artificial intelligence.
“There is sometimes a sentiment that because this technology is new, we’re totally unprepared and there are no ways to really keep it under control,” she said. “There are quite a few regulators that already do have the authority to take action against AI-generated harms.”
Meanwhile, governments around the world are increasingly crafting laws to respond to AI. Last month, the European Union passed its AI Act, which will put new guardrails on the technology in the coming years. Similar efforts lag in the United States, but a bipartisan group of senators is expected to release a plan to create AI legislation in the near future. Chang says he’s optimistic that more guidance from policymakers could help answer some of the legal questions confronting the industry now.
“This is the initial crescendo of loud response,” he said. “It will never go away, but I think the initial shock and awe will calm down a little bit.”