AI Companies Are Facing Increasing Risk of Litigation and Regulatory Oversight.

Enrico Schaefer - July 14, 2023 - Artificial Intelligence


Is your AI company protected against the primary legal risks, including those unique to artificial intelligence? An attorney specializing in AI company representation can help you identify and reduce legal risks that could put you out of business. Some current lawsuits, class actions, and regulatory actions are discussed below, but the floodgates of liability are just beginning to open.

Is Your AI Startup Headed for Company-Killing Litigation?

If you are an AI developer, service company, or app provider, I want you to be aware of something critical to your success or failure. Every new and emerging technology has an early period where anything goes. The regulatory agencies have not caught up to the technology, and the lawyers have not started filing lawsuits yet. After the initial hype wears off, we always see a drastic uptick in regulatory action and litigation against any successful new technology. We saw this with blockchain. We saw it with software as a service. We will soon see it with artificial intelligence.

Lawsuits and Regulatory Actions Against AI Companies Are Ramping Up.

Several lawsuits and regulatory actions have been filed against the more significant AI players, including OpenAI. While smaller AI companies and startups can hide in the shadows for now, they should not expect this grace period to last. Every AI startup must work with experienced AI attorneys to identify and reduce risks across their contracts, corporate structures, employees, contractors, and vendors. Ensuring that your website agreements and software-as-a-service (SaaS) agreements are reviewed by lawyers who understand AI is critical. Trademark infringement, copyright infringement, defamation, data privacy, and other legal issues must be considered as well.

FTC is investigating ChatGPT-maker OpenAI for possible consumer harm.

This CNBC article reveals that OpenAI, the mastermind behind ChatGPT, is now in the Federal Trade Commission's (FTC) crosshairs. The FTC is digging deep, questioning whether OpenAI has overstepped the boundaries of consumer protection laws. The focus is whether OpenAI has been playing fast and loose with privacy or data security practices, or whether it has been engaging in practices that could harm consumers, including damaging their reputations.

This investigation is part of a more substantial, complex puzzle – understanding the far-reaching implications of artificial intelligence, particularly generative AI, which feeds on colossal datasets to learn. The FTC and other agencies are flexing their legal muscles, reminding everyone they have the authority to chase down any harm birthed by AI.

The FTC's Civil Investigative Demand (CID) seeks answers from OpenAI: a list of third parties with access to its large language models, the names of its top ten customers or licensors, and an explanation of how it handles consumer information. The CID also asks for a detailed account of how OpenAI sources the information used to train its models, evaluates risk, and monitors and handles potentially misleading or damaging statements about individuals.

This investigation is a glaring sign of the intensifying regulatory scrutiny AI companies now face. For AI companies, this is a wake-up call: they need to ensure their data privacy and security practices are rock-solid and that their operations are as transparent as glass. It is also a reminder to keep a finger on the pulse of the legal landscape and potential liabilities, especially as regulators become more assertive in their oversight of this rapidly evolving technology.

To reduce risk, AI companies should consider conducting thorough audits of their data practices, implementing ironclad data governance policies, and fostering open dialogue with regulators. They should also think about pouring resources into research and development to enhance the safety and alignment of their AI systems, and be brutally honest about the limitations of their technology.
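To make the "audit your data practices" advice concrete, here is a minimal, hypothetical sketch (in Python) of one small audit step: flagging records that appear to contain personal information before they enter a training set. The patterns, function names, and thresholds are illustrative assumptions, not a compliance tool, and nothing here is legal advice.

```python
import re

# Hypothetical sketch of one small data-practices audit step: flagging
# records that appear to contain personal information (emails, US phone
# numbers, SSN-like strings) before they are used as training data.
# The patterns below are illustrative assumptions, not a complete or
# authoritative PII detector.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_record(text: str) -> list[str]:
    """Return the names of the PII patterns found in a single record."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def audit_dataset(records: list[str]) -> dict[int, list[str]]:
    """Map record index -> detected PII types, for human review or redaction."""
    findings = {}
    for i, record in enumerate(records):
        hits = audit_record(record)
        if hits:
            findings[i] = hits
    return findings

if __name__ == "__main__":
    sample = [
        "Contact me at jane.doe@example.com or 555-867-5309.",
        "Nothing sensitive here.",
    ]
    print(audit_dataset(sample))  # {0: ['email', 'us_phone']}
```

A real audit covers far more (retention, consent, vendor access, model outputs), but even a crude scan like this forces a company to document what personal data it touches, which is the kind of accounting the FTC's CID contemplates.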

The courtroom battles are just starting in the generative AI legal Wild West.

The CNBC article highlights the escalating legal showdowns in the wild frontier of generative AI. As AI technology evolves and proliferates, it is sparking a wildfire of copyright infringement lawsuits. The heart of the matter is this: AI, with tools like OpenAI's DALL-E and ChatGPT leading the charge, can whip up creative content (art, music, writing) that is causing a stir among creators who fear their copyrighted work is being used without their say-so.

The legal battlefield is already teeming with action. Getty Images has thrown down the gauntlet, accusing Stability AI of swiping 12 million images without asking or paying a dime. Stability AI, DeviantArt, and Midjourney are also caught in the crossfire of a lawsuit that argues their use of AI tramples on the rights of millions of artists. Prisma Labs, the brains behind the Lensa app, is staring down a lawsuit alleging it unlawfully nabbed users' biometric data. TikTok recently waved the white flag and settled a lawsuit with voice actress Bev Standing, who argued the company used her voice without her green light for its text-to-speech feature.

The article also points out a growing divide. While tech companies are singing the praises of generative AI, media companies and creators are sounding the alarm about their copyrighted work being hijacked. The legal skirmishes are heating up, and experts are betting their bottom dollar that more are on the horizon.

When it comes to dodging risk, AI companies need to open their eyes to the potential legal fallout of their technology. They need to ensure they are using large language models and text-to-image generators in ways that respect data protection laws. They should also think about cutting a check to human creators whose intellectual property is used in the development of generative AI models, following in the footsteps of Shutterstock.

The article highlights the importance of AI companies staying on top of the shifting legal landscape and potential liabilities. As the use of AI continues to skyrocket, AI companies must understand and respect copyright laws and data protection regulations to sidestep potential legal landmines.

Don’t be fooled. The FTC is already enforcing current regulations against AI companies.

The reality is, AI is regulated. Here are just a few examples:

Unfair and deceptive trade practices laws apply to AI. The FTC's Section 5 jurisdiction extends to companies making, selling, or using AI. If a company makes a deceptive claim using (or about) AI, that company can be held accountable. If a company injures consumers in a way that satisfies the FTC's test for unfairness when using or releasing AI, that company can be held accountable.

Civil rights laws apply to AI. If you’re a creditor, look to the Equal Credit Opportunity Act. If you’re an employer, consider Title VII of the Civil Rights Act. If you’re a housing provider, look to the Fair Housing Act.

Tort and product liability laws apply to AI. There is no AI carve-out to product liability statutes, nor is there an AI carve-out to common law causes of action.

Contact an Attorney Who Understands AI.

We have been representing new and emerging technology companies since 1992, when the new and emerging technology was the internet. We understand cloud, blockchain, and AI technologies, which allows us to provide expert representation to AI startups, software-as-a-service companies, platform-as-a-service companies, and emerging-growth artificial intelligence companies. Feel free to contact one of our AI lawyers to learn more. AI Attorney, Enrico Schaefer.

Get in Touch

We’re here to field your questions and concerns. If you are a company able to pay a reasonable legal fee each month, please contact us today.