
Why Generative AI Is Hot Despite Ethical and Safety Concerns


  • The field of generative AI, or AI that can create content, has recently exploded in popularity.
  • These models can generate text, images, videos, code, and even suggested responses to customer issues.
  • However, many observers raise ethical and safety concerns, as well as questions about business-model viability.

This year, four generative-AI startups alone have raised over $370 million in capital, with three of them achieving unicorn status as they explode onto the tech scene and garner the fascination of investors and consumers alike.

The companies — Cresta, Adept AI, Stability AI, and Jasper — are up-and-comers in the field of generative AI, an emerging space whose billion-dollar valuations and massive rounds stand in stark contrast to the broader VC pessimism of 2022.

But what is generative AI, and why are investors clamoring for a spot in its white-hot rounds even as they wrestle with the industry's existential ethical and safety concerns?

Generative AI’s rise to popularity

Simply put, generative AI refers to artificial intelligence that can create content.

“The model is not trying to classify an image, like if it’s a cat or a dog, or tell you a prediction of like, hey, what’s your customer-churn journey looking like?” Kanu Gulati, a partner at Khosla Ventures, said. “This is actually just generating content from scratch.”

Tech advancements over the past few years — specifically transformer and diffusion models — have made generative AI possible. An AI model is the result of training an algorithm on a dataset; once trained, it can be deployed to make predictions or generate content.

Transformer models understand context, making them ideal for generating nuanced human language. Diffusion models function by adding noise to images, then learning to reverse that process to recreate the original image. These models can then apply this learned denoising to pure noise, gradually removing it until they have created the desired image. This process is what helps companies like Stability AI generate pictures.
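The add-noise-then-reverse idea can be sketched numerically. Below is a minimal Python illustration using a toy one-dimensional "image" and a hand-picked noise schedule — all names and values here are illustrative assumptions, not any company's actual model. A real diffusion model trains a neural network to predict the added noise; this sketch reuses the true noise to show that, given a perfect noise prediction, the original image is recoverable in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a 1-D array standing in for pixel values.
image = np.linspace(0.0, 1.0, 8)

# Assumed noise schedule: 50 steps of increasing noise strength.
betas = np.linspace(1e-4, 0.2, 50)

def forward_noise(x, t, betas):
    """Forward diffusion: blend the signal toward Gaussian noise.

    Uses the closed-form expression for the noised sample at step t,
    controlled by the cumulative product of (1 - beta).
    """
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    noise = rng.standard_normal(x.shape)
    x_t = np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * noise
    return x_t, noise

# Noise the image all the way to the final step.
x_noisy, eps = forward_noise(image, len(betas) - 1, betas)

# A trained model would *predict* `eps` from `x_noisy`; assuming a
# perfect prediction, the original signal falls out algebraically:
alpha_bar = np.prod(1.0 - betas)
x_recovered = (x_noisy - np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alpha_bar)

assert np.allclose(x_recovered, image)
```

In practice the reversal happens step by step rather than in one shot, and the network's noise estimate is imperfect at each step — which is why generation is iterative and why starting from pure noise can yield a brand-new image rather than a reconstruction.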

The generative-AI hype started in the consumer space with users flocking to online communities to post AI-generated images ranging from Shrek’s high-school yearbook photo to a medieval tapestry depicting a cow’s alien abduction.

In recent times, startups have applied this foundational tech to more practical use cases.

In the AI-generated image space, companies like OpenAI, Stability AI, Midjourney, and Craiyon abound, while text-generating startups like Jasper and Copy.ai help writers create blog posts and headlines. Other startups produce video avatars, edit photos and videos, write code, and create customer-support responses.

For Saam Motamedi, a partner at the venture-capital firm Greylock, some of the most exciting advancements come from multimodal AI, or AI that works across multiple modes like text, images, and video, which he said will make “workers more productive.”

Motamedi put real capital behind that bet by investing in Adept AI, a startup creating a “system that can do anything a human can do in front of a computer.” Practically, this means that users can instruct AI to do tasks ranging from finding a four-bedroom house to creating a profit column in Excel.

Removing the rose-colored glasses

Despite their bullishness on the space, Gulati and Motamedi agree that generative AI brings a host of ethical and safety issues too.

Motamedi organizes his concerns into three buckets: those involving a malicious actor, those without one, and those concerning ownership.

For the first group, individuals can insidiously “poison” training data to distort the AI’s output or even use AI to impersonate others. 

“There’s a company we invested in, Resemble AI, that actually generates voices,” Bryan Rosenblatt, a partner at Craft Ventures, said. “God forbid, you see a message from the president saying something that they didn’t say — that’s scary.”

Even when there are no malicious actors, biases in data that AI is trained on can lead to biased outputs — for instance, a model that spits out a photo of a white male when asked to produce an image of a CEO, Gulati said.

Furthermore, high-profile lawsuits like Thaler v. Vidal, which asked whether an AI system can be named as a patent inventor, raise questions around who gets credit for AI-generated work.

And though investors and founders assure consumers and creators that AI will be used to augment — not replace — human capabilities, many still wonder if the jobs of artists, marketers, and developers are at stake.

For Gulati, Motamedi, and Rosenblatt, who have all invested in the space, the key is finding startups that incorporate safety guardrails at every step of their development process, like OpenAI's content filter, or that are actively building defense tools, like Resemble AI's fake-speech detector.

For other investors, concerns center around the viability of generative-AI startups as venture-backable enterprise-focused businesses.

“Where I see long-term budget in enterprise is usually around the CFO, the CISO, the head of HR, the CTO, VP sales, VP finance,” Anna Khan, a general partner at CRV, said. “I don’t see generative AI today through its early use cases demanding any of those budgets quite yet.”

Khan added that she’s not yet convinced that generative AI products are compelling enough to garner large contract sizes and usage expansion, but she’s keeping an eye out for product traction among skeptical non-tech companies.

In the meantime, investors expect both businesses and consumers to continue experimenting with the tech.

“Everyone can be a user, everyone can experience the magic of it, and everyone has that ‘aha’ moment,” Rosenblatt said. “There’s virality just built into it.”


