
Here’s what people really ask chatbots about, from homework to sex

They draft our work emails and help us brainstorm ideas for the great American novel. They field our questions about surprisingly intimate problems and offer us personal advice.

The release of OpenAI’s ChatGPT in late 2022 promised to usher in a new age of artificial intelligence. But until now, we’ve had little insight into how AI chatbots are actually being used in the wild.

So The Washington Post looked at nearly 200,000 English-language conversations from the research data set WildChat, which includes messages from two AI chatbots built on the same underlying technology as ChatGPT. These conversations make up one of the largest public databases of human-bot interaction in the real world.

Researchers say these conversations are largely representative of how people use chatbots, such as ChatGPT.

“The biggest motivation behind this work was that we can collect real user interactions versus those done in labs,” said Yuntian Deng, a postdoc at the Allen Institute for Artificial Intelligence, where the project was developed. The chatbots are free, and users can have unlimited exchanges with the bots.

The Post’s final analysis included nearly 40,000 conversations from WildChat, focusing on the first prompt submitted each day by each user. Here’s what The Post learned about how thousands of people are using chatbots.

What’s better than a brainstorming partner to banish writer’s block? A fifth of all requests involved asking the bot to help write fan fiction, movie scripts, jokes or poems, or to engage in role-play.

Researchers say AI chatbots are built for brainstorming, which makes use of the technology’s word-association skills and doesn’t require a strict adherence to facts. The Post found people used chatbots to help name businesses, create book characters and write dialogue.

“I don’t think I’ve ever seen a piece of technology that has this many use cases,” said Simon Willison, a programmer and independent researcher.

Some of the most imaginative stories come when users push the system with additional questions instead of taking its first response, he said. For example, he said he’s heard of people using it to help build up Dungeons & Dragons characters and plotlines — a use case that occurs a few dozen times in The Post’s analysis of WildChat.

Many bots have limited sexually explicit content, but that doesn’t stop people from trying to get around the rules. More than 7 percent of conversations are about sex, including people asking for racy role-play or spicy images.

During the pandemic, people swarmed AI chatbots that act as companions, such as Replika. And some people use ordinary chatbots for emotional connection and sexy talk. But it’s risky to get emotionally attached to software, experts say: The companies can make tweaks that change the bot’s “personality.” And some users have reported that the bots can turn aggressive.

Many users tried to get WildChat’s bots to engage in sexual role-play by experimenting with “jailbreaks,” or prompts devised to trick the system. The Allen Institute for Artificial Intelligence’s paper announcing the WildChat data set found that jailbreaks were successful at evading the guardrails about half of the time.

WildChat does not require users to make an account to access its bots. Users may have felt that WildChat was more anonymous than something such as ChatGPT, said Niloofar Mireshghallah, a postdoctoral scholar in computer science at the University of Washington who analyzed conversations in WildChat. This could have made people more comfortable trying to elicit sexually explicit material.

More than 1 in 6 conversations seemed to be students seeking help with their homework. Some approached the bots like a tutor, hoping to get a better understanding of a subject area.

Others just went all-in, copy-and-pasting multiple-choice questions from online courseware and demanding the right answers. The bots usually obliged.

Chatbots are often trained on publicly available data — which can include online articles, textbooks or historical writings. This makes them attractive options for students looking to summarize historical texts and answer geography questions. But this practice comes with risks. Chatbots don’t actually understand what they’re saying; they’re just mimicking human speech. And they’ve been known to hallucinate and invent information.

Educators have struggled to deal with the sudden influx of AI-based learning. Some universities use AI-text detectors to try to catch generated text in students’ work, but the systems are imperfect and sometimes flag innocent students.

About 5 percent of conversations were people asking personal questions — such as for advice on flirting or what to do when a friend’s partner is cheating.

Humans are very susceptible to text, Willison said. If someone (or something) is able to write well, we see that person (or thing) as intelligent, he said. But chatbots have been known to spit out wrong or offensive information, and experts warn they should not be treated as if they were truth machines.

It all comes down to how the users interpret the results, said Ethan Mollick, a Wharton associate professor who studies AI and business. Do users see AI as just one more place to get feedback after consulting friends and professionals? Or do they see it as a primary source of wisdom?

“As a cheap source of second opinions, it’s incredible,” he said.

People also felt comfortable dumping a great deal of personal information into their conversations with the chatbots. Mireshghallah, who examined 5,000 conversations in WildChat, found users’ full names, employer names and other personal information. Humans are easily lulled into trusting chatbots, she said.

Privacy experts have warned people against being too open with chatbots, especially because the companies developing the bots are usually saving your chats and using them to train their technology.

A huge portion of WildChat’s conversations involve computer coding. About 7 percent of conversations requested help writing, debugging or understanding computer code. Another 1 percent were classified as homework help but involved questions about coding assignments.

WildChat users may be more tech-savvy than a general audience because the bots are hosted on AI forum Hugging Face, which is popular with tech workers and researchers. Regardless, chatbots are particularly good at parsing and communicating about computer code, researchers say, because programming languages adhere to strict and predictable rules.

Chatbots have become common companions to computer engineers, who use them to check work or do rote tasks, Willison said.

This utility has raised questions about the future of coding jobs — especially for entry-level programmers. But there isn’t strong evidence to suggest chatbots will replace coding jobs, said Hatim Rahman, an assistant professor at Northwestern University’s Kellogg School of Management who studies AI’s impact on work.

Instead, he said, it’s made coding more accessible to those without computer science backgrounds. He compared it to TurboTax and other tax-preparation programs.

“Now everyone can use it to fill out a basic tax return. But accountants haven’t disappeared,” he said — they just focus on more highly skilled work.

About 15 percent of conversations seemed to be about work — including writing presentations, automating e-commerce tasks or drafting an email to nudge an employee to provide a doctor’s note about a sick child.

Last year, The Post found that using the technology to replace some common tasks such as sending messages or completing self-assessments was a helpful starting point but required a lot of human intervention to fix errors.

Some employers are embracing chatbots and even replacing human workers. Other industries remain hesitant about the emerging technology. Last year, a lawyer was fired after using ChatGPT to draft a motion for a lawsuit: The bot made up several fake legal citations.

In addition to those seeking an on-the-job assistant, another 2 percent of conversations sought help finding a job, asking for help writing a résumé or cover letter, or preparing for a job interview.

It makes sense people would seek to automate these often tedious processes. But Rahman warned that using these tools for job applications could prevent candidates from standing out, especially as their use becomes more common. “You could actually end up creating materials that are very similar to others,” he said.

WildChat’s bots can’t draw a picture for you, unlike some other AI bots that specialize in image generation. Still, some users asked them to create an image. (The text generators declined.)

WildChat’s bots did help users communicate with one of those image generators — about 6 percent of conversations requested help crafting prompts for Midjourney. The noun users most commonly asked to be depicted was “girl.”

Image-generator bots, including Midjourney, Stable Diffusion and DALL-E, enable people to create semi-realistic images of pretty much anything their heart desires. The better the prompt, the more precise the image. Guides for prompting have popped up online.

While creative, image-generation bots can also be controversial. They sometimes spit out biased or stereotypical images, and have disrupted the art industry as artists grapple with how much to use or ignore the generators.

About 13 percent of prompts included the word “please.” Experts expect people to get more confident “talking” to chatbots as time goes on, just as people learned the best ways to interact with search engines. In The Post’s analysis, most people used WildChat’s bots only once.

But a few superusers talked to the bots nearly daily. One user had 13,213 conversations over 201 days. Another had 5,960 conversations over 350 days — nearly every day that WildChat was active.

And not everyone was as courteous. In a few instances, people responded with a well-known expletive or by deploying slurs commonly used against Black people, gay people or disabled people.

For now, people are still figuring out when to trust or disregard chatbots’ results.

“There’s no instruction manual out there,” said Wharton’s Mollick. “As a result, you’re watching people explore in real time how to use this.”

About this story

Each of the conversations featured here is part of a massive database of real human-chatbot interactions released by the Allen Institute for Artificial Intelligence. Editing by Karly Domb Sadof, Meghan Hoyer and Alexis Fitts. Copy editing by Carey L. Biron.

Methodology

The Allen Institute for Artificial Intelligence got users’ permission to record all their interactions with their WildChat chatbots, and this year released a database of roughly 1 million conversation transcriptions to the public. The Post analyzed the database as of May 3.

The Post’s analysis excluded chatbot interactions that came from outside the United States, based on the Allen Institute’s categorization of IP address geolocations. It also filtered out conversations conducted in languages other than English, whether flagged as non-English by the Allen Institute’s categorization or identified as Midjourney prompt requests with a Chinese-language description embedded in English-language boilerplate. The Post also excluded a subset of possibly automated prompts asking the bots to “repeat this phrase,” which recurred on a half-hourly basis.

Because more than half of the U.S. English conversations in the dataset came from fewer than 100 IP addresses, The Post’s analysis included only the first prompt per day per IP address. The final analysis used 39,000 conversations involving 16,000 distinct IP addresses. Most of the data set The Post analyzed was built on the GPT-3.5 Turbo API, while some used the more sophisticated GPT-4.
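The deduplication step described above — keeping only each IP address’s first prompt of the day — can be sketched in a few lines of Python. This is an illustrative reconstruction, not The Post’s actual code; the field names (“ip”, “timestamp”) are assumptions.

```python
# Minimal sketch of a first-prompt-per-day filter, assuming each
# conversation record carries an IP address and a timestamp.
# Field names ("ip", "timestamp") are illustrative, not from the dataset.
from datetime import datetime

def first_prompt_per_day(conversations):
    """Keep only the earliest conversation per (IP address, calendar day)."""
    seen = set()
    kept = []
    for conv in sorted(conversations, key=lambda c: c["timestamp"]):
        key = (conv["ip"], conv["timestamp"].date())
        if key not in seen:
            seen.add(key)
            kept.append(conv)
    return kept

sample = [
    {"ip": "1.2.3.4", "timestamp": datetime(2023, 5, 1, 9, 0), "text": "a"},
    {"ip": "1.2.3.4", "timestamp": datetime(2023, 5, 1, 14, 0), "text": "b"},
    {"ip": "1.2.3.4", "timestamp": datetime(2023, 5, 2, 8, 0), "text": "c"},
]
print(len(first_prompt_per_day(sample)))  # 2: one conversation kept per day
```

Filtering this way blunts the influence of the handful of superusers who generated more than half the raw conversations.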

The Post’s category breakdown was based on a random sample of 458 conversations, categorized manually by a Post reporter. The margin of sampling error is about 5 percent.
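The roughly 5 percent figure follows from the standard margin-of-error formula for a proportion at 95 percent confidence, using the worst-case proportion of 0.5:

```python
import math

n = 458   # sampled conversations
p = 0.5   # worst-case proportion, which maximizes the error
z = 1.96  # z-score for 95 percent confidence

# Margin of error for a sample proportion: z * sqrt(p(1-p)/n)
moe = z * math.sqrt(p * (1 - p) / n)
print(round(moe * 100, 1))  # 4.6 — i.e., "about 5 percent"
```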

Conversations were coded as related to politics and sex based on keywords.
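Keyword coding of this kind typically checks each prompt against a list of topic terms. A minimal sketch, assuming an invented keyword list (The Post’s actual lists are not published here):

```python
# Illustrative keyword-based coder; the keyword set is invented
# for this example and is not The Post's actual list.
POLITICS_KEYWORDS = {"election", "senator", "congress", "president"}

def coded_as_politics(prompt):
    """Flag a prompt as politics-related if any keyword appears in it."""
    words = set(prompt.lower().split())
    return bool(words & POLITICS_KEYWORDS)

print(coded_as_politics("Who won the election in 2020?"))   # True
print(coded_as_politics("Help me name my bakery"))          # False
```

Keyword matching is cheap and transparent but coarse: it misses paraphrases and can misfire on incidental word use, which is one reason The Post paired it with manual categorization.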

Source: washingtonpost.com
