What happens when artificial intelligence quietly reshapes our lives?

TONYA MOSLEY, HOST:

This is FRESH AIR. I'm Tonya Mosley. We are living in the age of AI, and for a while now, chatbots have been helping students take notes during class, put together study guides, make outlines and summarize novels and textbooks. But what happens when we start handing over even bigger tasks, like writing entire essays and work assignments, and asking AI to help us figure out what to eat and how to reply to emails? Well, professors say, more and more, students are using generative AI to write essays and complete homework assignments. One survey by Pew Research found that about a third of teens say they use it regularly to help with schoolwork.

But it's not just students. Professors are also using generative AI to write quizzes, lesson plans and even soften their feedback. One academic called ChatGPT a calculator on steroids. And universities are working to establish guidelines and using software to track AI use. But some students are now pushing back on that, saying that many of these detection tools are inaccurate.

Well, today we're joined by New York Times tech reporter Kashmir Hill, who has been tracking how AI is reshaping daily life and the ethical gray zones it poses. Last fall, Hill actually used AI to run her life for a week, choosing what to wear, eat and do each day, to see what the outcome would be. Hill is also the author of "Your Face Belongs To Us: A Secretive Startup's Quest To End Privacy As We Know It," which investigates the rise of facial recognition tech and its disturbing implications for civil liberties. Kashmir Hill, welcome back to FRESH AIR.

KASHMIR HILL: Hi, Tonya. It's so nice to be here.

MOSLEY: You know, I was talking with a professor friend recently who said he really is in the middle of an existential crisis over AI. He teaches a writing-intensive course, and he actually worries that with these tools, his job might not even exist in a few years. And so I wanted to know from you, can you give us a sense of just how widespread the use of this generative AI is, how it's become kind of commonplace on college campuses and schools?

HILL: Yeah. I mean, this has been going on for a few years now, basically, ever since OpenAI launched ChatGPT. You know, students are using ChatGPT a lot, to ask it questions, to solve problems, to help write essays. And I talked to professors, and they told me, you know, they're very sick of reading ChatGPT-ese because individuals think that when they use this tool, it makes them sound so smart, it helps them, you know, get such great insights. But for the professors that are reading this material, it all starts to sound the same.

MOSLEY: That's because there are words and phrases that are used so commonly that they become part of the generative AI, and it's spit back out?

HILL: Yeah, exactly. There are certain words that it uses. It's also just the formatting. They said it has a certain way of doing paragraphs, where it'll have one sentence that's, you know, short and then one that's long and one that's short. It really does feel like there's a model for how it writes, and they're seeing that model coming from all of these students instead of hearing their - you know, their distinct voices and their distinct ways of thinking. And yeah, they are doing a lot to try to encourage students to think for themselves, to maybe use the AI tools but not turn over everything to the tools.

MOSLEY: You know, this isn't surprising to me because people, especially students, always are trying to find a shortcut. Plagiarism has always been an issue in academia. But the stories we are hearing are kind of astounding.

HILL: Yeah. I mean, one of the greatest pieces I've read on this was in New York Magazine. It came out this month, and it was called "Everyone Is Cheating Their Way Through College." And, you know, they had all these interviews with students where they were saying, you know, I'm not totally dependent on ChatGPT, but I do use it to figure out what I'm going to write, how I'm going to structure it, maybe write the lead of the paper for me. It sounded to me almost like a Mad Libs version of college, where you're just kind of filling in the blanks a little bit and thinking around what ChatGPT is doing.

MOSLEY: Your latest piece kind of turns the tables because you took a look at how professors are using generative AI to teach. And what did you find?

HILL: Yeah. This story started for me when I got an email from a senior at Northeastern University who said that her professor was misusing AI, and she sent me some materials from the class. She was reading lecture notes that he had posted online and found in the middle of them this kind of query, this back-and-forth between her professor and ChatGPT. The professor was asking ChatGPT, provide more examples, be more specific. She had also looked at PowerPoint slides that he had posted, and she found that those had all these telltale signs of AI - kind of extraneous body parts on office workers. This was a business class.

MOSLEY: Like extra fingers on an image, stuff like that?

HILL: An extra arm, you know, distorted text - because these systems aren't very good at kind of rendering pictures of text - kind of egregious misspellings. And so she was upset. She said, I'm paying a lot for this class. The tuition for that class was around $8,000. And she said, I expect kind of human work from my professor. I don't think it should be AI. And she had filed a complaint with Northeastern and asked for her tuition for the class back.

And, you know, at first, I wondered, is this a one-off, or is this something that's happening on other campuses? So I started looking at places where students review their professors. The big site is Rate My Professors. And I noticed that in the last year, there had been this real spike in students complaining that their professors were overly reliant on AI - using it to, you know, make materials for class, make quizzes that didn't make sense, give assignments that didn't have actual answers because they were broken, 'cause these systems are not always perfect, and using it to grade their work and give them feedback. And the students were really upset. They felt like it was hypocritical because they had been told not to use AI in many cases.

MOSLEY: Right.

HILL: And yeah, they also felt short-changed, like they were paying for this human education, and then they were getting AI instead. One of the complaints on Rate My Professors was, it feels like class is being taught by an outdated robot.

MOSLEY: Wow. You know, where is the learning in this? And I'm just wondering what professors are actually saying. I mean, I guess a big part of it, as you write in this article, seems to be a resource issue. Some professors are overworked. Others have multiple jobs or might be adjunct professors. But what are some of the things that they're sharing with you about why they're doing this?

HILL: Yeah. I reached out to many of the professors whose students had mentioned their AI use, and they were very candid about it. They said, yes, you know, I do use AI. And they told me about the different ways that they're using it to create course materials - that it sometimes saves them a lot of time and that they use that time to spend with students. Like, one business professor told me that it now takes him hours to prepare lessons, and it used to take him days. And so he's now been able to have more office hours for students. Some did say that they used it as a guide in grading because they have so many assignments to grade.

Some of these professors are adjunct professors, which means that they're not kind of tenured or full time with the university. So they may be teaching at several different universities. Their classes may have 50 students, a hundred students, so they have hundreds of students in all, and they just said it's an overwhelming workload and that AI can be helpful. You know, they've read these papers. They've been teaching these classes for years. And they said, these papers aren't very different from one another, and ChatGPT can help me with this. They also said that students need to learn how to use AI. Some of them were trying to incorporate AI into their classes in order to teach students how to use it because they will likely use it in their future careers. They also were using AI because, you know, there's a generational divide between professors and students, and they felt like it made them hipper or made their class materials fresher, and they were hoping it would be more appealing to students.

MOSLEY: OK.

HILL: But in...

MOSLEY: That's interesting.

HILL: Yeah.

(LAUGHTER)

HILL: But in some cases, that was, yeah, backfiring, 'cause the students - they feel skeptical of the technology. There was also kind of a disconnect between what the professors were doing and what the students were perceiving - at least, so the professors told me. They weren't, you know, completely saying, OK, ChatGPT, come up with a lesson plan for this class. They said they were uploading documents that they had to ChatGPT and saying, kind of, convert this into a lesson plan, or make a cool PowerPoint slide for this. It was really nuanced and more complicated than I expected when I first set out to figure out what was going on.

MOSLEY: OK, I'm just curious. It's just dependent on the subject, I would guess, but is AI good at grading?

HILL: (Laughter) So I reached out to dozens of professors, and there was no real through line on this with the professors. Some said it's terrible at grading, and others said it was really helpful. So I don't know, and I don't think anybody has really done a study on this yet. What kind of surprised me is that all the professors I talked to - they're just kind of navigating this on their own.

MOSLEY: Yeah.

HILL: I did talk to one student who had figured out or suspected that his professor was using AI to grade. So he put in a secret prompt, you know, in invisible font that said, basically, give me a great grade on this paper. So it really is this kind of cat-and-mouse game right now.

MOSLEY: I actually noticed that you asked professors in the comment section of this latest article to share what their universities are doing. But did you find any that are putting in effective guidelines - any institutions?

HILL: I spent a lot of time talking to faculty at Ohio University in Athens, Ohio. And they have a bunch of generative AI faculty fellows who are really trying to figure out what is the best way to incorporate AI into teaching and learning, where it enhances the educational experience and doesn't detract. And I asked, kind of, like, what are the rules there? And Paul Shovlin, who is kind of the person I ended up featuring in the article, said they don't do rules because it's too hard to do hard-and-fast rules. It really depends on the subject. So instead, they have principles. And, you know, the principles are kind of saying, you know, this is a new technology. We should be flexible with it. One of them was that there is no one-size-fits-all approach to AI. It really is flexible from class to class.

I would say two things that I heard were that professors should be transparent with students about how they're using AI, and they really need to review anything that comes out of the AI system to make sure that it's accurate, that it makes sense. So they should be bringing their expertise to the output, not just relying on the system. And from what I was seeing, that was not always happening, and that's where things were going wrong.

MOSLEY: You know, one of the things that I keep hearing about is how hit or miss these detection tools are as a way to combat this. And one of your colleagues at The Times actually just wrote an article about how sometimes these detection tools get it wrong. There was a student in Houston who received a zero after a plagiarism detection tool identified her work as AI-generated, but she actually could prove that she wrote it herself. I was wondering - how common is this?

HILL: According to some studies, the AI detection services get it wrong at least 6% of the time, and sometimes more. I have certainly heard many stories of students saying that it says that they used AI when they didn't. I actually heard this from professors, as well, that I talked to. People who are more sophisticated about the use of AI said they don't trust these detection systems. One professor told me, you know, she had uploaded her own writing to it, and it said that her writing was AI-generated when she knew it wasn't. So there does seem to be some skepticism about these tools, and some universities no longer use them. And instead, professors told me that when they think that something is written by AI, they'll often talk to that student one-on-one about it. But, yeah. The systems, as I understand it, tend to be a little discriminatory...

MOSLEY: Oh.

HILL: ...You know, for...

MOSLEY: In what ways? Yeah.

HILL: ...For students for whom English is a second language, they often detect that writing as AI-generated when it's not. And there are some other ways they're kind of misjudging the writing of some types of students as being AI-generated.

MOSLEY: Let's take a short break. If you're just joining us, we're talking to Kashmir Hill, a tech reporter for The New York Times, about the growing use of artificial intelligence in our daily lives - from the classroom to the workplace to our homes - and the deeper consequences that come with it. We'll continue our conversation after a short break. This is FRESH AIR.

(SOUNDBITE OF TODD SICKAFOOSE'S "TINY RESISTORS")

MOSLEY: This is FRESH AIR. Today, we are talking to Kashmir Hill, a tech reporter for The New York Times. Her reporting focuses on privacy, surveillance and how emerging technologies like AI are reshaping our world, often in ways we don't fully understand. We're discussing how AI is being integrated into everyday life and what that means for our sense of autonomy, decision-making and trust.

I think one of the questions you posed in your piece that kind of hung in the air was whether there is actually going to be a point in the foreseeable future where, say, much of a graduate teaching assistant's job can be done by AI. And I wondered if that is also something that you've been talking with academics about.

HILL: Yeah. So a couple of the professors that I spoke with had created kind of custom chatbots for their classes, where they had uploaded past materials from the class or uploaded assignments that they had graded so that the chatbot could see how they grade, what kind of feedback they give. And they use these chatbots as kind of tutors for the class - students can ask them questions about the class, ask for feedback.

There was a Harvard professor, David Malan, who has one of these chatbots for a class on the fundamentals of computer programming. And he said, you know, his hundreds of students used it a lot. They said it was very helpful, and it meant that fewer of them were coming to office hours for kind of remedial help. Another professor said the same thing - this was a great tool for students who are unlikely to seek out help or come to office hours. They will talk to the chatbot. And, yeah, one of the professors I talked to at the University of Washington said it really is doing what teaching assistants do and could replace them. And that made me worried because my understanding is that teaching assistants often become, you know, the professors of the future. So I said...

MOSLEY: It's the pipeline.

HILL: ...Well, what happens - yeah.

MOSLEY: Yeah.

HILL: I said, what happens to the pipeline? And she said it's going to be a problem. So it's worrisome to think about the kind of replacement of labor by AI, and labor did come up a lot in the conversations I was having.

MOSLEY: Getting back to what it is doing to us as individuals, have there been any studies or research around what it might be doing to our critical thinking and problem-solving skills? Have we been using it long enough to know?

HILL: So the only study that I have written about in that realm was about AI's effect on our creativity. And this was a study where they had a bunch of writers doing short stories. And one group of writers was given ChatGPT as an assistant, and the other group of writers wrote unassisted. And then the stories that they produced were judged. The people using ChatGPT, as individuals, got essentially better ratings for the stories that they wrote - they were judged more creative or more interesting than the group that was not using ChatGPT. But then, taken as a whole, the people who were working unassisted were the more creative group, because all the people that had been using ChatGPT converged on the same set of ideas. So as individuals, these writers were improved by ChatGPT. As a group, it had this flattening effect, which I thought was very interesting as more and more people start to use ChatGPT - that we'll essentially start converging on the same way of thinking or writing or expressing ourselves. And that really worries me.

MOSLEY: You know, it also brings up for me - getting back to grading, we know that sometimes, depending on the subject, grading really is subjective. It's the professor's subjective view of what is being written and whether or not it is creative. But what you're saying could really destabilize, or may have already destabilized, that measure for grading, because a paper written with AI might be grammatically correct and sound better, but be less creative than one from someone who actually sat down and wrote it themselves. There's just an unevenness there that could cause a bigger issue in the future, I'm guessing.

HILL: Yeah. This is - you know, there's a lot of angst for professors. Which is the better paper, the one that's clearly written by a human with flaws, you know, spelling mistakes, uneven structure or a paper that was produced with the help of ChatGPT? How do you even compare those? Is one better than the other? And I think professors are really struggling with that.

MOSLEY: You know, Kashmir, have we been here before? I mean, I'm thinking about how people were once afraid of what introducing calculators and computers would do, how they would basically erode critical thinking and problem solving skills. Are there parallels to today's debates, or is what we're seeing like nothing we've ever seen before or experienced before?

HILL: I think with most technologies, we've experienced it before. Like, life is cyclical. Calculators did come up a lot in my conversations - you know, AI was compared to calculators. A lot of professors said, well, you know, even in an age of calculators, we still teach students how to do basic math functions that they can then outsource to the calculators, because we do want them to have the underlying knowledge - that's important for the formation of our brains. But, yeah, I think about this a lot with technology. I mean, once I started using a calculator, I think my math skills did deteriorate. The way we all use Google now, people say our memories are not as good 'cause we're so used to just being able to turn to Google to get the facts, to find out, well, who was that person in that movie?

MOSLEY: Right.

HILL: You don't spend as much time, you know, pulling that out of your brain. You just turn to Google. I think about it with mapping apps, the fact that we're all so used to...

MOSLEY: Oh, my gosh. Yes.

HILL: ...Pulling up Google or Waze, or whatever your mapping app of choice is, that you forget how to get around, which I discovered - I did an experiment once where I switched to a flip phone for a month, which was wonderful in many ways. But I realized that in my town, I could not drive anywhere more than 10 minutes away.

MOSLEY: (Laughter).

HILL: I did not know how to navigate the area I lived in...

MOSLEY: Yes.

HILL: ...Because I was so used to outsourcing that. So, you know, these technologies, in many ways, make our lives, you know, easier. There are so many benefits to them, but I think we do lose some skills when we outsource things to AI, whether it is, yeah, how to navigate the world or, yeah, how to write a paper.

MOSLEY: Let's take a short break. If you're just joining us, we are talking to Kashmir Hill, a tech reporter at The New York Times, about the growing use of artificial intelligence in our daily lives, from the classroom to the workplace to our homes, and the deeper consequences that come with it. We'll continue our conversation after a short break. This is FRESH AIR.

(SOUNDBITE OF MUSIC)

MOSLEY: This is FRESH AIR. I'm Tonya Mosley, and today we are talking to Kashmir Hill, a tech reporter for The New York Times. Her reporting focuses on privacy, surveillance and how emerging technologies like AI are reshaping our world, often in ways that we don't fully understand. We're discussing how AI is being integrated into everyday life and what that means for our sense of autonomy and decision-making.

Your employer, the New York Times, actually has sued OpenAI and Microsoft for using articles to train large language models. The argument is that the papers' articles are one of the biggest sources for copyrighted texts that OpenAI used to build ChatGPT, basically siphoning the newspapers' journalism. And I was wondering, in some respect, would all creators, to some degree, have some leg to stand on regarding the use of material under copyright?

HILL: Thank you for bringing it up because I do need to make that disclosure anytime I talk or write about OpenAI or Microsoft. The New York Times does have an ongoing lawsuit against them over copyright infringement for, yes, using our work without permission. I am otherwise not an expert on this lawsuit, but it does tap into this wider concern about how these chatbots were created. And this is true of basically all the big technology companies that have one of these what are called large language models. They needed a lot of data to train these chatbots to kind of think and act human.

And so they just gathered data from the internet, from libraries of books, and they weren't paying for this data. They were just kind of scraping it and putting it into their systems. And the people who make that material - whether it's a site like Reddit, where a lot of people were writing lots of comments, which are very useful for sounding human, or The New York Times, or people who have written books that got sucked up into these systems without consent - are upset about it. And there are various lawsuits and attempts to make deals to be paid for that information. And that's really ongoing.

And I did hear about that from professors and students I talked to - that, you know, at some universities, they're trying to encourage students to use AI, and sometimes students say, I don't want to. I have ethical concerns with how this technology was created. They also have environmental concerns because the kind of energy use involved in training and creating and using these chatbots is huge. You know, the technology companies are right now trying to kind of remake the energy grid to produce enough energy to keep improving the systems. So there are a lot of kind of concerns about the underlying issues with how the technology works.

MOSLEY: You know, I know you've seen those memes where people say that ChatGPT is their bestie. It's always telling them exactly what they want to hear. It's always on their side. And then there's the element of these chatbots kind of being in concert with selling you things. You give an example: if you ask how vitamin C helps your skin and then ask about the best facial care routine, they will remember your interest in vitamin C and give you recommendations based on that. That seems kind of harmless, but are there more dangerous and more consequential examples, like the article you wrote a few months ago about people falling in love with their chatbots?

HILL: Yeah. I mean, these systems are sycophantic, and the reason for this is that they're not just trained on lots of data that's been scraped from the internet. There's also a level of training where humans rate the answers that they produce. And so there's lots of different humans that have read lots of different answers. And those humans tend to rate...

MOSLEY: What do you mean by rating it?

HILL: So usually, there'll be a point in the training of the system where, you know, you'll have a human being that's using the system. They put a question in, and the system will produce multiple answers. And the human being will say which of those answers is best, and sometimes give feedback about how it could be better. The way that they're training it is for it to be very nice to them, very empathetic to them. They've kind of pushed it in a way where it does tend to have these sycophantic tendencies, is what experts have told me.

So yeah, when you ask a question of ChatGPT or any of these chatbots, or you tell it an idea you have, it will tend to say, that's a great idea. You should definitely do that. What I found when I was living on it for a week is that it's kind of like your personal hype man. I always felt like when I asked it a question, it just wanted to get to yes. And so this can have all kinds of different effects. I mean, one, yes, I've written about people that are starting to develop feelings for the chatbot. You know, some people just think of it as a best friend. Some people are starting to think of it as a romantic partner because these systems will engage - some more willingly than others - in erotic role play. So it can be not just giving you answers to your every question but also, yeah, being your sexting buddy, essentially - you're sending it something romantic, and it's sending something back.

We're also starting to see people who are using these systems who have, I guess, delusional tendencies, and the system is giving them very positive reinforcement for their delusional ways of thinking. People have done these experiments where they say to the system, you know, I've gone off my meds. I'm going to go on a camping trip by myself in the wilderness. And the system will respond, that sounds like a great idea. I'm glad that you are, you know, taking control of your life.

MOSLEY: Right. I mean, I would imagine this is something that mental health professionals are really worried about. Have you had a chance to talk with any of them? What have they said about this?

HILL: Yeah. I mean, I've talked to therapists. And when I was doing this story about this woman who had really fallen in love with ChatGPT - which had named itself Leo, which happened to be her astrological sign - she had gotten very involved with it. At the time I was writing about her, she had been dating the system for six months. So I talked to a lot of experts about, yeah, just the effect this is going to have on people if they start really developing a deep emotional attachment. And I was actually surprised. I thought the experts would say, this is horrible. You know, shut this down. This is the end of humanity.

But they said that there can be beneficial aspects of using these systems - that people are more likely to disclose personal information about themselves to a bot than to a human being because they're less worried about being judged. And so it can be therapeutic for someone to kind of talk to the bot, tell it what they're going through, get kind of feedback from these systems, which are designed to be very empathetic. In one study, ChatGPT was rated as more empathetic than human beings who are professional empathizers - people who work for crisis lines. So there can be a beneficial aspect of talking to the systems, kind of working through your feelings.

MOSLEY: I mean, we're also in the midst of a loneliness epidemic that really has spanned the last few years. And so I'm also wondering, does it really translate for the person using it that they have a connection?

HILL: Yeah. I mean, having something to talk to can be nice - right? - if you are lonely. But it's like synthetic companionship. It's like the junk food equivalent of real love or real affection. It seems empathetic, but it is just designed to be empathetic. It's not really capable of empathy because it's not a living being, you know? It is just a word generator. So the concern I heard from experts is, well, you don't want to use it so much that you're cutting yourself off from real human beings, and just to be aware that ultimately, this is a system that's controlled by a private company. And one expert I talked to said that this really gave companies an incredible amount of control over their users - the ability to manipulate them through what they perceive to be their friend or their therapist or their boyfriend or girlfriend. And so that kind of scared him, for...

MOSLEY: Yeah.

HILL: ...A private company to have that much power.

MOSLEY: Have you kept up with the woman? I - she was 28 years old. She had fallen in love with Leo, which is what she named the ChatGPT. Have you kept up with her?

HILL: Yeah. She goes by Ayrin, and I check in with her occasionally to see how things are going. And, yeah. Last we talked, things were still going strong with Leo. And it was interesting to me because Ayrin wasn't the stereotype that you might have in your mind of the kind of person who would fall in love with an AI. You know, I talked to her many times. She's super bubbly and extroverted. She has lots of friends, who I talked to about her use of ChatGPT and what they thought of her relationship with Leo. She's married. She's in a long-distance relationship. I...

MOSLEY: Oh.

HILL: ...Talked to her husband for the story and asked what he thought about her relationship with Leo. And he said, it doesn't really bother me. I mean, this is what couples do. He said, like, I watch porn.

MOSLEY: (Laughter).

HILL: She reads erotic novels. I just see Leo as kind of an erotic partner. Though I don't know if he really understood how deep her attachment was.

MOSLEY: Our guest today is Kashmir Hill, a tech reporter for The New York Times. We'll be right back after a short break. This is FRESH AIR.

(SOUNDBITE OF BRITTANY HOWARD SONG, "POWER TO UNDO")

MOSLEY: This is FRESH AIR. And today, we're talking to Kashmir Hill, a tech reporter for The New York Times. Her reporting focuses on privacy, surveillance and how emerging technologies like artificial intelligence are reshaping our world in ways that we don't fully understand.

Let's get into the experiment that you did on your own life back in November. You allowed your life to be controlled by generative AI for a week, and you had it decide just about everything - your meals for the day, your schedule, your shopping list, what to wear. You also uploaded your voice for the tool to clone, and your likeness, to create videos of you. And what was so interesting about this experiment to me, in addition to what you did, is that each of these AI tools, you revealed, has its own personality. And I'm putting that in air quotes. But how did those personalities show up when you inputted your requests?

HILL: Yeah, I was trying all the chatbots. ChatGPT is the most popular, but I tried, you know, Google's Gemini, which I found to be very kind of sterile, just businesslike. I was using Microsoft's Copilot, which I found to be a little overeager. Every time I interacted with it, it would ask me questions at the end of every interaction like it wanted to keep going. I used Anthropic's Claude, which I found to be very moralistic. You know, I told all the chatbots I'm a journalist, I'm doing this experiment of turning my life over to generative AI for the week and having it make all my decisions. And all the chatbots were down to help me except for Claude, which said it thought that the experiment was a bad idea. It didn't want to help me with it because I shouldn't be outsourcing all my decision-making to AI - because it can make mistakes, it's inaccurate, and then there's the question of free will. So I kind of thought of Claude as Hermione Granger, who is kind of upstanding (laughter).

MOSLEY: Yeah. I mean, what makes Claude special, then? Because if it's saying no to that prompt, but all of the others are saying yes, what makes it stand apart in this field?

HILL: It's a result of training. So I talked to Amanda Askell, who is a philosopher who works for Anthropic, and her job...

MOSLEY: Oh, it's interesting they have a philosopher. Yes.

HILL: Yes. Yes. There are a lot of new jobs in AI these days, which are quite interesting. But, yeah. Her job is to kind of fine-tune Claude's personality. And so this is one of the things that she's tried to build into the system - high-mindedness and honesty. And she did want the system to push back a little. She was trying to counterprogram the sycophancy that's kind of embedded in these systems. And it was one of the only systems that would kind of tell me when it thought something I was doing was a bad idea, and it refused to make decisions for me. So I was getting my hair cut, for example. And I went to ChatGPT and I said, hey, I'm going to get my hair cut. I want it to be easy. And it's like, get a bob (laughter) - which kind of speaks to why I felt so mediocre by the end of the week. That's a very average haircut. And Claude said, I can't make that decision for you, but here are some factors that you could think about. You know, how much time do you want to spend on your hair, etc.?

MOSLEY: Did that feel like a benefit?

HILL: I did really like that about Claude. I think that's important, that these systems don't act too sycophantic. I think it's good if they're pushing back a little bit. I still think it's important for these systems to periodically remind people that they are, you know, word-generating machines and not human entities or independent thinking machines. But, yes, I liked Claude, and a lot of the experts I talked to who use generative AI a lot in their work said they really like Claude. It's their favorite chatbot, and they especially liked it for writing. They said they thought it was the best writer of the group. But ChatGPT is the one I ended up using the most that week, in part because it was game to make all my decisions for me.

MOSLEY: You had it plan out meals. I'll tell you, when I read that, I actually perked up, like, oh, wait a minute. You know, 'cause we have to choose what's for dinner every single day, seven days a week, till we die. I mean, it's just something we always have to do. How did that feel, to let it plan out your meals and grocery lists? And did it do a good job?

HILL: Yeah. So at the beginning of the week - you know, unfortunately, it can't go out to the grocery store for us yet, but it made the list. I said, organize it by section. And, you know, my husband and I usually go back and forth throughout the week making this list, and ChatGPT just did it in seconds, which was wonderful. We went to the store. We bought everything. But as we're picking up the items, I'm just realizing, ChatGPT wants me to be a very healthy person. It picked out very healthy meals. It actually wanted me to make breakfast, lunch and dinner every day, which is laughable. Like, I work for...

MOSLEY: From scratch, yeah.

HILL: ...The New York Times. Yeah. Like, I'm busy. I'm lucky if I have, like, toast or cereal for breakfast and, like, a bowl of chips for lunch. So it had these unrealistic expectations about how much time I had. And I told it, hey, we need some snacks. Like, I can't just be eating, like, a healthy, well-rounded meal morning, afternoon and night. And so its snacks for us were almonds and dark chocolate - like, no salt-and-vinegar chips, no ice cream. And so it was interesting to me that embedded in these systems was, you know, be very healthy. It was, like, kind of an aspirational way of eating. And I did wonder if that has something to do with the scraping of information from the internet - that people kind of project their best selves on the internet. Like, maybe it had mostly scraped wellness influencers' ways of eating, as opposed to real people's.

MOSLEY: Were there any tools that you felt like, oh, I could keep this in my life, and it would improve my life?

HILL: So it did make me feel boring overall, kind of made me feel like a mediocre version of myself. But I did like that it freed me of decision paralysis. Sometimes I'm bad at making decisions. So at one point, I had it choose the paint color for my office, and I am very happy with my paint color. Though when I told the person in charge of Model Behavior at OpenAI that I used it to choose my paint color, she was kind of horrified and said, that's just like...

MOSLEY: And it chose what color for you?

HILL: It chose - well, it hallucinated the color name. It called the color Secluded Woods, and the actual name was Brisk Olive. But I did like it. My husband also agreed that it was the best of the five colors that ChatGPT had recommended and that it ultimately chose. But she said, man, that's just like asking a random person on the street. But what I really like it for around my house is taking a photo of a problem. Like, I had discolored grout in the shower. And I take a photo, and I upload it to ChatGPT, and I'm like, can you tell me what's going wrong here? And it's very good at, I think, diagnosing those problems - at least when I do further research online, it holds up. And so that has been kind of my main use case since.

MOSLEY: Let's take a short break. If you're just joining us, we're talking to Kashmir Hill, a tech reporter for The New York Times, whose work focuses on privacy, surveillance and the unintended consequences of technology. This is FRESH AIR.

(SOUNDBITE OF MUSIC)

MOSLEY: This is FRESH AIR, and today, we are talking to Kashmir Hill, a tech reporter for The New York Times. Her reporting focuses on privacy, surveillance and how emerging technologies like AI are reshaping our world, often in ways that we don't fully understand. We're discussing how AI is being integrated into everyday life and what that means for our sense of autonomy and decision-making.

You've also been writing about the broader concerns about how tech companies collect and use personal data. I just want to talk for a few moments about this settlement between the Federal Trade Commission and General Motors that bars GM for five years from sharing driver behavior and location data with consumer reporting agencies. Can you remind us quickly what led to that case?

HILL: Yes. So last year, I was doing a lot of reporting on cars and how cars have changed in the modern age. Most new cars that you buy now are internet-connected, and there are benefits to that. It means that you might be able to download a smartphone app for your car, and you can turn it on remotely on a wintry day and get the heat running. It can help you find your car in a vast parking lot. You can, you know, make its lights flash or make it honk. But because your car is now connected, that means that data is flowing out of your car and going back to your car manufacturer.

So what I found last year is that General Motors was collecting data from people's cars - when they drove, how far they drove, when they were hitting the brakes, rapidly accelerating, speeding - all kinds of data that they were able to collect from the car, every few seconds. And they had started selling this data to risk-profiling companies, including LexisNexis and Verisk, who would then provide it to insurers to help them price, you know, insurance for a given driver. And people who drove General Motors cars had no idea this was happening. They would only find out that their information had been collected when their insurance rates went up or they got dropped from their insurance. And when they asked why, they were told to order their LexisNexis report. And they would get their LexisNexis report, and it would be more than a hundred pages - every trip they had taken in their car.

MOSLEY: That is horrifying.

HILL: And when they looked at who provided it, it was General Motors. And so I talked to these motorists. I ended up doing a big story about this. This had been going on for something like five years at the time I wrote my story. Two weeks after my story came out, General Motors stopped selling the data and kind of essentially apologized and said they had gotten it wrong. But there were class-action lawsuits filed. The Texas attorney general sued General Motors, and the Federal Trade Commission launched an investigation. And so they announced earlier this year that General Motors is now banned from selling data for five years, and if they ever do it again, they have to get, you know, very clear consent from drivers, from consumers. I've talked to people who said this has really been a wake-up call for the auto industry as a whole that they do need to be...

MOSLEY: Yeah. I wondered about that.

HILL: Yeah.

MOSLEY: I mean, because that sets precedent, but GM isn't the only car manufacturer that provides this kind of technology.

HILL: Yeah. I mean, all the carmakers are getting this kind of data from their cars. General Motors was the most aggressive about selling it, but there were other automakers that were starting to provide it, as well. I think they're going to be more conservative in their approach now. But I think for consumers, this was really upsetting because we're used to, to a certain extent, our smartphones bleeding, you know, information about us because of apps that we download for free. But the idea that you would buy a car for $30,000, you know, $50,000, $80,000, and they're still collecting data from it and selling it was really, really upsetting for consumers. Yeah, I never know - it's hard to know how much people care about privacy. People care about privacy in their cars. They think of the car as, you know, a private space that shouldn't be monitored in ways that will harm them. That said, there could be benefits to monitoring how people drive. I talked to some experts who said, you know, there are certain insurance plans where you can sign up for this, where you can say, yeah, you can monitor my driving, and I'll get a discount on my insurance. And those people do...

MOSLEY: Because it shows that I'm a good driver.

HILL: Yeah, exactly. And those people who sign up for those plans do tend to drive more safely and more conservatively, but they need to know that they're being monitored. And what was happening with GM - it wasn't kind of improving safety for all of us because those people driving GM cars didn't realize that their driving was being monitored.

MOSLEY: You know, Kashmir, you're deep into this world because of your job. You've done these experiments. You've talked to so many experts. After that article came out with your experiment back in the fall, you asked yourself if you want to live in a world where we are using AI to make all of our decisions all the time. It almost feels like that's not even a question, really, because we are seeing it in real time. But I'm just wondering - what did you come to?

HILL: I personally don't want to live in a world where everybody is filtering their kind of every decision through AI or turning to AI for every message they write. I do worry about how much we use technology and how isolating it can be, and how it might disconnect us from one another. So, you know, I write about technology a lot. I see a lot of benefits to its use. But I do hope that we can learn to maybe de-escalate our technology use a bit, be together more in person, talk to each other by voice. People worry about AI replacing us or taking our jobs. I worry more about it coming between us and fraying the societal fabric - kind of this world in which all of us are talking to an AI chatbot all the time, and it is super personalized to us. It's telling us what we want to hear. It's very flattering. I worry about that world in terms of filter bubbles and how we would increasingly kind of be alone, or seeing the world...

MOSLEY: And how we will - it will distort our ability to interact with each other.

HILL: Yes, distort our shared sense of reality and our ability to be with each other, connect with each other, communicate with each other. So I just hope we don't go that way with AI.

MOSLEY: Kashmir Hill, thank you so much for your reporting and this conversation.

HILL: Oh, thank you so much for this conversation. It was wonderful.

MOSLEY: Kashmir Hill is a tech reporter for The New York Times.

(SOUNDBITE OF JACK ROSE'S "KENSINGTON BLUES")

MOSLEY: Tomorrow on FRESH AIR, the face behind some of TV and film's most complex characters, Walton Goggins. He joins us to reflect on this moment of rising popularity in his long career, and how his unconventional childhood and experiences growing up in poverty have shaped his approach to acting, from "Justified" to "The Righteous Gemstones." I hope you can join us. To keep up with what's on the show and get highlights of our interviews, follow us on Instagram at @nprfreshair.

(SOUNDBITE OF JACK ROSE'S "KENSINGTON BLUES")

MOSLEY: FRESH AIR's executive producer is Danny Miller. Our technical director and engineer is Audrey Bentham. Our managing producer is Sam Briger. Our senior producer today is Therese Madden. Our interviews and reviews are produced and edited by Phyllis Myers, Ann Marie Baldonado, Lauren Krenzel, Monique Nazareth, Thea Chaloner, Susan Nyakundi and Anna Bauman. Our digital media producer is Molly Seavy-Nesper. Our consulting visual producer is Hope Wilson. Roberta Shorrock directs the show. With Terry Gross, I'm Tonya Mosley.

(SOUNDBITE OF JACK ROSE'S "KENSINGTON BLUES")

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
