‘ChatGPT needs a huge amount of editing’: users’ views mixed on AI chatbot | The Guardian
    https://www.theguardian.com/technology/2023/feb/08/chatgpt-users-views-ai-chatbot-essays-emails

    ChatGPT has been a godsend for Joy. The New Zealand-based therapist has attention deficit hyperactivity disorder and often struggles with tasks such as drafting difficult emails, with procrastination kicking in when she feels overwhelmed.

    “Sitting down to compose a complicated email is something I absolutely hate. I would have to use a lot of strategies and accountability to get it done, and I would feel depleted afterward,” says Joy, who is in her 30s and lives in Auckland. “But telling GPT ‘write an email apologising for a delay on an academic manuscript, blame family emergency, ask for consideration for next issue’ feels completely doable.”

    While the copy the AI chatbot produces usually needs editing, Joy says this comes at a smaller cost to her psychologically. “It is much easier to edit a draft than to start from scratch, so it helps me break through blocks around task initiation,” she says, adding that she has recommended using it this way to clients. “It avoids a psychological logjam for neurodiverse people. I think it would also potentially have value for people who struggle with professional norms due to neurodivergence and come across as curt.”

    • Like many students, Rezza, a 28-year-old in Yogyakarta, Indonesia, has been making use of the chatbot for academic purposes. “I have so many ideas but only enough time to act on a few of them because I need to write them,” he says, adding that writing is the “most time consuming” part of his work.

      He says it has speeded up his essay writing threefold. “With the improved workflow my hands are catching up with my brain,” he says. However, he adds that the chatbot’s output requires heavy editing, and that it has not been helpful for references: when he tried, it “gave out nonexistent academic citations”.

    • Atkinson is worried about the “misplaced confidence” the bot projects while providing factually incorrect information. These errors are known in tech jargon as “hallucinations”.

      He says: “People are more willing to believe a machine, even when it is telling outright lies. This is dangerous for a number of reasons. For example, if you rely on something like this for basic medical advice. Or if you write code, it can give you examples which are bad practice and error prone.”