AI, ChatGPT, and Why It Is Unhelpfully Helpful
Humans like to be tricked. The first time I heard this was from my older sister. I laughed, and then I thought about how true it is: we like to know we are being tricked. Magic fairy dust, sleight of hand, card tricks.
This is the same game that artificial intelligence (AI) is playing. EXCEPT, the problem is that we don’t understand how we are being tricked.
AI, like ChatGPT, provides answers to questions in a chat-like interface. One might ask what is known about a company or ask it to write a cover letter. We have to remember that it is not really an answer. It is a generated response made by finding patterns in information. It is still a collection of patterns that happens to sound like a feeling, sensing human.
Years ago, one of my (Beth's) favorite conversations about the future was about how research was going to be disrupted. In the best case, a lot of the rote tasks of research, like interviewing, could be done by AI. The parts that could get far more interesting were analysis and synthesis of findings. These are hard for anyone to do. I liked the idea of being able to spin up 15 versions of myself, or of other super-smart researcher colleagues, to get to even better analysis. But they would need to be very specific and trained to think like researchers. Synthesis gets even trickier. How do we take the information we are seeing and combine it in new ways to solve problems or see things differently?
I’m nowhere near ready to let ChatGPT be my analysis partner. (I might actually try this out and see what happens.) I’m not sure how it is gathering and prioritizing information, what sort of logic it is using to code it, and what sort of biases are being introduced. We are still in an information era where I can ask Alexa to play the soundtrack from the musical “Rent” and she chooses the movie. She hasn’t learned that I, Beth, definitely want the original Broadway recording. People think in lots of different ways. We will need to figure out how our AI can do this, too.
The truth is that patterns are not wisdom and thought. Patterns are created from collections of past data, and that data is not combined with experience the way it is in humans. AI might respond that you shouldn’t touch a hot stove because a burn is possible. A human will tell you that if you are cold, you can hover your hands above the heat to warm them up without getting burned.
I like AI. I think it is cool. I often hate getting started with writing about things, and AI says, “Here you go.” It is a starting point that still needs to be fine-tuned. I am positive that my experience is more valuable than anything AI can provide, because I have so often solved technical problems with unconventional solutions. I do not ask Alexa or Siri to play anything because I think it is weird to have something listening for me to say, “Hey, [name].”
Our new generation of AI is reminding me of visits to the library. Great librarians had a way of helping you find the information you needed: asking lots of great questions, drawing on past information and wisdom, and, in general, using their intuition to assess what you did and didn’t need, at what level, and how far you were willing to go down an information rabbit hole.
ChatGPT seems like a decent junior librarian who sends you to an encyclopedia of questionable provenance. If you want the basics, you get a Wikipedia-style entry in narrative form, but it’s not easy to see quality or sources, or to figure out how to have that information tailored to us.
The helpfulness of AI is that it can start the conversation once it has a prompt. But the continued conversation is not really a conversation; it is more a regurgitation of information.
So, this is where testing is important:
You need to figure out whether the fidelity of your trick is actually way better than what your neural network is providing. Does it seem too real? You are going to creep people out.
Are you providing the right sort of onboarding to your users? It seems like this is search and everybody needs to do it, but what are the things everyone wishes they knew before starting conversations with chat? Are you prompting users to understand the limitations of the tool, and how to actually use it effectively for themselves?
And other things, like how you don’t end up in a NYTimes reporter situation.