Here is a thought experiment.
An AI system is given the task of categorizing whether a conversation is with a human or an AI. The human chooses to use an agentic system to respond to the AI during this conversation.
How should the AI categorize this?
The Binary Illusion
So clearly the user is a human; this is defined in the problem. Based on this, it would seem that the AI should categorize the conversation as Human. If the AI were to instead choose AI, then we can verify that the categorization is incorrect, because the user is verifiably human.
Some of you are deep in thought trying to express, or yelling at the screen, that the AI did not have a conversation with the user, but with the agent. Since the agent is not a human, clearly the categorization of AI is correct!
Yikes! Both of these arguments are strong, and each rests on sound logical reasoning. So what is going on here?
There is perhaps a third possibility: the premise of the categorization is being violated by the user responding with an AI. This breaking of the “rules” invalidates the effort put in by the designer of the AI system. You might even be saying “Blake, you are creating this rule set for what purpose? Just to break it? I mean, that does sound like something you would do …”
First off, fair, that does sound like me.
What are you doing?
This actually came up from some of the work I have been doing with agents. The thought experiment is intended to illustrate the shifting reality of how humans communicate post-AI. Now we will dive into why I think it is important, and why it is a problem that needs answering in the world of agents.
How many of you have sent an email that was just a lightly edited LLM response? I have, and I do fairly often. As much as I love writing, it is laborious for me as someone who generally thinks in pictures. It is hard to capture my thoughts, as each one is a thousand words, and my attention span is growing shorter with each passing YouTube short.
From a practical perspective, I am able to talk to more people throughout the day if I accelerate my communication with LLMs. This functional conversation is important, as I have context and information that other people need to complete their work. Having an agent relay this information is more effective and time efficient than me typing out a response. It is even more effective than me writing a guide, as the presentation becomes much more flexible and focused on the user’s needs when processed through an LLM.
Now for relational conversation, conversation that is focused on building understanding and cooperation (thank you Sam for articulating this concept to me), I need to be present and listening. An agent handling the functional conversation frees up time for me to have more relational conversations, thus improving my feeling of connectedness to those I work with.
Back to the Problem
So with this expanded understanding of how communication is changing, we have more tools to improve our AI system for categorizing conversations. Clearly we need new categories that are more useful than just AI or Human, as humans who use AI for functional conversation are still humans.
So the correct answer to the original question is that the AI should categorize this type of conversation as human. In fact, I will boldly claim that, with this new understanding, all conversations should be categorized as human. AI did not spring into existence on its own, nor did it deploy itself into production (at least not yet). A human created this AI to converse, thus all conversations with it are human conversations.
This is not a satisfying conclusion, as our categorization feels like it has lost value, or we have discovered that it never had value. So, making the best of a bad situation, we should pivot, informed by this outcome. When we next create an AI system to categorize communication, we should strive to make the categories more meaningful, or at least useful. In this case, we would gain more by categorizing the conversation by its function instead of its source, as in the sketch below.
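To make that concrete, here is a minimal sketch of purpose-based categorization. Everything in it is hypothetical: the category names, the keyword lists, and the classify_purpose helper are invented for illustration, and a real system would more likely back the decision with an LLM call or a trained classifier than keyword matching.

```python
from enum import Enum

class ConversationPurpose(Enum):
    """Categories based on what a conversation does, not who typed it."""
    FUNCTIONAL = "functional"  # exchanging context, data, or instructions
    RELATIONAL = "relational"  # building understanding and cooperation

# Toy keyword lists; a real classifier would be learned, not hand-written.
FUNCTIONAL_HINTS = ("status", "deadline", "spec", "attached", "here is the")
RELATIONAL_HINTS = ("how are you", "thank you", "appreciate", "i feel")

def classify_purpose(message: str) -> ConversationPurpose:
    """Naive keyword heuristic standing in for a real classifier."""
    text = message.lower()
    functional = sum(hint in text for hint in FUNCTIONAL_HINTS)
    relational = sum(hint in text for hint in RELATIONAL_HINTS)
    return (ConversationPurpose.RELATIONAL
            if relational > functional
            else ConversationPurpose.FUNCTIONAL)

print(classify_purpose("Attached is the spec; the deadline is Friday."))
# -> ConversationPurpose.FUNCTIONAL
```

Notice that the categories say nothing about whether a human or an agent typed the message; the source question simply drops out.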
Practical Application
The source of this exercise is something that we all have probably run into: human verification. This is when a system is trying to verify that the user it is interacting with is a human and not just an automated system.
The motivation for doing this is to conserve resources, so that the system is used fairly and no single user can abuse the bandwidth through automation. This provides a better user experience, as the system stays responsive and more people are able to use it.
This is an important consideration, both for the system’s owner and the users. The system’s adoption can stagnate if overrun by automation. Think of a forum that has been swamped with advertisements from bots. Allowing this lowers the value proposition of visiting the site, as the content on the whole becomes less useful due to the spam, spam, spam.
Because of this change in human communication, agents will need ways of determining whether the conversation they are having is functional, i.e., an exchange of context and data, or relational, i.e., building understanding and setting boundaries on how interactions should take place. A sketch of how an agent might act on that distinction follows.
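Building on the earlier sketch (and reusing its classify_purpose and ConversationPurpose), here is one hypothetical way an agent could route on that distinction: answer functional traffic itself, and hand relational traffic back to the human it represents. The notify_human and answer_from_context helpers are stand-ins I made up for illustration.

```python
def notify_human(message: str) -> str:
    # Stand-in: a real agent might ping its human over chat or email.
    return f"[escalated to human] {message}"

def answer_from_context(message: str) -> str:
    # Stand-in: a real agent would draft a reply from shared context.
    return "[agent reply] Here is the context you asked for."

def route_message(message: str) -> str:
    """Answer functional conversation; escalate relational conversation."""
    if classify_purpose(message) is ConversationPurpose.RELATIONAL:
        return notify_human(message)
    return answer_from_context(message)

print(route_message("Thank you, I really appreciate the help!"))
# -> [escalated to human] Thank you, I really appreciate the help!
```

The point of the routing is not the specific helpers but the trigger: the agent decides when to step aside based on what the conversation is for, not who is on the other end.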
Conclusion
The categorization of conversations as human or AI is losing value as human communication patterns change. Humans are increasingly using AI and agents to facilitate conversations, so our systems must adapt their communication awareness to accommodate this change. By categorizing conversations by functionality, systems can better understand and respond to the needs of users.