When OpenAI rolled out GPT-4o a few weeks ago, it included the ability to speak with the interface instead of just typing. The chipper voice assistant provided a first glimpse of an AI personality that could become our little employee, co-worker, and friend.
It also signals the next technological revolution: AI agents that will be integrated into every aspect of our lives. Agents that will do all the crappy admin tasks we hate, organize a vacation, fix our computers, and act as our daily coach and therapist … right in the palm of our hand.
But integrating technology into our lives at that granular level also means we’ll need to surrender all privacy. These bots will record our biometric data, facial expressions, arguments, secrets, and every movement in our lives. Microsoft just introduced AI built into its computers (a feature called Recall) that captures a snapshot of your screen every few seconds, recording and retaining nearly everything you do online.
This presents enormous ethical complications, and we need to start thinking about them now, for our businesses and for our families.
I recently read a fascinating research study called “The Ethics of Advanced AI Assistants.” It’s a monster report (as long as a book) sponsored by Google and written by 36 researchers from across the globe. Because I am a full-service blogger, I will summarize this significant report for you today.
As I read this report, I wondered: Are we creating AI, or is AI creating us?
Here is a brief summary:
AI Agents are coming
AI agents will have a profoundly personal impact on our lives, starting with an ability to incorporate personal information like conversations, email, and calendars, but also biometric data like health, wellness, and sleep.
An agent will interpret your facial expressions, tone of voice, and body language. For example, it may act differently toward you if it knows you are sleep-deprived. It will know when you’re sick before you do. It will know when you’re lying. It will know you much better than your spouse or mother.
AI agents will take direct control of your devices. You’ll edit images via voice command, for example. Agents will be “thought assistants,” creative directors, personal productivity assistants, personal trainers/coaches, and therapists.
Agents will be the primary interface between humans and the external world. The research team suggests that this will create a new paradigm of interaction with the web in which websites and content will be less important and perhaps irrelevant.
The Ethics of AI Agents
The research goes into great depth on ethical implications such as:
Access
Agents will provide such a cognitive advantage to users that the gap between the haves and the have-nots will increase. Using AI agents will be a life skill, like using the web effectively. Those with access to premium AI agents might also enjoy health and economic advantages.
Security
AI agents will have access to so much personal information that significant new levels of consent and security will be required. The risk of information being used out of context is extremely high. Since agents will “plug in” to external services, we will place abnormally high trust in our agents and in how our information is stored and used. A data breach might mean that every fact of your life and health becomes available on the web.
Agency
As AI agents take action on behalf of users, questions arise about how this affects user autonomy. Agents will represent us in the world and negotiate with other bots on our behalf. What happens when bot-to-bot negotiations are at odds? What happens when computers make decisions for a company that result in financial losses or lawsuits?
Anthropomorphism
Agents will be human-like, with personality and “feelings.” How developers present these models to the world, especially defining the relationship between humans and bots, will raise ethical considerations. The economic incentive will be to create bots that make the user happy in a way that cultivates dependence. Should a bot be able to feign affection, or represent itself as something more human than it is to make you happy?
Connecting with a bot in a deeply personal way could adversely affect user well-being and create the risk of infringing on user privacy and autonomy. Anthropomorphic features may influence users to feel as though their bot plays a critical social and emotional role rather than a merely functional one (see the movie “Her”).
As AI Agents are integrated with lifelike humanoid robots, these risks could increase.
Value alignment
If a bot is your representative to the world and follows your instructions, it must align with your ethics and worldview. What if you are a criminal? What if you are engaging in self-harm? There is a risk that advanced AI assistants will be insensitive to local values and cultural contexts.
Decisions must be made to limit how an agent can be used in ways that put society and others at risk, e.g., misinformation, harassment, and crime. The researchers name six ways values can be misaligned, arguing that this issue is extremely difficult and complex. Developers will have to determine ethical guidelines that are imposed on all users.
Moral implications
As we become dependent on bots to take over daily interactions, humans will be “out of the loop” and disconnected from many normal human interactions. What is the impact on human socialization and mental health?
If agents are designed to promote “well-being,” how is that defined? If we follow a path of automated, programmed self-improvement, are we improving as human beings or conforming to an algorithmic definition created by programmers? Will AI change society based on the coding preferences of developers?
Example: I recently saw a demo of an AI bot designed for children that reports to parents on the child’s development and mental health. By whose definition? Will our children be programmed to conform to standards established by a small team in Silicon Valley? Where are the medical and psychology experts in this loop?
Safety
The research covers the risk of accidents, malicious misuse, and unintended consequences. These AI systems are so complex that we cannot account for many risks. Early LLMs exhibited hostility and “hallucinated” falsehoods, for example. Could a developer inject a ghost in the machine that causes harm? Could AI bots trick humans into aiding them in achieving a criminal goal?
AI assistants have the potential to empower malicious actors to achieve harmful outcomes across four dimensions:
- offensive cyber operations, including malicious code generation and software vulnerability discovery;
- adversarial attacks to exploit vulnerabilities in AI assistants, such as jailbreaking and prompt injection attacks;
- high-quality and potentially highly personalized misinformation at scale, including non-consensual fakes;
- authoritarian surveillance.
Economics
While agents provide valuable utility, they are likely to create massive job losses, especially for any profession involved in human services.
Influence
We have seen that large language models like ChatGPT can be highly influential and skillful negotiators. With that competency comes the risk of influence sliding from rational persuasion into manipulation, deception, coercion, and exploitation.
A personal view of AI Agents
This post might read like a science fiction nightmare. But this is real, and it’s happening now. So we need to start these conversations.
You might believe you’ll exert personal agency and protect your privacy by not participating in this intrusive new world. But history doesn’t support that conclusion. Especially in America, we’re resigned to the fact that we’ll just turn over our personal information in exchange for free access to news, entertainment, and social media sites.
AI Agents will be so incredibly helpful and cool that we’ll all want to jump in. Humanoid AI Agents will be status symbols and vital if we’re to participate in contemporary society. And once again, we’ll gladly risk all our privacy to play along. Half of America is happy to be tracked by the Chinese government in exchange for access to memes and pranks on TikTok. In fact, they will march on Washington to protect their right to be surveilled by a Communist dictatorship. Why would our AI future be any different?
A very small group of mostly geeky white men is determining the future of the human race. That is not a sensational statement. In essence, the human race will soon have a new operating system. What makes us special as human beings is being systematically stripped away. Who is checking the work of these people?
Regulation? Yes. We will need that, but the irony is that rules could only be enforced at scale through AI Agents. The government cannot act at the speed of technology, so we must depend on our tech leaders to guide AI development with ethics and compassion. And who are we counting on for this? Mark Zuckerberg? Elon Musk? Sam Altman?
These megalomaniacs have signaled that AI is coming, and there is nothing stopping them. They’ve surged ahead at a comet’s pace, taking a scorched-earth approach to any societal norms and laws in the way. There is only one goal: win this inevitable race toward superhuman intelligence, and the consequences be damned.
Nevertheless, it would be folly to ignore this technology. I’ll embrace AI Agents and try to accept them into my life as problem solvers. I need to understand them enough to participate in a smart and ethical way.
Am I concerned about the existential aspects of AI agency? Yes. But I’m also concerned about North Korea having nuclear weapons and climate change jeopardizing my ability to get home insurance next year. On an individual level, there is very little I can do on any of these issues except be part of the debate.
And hopefully that conversation started for you, in a small way, today.
Need a keynote speaker? Mark Schaefer is the most trusted voice in marketing. Your conference guests will buzz about his insights long after your event! Mark is the author of some of the world’s bestselling marketing books, a college educator, and an advisor to many of the world’s largest brands. Contact Mark to have him bring a fun, meaningful, and memorable presentation to your company event or conference.