AI Agents Raise New Possibilities for GAI and New Concerns

The latest generation of “AI agents” may be the next big thing in Generative AI. These tools promise to transform the way businesses operate and interact with customers, but they also raise new ethical challenges and concerns for users.

To date, GAI tools like ChatGPT have become remarkably good at generating text, images, videos, and music when prompted by humans, but they’re not all that good at doing things for us. That’s where “AI agents” come in.

A recent MIT Technology Review newsletter suggests thinking of AI agents as AI models “with a script and a purpose.” The piece went on to describe the two types of AI agents currently being developed and deployed.

The first, called tool-based agents, can be coached using natural human language (instead of any kind of coding) to complete digital tasks for us. Anthropic released one such agent in October, the first from a major AI model-maker. It can translate instructions such as “fill in this form for me” into actions on someone’s computer: moving the cursor to open a web browser, navigating to find data on relevant pages, and filling in a form using that data. Salesforce has released its own agent, too, and OpenAI reportedly plans to release one in January.
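Under the hood, a tool-based agent is essentially a loop: the model looks at the current state of the screen, proposes the next action, the harness executes it, and a fresh observation is fed back in until the task is done. Here is a minimal sketch of that loop in Python. It is a conceptual illustration only; every helper in it (take_screenshot, model_propose_action, execute_action) is a hypothetical stub, not Anthropic’s, Salesforce’s, or OpenAI’s actual API.

```python
# Minimal conceptual sketch of a tool-based-agent loop. Every helper here
# is a hypothetical stub, not any vendor's real API: an actual harness
# would call a hosted model and an OS-automation layer at these points.


def take_screenshot() -> bytes:
    """Hypothetical stub: capture the current screen for the model to inspect."""
    return b""


def model_propose_action(instruction: str, observation: bytes) -> dict:
    """Hypothetical stub: ask the model for the next UI action toward the goal.

    A real model might return e.g. {"type": "click", "x": 412, "y": 230}
    or {"type": "type", "text": "Jane Doe"}.
    """
    return {"type": "done"}


def execute_action(action: dict) -> None:
    """Hypothetical stub: move the cursor, click, or type as instructed."""


def run_agent(instruction: str, max_steps: int = 20) -> None:
    """Observe the screen, let the model pick an action, execute it, repeat."""
    observation = take_screenshot()
    for _ in range(max_steps):
        action = model_propose_action(instruction, observation)
        if action["type"] == "done":  # the model judges the task complete
            break
        execute_action(action)
        observation = take_screenshot()  # feed the result back into the loop


run_agent("fill in this form for me")
```

The max_steps cap reflects a real design concern with such loops: an agent that misreads the screen can otherwise click in circles indefinitely.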

The other type of agent is called a “simulation agent,” and you can think of these as AI models designed to behave like human beings. The first people to build such agents were social science researchers who wanted to conduct studies that would be expensive, impractical, or unethical to run with real human subjects, so they used AI to simulate subjects instead. The trend picked up notably with the publication of an oft-cited 2023 paper, “Generative Agents: Interactive Simulacra of Human Behavior,” by Joon Sung Park, a PhD candidate at Stanford, and colleagues.

Recently, Park and his team published a new paper on arXiv called “Generative Agent Simulations of 1,000 People.” In this work, the researchers had 1,000 people sit for two-hour interviews with an AI. From those interviews, the team was able to create simulation agents that replicated each participant’s values and preferences with striking accuracy.
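The basic pattern behind such simulation agents is easy to sketch, even though the paper’s actual pipeline is far more sophisticated: condition a model on a participant’s interview transcript, then ask it to answer new questions as that participant. In the rough Python sketch below, complete is a hypothetical stand-in for a call to any LLM API.

```python
# Rough sketch of the simulation-agent pattern: condition a model on an
# interview transcript, then answer new questions as that participant.
# complete() is a hypothetical stand-in for any LLM API call; the real
# pipeline in Park et al.'s paper is considerably more involved.


def complete(prompt: str) -> str:
    """Hypothetical stub standing in for a real LLM API call."""
    return "..."


def make_simulation_agent(interview_transcript: str):
    """Return a function that answers questions in the participant's voice."""

    def answer(question: str) -> str:
        prompt = (
            "Below is a two-hour interview with a study participant.\n\n"
            f"{interview_transcript}\n\n"
            "Answer the following question exactly as this participant "
            "would, staying true to their stated values and preferences.\n"
            f"Question: {question}"
        )
        return complete(prompt)

    return answer


agent = make_simulation_agent("Interviewer: ... Participant: ...")
print(agent("How do you feel about remote work?"))
```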

There are two really important developments here. First, it’s clear that leading AI companies think it’s no longer good enough to build dazzling generative AI tools; they now have to build agents that can accomplish things for people. Second, it’s getting easier than ever to get such AI agents to mimic the behaviors, attitudes, and personalities of real people. What were once two distinct types of agents—simulation agents and tool-based agents—could soon become one thing: AI models that can not only mimic your personality but go out and act on your behalf.

Dangers and Ethical Concerns

If such tools become cheap and easy to build, they will raise lots of new ethical concerns, but two in particular stand out. The first is that these agents could enable even more personal, and even more harmful, deepfakes. Image-generation tools have already made it simple to create videos of a person from a single image, and the problem will only deepen if it becomes easy to replicate someone’s voice, preferences, and personality as well.

The second is the fundamental question of whether we deserve to know whether we’re talking to an agent or a human. If you complete an interview with an AI and submit samples of your voice to create an agent that sounds and responds like you, are your friends or coworkers entitled to know when they’re talking to it and not to you? On the other side, if you ring your cell service provider or doctor’s office and a cheery customer service agent answers the line, are you entitled to know whether you’re talking to an AI?

To be sure, implementing AI agents can also result in substantial cost savings for businesses. By reducing labor costs and optimizing resource allocation, organizations can achieve greater operational efficiency, and AI agents minimize the risk of human error, which can lead to costly mistakes. Industries such as manufacturing, logistics, and customer support have already seen significant cost reductions through the integration of AI agents.

However, it is essential to consider the potential disadvantages and ethical challenges associated with their implementation.

How BigRio Helps Bring Advanced and Responsible AI Solutions to Healthcare

Personally, I could not agree more with the concerns about the proliferation of AI agents raised in the MIT Technology Review piece. That is why, at BigRio, we are dedicated to providing solutions and supporting AI startups that focus on the ethical and responsible use of AI.

As we focus our efforts company-wide on GAI and LLM solutions, we are committed to building AI systems with clear, transparent rules about how they process data and use it to deliver the insights your company needs to be more productive, in an ethical, responsible, and legal manner.

AI startups face numerous challenges in demonstrating their value proposition, not the least of which is responsible AI (RAI). This is particularly true for advanced GAI solutions in pharma and healthcare, an area BigRio focuses on. We have taken an award-winning, unique approach to incubating and facilitating startups, one that allows R&D teams and stakeholders to collaborate efficiently and tailor the process to the actual ongoing needs and particular challenges of GAI adoption in the healthcare field.

You can read much more about how AI is redefining business in my new book, Quantum Care: A Deep Dive into AI for Health Delivery and Research. It is a comprehensive look at how AI and machine learning are being used to improve healthcare delivery at every touchpoint, and it also discusses AI’s impact on operations and businesses of all kinds.