AI Agents: The Good, The Risks, and The Way Forward — with Margaret Mitchell
A fascinating chat about who's really in control of AI agents.
Hi! In today's AI in the News: can you actually fall in love with a chatbot? Plus, how spies are using AI, and a fascinating chat with Dr. Margaret Mitchell about who's really in control of AI agents.
Here’s what caught my attention in the news:
She Is in Love With ChatGPT - The New York Times (featuring Giada Pistilli, PhD)
Google’s Gemini AI just shattered the rules of visual processing — here’s what that means for you
OpenAI’s AI reasoning model ‘thinks’ in Chinese sometimes and no one really knows why
Amazon races to transplant Alexa’s ‘brain’ with generative AI
Things I learned today
AI agents are coming, but who's in control? Margaret Mitchell, Avijit Ghosh, PhD, Dr. Sasha Luccioni, and Giada Pistilli, PhD just tackled this hot topic in a fascinating blog post. They dive deep into what AI agents really are, weigh their risks and benefits, and map out a path forward.
I caught up with Margaret to unpack their findings. She makes a critical point about autonomy that'll make you think: fully autonomous systems carry unknowable risks because they operate on computer logic rather than human logic. The solution? Build systems that support and assist rather than override human decisions.
Q: What drove the team to focus on AI agents specifically?
A: We were noticing that more and more people were talking about AI agents, but there was a lot of confusion in the discourse. People were using the language of values, which is amazing and a big step forward in technology. However, functionality was being confused with values, and both were being confused with marketing terms. It needed to be unpacked and organized following an ethical AI approach.
Q: Defining an AI agent seems straightforward, but you found it surprisingly tricky?
A: The concept of AI agents has been around for at least 100 years, and it gets into the big philosophical question of what agency is.
We ended up relying on some of the work that had been done on the difference between automation and autonomy, because those are really closely related concepts. We found a really helpful table of gradations from Caterpillar, the construction equipment company, with a scale from automatic to autonomous. It was really inspirational for what we were doing.
And then our colleagues who just put out the smolagents library had developed a scale of gradations that was also very similar. It was very clear and made a lot of sense. It's intuitive, and it fits how people are using the term agent.
So we spent some time thinking through what this means in terms of the concepts of agency and autonomy, and how these different levels pan out.
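If you want to see what the high end of that scale looks like in practice, the smolagents library Margaret mentions has a quickstart along these lines. This is a minimal sketch based on that example (API names as of the library's initial release, so they may have shifted since):

```python
# pip install smolagents
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# A CodeAgent writes and executes Python code to work through a task,
# calling tools (here, web search) as it decides it needs them.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())

agent.run(
    "How many seconds would it take for a leopard at full speed "
    "to run through Pont des Arts?"
)
```

On the gradation scale the team describes, this sits at the multi-step agent level: the model's output determines which tools run, in what order, and when to stop, which is exactly the kind of ceded control Margaret flags next.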
Q: What are the key considerations for developers working with AI agents?
A: The more autonomous the system, the more someone is ceding control. As we develop more autonomous agents, we're giving away more human control, making it harder to predict outcomes.
One of the big things to take away there is that fully autonomous systems have a lot of risk. They have unknowable kinds of risks as well, things that are really hard to predict, in part because they operate in the logic of computers and not in the logic of humans.
We actually recommend against developing fully autonomous systems. Instead, focus on building systems that support and assist people without having the power to overrule human decisions.
Safety is another key value that connects to many concerns including privacy, security, and anthropomorphization.
When you're not clear about how a system might be working, which is part of the design of AI agents, you lose some control over what it might do. This can lead to privacy breaches, security issues, and possible reputational harms.
In the blog post, we speak to the different ways you can analyze safety, how this relates to all these other kinds of risks, and how to approach development in a way where you could hopefully make the systems as safe as possible.
So we really call out autonomy and safety as two of the key issues to pay attention to when you're developing.
Q: What role does open source play in AI agent development?
A: Openness is crucial for understanding how systems work. The more eyes you have on something, the better we can identify loopholes and oversights. It brings in diverse inputs and provides accountability. With openness, you can verify claims about systems' capabilities and separate marketing from science by directly examining the underlying data and evaluating it yourself.
👉 Dive deeper into AI agents by checking out their full blog post - it's worth your time!
Tools You Can Use
We're launching the Hugging Face Agents course! This free, interactive and certified course will guide you through building and deploying your own AI agents. Enrol now: https://lnkd.in/eUYUzHBS
Tom Aarsen just introduced a new method to train static embedding models that run 100x to 400x faster on CPU than common embedding models, while retaining 85%+ of the quality (see the quick usage sketch at the end of this issue).
Google is making AI in Gmail and Docs free — but raising the price of Workspace
ChatGPT can now handle reminders and to-dos. Ole Reissmann took it for a spin in a quick demo (and posed an excellent question: "How should we adapt our media offerings in response?")
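And as promised, here's what using one of Tom Aarsen's static embedding models looks like. A minimal sketch with sentence-transformers; the model id is taken from the release announcement, so double-check it against Tom's post:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

# Static embedding models skip the transformer attention stack at
# inference time, which is where the 100x-400x CPU speedup comes from.
model = SentenceTransformer(
    "sentence-transformers/static-retrieval-mrl-en-v1",  # id from the release post
    device="cpu",
)

embeddings = model.encode([
    "AI agents are coming, but who's in control?",
    "Static embeddings trade a little quality for a lot of speed.",
])
print(embeddings.shape)  # (2, embedding_dim)
```

Because everything runs on CPU, this is a nice fit for laptops, edge devices, or high-throughput retrieval pipelines where a GPU isn't an option.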