2 / The agent will understand the context “Claude needs to learn enough about the specific situation and the obstacles you face to be useful. Things like the specific role you play, what writing style or what you need and your organization. ANTHROPIC “I think we will see improvements where Claude will be able to browse documents, Slack, etc., and learn about what is useful to you. That is less emphasized with the agent. It is necessary for the system to be not only useful but also secure, doing what you want it to do. “In addition, many tasks do not require Claude to do a lot of reasoning. You do not need to sit and think for hours before opening Google Docs or something. So I think that a lot of what we will see is not only more reasoning but the application of reasoning when it is really useful and important, but also do not waste time when it is not necessary. 3 / Agent will make a better coding assistant “We wanted to get a very initial beta of the computer to use out for developers to get advice when the system is relatively primitive. But as this system improves, it can be used more and can cooperate with you in various activities. “I think DoorDash, The Browser Company, and Canva are all trying to, like, sort of browser interactions and design with the help of AI. “My expectation is that we’ll also see more improvements for coding assistants. That’s a lot of fun for developers. There is just a ton of interest in using Claude 3.5 for coding, where it is not just autocomplete like a few years ago. It’s really knowing what’s wrong with the code, debugging – running the code, seeing what’s going on, and fixing it. 4 / Agents must be made safe “We founded Anthropic because we expect AI to advance rapidly and [thought] that, of course, safety concerns were to match. And I think it’s going to be more visceral this year, because I think the agents are going to be more integrated in the work they do. We must be ready for challenges, such as rapid injection. [Prompt injection is an attack in which a malicious prompt is passed to a large language model in ways that its developers did not foresee or intend. One way to do this is to add the prompt to websites that models might visit.]