Agent Lifetime
At present, the efficacy of agents/personas appears to be ephemeral. The context window may be large, as with the big commercial models, but over time its capacity is exhausted and early context is lost. System prompt pinning in various commercial models helps (that setup information is retained and/or prioritized), but context overrun can still occur. There is a lot of work on memory tactics to address this, various RAG approaches and so on. Simply re-instantiating an agent after a given interval is viable only if the accumulated expertise can be stuffed into the reboot context, which is questionable.
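As a minimal sketch of what "stuffing expertise into the reboot context" might look like: distill the old agent's history into a compact digest and prepend it to the original setup prompt. The `summarize` callable here is a hypothetical stand-in for whatever model endpoint you use, not a real library API.

```python
# Hypothetical sketch: distill accumulated context into a "reboot" prompt
# before re-instantiating the agent. summarize() is a placeholder for your
# model/summarization endpoint; nothing here names a real library call.

def build_reboot_context(system_prompt: str,
                         history: list[str],
                         summarize,
                         budget_tokens: int = 2000) -> str:
    """Compress conversation history so accumulated expertise survives a reboot."""
    digest = summarize("\n".join(history), max_tokens=budget_tokens)
    # The new instance starts from its original setup plus the distilled expertise.
    return f"{system_prompt}\n\nDistilled prior experience:\n{digest}"
```

Whether such a digest actually preserves expertise, rather than a lossy caricature of it, is exactly the open question.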
Some combination of vector DB and graph RAG seems to be where things are heading (today, lol). Agent cores would consist of setup code, experiential context, and house lore, and they should be mostly model-independent. In essence, such relocatable agents would be a significant subsystem in their own right, comprising the setup prompt plus an xRAG submemory system; a rough sketch follows below. This looks like a major area of innovation at the moment.
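Here is one way to picture such a relocatable core, as a sketch rather than a design: the setup prompt, house lore, and experiential memory travel together, and the underlying LLM is injected as a callable so the core stays model-independent. `VectorStore` and its methods are made-up stand-ins for whatever vector DB / graph-RAG layer is actually used.

```python
# Sketch of a model-independent "agent core". The LLM is passed in as a
# callable; VectorStore is a hypothetical interface, not a real library.
from dataclasses import dataclass
from typing import Callable, Protocol


class VectorStore(Protocol):
    def search(self, query: str, k: int) -> list[str]: ...
    def add(self, text: str) -> None: ...


@dataclass
class AgentCore:
    setup_prompt: str      # the pinned system/persona prompt
    lore: list[str]        # house lore: stable, curated knowledge
    memory: VectorStore    # xRAG submemory: accumulated experiential context

    def answer(self, question: str, llm: Callable[[str], str]) -> str:
        # Recall relevant experience, assemble the prompt, call whichever
        # model the core is currently mounted on, and record the exchange.
        recalled = self.memory.search(question, k=5)
        prompt = "\n\n".join([self.setup_prompt, *self.lore, *recalled, question])
        reply = llm(prompt)
        self.memory.add(f"Q: {question}\nA: {reply}")
        return reply
```

The point of the shape is portability: swap the `llm` callable and the same core, lore, and memory move to a different model.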
Not all agents would be human-facing: some would exist to support other agents. Communication between these should probably not be in interpretation-bound plaintext but rather in richer, terser data formats such as JSON-LD, RDF, etc. Some agents would be people agents and others would not; Tom Smykowski laughs maniacally here.
Even human-directed outputs can benefit from structured formats.
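For flavor, a JSON-LD-style inter-agent message might look like the following. The vocabulary (`ex:`) and the task fields are invented for illustration; a real deployment would pin a shared context/ontology.

```python
# Sketch of a terse, interpretation-free inter-agent message in JSON-LD form.
# The ex: vocabulary and all identifiers below are hypothetical.
import json

message = {
    "@context": {"ex": "https://example.org/agent-vocab#"},
    "@id": "ex:task-4711",
    "@type": "ex:ReviewRequest",
    "ex:requestedBy": "ex:agent/planner-01",
    "ex:assignedTo": "ex:agent/reviewer-02",
    "ex:artifact": "ex:doc/design-spec-v3",
    "ex:deadline": "2025-07-01T12:00:00Z",
}
print(json.dumps(message, indent=2))
```

Nothing in the message depends on a reader's interpretation of prose; each field is resolvable against the shared vocabulary.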