I have been thinking a lot lately about “diachronic AI” and “vintage LLMs” — language models designed to index a particular slice of historical sources rather than to hoover up all available data. I’ll have more to say about this in a future post, but one thing that came to mind while writing this one is the point made by AI safety researcher Owain Evans about how such models could be trained:
But those tricks, I believe, are by now quite clear to everybody who has worked extensively with automated programming in recent months. Thinking in terms of “what a human would need” is often the best bet, plus a few LLM-specific things, like the forgetting issue after context compaction, the continuous need to verify the model is on the right track, and so forth.
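To make the compaction issue concrete, here is a minimal sketch of one way to guard against it: pin the facts the model must never lose (the goal, key constraints) so they are re-injected after every summarization pass. The names here (`Context`, `pin`, `compact`) are hypothetical illustrations, not any real library’s API.

```python
# Hypothetical sketch: keeping pinned facts alive across context compaction.
# All class and method names are illustrative, not a real framework's API.

class Context:
    def __init__(self, max_messages=4):
        self.max_messages = max_messages
        self.pinned = []      # facts that must survive every compaction
        self.messages = []    # the ordinary rolling transcript

    def pin(self, fact):
        """Record a fact the model must never forget (goal, constraints)."""
        self.pinned.append(fact)

    def add(self, message):
        self.messages.append(message)
        if len(self.messages) > self.max_messages:
            self.compact()

    def compact(self):
        """Summarize older messages, keeping only the two most recent."""
        summary = "SUMMARY of %d earlier messages" % (len(self.messages) - 2)
        self.messages = [summary] + self.messages[-2:]

    def prompt(self):
        # Pinned facts come first, so compaction can never drop them.
        return "\n".join(self.pinned + self.messages)


ctx = Context(max_messages=4)
ctx.pin("GOAL: migrate the test suite to pytest")
for i in range(10):
    ctx.add(f"step {i}: ...")
# After repeated compactions, the goal is still in the prompt.
print("GOAL" in ctx.prompt())
```

The design choice this illustrates is simply separating durable state from the rolling transcript, which is roughly what a human would need too: a sticky note with the goal, regardless of how much scrollback has been thrown away.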