
Fabi.ai December 2025 Product Updates: Application connectors
TL;DR: After a year where better tooling mattered more than better models, 2026 will be defined by the context retrieval challenge. Apple will finally announce a credible AI strategy that pressures OpenAI in the consumer market, while legacy data companies consolidate amid struggles to compete with AI-native products. MCP servers will reach mainstream adoption through major LLM marketplaces, OpenAI will declare AGI for legal reasons to split from Microsoft, and Perplexity will get acquired as Google's dominance in AI search becomes insurmountable.
As we close out 2025, the AI landscape feels contradictory. AI stocks are hitting record valuations while forums like Reddit and Twitter are filled with skepticism. Companies get acquired at eye-popping prices while people ask "who actually uses this?"
After a year of testing real products, most people have figured out what actually works and what's just hype. But investors keep betting as if AI will triple the economy. The war chests are big enough that 2026 won't be the year of reckoning, but serious questions are starting to surface.
Adoption is wildly uneven. Most organizations either haven't started with AI or can't figure out how to make it work. Meanwhile, certain companies are all-in, automating everything possible.
The biggest improvements this year weren't from better models—they came from better tooling. Agentic frameworks, tool calling, MCP servers, Skills. This makes me wonder: are the models themselves starting to plateau?
There's also been a major shift in thinking about competitive moats. At the start of 2025, saying that AI applications had stronger moats than infrastructure companies would have sounded crazy. Now it's obvious. OpenAI's head start eroded faster than almost anyone predicted, and they haven't displaced the applications built on top of the models. At Fabi, this has always been clear to us: building an AI platform that actually works for data analysis takes a lot of careful design work that the big AI players aren't focused on.
Context becomes the main challenge (and topic du jour)
Despite all the progress on tooling, the main challenges remain context retrieval (RAG 2.0, anyone?) and continuous learning. The first half of the year will be all about context compression, retrieval, and memory. We're already seeing attempts (Anthropic's Skills, for example), but anyone who has played around with MCP servers or Skills immediately sees the issue: when the AI knows which Skill or tool to call, it feels like magic, but too often users still need to remind the AI to use a certain skill or tool. This is fine for power users, but the vast majority of the knowledge workforce will not grok this, and it will have to be solved for them.
Apple will announce a concrete AI plan that actually holds up. Maybe not their own LLM, but a credible timeline that, given Apple's massive distribution power, puts real pressure on OpenAI as the consumer-focused AI provider. Given all the circular investments in AI, this will shake things up and likely put even more pressure on OpenAI to compete in the enterprise space, where Anthropic and Google seem to be winning. By 2027, Siri is powered by an LLM running locally on your Apple device, and everyone has a Her-style AI whispering in their ears via their AirPods.
Too many pre-AI players are struggling to adapt. At this stage they've all bolted on AI features, but those features fail to impress. As legacy data and BI players feel increasing pressure from new entrants, we'll see consolidation, with legacy solutions acquiring AI-native ones.
Perhaps my hottest take on this list: MCP servers will reach wider distribution, but they won't be called MCP servers. Major LLM providers will open their marketplaces—especially as OpenAI tries to compete with Slack—and MCP will be core infrastructure. But there's a problem: AI models don't have enough context about what each tool does or what data it accesses (see point above). I don't think the context issue will be solved in 2026, but if anyone does it, it'll be Anthropic. They're leaning into enterprise use cases, and this is squarely in that territory.
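To see how thin that context is today, here's roughly what an MCP server advertises for a single tool in its `tools/list` response (sketched as a Python dict; the tool itself is hypothetical). A name, a one-line description, and a JSON Schema for the inputs is essentially everything the model gets:

```python
# Sketch of the per-tool metadata an MCP server exposes via tools/list.
# The tool is made up; the point is how little the model has to go on.
tool = {
    "name": "query_warehouse",
    "description": "Run a read-only SQL query against the company warehouse",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "The SQL to execute"},
        },
        "required": ["sql"],
    },
}

# Nothing here tells the model which tables exist, how fresh the data is,
# or who is allowed to see it -- that context lives outside the protocol.
assert set(tool) == {"name", "description", "inputSchema"}
```

Until that metadata gap is closed, tool descriptions are doing the work that real context retrieval should be doing.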
Sam Altman has basically said OpenAI will compete with Slack, and the logic makes sense—chatbots need to be where people actually communicate. But OpenAI has struggled to launch and maintain these kinds of products. Slack will be safe in 2026, though they'll need to announce how they're integrating AI tools (maybe through a Salesforce-Glean acquisition?).
There won't be consensus on what AGI actually means, but OpenAI will claim they've achieved it anyway. The motivation? Legal strategy. Declaring AGI would let them start splitting from Microsoft per their partnership terms. 2026 could be the year the definition of AGI gets settled in a court of law.
At the start of 2025, it looked like Google had missed the boat. Twelve months later, they've figured it out and they're the 10,000-pound gorilla. They have top-tier models (or close enough), virtually unlimited funding, and the most powerful search channel in the world. Perplexity can't compete with that. As their customer acquisition costs keep climbing, funding options will dry up and they'll look for an exit. The question is who buys them. OpenAI and Anthropic have web search handled, so the obvious buyer isn't obvious. Apple, maybe?