January 15, 2025

Intelligence 🧠 vs Integration 🔧 in AI

AI is developing through two distinct forces: intelligence and integration. Understanding them is an important step toward understanding the future of AI.


🧠 Intelligence is the raw capability that state-of-the-art models exhibit (the IQ of the models, so to say); vs 🔧 Integration is the depth to which a given level of intelligence is embedded into applications, products, processes, etc.

The interplay between the two goes like this: each growth spurt in intelligence is followed by months or years of integration work, which in turn funds the R&D to unlock the next growth spurt in intelligence.

For instance, since GPT-4 level intelligence came out 20 months ago, most of the industry has been focused on building integrations, and one could argue we still haven’t reached the full potential here.

🧠 Growth steps in intelligence are uncertain and lumpy (science); vs 🔧 Growth steps in integration are more certain and smooth (engineering)

🧠 Winning in intelligence requires the ability to take outsized risks, rare and specialized AI research and engineering skills, and a strong R&D effort; vs 🔧 Winning in integration requires existing market share, deep domain knowledge, and strong product and engineering execution.

Needless to say, winning in either requires larger and larger investments as competition rises.

🧠 Intelligence is dominated by AI labs such as DeepMind, OpenAI (the old OpenAI?), Meta AI, Anthropic, Safe Superintelligence (maybe?); vs 🔧 Integration is largely directed by hyperscalers (Microsoft, Amazon, Salesforce) and vertically focused startups who build on/around their platforms

Notably, many play at the intersection of the two but typically major in one, e.g. OpenAI, Anthropic, Meta AI, Mistral, Cohere, etc. Indeed, to sustain investing in growing intelligence, a company has to show revenues to investors, which requires some degree of involvement in integration.

Well, so where are we now?

Growth in intelligence has sort of stalled - there’s no strong evidence that “just scaling” LLMs with more data and compute will keep increasing their intelligence like it did before. New paradigms are most likely needed, and no one can predict how quickly we will get to the next groundbreaking one. But lots of smart folks are working on this (unless they get distracted or deprioritized by integration work - which seems to be happening to an extent with the likes of OpenAI).

The integration space, however, is buzzing, seeing more activity, investment, and consolidation than ever before. I personally think Microsoft has the best chance of dominating this space, followed by the other hyperscalers.

The path to AGI is most likely going to involve the continued interplay of these two forces. A superintelligent model sitting on a server cluster somewhere isn’t going to be that useful unless it’s integrated into the real world.



