This view that I have described here, of increasing feature interaction causing increasing essential complexity, leads to the conclusion that such a shared component ends up suffering from the union of all the complexity and constraints of the masters it needs to serve. Ultimately it collapses of its own weight. The alternate strategy is to emphasize isolating complexity, creating simpler functional components, and extracting and refactoring sharable components on an ongoing basis. That approach is also strongly influenced by the end-to-end argument and by the view that you want to structure the overall system in a way that lets applications most directly optimize for their specific scenario and design point. That sounds a little apple pie, and in practice it is a lot messier to achieve than it is to proclaim. Determining which components are worth isolating, and getting teams to agree and unify on them rather than letting “a thousand flowers bloom”, is hard ongoing work. It does not end up looking like a breakthrough; it looks like an engineering team that is just getting things done. That always seemed like a worthy goal to me.
The degree to which you can isolate complexity is a function of the type of software being designed. It’s a lot like the Stallman-vs.-Microsoft approach: command-line tools that each do an individual task and microkernels that isolate everything into discrete tasks, versus building complete systems and monolithic kernels that combine those tasks.
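The command-line half of that comparison is easy to sketch. As a small illustration (this word-counting pipeline is my own example, not one from the text above), each tool isolates exactly one task and the shell composes them into the larger behavior:

```shell
# Find the most frequent words in some input by composing
# single-purpose tools -- no one tool knows the whole job.
printf 'the cat sat on the mat the cat\n' |
  tr ' ' '\n' |   # split into one word per line
  sort |          # group identical words together
  uniq -c |       # count each group
  sort -rn |      # most frequent first
  head -n 2       # keep the top two
```

Each stage can be developed, tested, and replaced in isolation; the monolithic alternative would be a single program that parses, counts, and ranks internally, trading that isolation for tighter integration.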
But even for projects as large as an OS, Mach/Hurd loses to Linux or Windows, despite the fact that its benefits presumably come from the greater ability to isolate complexity. Designing isolated components that compose into a complex system is harder than simply building the complex system. That’s why it’s hard to argue that the complexity-isolation approach works for sufficiently complex projects: it “should”, but in practice it isn’t doable!