Artwork by MFG Labs

Divide et impera

MFG Labs
The Programmable Chronicles
Jun 18, 2015


“Divide and conquer.” The maxim attributed to Machiavelli has spread all across the tech world, because that is how engineers think: we break a complex problem down into smaller, simpler tasks. We like small and agile, not big and strong. Computer science was built around this idea. In the early 1960s, computers were powerful mainframes shared by dozens of people. Then the microprocessor revolution of the early 1970s made miniaturization possible, and we shifted to a distributed architecture: many computers, each more and more powerful.
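To make the idea concrete, here is the textbook divide-and-conquer algorithm, merge sort: a list too long to sort in one go is split into halves, each half is sorted recursively, and the two sorted halves are merged. (A minimal Python sketch for illustration; it is not from the original article.)

```python
def merge_sort(items):
    """Divide and conquer: split the list, sort each half, merge the results."""
    if len(items) <= 1:               # a list of one element is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide: conquer each half recursively
    right = merge_sort(items[mid:])
    return merge(left, right)         # combine the two sorted halves

def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])           # one of these two tails is empty
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```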

Processors evolved along the same lines: multi-core CPUs and GPUs (Graphics Processing Units) made highly efficient parallel computation possible.
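The same reflex works on a single machine: split the data across cores, compute partial results in parallel, and combine them at the end. A minimal sketch with Python's standard multiprocessing module (the worker function and the number of workers are arbitrary choices for the example):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each core independently handles one slice of the problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4                     # e.g. one worker per core
    step = len(data) // n_workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with Pool(n_workers) as pool:
        # divide the data across cores, then combine the partial results
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```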

We shifted from an architecture where the intelligence was concentrated in a single point to a distributed one where the intelligence is spread across the nodes. But how could we take advantage of this tremendous power?

The Internet brought the answer in 1972: all these computers became connected and could join forces to share their computing power. Grid computing was born. It started with computers but soon expanded: with Folding@home, Stanford University offered PlayStation 3 owners the chance to lend the console's powerful processor to research on Alzheimer's and Parkinson's diseases.

But the rise of Cloud Computing and Big Data during the last few years has disrupted this way of doing things.

New services like Google, Facebook and Amazon needed to collect, store and process incredible amounts of data, and nothing yet existed to handle the task. So they built huge data centers with tens of thousands of servers running 24/7 to meet their needs. Amazon and Google were the first to realize that the architecture and tools they had developed could be valuable to other companies. Hadoop was developed as an open-source version of Google's MapReduce (which is, by the way, totally divide-and-conquerish), while Infrastructure-as-a-Service was born with Amazon Web Services. Services as big as Netflix and NASA now rely entirely on it. Our AWS bill at MFG Labs is probably a lot lower than Netflix's, but we use the service too, like almost every startup.
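To see why MapReduce is so divide-and-conquerish, here is the classic word-count example shrunk to a few lines of sequential Python standing in for a whole cluster (an illustrative sketch, not Hadoop's actual API):

```python
from collections import defaultdict

def map_phase(document):
    """Map: each worker emits a (word, 1) pair for every word in its document."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce: group the pairs by word and sum the counts."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

documents = ["divide and conquer", "divide et impera", "conquer the cloud"]
# On a real cluster the map phase runs on many machines at once;
# here we simply run it over each document in turn.
pairs = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(pairs))
# {'divide': 2, 'and': 1, 'conquer': 2, 'et': 1, 'impera': 1, 'the': 1, 'cloud': 1}
```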

But there is a major problem: these server farms concentrate huge amounts of computing and storage power in a single node, upending the very vision behind the last 40 years of computer development. We are back to 1962 and the good ol' mainframe.

Where are we heading? Is this just a temporary trend or will it last?

We think SaaS and IaaS are awesome, and they are definitely here to stay, but the architecture on which they rely (data centers) will disappear.

Data centers are wrong.

They are centralized beings in a distributed world, the world of the Internet. Sooner or later they will adapt, probably by diluting themselves into the network: every connected device will become a small part of those data centers, dedicating some of its storage to host an infinitesimally small share of the world's data. This distributed architecture is a lot safer: if one node of the network goes down, it is painless, whereas if an asteroid wiped out one of Amazon's data centers tomorrow, it would be a disaster. Nor would the data be owned by a single entity in a single place, which improves privacy.
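What might "diluting a data center into the network" look like? One well-known building block is rendezvous hashing: every key is replicated on the few devices that score highest for it, and when a device disappears, only the keys it held have to move. A minimal sketch (the device names and replica count are invented for the example):

```python
import hashlib

NODES = [f"device-{i}" for i in range(10)]   # hypothetical connected devices
REPLICAS = 3                                  # each piece of data lives on 3 of them

def owners(key, nodes, replicas=REPLICAS):
    """Rank nodes by a hash of (node, key); the top-ranked nodes own the key."""
    score = lambda node: hashlib.sha256(f"{node}:{key}".encode()).hexdigest()
    return sorted(nodes, key=score)[:replicas]

key = "some-file.zip"
before = owners(key, NODES)
print(before)

# If one owner vanishes, the two surviving replicas still serve the key,
# and only a single new node has to pick up a copy.
survivors = [n for n in NODES if n != before[0]]
print(owners(key, survivors))
```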

Back in our beloved distributed architecture, balance will be restored to the force, and we engineers can once again do what we do best: divide and conquer.

Alexis — @alexisbrichard

You should follow us on Twitter: @mfg_labs.
