We do things a little differently at Mediafly. We run a highly distributed backend with a large mix of programming, operational, and OS technologies: .NET, Python, Ruby, Java (minimal), Docker, Apache, nginx, Flask, Linux, and Windows. And as we have more conversations about our company and technology operations, we often get asked about the software stack on the backend. The conversation usually goes like this:
Person: “What software stack do you use for your infrastructure? Ruby on Rails? .NET? Java?”
Mediafly: “We use the best tool for the job.”
Person: “What?? How can you possibly [control costs / recruit engineers / outsource / build quickly] without standardization??”
We are a polyglot engineering organization. Our engineers use the best tool for the job to solve the problems that need to be solved.
What are the benefits of the polyglot infrastructure?
- One single tool is NOT the best for every job. That’s like saying a hammer will solve all of your construction needs.
- The best platform can be used for each given problem. Some platforms are better than others at different tasks. For example, let’s say you are in a Java shop that needs to build a highly efficient, actor-focused concurrent processing system. You could investigate each of the various frameworks that sit on top of Java (Kilim, Akka, Jetlang, FunctionalJava). You could pick one, dive into it until you butt heads against the inevitable roadblocks, back out, try another framework, and repeat. You could easily spend months going down this exploration path, all at the expense of the actual business problems you are trying to solve. Now let’s say you are a polyglot shop. You could solve this quickly and efficiently with an Erlang application server and a thoughtful API.
- When you build for many clients (iOS, Android, Win8), for the web (HTML, CSS, JavaScript, and even specific browsers), or even for databases (SQL), polyglot is in your nature anyway. The team usually has embraced it to some extent.
- The approach keeps your edge sharp. The pace of new framework and new technology creation is accelerating. Anecdotally, the mere act of exploration makes developers stronger.
- The approach forces you to design your interfaces more thoughtfully. When components are statically linked, function declarations are cheap to change, which makes it easy to get away with sloppy interfaces. Designing a robust REST API that can grow and survive independent software lifecycles requires careful consideration and planning.
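To make the actor point above concrete: the actor model gives each worker a private mailbox and single-threaded message processing, which is what Erlang (or Akka on the JVM) provides natively. A minimal Python sketch of the pattern, with hypothetical names, looks like this:

```python
import queue
import threading

# Minimal actor sketch: the actor owns a mailbox and processes its
# messages one at a time on its own thread, so its state is never
# touched concurrently. Names here are illustrative only.
class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.total = 0
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill: shut the actor down
                break
            self.total += msg        # state mutated by this thread only

    def send(self, msg):
        self.mailbox.put(msg)

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()

actor = CounterActor()
for n in (1, 2, 3):
    actor.send(n)
actor.stop()
print(actor.total)  # -> 6
```

A language built around this model handles supervision, distribution, and millions of lightweight actors for you, which is exactly why reaching for it can beat bolting a framework onto a single standardized stack.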
What are the drawbacks of the polyglot infrastructure?
- Overkill. If you are building a simple web app with a small team, you probably don’t need a polyglot infrastructure. Just go with your standard framework and ecosystem, and call it a day.
- Outsourcing. It can be harder to outsource. Most outsourcers have strong capabilities in a single framework. Those that do have expertise across multiple frameworks tend to orient their businesses by framework, so you’d be dealing with different groups entirely if you tried to get them to work as polyglot-ists.
- Recruiting. It can be harder to hire and recruit engineers. Most engineers fall into one of the “camps” denoted by the technology: a “.NET developer”, a “Java developer”.
- Describing a developer as a polyglot sends some candidates and some recruiters heading for the hills, as they simply don’t know how to classify what you do.
- Overhead. Engineers may find themselves having to build an API to satisfy only a single downstream consumer. This might feel like a waste of time, or induce the engineer to take shortcuts.
How can you make it work?
You need several things to make the ideas behind a polyglot infrastructure work for you:
1. Strong APIs.
A strong API has a few key characteristics:
- Each component can operate on a software lifecycle that is independent of other components
- Components communicate over the network. It can be REST APIs, JSON documents at a well-known URL, FTP uploads, whatever. It cannot be a dynamically linked library, and it cannot be a file-based API localized to a single server.
- The APIs should be language-agnostic. For example, I should not be required to use .NET to connect to the Accounts API.
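Language-agnostic in practice usually means a plain-text contract such as JSON over HTTP: any client that can make a request and parse JSON can participate. A sketch of the consumer side, with a hypothetical payload (these field names are illustrative, not Mediafly’s actual Accounts API):

```python
import json

# Hypothetical response body for GET /api/v1/accounts/42.
raw = '{"id": 42, "name": "Acme Corp", "plan": "enterprise"}'

def parse_account(body):
    """Parse and validate an account payload at the boundary.

    Because the contract is plain JSON over HTTP, any language with a
    JSON parser can implement this same check.
    """
    account = json.loads(body)
    for field in ("id", "name", "plan"):
        if field not in account:
            raise ValueError("missing field: " + field)
    return account

account = parse_account(raw)
print(account["name"])  # -> Acme Corp
```

Validating the contract at the boundary, rather than trusting the wire, is what lets each side evolve on its own lifecycle.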
Without a strong API, communication between systems is fraught with peril. In the past, we would allow independent systems to communicate directly with components of other systems, often with very odd side effects. In one extreme example that took place several years ago, all of the content libraries were randomly getting deleted multiple times a week. We searched far and wide to figure out the cause, and even had to whip up a restore script that proactively “restored” the deletions every few minutes until we could figure it out. We finally discovered the root cause: our reporting infrastructure was reading directly from another system’s content management database to construct a list of content libraries. But there was a side effect: the “read” was actually deleting items from the database!
While we’ve always had a solid external device API through which our mobile and web apps communicate, we never applied that same principle internally. That all changed after this event: we began a very concerted effort to migrate more and more of our inter-system communication to sit behind APIs. The result has been a huge boost in reliability and increased ease in understanding our system.
2. Documentation (and the willingness to maintain documentation)
A robust, thoughtful API is only as good as the documentation that surrounds it. Engineers who create APIs must be willing to invest in the documentation as well. One can even think of the documentation as a form of test-driven development: build up the documentation first (or at least its shell), then build the API to make the documentation come to life. It’s amazing what confusions and complexities we discover by taking this path.
Simply building the documentation is not enough. Teams have to be willing to invest in maintaining that documentation. Interfaces change, parameters change, and the documentation must change as well.
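One lightweight way to keep documentation and code from drifting apart is to make the documented examples executable. A sketch using Python’s built-in doctest, with a stubbed, hypothetical function standing in for a real API call:

```python
import doctest

def list_libraries(account_id):
    """Return the content library names visible to an account.

    The example below was written before the implementation, so the
    documentation drives the code rather than trailing it.

    >>> list_libraries(42)
    ['Sales Decks', 'Product Videos']
    """
    # Stub in place of a real API call; the data is hypothetical.
    return {42: ["Sales Decks", "Product Videos"]}.get(account_id, [])

# Running the examples keeps the docs honest: if the interface drifts,
# the documented example fails.
results = doctest.testmod()
print(results.failed)  # -> 0
```

The same idea scales up to running every snippet in an API reference against a staging server as part of the build.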
3. Engineering and management teams that are willing to change.
The migration from single-framework to polyglot is not easy. Even for small teams it requires a big mental shift, lots of convincing and debate, and slow migrations. It requires extra investment in building a robust API that may only ever get consumed by a single downstream system. It requires building up expertise on deploying multiple kinds of systems. It requires trial and error, much refactoring, and much learning.
It also requires a team that is willing to change: engineers who are willing to throw out their first, second, … and Nth attempts before finally reaching the right approach, and management that understands that the long-term benefit exceeds the short-term cost.
Final thoughts
The truth of the matter is that we do standardize. But not in a way that is easy to describe at a cocktail party. (“I’m a dentist.” “I’m a .NET developer.”) We standardize on two things:
- The execution environment. An infrastructure that requires significant manual steps because of a new technology creates significant friction. Deployment decisions have a lot of weight in our choices, and tools like Docker help standardize the execution environment.
- The API definition.
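As a sketch of what standardizing the execution environment can look like, every service, whatever its language, might ship as a container image built from a file like the one below (the base image, paths, and commands are all hypothetical, not our actual configuration):

```dockerfile
# Hypothetical service image: the language inside the container varies,
# but the build-and-run workflow is identical for every service.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "app:app"]
```

With this in place, operations deals with one artifact type (an image) and one verb (run a container), regardless of what the team chose inside.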
Every other decision is centered around prioritization and around benefit vs. cost, just like all other business decisions.
We’d love to hear your thoughts. Does a polyglot infrastructure work for your team? Why or why not? What else is needed to make it successful?
Are you interested in continuing to develop as a polyglot engineer? Mediafly is hiring!