I’m a huge fan of the idea of taking our super-duper applications and deploying them where it makes most sense. I always have been. Those of us in application development spent forever bemoaning the fact that our applications weren’t all that portable between operating systems. In fact, looking back through the history of computers, you find there has always been one roadblock or another to stop portability.
First there was byte order and instruction set, and we’ve largely overcome those. Internationalization wreaked havoc on us all for a while, and we overcame that. We’re still struggling mightily with OS-dependent things such as windowing systems, but browser-based apps have largely overcome those also. Java takes a lot of heat for a lot of reasons, but it did improve the portability of apps written in it. Maybe not “Write once, run anywhere,” as was often claimed (things such as file locations were still OS-specific), but a lot closer than compilers deeply tied to the operating system they ran on. Virtualization and containers have enhanced portability in another dimension, letting the developer take the OS along with the app.
We have overcome a ton of issues, and advancing technology has introduced a ton more. Now we have to either choose mobile targets or find a tool we like that can generate code for wildly different mobile systems from a single source. DevOps really isn’t helping with this process yet, though it will, eventually. Once the build process is automated, the target becomes switchable at the targeting end, meaning our app, given a tool that can multi-target, can be built for whatever mobile platform the cool kids are playing with today, while the platforms considered “cost of entry” are generated in the same run.
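To make that concrete, here is a minimal sketch of what a multi-target build step looks like once the pipeline, not the developer, owns the target list. Everything here is hypothetical: `build_for`, the target names, and the output paths stand in for whatever cross-platform toolchain your pipeline actually drives.

```python
# Sketch of a multi-targeting build run: one source tree, many outputs.
# All names and paths are illustrative, not any real tool's CLI.

REQUIRED_TARGETS = ["android", "ios"]   # the "cost of entry" platforms
EXTRA_TARGETS = ["wearos"]              # whatever is fashionable today

def build_for(target: str, source_dir: str = "app/") -> str:
    """Pretend to build `source_dir` for one target and return the artifact path.

    A real pipeline would shell out to a cross-platform toolchain here; this
    stub just records what one run of the matrix would produce.
    """
    return f"{source_dir}dist/{target}.bundle"

# One automated run covers the required platforms and the experimental one.
artifacts = [build_for(t) for t in REQUIRED_TARGETS + EXTRA_TARGETS]
print(artifacts)
```

The point is only that the target list is data fed to the pipeline, so adding or dropping a platform is a one-line change rather than a new build process.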
So far, so good, right? We find problems, we overcome problems, DevOps helps us overcome them faster with a wider selection of outputs …
And then we hit the data.
Vendors in a variety of spaces would love to paint you a picture where it doesn’t matter. It does. The mass of your data, wherever it resides, exerts a pull like gravity on new application development. And it should.
Some will try to explain to you that in the age of APIs, it doesn’t matter where the data actually resides: You can get it.
But it really does matter, and this is not Utopia. Even if you ignore the performance implications of data being latency-infested miles away from applications that need to use it non-stop, those APIs are only a viable option if the data can be secured and protected. By definition, an API is an advertisement that whatever data/service it fronts can be found right here. For applications that traditionally are data center apps, it might be possible to expose your customer file or other critical data via API and then lock down access to the API by IP …
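The simplest form of that “lock down access to the API by IP” idea is a source-address allowlist in front of the endpoint. Here is a hedged sketch using only the standard library; the network ranges are example values, and a real deployment would enforce this at the load balancer or firewall rather than in application code.

```python
import ipaddress

# Example allowlist: an internal range plus one documented test range.
# These networks are placeholders, not a recommendation.
ALLOWED_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "203.0.113.0/24")]

def is_allowed(client_ip: str) -> bool:
    """Return True if the caller's address falls inside an allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETS)

print(is_allowed("10.1.2.3"))      # internal caller -> allowed
print(is_allowed("198.51.100.7"))  # unknown external caller -> rejected
```

It also shows why IP-based lockdown is a partial answer: the API is still advertised, and anything inside an allowed range (or able to spoof into one) is trusted by default.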
But then we quit ignoring the latency problem. If every bit of data an app needs has to flow from San Francisco (or Tokyo, or Beijing, or London, or wherever) to Seattle before the calling app can do what it needs to, and then has to be sent off again to Albuquerque (or Rostov, or Madrid, or wherever), the delays will, by definition, be larger than they would be if the app sat with the data and only had to cross a 1Gbps (or 10Gbps) connection between servers before sending results to users.
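The arithmetic behind that claim fits in a few lines. This is a rough back-of-envelope sketch: the round-trip times, request count, payload size, and link speed are all assumed for illustration, and the model deliberately ignores server processing time to isolate the network cost.

```python
# Back-of-envelope: a chatty app making sequential calls to its data store,
# remote (coast-to-coast API) versus co-located (same data center).
# All numbers are illustrative assumptions, not measurements.

def total_time_s(requests: int, rtt_s: float,
                 payload_bytes: int, bandwidth_bps: float) -> float:
    """Sequential requests: each pays one round trip, plus total transfer time."""
    return requests * rtt_s + (payload_bytes * 8) / bandwidth_bps

# Assumptions: 1,000 sequential lookups, 10 KB per response,
# 70 ms coast-to-coast RTT vs 0.5 ms in-data-center RTT, 1 Gbps link in both.
PAYLOAD = 1000 * 10_000  # total bytes moved
remote = total_time_s(1000, 0.070, PAYLOAD, 1e9)
local = total_time_s(1000, 0.0005, PAYLOAD, 1e9)

print(f"remote: {remote:.2f} s, local: {local:.2f} s")
```

Under these assumptions the remote path takes roughly 70 seconds against about half a second co-located, and the gap is almost entirely round trips, not bandwidth, which is why a fatter pipe doesn’t rescue a chatty app from distant data.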
This means most data-heavy apps, no matter the level of agile or DevOps, aren’t likely to move. Vendors with a stake in the process will trot out the one or two customers they convinced to make such a move to tell you how great it is, but performance math and security sense tell you it is only great until things get busy, or until that exposed API is used by bad actors to vacuum up user data.
Some applications have smaller datasets and can move the data with the app. Other applications can be architected not to care where the data is. A few don’t need a database at all. Move these applications to whatever platform makes sense. Make the continuous delivery (CD) pipeline for them retargetable. Most serious business applications, though, rely too much on core data to fall into these categories. And thus, they aren’t moving, unless someone comes up with a viable way to move a ton of data between platforms that is safe and fast and effectively translates where necessary.
So, that’s what we need to see next in the march of portability. We currently have containers that can be run pretty much anywhere and languages that will work on pretty much any platform. We need data tools that can alleviate the current set of non-portability/latency constraints and get data where it is needed, when it is needed, securely.
Unfortunately, I don’t see that happening soon. So addressing the issue, “Where do we target our new application?” will have to start with the question, “Where is the bulk of the data it needs?” At least for now.
Most of you knew that, because you’re living it. I just thought you might have a use for a third-party article about it. Because someday, you will have this conversation with someone who isn’t living it. Enjoy.