Spare a thought for the mainframe. For decades, it went almost without saying that the river of value business IT produces flowed from mainframe computing. From the late Sixties to the new millennium, we lived in a big iron world, where everything hinged on the hulking machines in enterprise server rooms that processed our transactions, delivered our statistics, and connected the world in a radically new way.
That all changed, of course, with cloud technology: while the rooms full of machines got bigger, they were also suddenly elsewhere, out of businesses’ direct line of sight, and whole new business models emerged to provide the computation that the modern world relies on. By converting upfront capital costs into flexible, scalable operational expenditure on a service that always keeps pace with emerging technology, the cloud gave organisations a way to take a big leap towards competitive advantage.
As the hyperscalers grew, they flexed their advantage over mainframes with a continuous parade of innovation, rolling out new services and platforms at what can seem like a daily pace. With cloud services continually presenting IT teams and engineers with new possibilities, mainframes are all too easily viewed as mere machines running workloads that just haven’t moved to the cloud yet.
Taking care of the mainframe
This often results in what Gartner has referred to as “caretaking the mainframe”. Any professional who works with mainframes knows that the value they provide to businesses is far from dispensable. Indeed, mainframe workloads often sit at the very heart of critical business activity, and their ongoing relevance is shown by the fact that 44 of the world’s 50 largest banks, and 8 of the top 10 telecoms businesses, remain committed to them for day-to-day operations. Caretaking occurs when businesses like these recognise the importance of mainframe continuity, but don’t feel the need to invest in the platform.
One consequence of this has been a kind of decay in mainframe capacity and capabilities. Inefficiencies which emerge as the business context changes can be highly costly for organisations, and hunting down and improving on those inefficiencies can be a difficult task given the expansive codebases that many mainframe applications have accumulated over decades of adaptation. Troubleshooting, likewise, can be a slow and expensive affair, while redundant code can hinder operations by reducing the resources available for necessary work.
All of this is compounded by a decline in skills availability for mainframe development and maintenance. While the machines themselves remain a vital element of many businesses’ IT infrastructure, many skilled mainframe programmers are now reaching retirement age and taking knowledge of mainframe-specific languages out of the workforce with them.
In the most extreme cases, then, businesses might be approaching a situation where dialling back on mainframe engagement to a caretaking level has led to a lack of new employees with the skills to perform that caretaking.
Mainframe modernisation
It could be tempting to see this as an opportunity to move off the mainframes which, next to cloud platforms, can seem like something of a relic. The reality, though, is not so simple: it’s not just that vital data and workloads have persisted in mainframe environments so far, but that mainframes offer real benefits around proven resilience and the manageability of highly sensitive data.
The solution to this evident tension between sticking with and moving on from the mainframe lies in rejecting that binary option altogether. A third way can be found in investing in, innovating on, and integrating with long-standing mainframe solutions.
If the flexibility of cloud is a necessary ingredient in a modern business’s strategy, the best solution is often to bring that modern capability into the mainframe in ways that make the best of both worlds: automating management workflows, enabling data to move more efficiently between environments, and implementing modern DevOps or DevSecOps tooling can ease the skills crunch and raise the value that mainframes deliver without interrupting critical processes.
Of course, like any modernisation programme, this all relies on a well-informed strategy and a well-defined roadmap. In particular, organisations should take a holistic approach to identifying and describing the needs of different workloads – many of which will not be immediately obvious – before making decisions about what the right platform for each one looks like.
From there, decisions can be made around a variety of possible mainframe modernisation methods. Sometimes the priority will be to refactor and rebuild existing applications and data structures in order to find efficiencies. At other times, businesses will need to adopt an ‘open mainframe’ mindset which integrates the mainframe into a hybrid architecture, allowing the cloud to pick up parts of the workload as and when necessary. And sometimes, of course, the best solution really will be to shift applications off the mainframe and into the cloud entirely.
When they start from a position of drawing on the deep expertise available in the market to thoroughly assess their best approach to modernisation, though, few businesses will find that just one of these options is the right answer across the board. As organisations seek to capitalise on the value they have while creating a runway to new capabilities, transforming the mainframe will become a strategic priority which combines all three of them – and we will see that big iron infrastructure delivering value for many years to come.