Better the Devil you know? If it ain’t broke, don’t fix it? Continuous improvement? Something else?

Which thought process do you subscribe to when budgeting and planning for your IT infrastructure, servers, endpoints and projects? Do you replace equipment when it dies, or do you rotate it out on a fixed schedule? How about your applications: do you upgrade major components at a set frequency? Do you manage to stay up to date with releases, or do you struggle to schedule the testing and validation required to stay on top of new releases, and so let it slip?

There are many competing schools of thought when it comes to attitudes to change, especially within IT. Many people are cautious of change, or resistant to it, for a variety of reasons (job security, familiarity, fear of the unknown, risk of project failure, to name just a few). However, should your IT roadmap be handled in the same way as, say, the staff canteen microwave, the office plants, the colour of the office walls, the break room TV, or the company phone system? Not that those things are any less important, but can the same adages be applied?

The point I’m looking at here is whether change is a good thing in the IT environment, and in the wider application of technology in the workplace (and even in the home or social venues). There are many discussions on the merits of the approaches that work, and each can be carefully justified.

Planning for long term implementation

Back in 2007 there was much discussion about the ramifications of potentially running warships on Windows software. This article focuses on the potential for abuse of a Windows system, but many were talking about the potential for obsolescence: an operating system with a life cycle measured in years, in a ship with a life cycle of decades, especially as the operating systems could be out of mainstream use, and potentially out of support, before the ships were even launched and commissioned.

In 2016, rumours circulated that the UK’s new fleet of destroyers was also going to be based on historic Windows XP-derived systems. Luckily that proved to be unfounded; however, the possibility of legacy systems still being used long past their expected end of life in many businesses across the world is a very real concern.

You may have heard that the International Space Station (ISS) used to run on Windows PCs, and some have heard that it switched to Linux in around 2013. A great example of successful complex system updating in action, but did it really happen? Was the infrastructure up there based on an obsolete Windows operating system, or was it always, as more recent articles state, based on a wide range of different solutions? The latest information I can find indicates the Russian modules are based on some very old DMS-R computers running VxWorks 5.3, a vintage late-1990s version of the VxWorks operating system. It’s most likely that there are examples of all operating systems running on the ISS in different roles, which would come as little surprise given the ubiquity of IT in the modern workplace. So are they relying on legacy architectures and bespoke embedded programming?

More interesting information on this discussion can be found here, with a wide range of qualified input and sources. Is this a successful example of keeping legacy systems at bay, or just an ill-informed set of rumours?

Those examples cover systems that are built and specified many years in advance of production, deployed in locations where they are hard to replace or upgrade, and expected to be reliable for many years. In such areas you would expect a management plan for that longevity to be in place before the systems were brought into production.

Longevity in general business

Areas possibly less obvious are air travel and banking, two industries that are both large and ubiquitous. A relevant article from the Financial Times looks at these two industries and discusses the pitfalls and problems of trying to maintain legacy systems.

A timely article on competition in the banking sector could have been a very different article if the large institutions weren’t so heavily reliant on legacy systems and could move to compete with new startups more directly. The article concludes that “The experts agree that the biggest challenge for banks when it comes to digital may have nothing to do with technology at all. ‘For us, it’s all about mindset. […] It’s about a different way of approaching things, a different way of working.’” But would that article even have discussed technology if banks had kept their core systems continually updated, utilising new and developing technologies over the past 20+ years? Would there have been the same disparity between the older banks and the new disruptive startups?

What are the alternatives?

Is ongoing and continual development via a process such as Continuous Improvement the solution? It worked well in Japanese manufacturing for many years, but is it going out of fashion in those same industries? An article from the Harvard Business Review, “It’s Time to Rethink Continuous Improvement” by Ron Ashkenas, thinks so.

Yet most modern software development seems to be moving towards rolling development of a single suite of software. Is that just an application of Continuous Improvement, or is it something else? For example, consider the regular Windows 10 feature updates, the Office 365 incremental updates, and the Adobe Creative Cloud releases, to name just a few. Are we going to lose the jumps in productivity and functionality we have come to expect from major software releases?

Looking around for examples of legacy system replacement, one can’t help but see the efforts of the UK Government, where IT project failures seem to be commonplace. An article from July 2016 includes this heading: “Successful completion of 17 of the government’s major ICT programmes is unachievable or highly unlikely, according to the Infrastructure and Projects Authority’s annual report”, claiming that nearly half the projects it looked at are likely to fail.

With that in mind, shouldn’t we just leave things alone if they are working?

There are always risks, but if it ain’t broke, don’t fix it

Back in October 2015, The Times had a very clear-cut opinion about the dangers of allowing your systems to fall behind on updates and security fixes.

IBM Systems Magazine also has a strong opinion on the lack of change within an organisation holding back progress. But they make an interesting point: “But trying something different doesn’t always mean trying something new. Sometimes techniques you have never tried may not involve the very latest in technology.” Change isn’t always about new and shiny, but it is about improvement and progress.

The summary of the Financial Times article mentioned above says: “Change that takes place while daily business is still being conducted is disruptive, resource intensive, expensive and may take years. But — given effective governance, a rigorous focus on architecture and an emphasis on simplification — it is possible.”

Supporting that line of thinking is an article from CSO Online, a division of IDG, which looks at the risks and costs of keeping legacy systems in place and the dangers of not updating and maintaining them over time.

How does all this apply to me?

As a small to midsize business, it’s easy to ask why you should update your accounts package that might be five or six years out of date, or your CRM database that’s been working fine for the last ten years. The question should be when is the right time to update those systems, not if. Can you still upgrade to a newer version with the same manufacturer, and is it still supported? Can you upgrade to the newest version, or are you so far behind that you can’t get the data migrated? Has the vendor gone out of business, and are you putting off the transition to a new solution? What features could you be missing out on by lagging behind on releases?

Even harder, if you’ve had a bespoke system written for your business, how do you make sure that it’s maintained, up to date and kept relevant to changes in your business? Do you have a support contract with the software company? Can you still get access to the source code to make changes and fix bugs?

The purpose of this post isn’t to tell you that you need to change what you have; it seeks to provide motivation to look at the systems, hardware and processes you have in place, and to review your plans for keeping them up to date, effective and efficient, safe and supported, and working for the business as they should be.

Just because something still does what it’s always done doesn’t mean the business still needs it to be doing that; there may be better, more efficient ways to do that thing, or better solutions to the same problem it solves. Even if the processes are unchanged and still 100% fit for purpose, is the software and hardware running those processes stable, supported, scalable and secure? Is there a risk it will fail and can’t be repaired or replaced? Can you still get parts for that old server, or that ageing printer that hasn’t failed in 15 years (they don’t make them like they used to)?

A lot of articles and studies on the Internet talk about the security risks of not patching operating systems and software; others talk about the dangers of legacy systems or outdated technology. But I think the bigger picture is one of the mindset around IT and technology in general. Is it wise, and is it safe, to keep the same systems in place just because they are still doing what they were installed to do?

So when is the right time to review the systems you have in place?  

Probably sooner rather than later.