Five Reasons to Love your Legacy Systems
Nick Denning   |   26 November 2020

Nick Denning is CEO of Diegesis Limited, a business technology and IT systems integration company, and an acknowledged relational database expert.


There are those who think mission-critical business applications should be replaced simply because they are based on old technology. There can indeed be good reasons for replacing systems built on older platforms. Surely a complete re-write using today’s hot new software must be better than the tangled spaghetti of interdependent legacy systems, code and databases? It is true that the unpredictable impact of a small change to spaghetti code can make systems expensive to test and maintain. In many cases, though, these old core systems are mature, reliable and robust. They contain extensive business logic, have been thoroughly tested and hold a valuable history of business data.


The frustration with legacy systems often comes down to a lack of accessibility: interfaces are dated and clumsy, and integration with other business applications is missing. However, history is littered with failed migration projects based on a rip-and-replace strategy. It is often lower-risk, cheaper and faster to add a new “wrapper” to older platforms, rejuvenating the diamond at the core, than to chuck out much-loved operational systems of record. It just needs a bit of know-how.


Here are five reasons to love and cherish your legacy systems:



First class databases

Many of the RDBMS products we refer to as legacy have actually been developed and maintained over many years into leading technology platforms supporting mission-critical systems. They continue to provide excellent availability, performance, storage, analysis and security. However, they get labelled “old” because the 4GL green-screen and Windows development tools, created in the 1980s and 1990s, give the impression of obsolescence. We can “sweat this asset” by refactoring code to add a web interface that exploits the robust business logic.



Get to know your code

To replace mission-critical systems while continuing to exploit the data they contain, you need a thorough understanding of the current environment and code, so that vital functionality or data is not lost in the process of replacement. A business-as-usual (BAU) team probably does not have that understanding, so a big-bang migration to a new platform requires either reverse engineering the design of the system or detailed knowledge of how the current system’s functionality uses its data. This can take many person-years to acquire while keeping reliable systems running. Further data migration headaches occur when the replacement system cannot take on the existing data because fields are missing from the new data model. Testing new code to the same level of integrity as the well-tested, robust business logic, which does exactly what the business wants, can also consume significant resources. A simpler alternative is to refactor the legacy code to separate the GUI from the business logic, which can then be service-enabled behind a revitalised web GUI. Think of it as redecorating a room in your house rather than rebuilding it.
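The separation described above can be sketched in a few lines. This is an illustrative example only, not code from any real system: the pricing rule, function names and JSON shape are all assumptions. The point is structural: the business logic is a pure function with no knowledge of presentation, and a thin service wrapper exposes it, so any GUI, old or new, can sit in front of it unchanged.

```python
import json

# Business logic extracted from the legacy screen program: a pure function
# with no knowledge of how its result will be presented. The rule itself
# is hypothetical, standing in for logic carried over from the 4GL code.
def order_discount(order_value: float, loyalty_years: int) -> float:
    rate = 0.05 if loyalty_years >= 5 else 0.02
    return round(order_value * rate, 2)

# Thin service wrapper: turns a JSON request into a JSON response.
# A web framework would call this from an HTTP route; the business logic
# never changes, whichever front end invokes it.
def discount_service(request_body: str) -> str:
    params = json.loads(request_body)
    discount = order_discount(params["order_value"], params["loyalty_years"])
    return json.dumps({"discount": discount})

print(discount_service('{"order_value": 200.0, "loyalty_years": 6}'))
# → {"discount": 10.0}
```

Because the wrapper is the only layer that knows about request formats, the same function can later be re-exposed through a different protocol without touching the logic.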


The key to success is to do this process by process, or perhaps role by role, giving priority to the processes most frequently used and continuing with the old applications where there is no business case to re-write.



Reveal the gems

Progressively service-enabling systems and developing web interfaces is likely to reveal which business functions are actually used, possibly only a small part of the system. Logging use of the legacy GUI applications can identify components that are redundant and hence never need to be web-enabled or considered for future migration. Of course, do not forget the applications that run as overnight background tasks, though analysing the logic they use is generally straightforward.
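The usage-logging idea above reduces to a simple frequency count. A minimal sketch, with entirely made-up screen names and log data, might look like this: tally invocations per legacy screen over a monitoring period, then compare against the full catalogue to find candidates that never need migrating.

```python
from collections import Counter

# Hypothetical usage log captured from the legacy GUI: one entry per
# screen invocation over the monitoring period.
usage_log = [
    "ORD010", "ORD010", "CUS020", "ORD010", "INV030",
    "CUS020", "ORD010", "ORD010", "CUS020",
]

# All screens registered in the application catalogue (also illustrative).
all_screens = {"ORD010", "CUS020", "INV030", "RPT090", "ADM050"}

counts = Counter(usage_log)
redundant = all_screens - set(counts)  # never invoked: candidates to retire

print("most used:", counts.most_common(2))
# → most used: [('ORD010', 5), ('CUS020', 3)]
print("never invoked:", sorted(redundant))
# → never invoked: ['ADM050', 'RPT090']
```

The most-used screens head the web-enablement queue; the never-invoked ones need a business decision, not a migration.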


Over time, as code needs to change, there is a choice: continue to develop services in proven and efficient 4GL programming tools, or replace the system component by component with services in a new language. This progressively exploits the highly reliable existing business logic while also providing a safe migration route to development in new languages and architectures.


If ultimately a bottom-up strategy moves much of the logic into stored procedures and a top-down approach replaces all the original 4GL code, the effort to re-platform to an alternative RDBMS is greatly reduced, given that tools exist to convert stored procedures from one database to another. However, there is seldom a business case for this, so why bother? The real gem in this strategy is the ability to “sweat the asset” at minimum cost and risk while creating APIs for exploitation across the business.



A surprising availability of skills

Skills availability is often cited as a reason to move to new systems because techies of a certain vintage are reaching retirement age. We often find on projects in large enterprises that there might be 20 technologies in use and perhaps only a couple that are considered legacy. Once our highly able staff realise that committing to learn legacy systems earns them the “right” to programme the replacement web front-ends, they can see opportunities opening up. Organisations engage with us because they benefit from the rapid increase in the number of staff with the 4GL skills needed to support their applications in the short and medium term. What we now need is for the product vendors to bring forward web development environments based on their 4GL languages: a single statement in these 4GLs can replace several lines of Java/JDBC code.



Timing is always tricky

Although there may be compelling reasons to build new systems, there is always risk. One option is to continue to run the existing systems, which effectively support certain functions in the business, while running a new system in parallel for new products and services. However, this throws up a different class of problem: managing master data, such as customer details, when there is scope for duplication across systems and hence inconsistent data. This can be avoided by implementing Master Data Management (MDM) to ensure a single view of the truth. If MDM sits at the core of the architecture, linked to an elastic search capability with queries that can navigate through MDM gold nominal records to the underlying individual database records, then adding new systems to the architecture, or removing redundant ones, becomes a perfectly viable approach.
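The gold-record idea can be illustrated with a toy data model. Everything here is invented for the sketch: two systems each hold their own customer row, and a gold record masters the identity and links to each source, so one query yields a single merged view however many systems hold fragments.

```python
# Hypothetical source systems, each with its own keys and partial data.
legacy_crm = {"C-101": {"name": "A. Smith", "phone": "0117 496 0000"}}
new_platform = {"U-9": {"name": "Alice Smith", "email": "a.smith@example.com"}}

systems = {"legacy_crm": legacy_crm, "new_platform": new_platform}

# The gold record holds the mastered identity plus links to every source
# row that describes the same real-world customer.
gold_records = {
    "G-1": {
        "master_name": "Alice Smith",
        "sources": [("legacy_crm", "C-101"), ("new_platform", "U-9")],
    }
}

def single_view(gold_id: str) -> dict:
    """Navigate from the gold record to each underlying source row."""
    gold = gold_records[gold_id]
    merged = {"name": gold["master_name"]}
    for system, key in gold["sources"]:
        merged.update(systems[system][key])  # later sources win on conflicts
    merged["name"] = gold["master_name"]  # the mastered name always prevails
    return merged

print(single_view("G-1"))
```

Adding or retiring a system then only means editing the `sources` links on the gold records, which is the flexibility the paragraph above describes.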



Conclusion

“Time in reconnaissance is seldom wasted”, so conduct a risk assessment that looks at all the factors and options. You may be surprised to find there is still life in your legacy systems. In a difficult business environment, things that are proven and reliable gain a new premium and could be worth reviving. A light touch on the tiller may be all that is needed to keep you on course to effective mission-critical business systems for many years to come.