Working on legacy IT systems

Introduction

Companies often budget their software development as a project: they fund a team to build software that performs one or more tasks, and at the end of the development phase the team is scaled down. That is not a problem in itself, but the issue usually shows up in maintenance. These projects rarely get the funds and allocated time needed to be properly patched over the years.

I have consulted for multiple companies where this has been the case, and where managers expect me (as the expert) to update, upgrade and/or patch their software as if it were as simple as installing a new version of Excel or running Windows Update on their laptops. A couple of commands and it should be good to go…

Well… my dear managers, if it were that simple, why would you need people like me? The answer is usually: to cover themselves. I don’t mean that in a pejorative way; I go to specialists/mechanics to fix my car even when I believe the job is quite simple. The challenge is when something that looks simple at the start goes wrong, and with older software you never know.

So here are some of the experiences I wanted to share.

Google doesn’t know much about it

In my years working on legacy systems, the hardest issue I had to work around was a library so old (and dead for so long) that a web search returned basically no hits. Here I’m talking about an archived Apache product that lived for about a year, almost 20 years ago. The documentation is scarce and there are no Stack Overflow questions about it in the index. The only things you might find are old mailing-list threads.

The most important thing to remember here: Don’t panic

Even if you can’t find much, chances are it is because there isn’t much to find. Those old libraries are usually quite simple and very focused on one job. You also have the internal codebase you can check and, if you are lucky, some information about the person(s) who worked on it earlier.

Getting started will take some time but, usually, you get a good grip on such a library quite fast.

The non-existent funding, or the “it is easy and should go fast”

When consulting, you usually answer to managers on the customer side, so you want to be accommodating and help your client. However, you should not go too far, even when the manager says it is easy and/or that there is no budget for it, so it needs to get done within an unreasonable timeframe.

Remember that, in many cases, the direct manager you are talking to just gets handed a budget and has no say in it. Also keep in mind that, even when frustration takes over, most people do want what is best.

The solution I have found works best (for both cases) is to list everything that needs to be done, together with the risks if it is not done. That said, the work might also sound easier than it is: some cases really are easy, while others are full of surprises.

As an example, our team at a customer developed around 180 libraries and applications over the years. Most of them are not in use any more, but we still run around 30 applications and maintain around 10 libraries, most of them in Java. Some time back, there was a security issue known as Log4Shell that could potentially give a hacker full access to the customer’s servers. A fix was released and we “just” needed to upgrade a library. In itself it was an easy fix, except that we had 20 apps and 3 libraries that needed a quick upgrade.

The customer did not allow overtime, but the security team required a fix by end of day or they would shut down any application that was not upgraded. Managers said it was quite a simple update, so it should go fast and two people working on it for a day should be able to fix it all. That would seem reasonable to most, except that, even with CI/CD, the release process takes time. Take one of the libraries; you need to:

  1. create an issue ticket

  2. branch the repo

  3. change the code

  4. test the change, then push the code

  5. build it and get it approved through the CI/CD pipeline

  6. create a PR and get someone to review it

  7. merge the PR and rebuild

  8. release a new version

  9. create a new issue ticket for each app using the library

  10. go through steps 2 to 6 for each app

  11. close all issues

Also, since all the Java development teams had to work on the same issue at the same time, build agent requests were often queued for quite a while.

We realized quite quickly that we couldn’t make it by the end of the working day, so we presented the list to our manager: “Here is what we need to do. We have already done it for these, and this is how long it took. So we either get shut down by the security team or work overtime.” As some systems were critical and others not, we quickly reached an agreement on which ones we needed to fix by end of day and which we could work on the next.

My point is that people will generally try to work towards what is best, and it could easily have gone very wrong if any one person had not been able and willing to set aside frustration and work towards a solution. It would have been easy to think that, since the no-overtime policy was in place, we should just do our best and drop it at the end of the working day. So my tip is: if you see any problems or challenges coming your way, discuss them as soon as possible with everyone involved.

The upgrade that keeps on giving

The next one is an example where even you, as the developer in charge of the upgrade, think it will be easy, simple and quick, and then you hit a wall that changes the whole thing.

I’ve worked on a project where we needed to upgrade from Java 7 to Java 8, as Java 7 was EOL and company policy said that nothing EOL could run in production. We all thought this would be a simple upgrade, as Java is usually good with backward compatibility.

Simple enough, we thought, but since we had some experience in the team, we didn’t say so before we actually checked. And that was a good idea: the application was running an old (Spring) library that verified it was running on a required Java version, and the check was not “is the version greater than or equal to the minimum” but “is the version part of an allowed list”.
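To illustrate the difference, here is a minimal Java sketch of an allow-list version check versus a minimum-version check. It is purely illustrative, not the actual Spring code; the class name and version strings are made up for the example:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JavaVersionCheck {

    // Allow-list style check, similar in spirit to the old library:
    // any Java version released after this list was written is rejected.
    private static final Set<String> ALLOWED =
            new HashSet<>(Arrays.asList("1.5", "1.6", "1.7"));

    static boolean allowListCheck(String version) {
        return ALLOWED.contains(version);          // "1.8" -> false
    }

    // What you usually want: accept the minimum version or anything newer.
    static boolean minimumVersionCheck(String version, String minimum) {
        return compare(version, minimum) >= 0;     // "1.8" vs "1.7" -> true
    }

    // Naive dotted-version comparison, good enough for "1.7" vs "1.8".
    private static int compare(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) {
                return Integer.compare(x, y);
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(allowListCheck("1.8"));              // false: the "simple" upgrade fails
        System.out.println(minimumVersionCheck("1.8", "1.7"));  // true: the upgrade would be fine
    }
}
```

With the allow-list variant, a newer Java version fails the check simply because the library predates it, which is exactly the kind of wall you only discover by checking first.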

This meant we needed a major upgrade of the Spring framework as well. A quick check made us aware that we also had to upgrade the major version of Hibernate, which in turn required updating an open source test library that had been dead for too many years, so we created a fork of it and upgraded it to the newer Hibernate version. Many Spring components needed an upgrade too. The whole thing ended up being a month-long upgrade.

My tip here: ALWAYS check before saying it is a quick and simple one. You never know what can happen.

The system replacement coming soon

Many times, I’ve come across a decision made at a high level that a legacy system is too old, isn’t worth upgrading, and we need something brand new. It will be ready in X weeks (or months), so we do only the strict minimum of maintenance on the old one. Quick and dirty fixes only!

And, five years later, you are looking at fixing the earlier quick fixes, which make no sense whatsoever nowadays…

Here are multiple suggestions:

  1. There is nothing wrong with a quick fix, but isolate it to what I would call leaf code and comment it to say so (see the sketch after this list). Most of the bug whack-a-mole I’ve seen happens when you implement a quick fix in code that is re-used in multiple places and the fix breaks something else.

  2. A quick fix does not exclude refactoring. It is usually hard to refactor when rushing a quick fix but, for a more complex bug, it might still be worth it. The difficulty is that this is hard to see on the first try; however, if you have to go back and fix the same bug multiple times, it probably is worth it.

  3. Letting an application die doesn’t mean you can’t make well-timed and well-placed major improvements. Sometimes a CTO (or other high-level manager) makes a decision and, looking at their budget sheets, it makes sense to spend as little as possible on the system you want to let die and replace. However, replacements can be very long projects and, as a lowly consultant, you know you will spend more time and get more frustrated repeatedly patching old code than you would doing a proper overhaul and then fixing bugs much faster afterwards. Here I’d say it is sometimes worth going against the current, but be careful! A miscalculated bet would probably result in you being shown the door.
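As a sketch of the first suggestion, here is a small, entirely hypothetical Java example of a quick fix kept in leaf code and clearly marked as such. The class, method and scenario are made up for illustration:

```java
public class InvoiceFormatter {

    // QUICK FIX (hypothetical example): an upstream feed started sending customer
    // names with a trailing semicolon, which breaks the PDF export. We strip it
    // here, in the leaf method that only formats the name for the invoice,
    // instead of "fixing" the shared parsing code that many other flows reuse.
    // TODO: remove once the upstream feed is cleaned up.
    static String formatCustomerName(String rawName) {
        String name = (rawName == null) ? "" : rawName.trim();
        if (name.endsWith(";")) {
            name = name.substring(0, name.length() - 1);
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println(formatCustomerName("ACME Corp;"));  // prints "ACME Corp"
    }
}
```

The point is not the fix itself but where it lives: one leaf method, one comment explaining why it exists, so the next person can remove it safely instead of tripping over it in shared code.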

Conclusion

Working on legacy software is not always easy and can quickly get very scary. Don’t panic, keep a steady head on your shoulders, and things will work themselves out most of the time.

Remember that, as a rule, most people in the company are there to make the company work better (even if you sometimes think they don’t). Talk to people, be reasonable and precise, and have a sensible argument ready when you have a strong opinion about what to do or how to do it.

Always remember that time and money are limited, so there are always hard choices to make, even ones you don’t like. You might want to remind your manager that short-term savings will result in higher long-term costs, but you also need to remember that the current cash flow might not allow anything else. Make your case, but accept that you don’t always see the big picture.
