In an enterprise context, there are inherited legacy systems and obsolete solutions, dating from another technological age or resulting from the adaptations, modifications, and workarounds layered on over time. These situations are especially common in the parts of IT systems that are crucial for the business, where companies have historically been most reluctant to make changes.
However, modernizing key points of corporate IT systems can today be decisive for a long-term growth and competitiveness strategy: the continuous evolution of the infrastructure and technologies involved is the key to guaranteeing real efficiency in the processes and services offered.
Why legacy systems modernization is important
The answer to this need for legacy system modernization comes from the market: the offer of sophisticated, cutting-edge solutions is growing ever wider. Relying on a market solution brings numerous long-term benefits, including economic ones.
The first benefit is undoubtedly the opportunity to have a solid, reliable, and advanced system that brings improved performance and governance of the company's product and service offering.
Furthermore, there is the benefit of being able to take advantage of the latest technological updates: application and service orchestration platforms guarantee access to cutting-edge technologies, often delivered as open source. In this way, the company gains access to the best-performing technologies on the market, backed by constant updates and maintenance.
As for the economic benefits, modernization certainly requires a considerable initial investment, but it yields long-term savings in the production costs of new applications and services, in time to market, and in architecture maintenance.
Finally, the ability to evolve the architecture quickly and easily as business needs grow and change gives the company the agility required to respond effectively to the ever-changing needs of customers.
A success story: legacy system modernization for a large insurance company
Acting as a digital catalyst, Mia‑Platform undertook the migration of the IT systems of a major Italian insurance group.
The company, which has been a Mia‑Platform customer for some years, needed to migrate its IT systems with the goal of improving maintenance, governance, and software durability in the long run.
The project started in June 2020 and ended in September of the same year. It concerns the management of black box recorders for insured customers: the processes involved range from the sale and installation of the devices to the retrieval of the metrics they record.
Specifically, our goal was to allow the various systems involved to move from asynchronous communication based on the exchange of batch flows to communication via REST APIs and events. The nature of the insurance product delivered through this system requires that communication between the black box recorder provider and the insurance agencies remain asynchronous, while discarding the old infrastructure based on FTP file exchange.
The solution we implemented is divided into two parts: on the one hand, introducing Apache Kafka into the project's infrastructure; on the other, transforming the entire codebase from a monolith into microservices that interface with the various touchpoints via REST APIs.
To better illustrate the solution adopted, we describe below one of the main flows of the migration project, namely the sale of black box recorders.
Sales flow
Three actors are involved in the sales flow:
- The agencies that sell insurance contracts;
- The black box recorder supplier, which manages the installation and gathers the clients' driving metrics;
- The CRM that monitors the entire selling process.
In this context, the role of Mia‑Platform is to mediate the communication among the three actors and to save all the exchanges between the parties on a MongoDB database.
The legacy flow consisted of the following steps:
- The agency sends a request to create a contract following the sale of a new policy;
- At the end of the day, the supplier receives, via FTP flows, the list of all contracts sold that day;
- For each contract, the supplier, after verifying the correctness of the data, communicates the identifier of the relevant black box recorder through a new FTP stream;
- The supplier starts the installation process of the black box recorders and communicates the various stages of the installation;
- Any communication between the supplier and the agencies is notified to the CRM.
In the legacy flow, the mediation role consisted of exposing REST APIs to the selling agencies and scheduling jobs to process the files exchanged via FTP.
The new flow is structured as follows:
- The agency sends a request to create a contract following the sale of a new policy;
- The supplier receives the request for every single contract in real time, immediately responding with an acknowledgement indicating that the request has been accepted;
- For each contract, the supplier, after verifying the correctness of the data, communicates the identifier of the relevant black box recorder to the agency through a new REST call;
- For each of these API communications, an event is produced on Kafka and consumed by a service whose sole task is to notify the CRM (see the sketch after this list).
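To make the new contract-creation step more concrete, here is a minimal sketch of what the receiving microservice could look like, assuming a Node.js service built with Express and the kafkajs client; the endpoint, topic name, and payload fields are illustrative assumptions, not the actual project code.

```typescript
import express from "express";
import { Kafka } from "kafkajs";

// Hypothetical names: broker address, topic, and route are illustrative only.
const kafka = new Kafka({ clientId: "contract-service", brokers: ["kafka:9092"] });
const producer = kafka.producer();

const app = express();
app.use(express.json());

// The agency calls this endpoint instead of waiting for an end-of-day FTP batch.
app.post("/contracts", async (req, res) => {
  const contract = req.body; // validation omitted in this sketch

  // Publish an event so that downstream consumers (e.g. the CRM notifier)
  // are informed in real time instead of via a nightly script.
  await producer.send({
    topic: "contract-events",
    messages: [
      { key: contract.contractId, value: JSON.stringify({ type: "CONTRACT_CREATED", contract }) },
    ],
  });

  // Immediate acknowledgement: the request is accepted, processing stays asynchronous.
  res.status(202).json({ status: "ACCEPTED", contractId: contract.contractId });
});

producer.connect().then(() => app.listen(3000));
```

The key point is the immediate 202 response: the caller gets instant feedback, while the actual processing remains asynchronous and is carried by the Kafka event.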
Using a message broker like Kafka makes it possible to:
- Transform the CRM notification system from a set of scripts scheduled at the end of the day into a real-time, event-based system (see the consumer sketch after this list);
- Automate error handling, relieving IT technicians of the burden of manually checking, at the end of the day, that the systems involved are correctly aligned;
- Centralize the flow of information in a single point, which naturally makes it easy to connect monitoring tools for inspecting entire asynchronous transactions.
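As an example of the first point, the CRM notifier could be reduced to a small Kafka consumer like the following sketch; the broker, topic, consumer group, and CRM endpoint are hypothetical.

```typescript
import { Kafka } from "kafkajs";

// Hypothetical broker, topic, and CRM endpoint: for illustration only.
const kafka = new Kafka({ clientId: "crm-notifier", brokers: ["kafka:9092"] });
const consumer = kafka.consumer({ groupId: "crm-notifier" });

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topic: "contract-events", fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value?.toString() ?? "{}");

      // Single responsibility: forward each event to the CRM as soon as it arrives.
      await fetch("https://crm.example.com/notifications", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event),
      });
    },
  });
}

run();
```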
The main advantages of switching from FTP to REST API communication are:
- Increasing the responsiveness of the system by always providing immediate feedback to clients, even for those operations that follow an asynchronous path;
- Using data in JSON format, which avoids parsing CSV files and greatly reduces the complexity of the code (a short comparison follows below).
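As a trivial illustration of the last point, compare parsing a positional CSV line by hand with simply reading the JSON payload; the field names are invented.

```typescript
// Legacy style: positional CSV fields must be split, trimmed, and interpreted by hand.
const csvLine = "C-123;2020-06-15;ACTIVE";
const [contractId, saleDate, status] = csvLine.split(";").map((field) => field.trim());

// Modern style: the REST payload is already structured JSON.
const payload = JSON.parse('{"contractId":"C-123","saleDate":"2020-06-15","status":"ACTIVE"}');
console.log(contractId === payload.contractId); // true
```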
The strategy for legacy systems modernization
The modernization of legacy systems is usually a very delicate project as it involves some critical components for the company's business. Therefore, suddenly replacing the old system all at once would introduce unacceptable operational risk. Furthermore, the greater the scope of the intervention, the greater the probability of introducing malfunctions, with more or less serious repercussions on the entire process involved.
Consequently, legacy systems should be migrated incrementally. Initially, the system consists entirely of legacy code; as each increment is implemented, tested, and released into production, the percentage of legacy code progressively decreases until the migration is complete.
A good migration strategy must ensure that the system remains fully functional throughout the entire modernization process.
In the initial phase of the analysis, it is essential to break down the system into several parts, in order to define and schedule the increments. In this case, for example, the first macro-division identified was that between the sale of data recorders and the reporting of detected claims.
Within these two business areas, we further divided the domain: for the sales part, for example, we separated the contract creation flow from the status update flow.
Once the application domain is properly broken down, you can start planning the migration strategy. The case study led us to make an important logical separation between the read and write operations.
Finally, the chosen migration strategy was the following:
- For each increment, we kept both the old and the new systems in production;
- Write operations are performed on both systems;
- The reading operations continue to take place in the old system throughout the testing period;
- At the end of the test, the old system is turned off and both the reading and writing operations take place on the new system.
This migration methodology allowed us to silently test the new system in production, drastically reducing the risk for the business.
The microservices and API architecture proposed by Mia‑Platform enabled the implementation of this strategy.
On a technical level, we implemented a generic microservice, called requests-duplicator, through which all write requests pass. Based on an input configuration, this component duplicates the incoming HTTP requests and forwards them to more than one system (two in our case: the legacy system and its modernized version).
The requests-duplicator returns to the client the response of the master system (indicated in its configuration), ignoring the responses of the other systems.
Another big advantage of this approach is being able to change the master system with a simple configuration flag, making it easier both to promote the new system as master and to perform any rollbacks.
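A minimal sketch of such a duplicator, assuming an Express-based Node.js service, could look like the following; the configuration shape and target URLs are assumptions, not the actual Mia‑Platform component.

```typescript
import express from "express";

// Assumed configuration shape: a list of target base URLs and a flag marking the master.
const targets = [
  { name: "legacy", baseUrl: "http://legacy-system:8080", master: true },
  { name: "modernized", baseUrl: "http://new-system:8080", master: false },
];

const app = express();
app.use(express.json());

// Every write request is forwarded to all targets; only the master's response
// is returned to the client, so the new system can be tested silently in production.
app.post("/*", async (req, res) => {
  const responses = await Promise.allSettled(
    targets.map((target) =>
      fetch(`${target.baseUrl}${req.originalUrl}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req.body),
      })
    )
  );

  const masterIndex = targets.findIndex((target) => target.master);
  const masterResult = responses[masterIndex];

  if (masterResult.status === "fulfilled") {
    res.status(masterResult.value.status).send(await masterResult.value.text());
  } else {
    res.status(502).json({ error: "master system unreachable" });
  }
});

app.listen(3000);
```

Promoting the modernized system to master then only requires flipping the master flag in the configuration, which also keeps rollbacks simple.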
The Data Model
As with almost all legacy modernization projects, we also had to revise the data model: while the technology remains the same (MongoDB), the structure of the data model changes.
This presents us with two further problems:
- Keeping the APIs exposed to the agencies unchanged;
- Migrating the data contained in the collections of the old database to the new collections.
The first problem can be solved by implementing a set of microservices that map the old model to the new one in the writing phase and vice versa in the reading phase.
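A simplified example of such a mapping layer, with invented field names for both the old and the new contract documents, could look like this:

```typescript
// Hypothetical shapes of the old and new contract documents.
interface LegacyContract {
  contract_code: string;
  customer_name: string;
  device_id: string | null;
}

interface ModernContract {
  contractId: string;
  customer: { fullName: string };
  blackBox: { deviceId: string } | null;
}

// Write phase: map the payload received on the unchanged public API to the new model.
function toModern(legacy: LegacyContract): ModernContract {
  return {
    contractId: legacy.contract_code,
    customer: { fullName: legacy.customer_name },
    blackBox: legacy.device_id ? { deviceId: legacy.device_id } : null,
  };
}

// Read phase: map the new model back, so the agencies keep seeing the old contract shape.
function toLegacy(modern: ModernContract): LegacyContract {
  return {
    contract_code: modern.contractId,
    customer_name: modern.customer.fullName,
    device_id: modern.blackBox?.deviceId ?? null,
  };
}
```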
For the second problem, we chose to use automated pipelines that populate the new database starting from the old one. The main advantage of using a pipeline for this type of operation is being able to test it in a pre-production environment by changing only the environment variables, such as the database connection string.
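Such a pipeline step could be sketched as a small script driven entirely by environment variables, so that pointing it at a pre-production or a production database is only a matter of configuration; the collection names and field mapping below are assumptions.

```typescript
import { MongoClient } from "mongodb";

// Connection strings come from environment variables, so the same pipeline
// can run unchanged against pre-production and production databases.
const OLD_URI = process.env.OLD_DB_URI ?? "mongodb://localhost:27017/legacy";
const NEW_URI = process.env.NEW_DB_URI ?? "mongodb://localhost:27017/modern";

async function migrateContracts() {
  const oldClient = await MongoClient.connect(OLD_URI);
  const newClient = await MongoClient.connect(NEW_URI);

  const oldContracts = oldClient.db().collection("contracts");
  const newContracts = newClient.db().collection("contracts");

  // Stream every legacy document, remap it to the new model, and insert it.
  for await (const doc of oldContracts.find()) {
    await newContracts.insertOne({
      contractId: doc.contract_code,
      customer: { fullName: doc.customer_name },
      blackBox: doc.device_id ? { deviceId: doc.device_id } : null,
    });
  }

  await oldClient.close();
  await newClient.close();
}

migrateContracts();
```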
Conclusion
As we have seen, legacy system migration projects are complex and multifaceted, and present critical issues to be taken into consideration. For this reason, at Mia‑Platform we prefer an incremental approach, which keeps the system operational in all phases of the migration.
For example, in the case study mentioned above, the particular nature of the product and the indispensable role of the systems required even greater attention to ensure the continuous operation of all parts of the system and the continuity of the service.
Using Mia‑Platform's ecosystem of products and tools allowed the team, made up of only four people who were not dedicated to the project full time, to significantly accelerate migration and development times while maintaining complete visibility and clear governance of the processes.
In particular, the Mia‑Platform tools that have proved particularly advantageous in this project are:
- The Marketplace, which offers a wide range of templates from which to start implementing new microservices that interface with Kafka. Using these templates greatly simplifies the implementation of consumers and producers;
- The API Gateway, which allows you to quickly integrate new microservices into an existing project, and was particularly useful for configuring the requests-duplicator component;
- The DevOps Console, which further reduces development times by providing a convenient graphical interface from which to create new services, configure the various routes, and set up an e2e test suite.
At the end of the project, we delivered to the customer a modern, efficient, fast, and secure system that the company's operators can easily manage and control using Mia‑Platform's suite of tools.
This article was written by Nastassja Bellisario, Full Stack Expert, and Luca Scannapieco, Full Stack Expert & Scrum Master of Mia‑Platform.