IFS Cloud integration: how to manage data migration
Migrating to IFS Cloud isn't just an import/export exercise. It's a translation of your information assets - master data, historical data, open transactions - into a standardized data model and standardized processes. Properly managed, migration secures the go-live, accelerates adoption and avoids months of post-go-live "catch-up". Poorly managed, it creates stock discrepancies, invoicing gaps and loss of traceability. Here's a practical, tried-and-tested approach.
What "migrate" really means in IFS Cloud
In most projects, three families of data need to be handled differently. Master data (customers, suppliers, items, bills of material, sites, warehouses, VAT codes, etc.) needs to be cleaned up, deduplicated and aligned with the values expected by IFS Cloud. Open transactions (open sales orders, open production orders, inventory, open invoices) ensure operational continuity from day one. Finally, historical data can be partially migrated (for regulatory or audit purposes) or offloaded to a searchable archive. Decide from the outset what moves, what stays behind, and at what level of detail.
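It can help to record those decisions in a machine-readable form rather than only in slides. Below is a minimal sketch in Python; the family names, rules and thresholds are purely illustrative assumptions, not IFS Cloud terminology.

```python
# Illustrative scope record only: keys, rules and thresholds are assumptions,
# not IFS Cloud terminology. The point is to make the "what goes, what stays"
# decision explicit and reviewable rather than implicit in scripts.
MIGRATION_SCOPE = {
    "master_data": {              # customers, suppliers, items, sites, VAT codes...
        "strategy": "migrate",
        "rules": ["deduplicate", "align_with_ifs_reference_values"],
    },
    "open_transactions": {        # open orders, open production orders, inventory
        "strategy": "migrate",
        "cutoff": "freeze_date",  # only what is still open at the cutover date
    },
    "history": {
        "strategy": "archive",    # offloaded to a searchable archive, not loaded
        "partial_load_years": 0,  # raise this if audit needs require some history
    },
}

def strategy_for(family: str) -> str:
    """Return the agreed strategy for a data family, failing loudly if unknown."""
    return MIGRATION_SCOPE[family]["strategy"]
```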
Mapping and framing before transformation
Success is decided at the very beginning. We draw up a clear map of the landscape: which source applications, which business owners, what data quality and volumes, which known gaps. We set measurable objectives (completeness rate, percentage of duplicates to eliminate, maximum cutover duration) and governance rules: who decides in case of mapping ambiguity, who validates the samples, who signs off on the cutover. This step avoids processing things "as you go" and discovering blocking exceptions too late.
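Those measurable objectives are most useful when they become automated gates that each iteration either passes or fails. A minimal sketch, with hypothetical thresholds that are examples rather than recommendations:

```python
from dataclasses import dataclass

# Hypothetical gate: the figures are examples, not recommendations. The idea is
# that the framing objectives become checks a dry run either passes or fails.
@dataclass
class QualityGate:
    min_completeness: float = 0.98    # share of mandatory fields populated
    max_duplicate_rate: float = 0.01  # share of duplicate business keys tolerated
    max_cutover_hours: int = 48       # agreed maximum freeze window

def gate_passed(completeness: float, duplicate_rate: float,
                cutover_hours: float, gate: QualityGate = QualityGate()) -> bool:
    """True only if every framing objective is met for this iteration."""
    return (completeness >= gate.min_completeness
            and duplicate_rate <= gate.max_duplicate_rate
            and cutover_hours <= gate.max_cutover_hours)
```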
Cleaning up the data: quality and standardization
Before we talk scripts, we talk hygiene. Addresses are standardized, bank accounts verified, orphan items attached to the right families, units and currencies harmonized, VAT codes aligned. Accounts and items inactive beyond a defined threshold are deleted, duplicate third parties are merged, and simple rules (ID naming conventions, field lengths, character encoding) are locked in. Each correction applied in the source avoids costly patching in the migration pipelines.
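To illustrate, here is a minimal cleansing sketch assuming a generic customer record from a source system; the field names and length limit are assumptions, and the real rules should be applied as close to the source as possible.

```python
import unicodedata

MAX_NAME_LEN = 100  # assumed field length; the real limit comes from the IFS data model

def clean_customer(rec: dict) -> dict:
    """Apply a few simple hygiene rules to a source customer record."""
    name = unicodedata.normalize("NFC", rec["name"]).strip()    # fix encoding artefacts
    return {
        "customer_id": rec["customer_id"].strip().upper(),      # ID naming convention
        "name": name[:MAX_NAME_LEN],                            # field-length rule
        "country": rec.get("country", "").strip().upper()[:2],  # ISO 3166 alpha-2 code
        "currency": rec.get("currency", "EUR").strip().upper(), # harmonized currency
    }
```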
Designing the target model and mapping
IFS Cloud imposes structures and dependencies (for example, an item depends on the company, the site and logistics parameters). Mapping is therefore not a "flat" correspondence table: hierarchies, technical and functional keys, and authorized reference values must all be described. Transformation rules (concatenations, splits, unit conversions) are documented, and edge cases (multi-entity customers, multi-warehouse sites, configurable items) are anticipated. This documentation becomes the migration contract, shared by IT and the business.
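As an illustration of what a documented rule looks like once automated, the sketch below maps a source item row into a simplified target row. The target field names mimic an item/inventory structure but are assumptions, not the actual IFS Cloud interface definition.

```python
# Illustrative mapping rule: target field names are assumptions, not the real
# IFS Cloud interface. Each rule in the migration contract ends up as code like this.
UOM_CONVERSION = {"LB": ("KG", 0.45359237)}  # unit harmonization example

def map_item(src: dict, company: str, site: str) -> dict:
    """Turn a source item row into a simplified target row, applying the
    documented transformation rules (concatenation, unit conversion)."""
    qty, uom = src["stock_qty"], src["uom"]
    if uom in UOM_CONVERSION:
        uom, factor = UOM_CONVERSION[uom]
        qty = round(qty * factor, 3)
    return {
        "COMPANY": company,                              # item depends on the company
        "SITE": site,                                    # and on the site
        "PART_NO": f"{src['family']}-{src['item_no']}",  # concatenation rule
        "UNIT_MEAS": uom,
        "QTY_ONHAND": qty,
    }
```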
Tooling a reproducible pipeline
Migration must not be a "one-shot" job. We set up an industrialized pipeline, capable of replaying the same steps several times: extraction from the sources, staging, controlled transformations, loading into IFS Cloud via the appropriate mechanisms (APIs/standard loads), then control reports. The pipeline must be idempotent (loading twice must not corrupt the target), traceable (readable logs) and segmented into batches to manage volume.
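A minimal sketch of such a load step is shown below. The real call to IFS Cloud (REST/OData API or standard load) is left as a placeholder, and idempotency relies on upserting against the business key so that replaying a batch never creates duplicates.

```python
import logging
from typing import Callable, Iterable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("migration")

def load_in_batches(records: Iterable[dict],
                    upsert: Callable[[dict], str],
                    batch_size: int = 500) -> dict:
    """Load records batch by batch. `upsert` stands in for the real call
    (REST/OData API or standard load), keyed on the business key and returning
    'created' or 'updated', so replaying the same batch never duplicates data."""
    stats = {"created": 0, "updated": 0, "rejected": 0}
    batch: list[dict] = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            _flush(batch, upsert, stats)
            batch = []
    if batch:
        _flush(batch, upsert, stats)
    log.info("load finished: %s", stats)
    return stats

def _flush(batch: list, upsert: Callable[[dict], str], stats: dict) -> None:
    for rec in batch:
        try:
            stats[upsert(rec)] += 1        # 'created' or 'updated'
        except Exception as exc:           # rejected rows are counted and logged
            stats["rejected"] += 1
            log.warning("rejected %s: %s", rec.get("key"), exc)
    log.info("batch of %d records processed", len(batch))
```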
Rehearsing: dry runs and reconciliations
Several dry runs are planned, with volumes close to production. Each run ends with a quantified reconciliation: items loaded vs. expected, stock discrepancies by site, customers/suppliers in error and why, justified rejections. The business teams validate against samples (order entry, receiving, manufacturing, invoicing) and indicators (balances, VAT). Deviations go back to the backlog and are corrected in the source, the mapping or the pipeline - never by hand in the target.
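The reconciliation report itself can be very simple: compare the keys expected from the source extract with the keys actually loaded into the target. The sketch below assumes simplified record structures and an illustrative key name.

```python
from collections import Counter

def reconcile(expected: list[dict], loaded: list[dict], key: str = "part_no") -> dict:
    """Compare the source extract with what actually landed in the target."""
    exp = Counter(r[key] for r in expected)
    got = Counter(r[key] for r in loaded)
    return {
        "expected": sum(exp.values()),
        "loaded": sum(got.values()),
        "missing_keys": sorted(exp - got),      # expected but never loaded
        "unexpected_keys": sorted(got - exp),   # loaded but never expected
    }
```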
Organizing the cutover without disrupting the business
The choice between a "big bang" and a progressive cutover depends on your context. In manufacturing or maintenance, a big bang over a weekend with a transaction freeze is often appropriate. In services, a cutover by perimeter (countries, service lines) limits the risk but may require temporary interfaces. Whatever the scenario, a freeze window is set, a final migration rehearsal is run as close to go-live as possible, a rollback plan is documented, and a hypercare team is mobilized for the first week of operation.
Governance: who decides, who validates
Migration is as much a business project as an IT one. Data owners on the business side arbitrate quality and validate data sets. A data manager ensures cross-functional consistency (shared master data, coding conventions). IT designs and operates the pipelines, secures the environments and guarantees traceability. This clear division of responsibilities avoids blind spots and decisions made too late.
Security, compliance and GDPR
Test environments receive realistic but protected data: pseudonymization of sensitive elements, restricted access, regular purging. We trace who imported what, when and with which source set. Retention periods are respected, the location of non-migrated history is documented, and requests for deletion or portability are anticipated.
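A common way to protect test copies is to replace sensitive values with a keyed hash, so records remain joinable without exposing personal data. A minimal sketch, assuming the secret is managed outside the code (vault or environment, never a repository):

```python
import hashlib
import hmac

def pseudonymize(value: str, secret: bytes) -> str:
    """Replace a sensitive value with a keyed hash: stable (joins still work) but
    not reversible without the secret, which must live in a vault, not in code."""
    return hmac.new(secret, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "C001", "email": "jane.doe@example.com"}
record["email"] = pseudonymize(record["email"], secret=b"replace-with-vault-secret")
```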
Performance and volume: think flows, not blocks
Large loads are managed in ordered batches to respect dependencies (master data first, transactions later). We monitor the duration of each stage, parallelize carefully, and size batches accordingly. On the network side, a plan B is prepared (restarts, error recovery, throttling) so that migrations don't become interminable or unstable.
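That plan B can be as simple as retrying a failed batch with exponential backoff and pausing between calls. An illustrative sketch, with the actual load call left as a placeholder:

```python
import random
import time

def call_with_retry(send_batch, batch, max_attempts: int = 5,
                    base_delay: float = 2.0, throttle: float = 0.2):
    """Send one batch with exponential backoff on failure and a short pause
    between successful calls; `send_batch` is a placeholder for the real load."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = send_batch(batch)
            time.sleep(throttle)                 # simple throttling between batches
            return result
        except Exception:
            if attempt == max_attempts:
                raise                            # give up and surface the error
            time.sleep(base_delay * 2 ** (attempt - 1) + random.random())
```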
Business testing: proving continuity, not just import
In addition to technical reports, tests focus on realistic scenarios: creating a sales order from a migrated customer account, consuming migrated stock in a production order, matching a migrated invoice against a real payment. Migration is accepted when the key operations chain together without workarounds, and the financial and logistics indicators are accurate to the penny.
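Part of that acceptance can be automated alongside the manual scenarios. The sketch below, assuming simplified invoice structures, checks that financial totals match to the penny between the source extract and the migrated data.

```python
from decimal import Decimal

def totals_match(source_invoices: list[dict], target_invoices: list[dict]) -> bool:
    """Compare financial totals; any difference, even one cent, blocks acceptance."""
    src = sum(Decimal(str(i["amount"])) for i in source_invoices)
    tgt = sum(Decimal(str(i["amount"])) for i in target_invoices)
    return src == tgt
```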
A sample timeline for planning ahead
A typical project follows a manageable rhythm: four to six weeks of scoping and mapping, eight to twelve weeks of data cleansing, mapping and scripting, two to three dry-run iterations, then a two-week prepared cutover with freeze, validation of the cutover plan and hypercare. Timescales vary with the number of sources, initial quality and functional scope, but the sequence remains the same: understand, clean up, automate, rehearse, accept, cut over, stabilize.
Avoiding common pitfalls
Migrations become more complicated when you try to embed the entire history "just in case", when you pile up undocumented exceptions, or when you correct in the target rather than at the source. Conversely, limiting the history to what has operational or regulatory value, formalizing transformation rules and keeping a replayable pipeline greatly reduces risk. Transparency about deviations and discipline in correcting them make all the difference in the final stretch.
And after go-live: sustaining quality
Migration is just the beginning. Putting in place continuous quality controls (duplicates, orphan master data), creation rules in IFS Cloud (mandatory fields, coding conventions) and an improvement cycle with the business lines avoids rebuilding data debt. A small monthly data committee, health reports and a few automations are often enough to maintain the level. Setting up a team that owns the master data is also good practice.
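Some of these controls are easy to automate. The sketch below, with illustrative field names, produces a simple monthly health report on duplicates, orphans and missing mandatory fields for the data committee.

```python
def health_report(customers: list[dict], orders: list[dict]) -> dict:
    """A few automated controls for the monthly data committee; field names are illustrative."""
    ids = [c["customer_id"] for c in customers]
    duplicates = sorted({i for i in ids if ids.count(i) > 1})
    known = set(ids)
    orphan_orders = [o["order_no"] for o in orders if o["customer_id"] not in known]
    missing_vat = [c["customer_id"] for c in customers if not c.get("vat_code")]
    return {"duplicate_customers": duplicates,
            "orphan_orders": orphan_orders,
            "missing_vat_code": missing_vat}
```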
In short, a successful data migration to IFS Cloud rests on early scoping and mapping, data cleansing, an industrialized, replayable pipeline, measured dry-run iterations, and a scripted, tested cutover. With this method, you get off to a reliable start and immediately capitalize on standard IFS Cloud processes.