DATA MIGRATION

Data to value during data migration 

Problem:

While planning data migration, the organization needs an overview of its information types and of where they are stored, to avoid migrating the same data, with the same issues, into the new repository.

Solution:

Configure the IntOp Context Layer to identify information types and categories. Categories may be processes, high-level information groups, departments or activities. IntOp Engine ingests the data from the current sources and presents it in IntOp Fetch. Based on filter selections in Fetch, reports may be sent to the migration team so that data in the same contexts can be migrated together to the new repository.
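As a rough illustration of this flow, the sketch below models the steps in plain Python: files categorized by the Context Layer are filtered by context and grouped into a report for the migration team. All names and structures here are assumptions made for illustration, not IntOp's actual API.

    # Illustrative model of the flow above: files categorized by the
    # Context Layer are filtered by context and grouped into a report
    # for the migration team. Hypothetical structures, not IntOp's API.
    files = [
        {"path": "/hr/contracts/a.docx", "context": "HR"},
        {"path": "/sales/offers/b.pdf", "context": "Sales"},
        {"path": "/hr/policies/c.docx", "context": "HR"},
    ]

    # A filter selection in Fetch: everything in the "HR" context.
    selection = [f for f in files if f["context"] == "HR"]

    # Report for the migration team: same-context data is migrated
    # together to the new repository.
    report = {"context": "HR", "files": [f["path"] for f in selection]}
    print(report)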

Intended impact:

Due to the contextualization and categorization done in the IntOp Context Layer, the migration adds value to the data while also giving the migration team tools to clean it up.

As the data is migrated in context, users will find what they need in the new source more easily. Because IntOp Fetch is already configured to identify information types, all users may use Fetch to find information in the new source as well.


Scenario 

In an organization, most users produce and consume text-based, unstructured data. Most of this data is stored in files across different applications and repositories. Over time, this production comes to represent what each employee knows and works on, and should be a natural record of each person’s activity. The objective of data migration is to gain better insight into what types of data are stored where in the AS IS situation and sources, to use this insight in planning, and to use it as a basis for decisions on how to move forward with migration to the future (TO BE) situation and source(s). After the migration, the information will be cleansed and more available to users.

Solution 

The IntOp solution offers several capabilities that may be relevant and useful for pre-migration analysis and preparation, and for gaining insight into and structuring data during and after migration.

IntOp Engine as data hub 

IntOp Engine is set up on premises or in the cloud, depending on what is most efficient with regard to security and data ingestion. The IntOp Connector is then connected to some or all sources where data is stored. During data ingestion, statistics on the last ingestion and on all ingested data are made available. For editable files, all data and metadata may be ingested by default, or only metadata. For non-editable files, all data and metadata may also be ingested, but content extraction (OCR) depends on the quality of the files. If required, machine learning algorithms could be run on the ingested data to enrich the metadata.
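A minimal sketch of such an ingestion policy is shown below, assuming a simple file model: editable files yield content and metadata (or metadata only), while non-editable files fall back to OCR when quality allows. The field names and the quality threshold are assumptions for illustration, not IntOp's actual behaviour.

    # Hypothetical ingestion policy: editable files yield content and
    # metadata (or metadata only); non-editable files are OCR'd only if
    # their quality allows. Names and thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class SourceFile:
        path: str
        editable: bool       # e.g. a .docx vs. a scanned .pdf
        scan_quality: float  # 0.0-1.0, relevant for non-editable files

    def ingest(f: SourceFile, metadata_only: bool = False) -> dict:
        record = {"path": f.path, "metadata": {"source": f.path}}
        if metadata_only:
            return record
        if f.editable:
            record["content"] = f"<extracted text of {f.path}>"
        elif f.scan_quality >= 0.6:  # assumed OCR quality threshold
            record["content"] = f"<OCR text of {f.path}>"
        else:
            record["content"] = None  # OCR skipped: quality too low
        return record

    print(ingest(SourceFile("/scans/invoice.pdf", editable=False, scan_quality=0.3)))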

IntOp Context Layer for data sorting 

Before or during data ingestion, the contexts used to sort the data should be defined. No IT experience or developer skills are needed to manage the contexts in the IntOp Context Management Tool. The IntOp Context Layer may be changed at any time, so it continues to adapt to new scenarios, user stories or business cases.
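To make the idea concrete, the sketch below models what a context definition might contain: a category plus simple matching rules. In the product, such definitions are managed in the no-code Context Management Tool; the rule fields shown here are assumptions for illustration.

    # Hypothetical model of context definitions: a category plus simple
    # matching rules. In the product these are managed without code in
    # the IntOp Context Management Tool; the fields here are assumed.
    contexts = [
        {"name": "HR", "category": "department",
         "paths": ["/hr/"], "keywords": ["contract", "salary"]},
        {"name": "Invoicing", "category": "process",
         "paths": ["/finance/"], "keywords": ["invoice", "due date"]},
    ]

    def classify(path, text):
        """Return the names of all contexts whose rules match a file."""
        return [c["name"] for c in contexts
                if any(p in path for p in c["paths"])
                or any(k in text.lower() for k in c["keywords"])]

    print(classify("/hr/contracts/a.docx", "Employment contract for ..."))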

IntOp Fetch and BI dashboards for analysis and migration preparation 

For the analysis of data, all data will be available in IntOp Fetch. Analysts, project members and management may navigate, discover and filter data in Fetch, using filters based on the contexts defined in the IntOp Context Layer together with appropriate facets such as Filetype, Extension type, Author, Owner, Date and so on. Any combination of filters results in a data selection, which may be saved, shared and re-used. As a selection is dynamic, the number and types of files it contains will change as more data is ingested or as the Context Layer is changed and improved. In addition, statistics may be made available in BI dashboards for further insights and analysis.
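A minimal sketch of such faceted filtering follows, under the assumption that a selection is simply a saved set of filters that can be re-run against the growing index; the field names are illustrative.

    # Hypothetical faceted filtering: a saved selection is just a set
    # of filters; re-running it on newly ingested data gives the
    # current (dynamic) result. Field names are illustrative.
    files = [
        {"path": "/hr/a.docx",   "context": "HR",    "extension": "docx", "author": "Kim"},
        {"path": "/hr/b.pdf",    "context": "HR",    "extension": "pdf",  "author": "Alex"},
        {"path": "/sales/c.pdf", "context": "Sales", "extension": "pdf",  "author": "Kim"},
    ]

    def select(data, **filters):
        return [f for f in data
                if all(f.get(k) == v for k, v in filters.items())]

    saved_selection = {"context": "HR", "extension": "pdf"}  # shareable, re-usable
    print(select(files, **saved_selection))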

Adding value to the migration process

Any selection of data made in IntOp Fetch may be exported to a report, giving the migration team a tool for migrating selections of files that belong to the same context from several different sources in the AS IS scenario to one or more sources in the TO BE scenario. This means that instead of migrating data site by site and folder by folder, the system enables a contextual, sorted migration across sites and folders.
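Such a report might take the form of a migration manifest along the lines sketched below, where files from several AS IS sources that share a context are grouped under one TO BE target. The manifest format, paths and target URL are assumptions for illustration.

    # Hypothetical migration manifest: same-context files from several
    # AS IS sources grouped under one TO BE target. The format, paths
    # and target URL are assumptions for illustration.
    import json

    selection = [
        {"path": "smb://fileserver/hr/a.docx",    "context": "HR"},
        {"path": "https://intranet/site2/b.docx", "context": "HR"},
    ]

    manifest = {
        "context": "HR",
        "target": "https://newrepo/hr",           # one TO BE repository
        "items": [f["path"] for f in selection],  # several AS IS sources
    }
    print(json.dumps(manifest, indent=2))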

In addition, it is possible to clean up data while preparing for the migration. This requires defining rules and criteria, and developing intelligent contexts and filters that catch the unwanted or unclean data. Several categories of data could be targeted in this way, for instance (see the sketch after the list):

  1. Data with low or no business value (Christmas invitations, drafts, empty files, private files). Action: sort and delete.
  2. Data that is overdue for deletion due to retention rules. Action: treat according to retention rules.
  3. Data that is currently stored with the wrong access rights, as in the two cases below.
  4. Sensitive/confidential data stored openly. Action: migrate to a repository with correct access rights.
  5. Data of business value stored with overly limited access rights. Action: migrate to a repository with correct access rights.
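As a sketch of how such rules and criteria might be expressed, the example below pairs a predicate with an action for three of the categories above. Field names, predicates and actions are assumptions for illustration.

    # Hypothetical cleanup rules matching the categories listed above:
    # each rule pairs a predicate with an action. Field names,
    # predicates and actions are assumptions for illustration.
    from datetime import date

    def low_value(f):   return f["size"] == 0 or "christmas" in f["path"].lower()
    def overdue(f):     return f["retain_until"] is not None and f["retain_until"] < date.today()
    def open_secret(f): return f["sensitive"] and f["access"] == "everyone"

    rules = [
        (low_value,   "sort-and-delete"),
        (overdue,     "apply-retention-rules"),
        (open_secret, "migrate-to-restricted-repository"),
    ]

    f = {"path": "/hr/salaries.xlsx", "size": 5321, "sensitive": True,
         "access": "everyone", "retain_until": None}
    print([action for predicate, action in rules if predicate(f)])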
