Lookup and Normalizer Transformation in Informatica

Last updated on Dec 16 2021
Santosh Singh


The Lookup transformation is used to look up a source, source qualifier, or target to get the relevant data.

It is a kind of join operation in which one of the joining tables is the source data and the other is the lookup table.

The Lookup transformation retrieves data based on a specified lookup condition. For example, we can use a Lookup transformation to retrieve values from a database table for codes used in the source data.

When a mapping task includes a Lookup transformation, the task queries the lookup source based on the lookup fields and a lookup condition. The Lookup transformation returns the result of the lookup to the target or to another transformation.

We can configure the Lookup transformation to return a single row or multiple rows. When it returns a single row, it is a passive transformation; it can perform the lookup on flat files, relational tables, views, and synonyms.

When we configure the Lookup transformation to return multiple rows, it is an active transformation. The Lookup transformation supports horizontal merging, such as equijoin and non-equijoin.
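To picture this behavior outside of Informatica, here is a minimal Python sketch; the table data and the lookup function are invented for the example and are not part of any Informatica API:

```python
# Illustrative sketch only: the table data and function below are
# invented for this example and are not part of any Informatica API.
lookup_table = [
    {"roll_no": 1, "name": "Joy",    "class": 7},
    {"roll_no": 2, "name": "Edward", "class": 7},
    {"roll_no": 3, "name": "Maria",  "class": 8},
]

def lookup(rows, condition, multiple=False):
    """Return the first row matching the condition (passive lookup),
    or every matching row when multiple=True (active lookup)."""
    matches = [row for row in rows if condition(row)]
    if multiple:
        return matches                       # active: many rows out
    return matches[0] if matches else None   # passive: at most one row

# Passive lookup: the single row where roll_no == 2.
print(lookup(lookup_table, lambda r: r["roll_no"] == 2))
# Active lookup: all students in class 7.
print(lookup(lookup_table, lambda r: r["class"] == 7, multiple=True))
```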

When the mapping contains a Lookup transformation, the Integration Service queries the lookup data and compares it with the lookup input port values.

The Lookup transformation is built with the following types of ports:

  • Input port (I)
  • Output port (O)
  • Lookup port (L)
  • Return port (R)

We can perform the following tasks using a Lookup transformation:

  • Get a related value: Retrieve a value from the lookup table based on a value in the source. For example, the source has a student roll number; retrieve the student name from the lookup table.
  • Get multiple values: Retrieve multiple rows from a lookup table. For example, return all students in a class.
  • Perform a calculation: Retrieve a value from a lookup table and use it in a calculation. For example, retrieve the marks, calculate the percentage, and return the percentage to a target (see the sketch after this list).
  • Update slowly changing dimension tables: Determine whether rows already exist in the target.
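As a concrete illustration of the calculation task, here is a minimal Python sketch; the marks table, the MAX_MARKS constant, and the function name are all invented for the example:

```python
# Hypothetical marks table; the data, MAX_MARKS, and function name
# are invented for this sketch.
marks_by_roll_no = {1: 430, 2: 470}   # total marks out of 500
MAX_MARKS = 500

def marks_percentage(roll_no):
    """Look up a student's marks and return the percentage,
    mirroring the 'perform a calculation' task above."""
    marks = marks_by_roll_no.get(roll_no)    # the lookup step
    if marks is None:
        return None                          # no matching row
    return 100.0 * marks / MAX_MARKS         # the calculation step

print(marks_percentage(2))   # 94.0
```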

Configure the Lookup Transformation

We can configure the Lookup transformation to perform the following types of lookups:

  • Relational or flat-file lookup: Perform a lookup on a flat file or a relational table. When we create a Lookup transformation using a relational table as the lookup source, we can connect to the lookup source using ODBC and import the table definition as the structure for the Lookup transformation.

When we create a Lookup transformation using a flat file as the lookup source, the Designer invokes the Flat File Wizard.

  • Pipeline lookup: Perform a lookup on application sources such as JMS or MSMQ. Drag the source into the mapping and associate the Lookup transformation with the source qualifier. Configure partitions to improve performance when the Integration Service retrieves source data for the lookup cache.
  • Connected or unconnected lookup: A connected Lookup transformation receives source data, performs a lookup, and returns data to the pipeline. An unconnected Lookup transformation, by contrast, is not connected to a source or target.

Instead, a transformation in the pipeline calls the Lookup transformation with a :LKP expression. The unconnected Lookup transformation returns one column to the calling transformation.
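The calling pattern looks much like an ordinary function call. Below is a hedged Python analogy; every name in it is invented, and real mappings use the :LKP expression syntax rather than Python:

```python
# Python analogy only; all names here are invented. In Informatica,
# a port expression would call the unconnected lookup with syntax
# like :LKP.lkp_student_name(roll_no) instead of a Python function.
student_names = {1: "Joy", 2: "Edward"}

def lkp_student_name(roll_no):
    """Stands in for the unconnected Lookup transformation: it takes
    input values and returns exactly one column to the caller."""
    return student_names.get(roll_no)

def expression_transformation(row):
    """Stands in for the calling transformation in the pipeline."""
    row["student_name"] = lkp_student_name(row["roll_no"])
    return row

print(expression_transformation({"roll_no": 1, "marks": 430}))
```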

  • Cached or uncached lookup: Cache the lookup source to improve performance. We can use a static or a dynamic cache for the lookup source.

By default, the lookup cache is static and does not change during the session. With a dynamic cache, the Integration Service inserts or updates rows in the cache. When we cache the target table as the lookup source, we can search values in the cache to determine whether they already exist in the target. The Lookup transformation then marks rows to insert into or update the target.
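A dynamic cache can be pictured as a dictionary that changes as rows flow through it. Here is a minimal Python sketch, with invented names and data, of marking rows for insert or update against a cached target:

```python
# Minimal sketch of a dynamic lookup cache; all names are invented.
# The cache mirrors the target table, keyed on the lookup condition.
target_cache = {1: {"roll_no": 1, "name": "Joy"}}

def route_row(row):
    """Mark an incoming row for insert or update and keep the cache
    in sync, as a dynamic lookup cache does during a session."""
    action = "update" if row["roll_no"] in target_cache else "insert"
    target_cache[row["roll_no"]] = row   # dynamic: the cache changes
    return (action, row)

print(route_row({"roll_no": 1, "name": "Joy A."}))   # ('update', ...)
print(route_row({"roll_no": 2, "name": "Edward"}))   # ('insert', ...)
```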

Normalizer Transformation

The Normalizer is an active transformation. It is used to convert a single row into multiple rows. When the Normalizer transformation receives a row that contains multiple-occurring data, it returns a row for each instance of the multiple-occurring data.

If a single row contains repeating data in multiple columns, it can be split into multiple rows.

For example, a relational source includes four fields with quarterly sales data. We can configure a Normalizer transformation to generate a separate output row for each quarter.

When the Normalizer returns multiple rows from an incoming row, it returns duplicate data for single-occurring incoming columns.

The transformation processes multiple-occurring columns, or multiple-occurring groups of columns, in each source row.
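The core behavior can be sketched in a few lines of Python; the quarterly-sales record and the helper function below are invented for illustration:

```python
# Invented wide record with four multiple-occurring sales columns.
wide_row = {"store": "S1", "q1": 100, "q2": 120, "q3": 90, "q4": 140}

def normalize(row, repeating, key_name, value_name):
    """Return one output row per occurrence of the repeating columns,
    copying the single-occurring columns (here: 'store') into each."""
    common = {k: v for k, v in row.items() if k not in repeating}
    return [{**common, key_name: col, value_name: row[col]}
            for col in repeating]

for out in normalize(wide_row, ["q1", "q2", "q3", "q4"],
                     "quarter", "sales"):
    print(out)
# {'store': 'S1', 'quarter': 'q1', 'sales': 100} ... and so on
```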

The Normalizer transformation has the following tabs in the Properties panel:

  • Normalized Fields tab: Define the multiple-occurring fields and specify any additional fields that you want to use in the mapping.
  • Field Mapping tab: Connect the incoming fields to the normalized fields.

We need the appropriate license to use the Normalizer transformation.

The Normalizer transformation parses multiple-occurring columns from COBOL sources, relational tables, or other sources. It can process multiple record types from a COBOL source that contains a REDEFINES clause.

Normalizer Transformation Types

There are two types of Normalizer transformation:

  • VSAM Normalizer transformation: A non-reusable transformation that acts as the Source Qualifier transformation for a COBOL source. The Mapping Designer creates the VSAM Normalizer columns from the COBOL source in a mapping.

The column attributes are read-only. The VSAM Normalizer receives a multiple-occurring source column through one input port.

  • Pipeline Normalizer transformation: A transformation that processes multiple-occurring data from relational tables or flat files.

We create the columns manually and edit them in the Transformation Developer or Mapping Designer. The pipeline Normalizer transformation represents multiple-occurring columns with one input port for each source column occurrence.

Example

We create the following table, which represents student marks across different classes:

Student Name | Class 7 | Class 8 | Class 9 | Class 10
Joy          | 60      | 65      | 75      | 80
Edward       | 65      | 70      | 80      | 90

Step 1: Create the source table "stud_source" and target table "stud_target" using the script, and import them into Informatica.

Step 2: Create a mapping with source stud_source and target table stud_target.

Step 3: From the Transformation menu, create a new transformation:

  1. Select Normalizer as the transformation type.
  2. Enter the name nrm_stud.
  3. Click the Create button.

Step 4: The transformation will be created; click the Done button.

Step 5: Double-click the Normalizer transformation, then:

  1. Select the Normalizer tab.
  2. Click the icon to create two columns.
  3. Enter the column names.
  4. Set the number of occurrences to 4 for marks and 0 for student name.
  5. Click the OK button.

Columns will be generated in the transformation. We will see four marks columns because we set the number of occurrences to 4.

Step 6: Then, in the mapping:

  1. Link the four class columns of the source qualifier to the four Normalizer marks columns, respectively.
  2. Link the student name column to the corresponding Normalizer column.
  3. Link the student_name and marks columns from the Normalizer to the target table.

Save the mapping and execute it after creating a session and workflow. The class score repeats across four columns; for each class score of a student, the Normalizer transformation creates a separate row.

The output of the above mapping will look like the following:

Student Name | Class | Score
Joy          | 7     | 60
Joy          | 8     | 65
Joy          | 9     | 75
Joy          | 10    | 80
Edward       | 7     | 65
Edward       | 8     | 70
Edward       | 9     | 80
Edward       | 10    | 90

The source data had repeating columns, namely class 7, class 8, class 9, and class 10. We have rearranged the data to fit into a single class column, and for each source record, four records are created in the target by the Normalizer.

In this way, we can normalize data and create multiple records from a single source record.
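For comparison, the same reshaping can be reproduced outside Informatica with pandas, assuming the two tables hold exactly the data shown above (row order in the melted result may differ from the target table):

```python
import pandas as pd

# stud_source as shown above.
src = pd.DataFrame({
    "student_name": ["Joy", "Edward"],
    "class_7":  [60, 65],
    "class_8":  [65, 70],
    "class_9":  [75, 80],
    "class_10": [80, 90],
})

# melt() plays the role of the Normalizer: one output row per
# (student, class) pair, duplicating the single-occurring name.
tgt = src.melt(id_vars="student_name",
               var_name="class", value_name="score")
tgt["class"] = tgt["class"].str.replace("class_", "", regex=False)
print(tgt)   # 8 rows, matching the target table (order may differ)
```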

So, this brings us to the end of the blog. This Tecklearn 'Lookup and Normalizer Transformation in Informatica' blog helps you with commonly asked questions if you are looking for a job in Informatica. If you wish to learn Informatica and build a career in the data warehouse and ETL domain, check out our interactive Informatica Training, which comes with 24/7 support to guide you throughout your learning period. Please find the link for course details:

https://www.tecklearn.com/course/informatica-training-and-certification/

Informatica Training

About the Course

Tecklearn’s Informatica Training will help you master Data Integration concepts such as ETL and Data Mining using Informatica PowerCenter. It will also make you proficient in Advanced Transformations, Informatica Architecture, Data Migration, Performance Tuning, and Installation & Configuration of Informatica PowerCenter. You will get trained in Informatica workflows, data warehousing, Repository Management, and other processes.

Why Should you take Informatica Training?

  • Informatica professionals earn up to $130,000 per year – Indeed.com
  • GE, eBay, PayPal, FedEx, EMC, Siemens, BNY Mellon & other top Fortune 500 companies use Informatica.
  • Key advantages of Informatica PowerCenter: Excellent GUI interfaces for Administration, ETL Design, Job Scheduling, Session monitoring, Debugging, etc.

What you will Learn in this Course?

Informatica PowerCenter 10 – An Overview

  • Informatica & Informatica Product Suite
  • Informatica PowerCenter as ETL Tool
  • Informatica PowerCenter Architecture
  • Component-based development techniques

Data Integration and Data Warehousing Fundamentals

  • Data Integration Concepts
  • Data Profile and Data Quality Management
  • ETL and ETL architecture
  • Brief on Data Warehousing

Informatica Installation and Configuration

  • Configuring the Informatica tool
  • How to install Informatica and configure operational administration activities and Integration Services

Informatica PowerCenter Transformations

  • Visualize PowerCenter Client Tools
  • Data Flow
  • Create and Execute Mapping
  • Transformations and their usage
  • Hands On

Informatica PowerCenter Tasks & Workflows

  • Informatica PowerCenter Workflow Manager
  • Reusability and Scheduling in Workflow Manager
  • Workflow Task and job handling
  • Flow within a Workflow
  • Components of Workflow Monitor

Advanced Transformations

  • Look Up Transformation
  • Active and Passive Transformation
  • Joiner Transformation
  • Types of Caches
  • Hands On

More Advanced Transformations – SQL (Pre-SQL and Post-SQL)

  • Load Types – Bulk, Normal
  • Reusable and Non-Reusable Sessions
  • Categories for Transformation
  • Various Types of Transformation – Filter, Expression, Update Strategy, Sorter, Router, XML, HTTP, Transaction Control

Various Types of Transformation – Rank, Union, Stored Procedure

  • Error Handling and Recovery in Informatica
  • High Availability and Failover in Informatica
  • Best Practices in Informatica
  • Debugger
  • Performance Tuning

Performance Tuning, Design Principles & Caches

  • Performance Tuning Methodology
  • Mapping design tips & tricks
  • Caching & Memory Optimization
  • Partition & Pushdown Optimization
  • Design Principles & Best Practices

Informatica PowerCenter Repository Management

  • Repository Manager tool (functionalities, create and delete, migrate components)
  • PowerCenter Repository Maintenance

Informatica Administration & Security

  • Features of PowerCenter 10
  • Overview of the PowerCenter Administration Console
  • Integration and repository service properties
  • Services in the Administration Console (services, handle locks)
  • Users and groups

Command Line Utilities

  • Infacmd, infasetup, pmcmd, pmrep
  • Automate tasks via command-line programs

More Advanced Transformations – XML

  • Java Transformation
  • HTTP Transformation

Got a question for us? Please mention it in the comments section and we will get back to you.
