DATABASE SQL MINUS SET OPERATOR IN INFORMATICA
How to implement the mapping logic for the database MINUS set operator in Informatica?
The SQL MINUS set operator returns the distinct rows of the first select statement that are not returned by the second select statement. Let's consider the following employees tables as an example:
Table Name: EMP_A
emp_id, name
------------
1, Scott
1, Scott
2, Matt
2, Matt
3, Chris

Table Name: EMP_B
emp_id, name
------------
2, Matt
2, Matt
3, Chris
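For reference, the database query that this mapping has to reproduce is shown below. With the sample data above, it returns only the row 1, Scott.

SELECT emp_id, name FROM EMP_A
MINUS
SELECT emp_id, name FROM EMP_B;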
INFORMATICA SCENARIOS - REAL TIME SCENARIOS
The following list of Informatica scenarios helps you learn Informatica transformations and also prepares you for interviews.
1. Get Previous Row Value
Retrieve the previous row value when processing the current row.
2. Load all rows except last N rows
Skip the last N rows from the source and load the remaining rows.
3. Cumulative Sum Calculation
Find the cumulative sum of values using expression transformation.
GET PREVIOUS ROW VALUE IN INFORMATICA
How to get the previous row value while processing the current row in informatica?
One of my blog readers asked this question. The source data is shown below:
Table Name: Customers
cust_id, Year, City
-----------------------
10, 2001, BLR
10, 2002, MUM
10, 2003, SEA
10, 2004, NY
20, 2001, DEL
20, 2002, NCR
20, 2003, HYD
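For comparison, the same requirement can be expressed in Oracle SQL with the LAG analytic function, as sketched below; in Informatica the equivalent is built with variable ports in an expression transformation.

SELECT cust_id, Year, City,
       LAG(City) OVER (PARTITION BY cust_id ORDER BY Year) AS prev_city
FROM   Customers;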
LOAD ALL RECORDS EXCEPT LAST N - INFORMATICA
Q) I want to load all the records from my source, which is a file, except the last 5 records. This question can also be asked in an interview as "How to remove the footer record, which is the last record?"
Example: My source file contains the following records:
Name
----
A
B
C
D
E
F
G
After excluding the last 5 records, I want to load A, B into the target. How to implement the mapping logic for this in Informatica?
CUMULATIVE SUM CALCULATION IN INFORMATICA
Q) How to find the cumulative sum of salaries of employees in informatica?
I have employees table as a source. The data in the employees table is shown below:
Table name: employees
Dept_Id, emp_id, salary
-----------------------
10, 201, 10000
10, 202, 20000
10, 203, 30000
20, 301, 40000
20, 302, 50000
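For reference, a sketch of the same calculation in Oracle SQL is shown below, assuming the running total restarts for each department; drop the PARTITION BY clause if an overall running total is needed. In Informatica, the equivalent logic uses a variable port in an expression transformation.

SELECT dept_id, emp_id, salary,
       SUM(salary) OVER (PARTITION BY dept_id ORDER BY emp_id) AS cumulative_sum
FROM   employees;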
INFORMATICA REAL TIME SCENARIOS - SOLUTIONS
This is a continuation of my previous post on Informatica Real Time Complex Scenarios, which contains around 50 problems. Here I am providing a few more real time Informatica scenarios with answers.
REVERSE THE CONTENTS OF FLAT FILE – INFORMATICA
Q1) I have a flat file and want to reverse its contents, which means the first record should come out as the last record and the last record should come out as the first record, and then load them into the target file.
As an example consider the source flat file data as
Informatica Enterprise Solution
Informatica Power center
Informatica Power exchange
Informatica Data quality
SOURCE QUALIFIER TRANSFORMATION EXAMPLES - INFORMATICA
Here I am providing some basic examples of using the source qualifier transformation in your mappings. This is a continuation of my previous posts, Learn Source Qualifier Transformation and Quiz on Source Qualifier Transformation.
To solve these examples create the employees and departments tables in your database. The "create table" statements are provided below.
create table DEPARTMENTS
(
  DEPARTMENT_ID   NUMBER(4) not null,
  DEPARTMENT_NAME VARCHAR2(15) not null,
  MANAGER_ID      NUMBER(6)
);

alter table DEPARTMENTS add primary key (DEPARTMENT_ID);

create table EMPLOYEES
(
  EMPLOYEE_ID   NUMBER(6) not null,
  NAME          VARCHAR2(10),
  LAST_NAME     VARCHAR2(10),
  SALARY        NUMBER(10,2),
  MANAGER_ID    NUMBER(6),
  DEPARTMENT_ID NUMBER(4)
);

alter table EMPLOYEES add primary key (EMPLOYEE_ID);
alter table EMPLOYEES add foreign key (DEPARTMENT_ID) references DEPARTMENTS (DEPARTMENT_ID);

Table Name: Employees
EMPLOYEE_ID  NAME   SALARY  MANAGER_ID  DEPARTMENT_ID
-----------------------------------------------------
101          PAT    1000    201         10
102          KEVIN  2000    201         10
201          MIKE   5000    NULL        10
301          JOHN   7000    NULL        NULL

Table Name: Departments
DEPARTMENT_ID  DEPARTMENT_NAME  MANAGER_ID
------------------------------------------
10             Account          201
20             HR               501
Try Solving the below examples:
1. Create a mapping to join the employees and departments tables on the "DEPARTMENT_ID" column using the source qualifier transformation?
Solution:
- Source qualifier transformation can be used to join sources only from the same database.
- Connect the source definitions of departments and employees to the same source qualifier transformation.
- As there is a primary-key, foreign-key relationship between the source tables, the source qualifier transformation by default joins the two sources on the DEPARTMENT_ID column.
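For reference, the query that the source qualifier generates for this default join is roughly equivalent to the following SQL (only a few ports are shown here):

SELECT EMPLOYEES.EMPLOYEE_ID, EMPLOYEES.NAME, EMPLOYEES.SALARY,
       DEPARTMENTS.DEPARTMENT_ID, DEPARTMENTS.DEPARTMENT_NAME
FROM   EMPLOYEES, DEPARTMENTS
WHERE  EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID;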
2. Create a mapping to join the employees and departments tables on the "MANAGER_ID" column using the source qualifier transformation?
Solution:
- Connect the source definitions of departments and employees to the same source qualifier transformation.
- Go to the properties tab of source qualifier ->User Defined Join and then open the editor. Enter the join condition as DEPARTMENTS.MANAGER_ID = EMPLOYEES.MANAGER_ID. Click Ok.
- Now connect the required ports from the source qualifier transformation to the target.
3. Create a mapping to load only the employees who have a manager (MANAGER_ID is not null)?
Solution:
This is very simple. Go to the properties tab of the source qualifier -> Source Filter. Open the editor and enter EMPLOYEES.MANAGER_ID IS NOT NULL.
4. Create a mapping to sort the data of the employees table on DEPARTMENT_ID, SALARY?
Solution:
Make sure the port order in the source qualifier transformation is as shown below.
DEPARTMENT_ID
SALARY
EMPLOYEE_ID
NAME
LAST_NAME
MANAGER_ID
The first two ports should be DEPARTMENT_ID, SALARY and the rest of the ports can be in any order.
Now go to the properties tab of the source qualifier -> Number Of Sorted Ports. Set the Number Of Sorted Ports value to 2.
5. Create a mapping to get only the distinct departments in the employees table?
Solution:
- The source qualifier transformation should only contain the DEPARTMENT_ID port from EMPLOYEES source definition.
- Now go to the properties tab of the source qualifier -> Select Distinct. Check the Select Distinct option.
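For reference, this is equivalent to the source qualifier issuing the following query against the source:

SELECT DISTINCT DEPARTMENT_ID
FROM   EMPLOYEES;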
INFORMATICA PROBLEMS WITH SOLUTIONS - PART 1
1. In this problem, we will see how to implement the not equal, greater than, greater than or equal to, less than, and less than or equal to operators when joining two tables in Informatica.
Consider the below sales table as an example:
Table name: Sales
product, prod_quantity, price , Year
A , 10 , 100 , 2010
B , 15 , 150 , 2010
A , 8 , 80 , 2011
B , 26 , 260 , 2011
Now the problem is to identify the products whose sales in the current year (2011 in this example) are less than their sales in the previous year.
Here, product A sold less in 2011 compared with its sales in 2010.
This problem can be easily implemented with the help of a SQL query, as shown below:
SELECT cy.*
FROM SALES cy,
SALES py
WHERE cy.product = py.product
AND cy.year=2011
AND py.year=2010
AND cy.prod_quantity < py.prod_quantity;
In Informatica, you can specify only an equality condition in the joiner transformation. Now we will see how to implement this problem in Informatica.
Solution:
STEP1: Connect two source qualifier transformations to the source definition. Call the first source qualifier transformation as sq_cy (cy means current year) and the other as sq_py (py means previous year).
STEP2: In the sq_cy source qualifier transformation, specify the source filter as year=2011. In the sq_py, specify the source filter as year=2010.
STEP3: Now connect these two source qualifier transformations to a joiner transformation and make sq_cy the master and sq_py the detail. In the join condition, select the product port from the master and the detail.
STEP4: Now connect all the master ports and only the prod_quantity port from the detail to a filter transformation. In the filter transformation, specify the filter condition as prod_quantity < prod_quantity1. Here the prod_quantity port is from the master and prod_quantity1 is from the detail.
STEP5: Connect all the ports except prod_quantity1 from the filter transformation to the target definition.
2. How to implement the database NOT EXISTS operator in Informatica?
Solution:
Implementing the NOT EXISTS operator is very easy in Informatica. For example, we want to get only the records which are available in table A and not in table B. For this, use a joiner transformation with A as the master and B as the detail. Specify the join condition, and in the join type, select detail outer join. This will get all the records from table A and only the matching records from table B.
Connect the joiner to a filter transformation and specify the filter condition as B_port IS NULL. This will give the records which are in A and not in B. Then connect the filter to the target definition.
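For reference, the database equivalent of this logic is sketched below; the join column name key_col is only an assumption for illustration.

SELECT a.*
FROM   A a
WHERE  NOT EXISTS (SELECT 1
                   FROM   B b
                   WHERE  b.key_col = a.key_col);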
For informatica scenario questions: Informatica Scenarios
INFORMATICA SCENARIO BASED QUESTIONS - PART 5
Q1. The source data contains only the column 'id'. It will have sequence numbers from 1 to 1000. The source data is shown below:
Id
1
2
3
4
5
6
7
8
....
1000
Create a workflow to load only the Fibonacci numbers into the target table. The target table data should look like this:
Id
1
2
3
5
8
13
.....
In a Fibonacci series, each number is the sum of the previous two numbers. Here, assume that the first two numbers of the Fibonacci series are 1 and 2.
Solution:
STEP1: Drag the source to the mapping designer, and then in the Source Qualifier Transformation properties, set the number of sorted ports to one. This sorts the source data in ascending order, so that we get the numbers in sequence as 1, 2, 3, ..., 1000.
STEP2: Connect the Source Qualifier Transformation to the Expression Transformation. In the Expression Transformation, create three variable ports and one output port. Assign the expressions to the ports as shown below.
Ports in Expression Transformation:
id
v_sum = v_prev_val1 + v_prev_val2
v_prev_val1 = IIF(id=1 or id=2,1, IIF(v_sum = id, v_prev_val2, v_prev_val1) )
v_prev_val2 = IIF(id=1 or id =2, 2, IIF(v_sum=id, v_sum, v_prev_val2) )
o_flag = IIF(id=1 or id=2,1, IIF( v_sum=id,1,0) )
STEP3: Now connect the Expression Transformation to the Filter Transformation and specify the Filter Condition as o_flag=1
STEP4: Connect the Filter Transformation to the Target Table.
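For reference, a rough SQL equivalent is sketched below using a recursive WITH clause (Oracle 11gR2 or later); the source table name src is only an assumption.

WITH fib (a, b) AS (
  SELECT 1, 2 FROM dual
  UNION ALL
  SELECT b, a + b FROM fib WHERE b <= 1000
)
SELECT s.id
FROM   src s
WHERE  s.id IN (SELECT a FROM fib)
ORDER  BY s.id;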
Q2. The source table contains two columns "id" and "val". The source data looks like as below
id val
1 a,b,c
2 pq,m,n
3 asz,ro,liqt
Here the "val" column contains comma delimited data and has three fields in that column.
Create a workflow to split the fields in the "val" column into separate rows. The output should look like as shown below.
id val
1 a
1 b
1 c
2 pq
2 m
2 n
3 asz
3 ro
3 liqt
Solution:
STEP1: Connect three Source Qualifier transformations to the Source Definition.
STEP2: Now connect all three Source Qualifier transformations to the Union Transformation. Then connect the Union Transformation to the Sorter Transformation. In the Sorter transformation, sort the data on the id port in ascending order.
STEP3: Pass the output of Sorter Transformation to the Expression Transformation. The ports in Expression Transformation are:
id (input/output port)
val (input port)
v_current_id (variable port) = id
v_count (variable port) = IIF(v_current_id!=v_previous_id,1,v_count+1)
v_previous_id (variable port) = id
o_val (output port) = DECODE(v_count, 1,
SUBSTR(val, 1, INSTR(val,',',1,1)-1 ),
2,
SUBSTR(val, INSTR(val,',',1,1)+1, INSTR(val,',',1,2)-INSTR(val,',',1,1)-1),
3,
SUBSTR(val, INSTR(val,',',1,2)+1),
NULL
)
STEP4: Now pass the output of Expression Transformation to the Target definition. Connect id, o_val ports of Expression Transformation to the id, val ports of Target Definition.
For those who are interested in solving this problem in Oracle SQL, Click Here. The Oracle SQL query provides a dynamic solution where the "val" column can have a varying number of fields in each row.
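As a static sketch only, the three-field split done by the DECODE expression above maps to the following Oracle SQL; the table name src is an assumption.

SELECT id, SUBSTR(val, 1, INSTR(val, ',', 1, 1) - 1) AS val FROM src
UNION ALL
SELECT id, SUBSTR(val, INSTR(val, ',', 1, 1) + 1,
                  INSTR(val, ',', 1, 2) - INSTR(val, ',', 1, 1) - 1) FROM src
UNION ALL
SELECT id, SUBSTR(val, INSTR(val, ',', 1, 2) + 1) FROM src
ORDER  BY id;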
INFORMATICA INTERVIEW QUESTIONS - PART3
1. What is polling?
Polling displays the updated information about the session in the monitor window. The monitor window displays the status of each session when you poll the informatica server.
2. In which circumstances does the informatica server create a reject file?
The informatica server creates a reject file when it encounters DD_Reject in an update strategy transformation, when a row violates a database constraint, or when a field in a row is truncated or overflows.
3. What are the data movement modes in informatica?
Data movement mode determines how informatica server handles the character data. You can choose the data movement mode in the informatica server configuration settings. Two types of data movement modes are available in informatica. They are ASCII mode and Unicode mode.
4. Define mapping and session?
- Mapping: It is a set of source and target definitions linked by transformation objects that define the rules for transformation.
- Session: It is a set of instructions that describe how and when to move data from source to targets.
5. Can you generate reports in informatica?
Yes. By using the Metadata Reporter, we can generate reports in informatica.
6. What is metadata reporter?
It is a web based application that enables you to run reports against repository metadata. With a metadata reporter, you can access information about the repository without having knowledge of SQL, transformation language or underlying tables in the repository.
7. What is the default source option for update strategy transformation?
Data driven.
8. What is Data driven?
The informatica server follows the instructions coded in the update strategy transformations within the mapping and determines how to flag the records for insert, update, delete, or reject. If you do not choose the Data Driven option, the informatica server ignores all update strategy transformations in the mapping.
9. What is source qualifier transformation?
When you add a relational or a flat file source definition to a mapping, you need to connect it to a source qualifier transformation. The source qualifier transformation represents the records that the informatica server reads when it runs a session.
10. What are the tasks that source qualifier perform?
- Joins the data originating from the same source database.
- Filter records when the informatica server reads source data.
- Specify an outer join rather than the default inner join
- Specify sorted records.
- Select only distinct values from the source.
- Create custom query to issue a special SELECT statement for the informatica server to read the source data.
11. What is the default join provided by the source qualifier transformation?
Equi join.
12. What are the basic requirements to join two sources in a source qualifier transformation using default join?
- The two sources should have primary key and foreign key relationship.
- The two sources should have matching data types.
INFORMATICA INTERVIEW QUESTIONS - PART 2
1. What are the differences between joiner transformation and source qualifier transformation?
A joiner transformation can join heterogeneous data sources, whereas a source qualifier can join only homogeneous sources. A source qualifier transformation can join data only from relational sources; it cannot join flat files.
2. What are the limitations of joiner transformation?
You cannot use a joiner transformation in the following conditions:
- Both pipelines begin with the same original data source.
- Both input pipelines originate from the same Source Qualifier transformation.
- Both input pipelines originate from the same Normalizer transformation.
- Both input pipelines originate from the same Joiner transformation.
- Either input pipeline contains an Update Strategy transformation.
- Either input pipeline contains a connected or unconnected Sequence Generator transformation.
3. What are the settings that you use to configure the joiner transformation?
The following settings are used to configure the joiner transformation:
- Master and detail source
- Type of join
- Condition of the join
4. What are the join types in joiner transformation?
The join types are:
- Normal (Default)
- Master outer
- Detail outer
- Full outer
5. What are the joiner caches?
When a Joiner transformation occurs in a session, the Informatica Server reads all the records from the master source and builds index and data caches based on the master rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.
6. What is the look up transformation?
Lookup transformation is used to look up data in a relational table, view, or synonym. The informatica server queries the lookup table based on the lookup ports in the transformation. It compares the lookup transformation port values to the lookup table column values based on the lookup condition.
7. Why use the lookup transformation?
Lookup transformation is used to perform the following tasks:
- Get a related value.
- Perform a calculation.
- Update slowly changing dimension tables.
8. What are the types of lookup transformations?
The types of lookup transformation are connected and unconnected.
9. What is meant by lookup caches?
The informatica server builds a cache in memory when it processes the first row of data in a cached lookup transformation. It allocates memory for the cache based on the amount you configure in the transformation or session properties. The informatica server stores condition values in the index cache and output values in the data cache.
10. What are the types of lookup caches?
- Persistent cache: You can save the lookup cache files and reuse them the next time the informatica server processes a lookup transformation configured to use the cache.
- Re-cache from database: If the persistent cache is not synchronized with the lookup table, you can configure the lookup transformation to rebuild the lookup cache.
- Static cache: You can configure a static or read-only cache for any lookup table. By default, the informatica server creates a static cache. It caches the lookup table and lookup values in the cache for each row that comes into the transformation. When the lookup condition is true, the informatica server does not update the cache while it processes the lookup transformation.
- Dynamic cache: If you want to cache the target table and insert new rows into cache and the target, you can create a look up transformation to use dynamic cache. The informatica server dynamically inserts data to the target table.
- Shared cache: You can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping.
11. Which transformation is used to normalize the data?
Normalizer Transformation is used to normalize the data.
12. Into which transformation can you not drag ports?
Normalizer Transformation.
13. How does the informatica server sort string values in a Rank transformation?
When the informatica server runs in the ASCII data movement mode it sorts session data using Binary sort order. If you configure the session to use a binary sort order, the informatica server calculates the binary value of each string and returns the specified number of rows with the highest binary values for the string.
14. What are the rank caches?
During the session, the informatica server compares an input row with rows in the data cache. If the input row out-ranks a stored row, the informatica server replaces the stored row with the input row. The informatica server stores group information in an index cache and row data in a data cache.
15. What is the Rankindex port in Rank transformation?
The Designer automatically creates a RANKINDEX port for each Rank transformation. The Informatica Server uses the Rank Index port to store the ranking position for each record in a group.
16. What is the Router transformation?
A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. However, a Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. A Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group.
If you need to test the same input data based on multiple conditions, use a Router Transformation in a mapping instead of creating multiple Filter transformations to perform the same task.
17. What are the types of groups in Router transformation?
The different types of groups in router transformation are
- Input group
- Output group
- User defined groups
- Default group
18. What types of data pass between the informatica server and the stored procedure?
Three types of data pass between the informatica server and the stored procedure:
- Input/Output parameters
- Return Values
- Status code.
19. What is the status code in stored procedure transformation?
The status code provides error handling for the informatica server during the session. The stored procedure issues a status code that notifies whether or not the stored procedure completed successfully. This value cannot be seen by the user. It is used only by the informatica server to determine whether to continue running the session or stop it.
20. What is the target load order?
You can specify the target load order based on source qualifiers in a mapping. If you have multiple source qualifiers connected to multiple targets, you can designate the order in which the informatica server loads data into the targets.
INFORMATICA INTERVIEW QUESTIONS - PART 1
1. While importing the relational source definition from the database, what are the metadata of source that will be imported?
The metadata of the source that will be imported are:
- Source name
- Database location
- Column names
- Data types
- Key constraints
2. How many ways a relational source definition can be updated and what are they?
There are two ways to update the relational source definition:
- Edit the definition
- Re-import the definition
3. Where should the flat file be placed to import its definition into the designer?
Place the flat file in a local folder on the local machine.
4. To provide support for Mainframes source data, which files are used as a source definitions?
COBOL files
5. Which transformation is needed while using the cobol sources as source definitions?
As COBOL sources consist of denormalized data, a normalizer transformation is required to normalize the data.
6. How to create or import flat file definition in to the warehouse designer?
We cannot create or import a flat file definition into the warehouse designer directly. We can create or import the file definition in the source analyzer and then drag it into the warehouse designer.
7. What is a mapplet?
A mapplet is a set of transformations that you build in the mapplet designer and can be used in multiple mappings.
8. What is a transformation?
It is a repository object that generates, modifies or passes data.
9. What are the designer tools for creating transformations?
- Mapping designer
- Transformation developer
- Mapplet designer
10. What is the difference between an active and a passive transformation?
An active transformation can change the number of rows that pass through it. A passive transformation does not change the number of rows that pass through it.
11. What are connected or unconnected transformations?
An unconnected transformation is not connected to other transformations in the mapping. A connected transformation is connected to other transformations in the mapping pipeline.
12. How many ways are there to create ports?
There are two ways to create the ports:
- Drag the port from another transformation
- Click the add button on the ports tab.
13. What are the reusable transformations?
Reusable transformations can be used in multiple mappings and mapplets. When you need to include this transformation in a mapping or a mapplet, an instance of it is dragged into the mapping or mapplet. Since the instance of a reusable transformation is a pointer to that transformation, any change in the reusable transformation will be inherited by all the instances.
14. What are the methods for creating reusable transformations?
Two methods:
- Design it in the transformation developer.
- Promote a standard transformation (Non reusable) from the mapping designer. After adding a transformation to the mapping, we can promote it to the status of reusable transformation.
15. What are the unsupported repository objects for a mapplet?
- COBOL source definition
- Joiner transformations
- Normalizer transformations
- Non reusable sequence generator transformations.
- Pre or post session stored procedures
- Target definitions
- PowerMart 3.5-style LOOKUP functions
- XML source definitions
- IBM MQ source definitions
16. What are the mapping parameters and mapping variables?
- Mapping parameter represents a constant value which is defined before running a session. A mapping parameter retains the same value throughout the entire session. A parameter can be declared either in a mapping or mapplet and can have a default value. We can specify the value of the parameter in the parameter file and the session reads the parameter value from the parameter file.
- Unlike a mapping parameter, a mapping variable represents a value that can change throughout the session. The informatica server saves the value of a mapping variable in the repository at the end of the session run and uses that value the next time the session runs.
17. Can we use the mapping parameters or variables created in one mapping into another mapping?
NO. We can use the mapping parameters or variables only in the transformations of the same mapping or mapplet in which we have created the mapping parameters or variables.
18. Can we use the mapping parameters or variables created in one mapping into any other reusable transformation?
Yes, as the instance of the reusable transformation created in the mapping belongs to that mapping only.
19. How can we improve session performance in aggregator transformation?
Use sorted input. Sort the input on the ports that are specified as group by ports in the aggregator.
20. What is aggregate cache in aggregator transformation?
The aggregator stores data in the aggregate cache until it completes aggregate calculations. When we run a session that uses an aggregator transformation, the informatica server creates index and data caches in memory to process the transformation. If the informatica server requires more space, it stores overflow values in cache files.
INFORMATICA SCENARIO BASED QUESTIONS - PART 4
Take a look at the following tree structure diagram. From the tree structure, you can easily derive the parent-child relationship between the elements. For example, B is parent of D and E.
The above tree structure data is represented in a table as shown below.
c1, c2, c3, c4
A, B, D, H
A, B, D, I
A, B, E, NULL
A, C, F, NULL
A, C, G, NULL
Here in this table, column C1 is parent of column C2, column C2 is parent of column C3, column C3 is parent of column C4.
Q1. Design a mapping to load the target table with the below data. Here you need to generate sequence numbers for each element and then you have to get the parent id. As the element "A" is at root, it does not have any parent and its parent_id is NULL.
id, element, parent_id
1, A, NULL
2, B, 1
3, C, 1
4, D, 2
5, E, 2
6, F, 3
7, G, 3
8, H, 4
9, I, 4
I have provided the solution for this problem as an Oracle SQL query. If you are interested, you can Click Here to see the solution.
Q2. This is an extension of problem Q1. Let's say column C2 has NULL for all the rows; then C1 becomes the parent of C3, and C3 is the parent of C4. Let's say both columns C2 and C3 have NULL for all the rows; then C1 becomes the parent of C4. Design a mapping to accommodate these types of NULL conditions.
For more scenario based questions visit
Part1
Part2
Part3
Part4
Part5
INFORMATICA INTERVIEW QUESTIONS ON TRANSFORMATIONS
The most commonly used transformations are listed in the table below. Click on a transformation to see the interview questions on that particular transformation.
1. What is a transformation?
A transformation is a repository object that generates, modifies, or passes data.
2. What is an active transformation?
An active transformation is the one which changes the number of rows that pass through it.
Example: Filter transformation
3. What is a passive transformation?
A passive transformation is the one which does not change the number of rows that pass through it.
Example: Expression transformation
4. What is a connected transformation?
A connected transformation is connected to the data flow or connected to the other transformations in the mapping pipeline.
Example: sorter transformation
5. What is an unconnected transformation?
An unconnected transformation is not connected to other transformations in the mapping. An unconnected transformation is called within another transformation and returns a value to that transformation.
Example: Unconnected lookup transformation, unconnected stored procedure transformation
6. What are multi-group transformations?
Transformations having multiple input and output groups are called multi-group transformations.
Examples: Custom, HTTP, Joiner, Router, Union, Unstructured Data, XML source qualifier, XML Target definition, XML parser, XML generator
7. List all the transformations that use a cache.
Aggregator, Joiner, Lookup, Rank, Sorter
8. What is blocking transformation?
A transformation that blocks the input rows is called a blocking transformation.
Example: Custom transformation, unsorted joiner
9. What is a reusable transformation?
A reusable transformation is one that can be used in multiple mappings. A reusable transformation is created in the transformation developer.
10. How do you promote a non-reusable transformation to reusable transformation?
Edit the transformation and check the Make Reusable option
11. How to create a non-reusable instance of reusable transformations?
In the navigator, select an existing transformation and drag the transformation into the mapping workspace. Hold down the Ctrl key before you release the transformation.
12. Which transformation can be created only as reusable transformation but not as non-reusable transformation?
External procedure transformation.
Transformation      | Type
--------------------|---------------------------------
Aggregator          | Active/Connected
Expression          | Passive/Connected
Filter              | Active/Connected
Joiner              | Active/Connected
Lookup              | Passive/Connected or Unconnected
Normalizer          | Active/Connected
Rank                | Active/Connected
Router              | Active/Connected
Sequence Generator  | Passive/Connected
Sorter              | Active/Connected
Source Qualifier    | Active/Connected
SQL                 | Active or Passive/Connected
Stored Procedure    | Passive/Connected or Unconnected
Transaction Control | Active/Connected
Union               | Active/Connected
Update Strategy     | Active/Connected
INFORMATICA INTERVIEW QUESTIONS ON UNION TRANSFORMATION
1. What is a union transformation?
A union transformation is used to merge data from multiple sources, similar to the UNION ALL SQL statement that combines the results from two or more SQL statements.
2. As union transformation gives UNION ALL output, how you will get the UNION output?
Pass the output of union transformation to a sorter transformation. In the properties of sorter transformation check the option select distinct. Alternatively you can pass the output of union transformation to aggregator transformation and in the aggregator transformation specify all ports as group by ports.
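For reference, the SQL analogy is sketched below; the table names sales_2010 and sales_2011 are only assumptions for illustration.

SELECT product_id FROM sales_2010
UNION ALL   -- keeps duplicates, like the Union transformation
SELECT product_id FROM sales_2011;

SELECT product_id FROM sales_2010
UNION       -- removes duplicates, like Union followed by a Sorter with Select Distinct
SELECT product_id FROM sales_2011;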
3. What are the guidelines to be followed while using union transformation?
The following rules and guidelines need to be taken care of while working with the union transformation:
- You can create multiple input groups, but only one output group.
- All input groups and the output group must have matching ports. The precision, datatype, and scale must be identical across all groups.
- The Union transformation does not remove duplicate rows. To remove duplicate rows, you must add another transformation such as a Router or Filter transformation.
- You cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
- The Union transformation does not generate transactions.
4. Why union transformation is an active transformation?
Union is an active transformation because it combines two or more data streams into one. Though the total number of rows passing into the Union is the same as the total number of rows passing out of it, and the sequence of rows from any given input stream is preserved in the output, the positions of the rows are not preserved; i.e., row number 1 from input stream 1 might not be row number 1 in the output stream. Union does not even guarantee that the output is repeatable.
Recommended Posts:
Aggregator Transformation Interview Questions
Filter Transformation Interview Questions
Joiner Transformation Interview Questions
Normalizer Transformation Interview Questions
Rank Transformation Interview Questions
INFORMATICA INTERVIEW QUESTIONS ON TRANSACTION CONTROL TRANSFORMATION
1. What is a transaction control transformation?
A transaction is a set of rows bound by a commit or rollback of rows. The transaction control transformation is used to commit or rollback a group of rows.
2. What is the commit type if you have a transaction control transformation in the mapping?
The commit type is "user-defined".
3. What are the different transaction levels available in transaction control transformation?
The following are the transaction levels or built-in variables:
- TC_CONTINUE_TRANSACTION: The Integration Service does not perform any transaction change for this row. This is the default value of the expression.
- TC_COMMIT_BEFORE: The Integration Service commits the transaction, begins a new transaction, and writes the current row to the target. The current row is in the new transaction.
- TC_COMMIT_AFTER: The Integration Service writes the current row to the target, commits the transaction, and begins a new transaction. The current row is in the committed transaction.
- TC_ROLLBACK_BEFORE: The Integration Service rolls back the current transaction, begins a new transaction, and writes the current row to the target. The current row is in the new transaction.
- TC_ROLLBACK_AFTER: The Integration Service writes the current row to the target, rolls back the transaction, and begins a new transaction. The current row is in the rolled back transaction.
Recommended Posts:
Union Transformation Interview Questions
Aggregator Transformation Interview Questions
Filter Transformation Interview Questions
Joiner Transformation Interview Questions
Normalizer Transformation Interview Questions
INFORMATICA INTERVIEW QUESTIONS ON SORTER TRANSFORMATION
1. What is a sorter transformation?
Sorter transformation is used to sort the data. You can sort the data either in ascending or descending order according to a specified sort key.
2. Why sorter is an active transformation?
As the sorter transformation can suppress duplicate records in the source (using the Select Distinct option), it is called an active transformation.
3. How to improve the performance of a session using sorter transformation?
Sort the data using a sorter transformation before passing it into an aggregator or joiner transformation. As the data is sorted, the integration service uses memory to do the aggregate and join operations and does not use cache files to process the data.
Recommended Posts:
Transaction Control Transformation Interview Questions
Union Transformation Interview Questions
Aggregator Transformation Interview Questions
Filter Transformation Interview Questions
Joiner Transformation Interview Questions
INFORMATICA INTERVIEW QUESTIONS ON EXPRESSION TRANSFORMATION
1. What is an expression transformation?
An expression transformation is used to calculate values in a single row.
Example: salary+1000
2. How to generate sequence numbers using expression transformation?
Create a variable port in expression transformation and increment it by one for every row. Assign this variable port to an output port.
3. Consider the following employees data as the source:
Employee_id, Salary
-------------------
10, 1000
20, 2000
30, 3000
40, 5000
Q1. Design a mapping to load the cumulative sum of salaries of employees into target table?
The target table data should look like as
Employee_id, Salary, Cumulative_sum
-----------------------------------
10, 1000, 1000
20, 2000, 3000
30, 3000, 6000
40, 5000, 11000
Q2. Design a mapping to get the previous row salary for the current row. If no previous row exists for the current row, then the previous row salary should be displayed as null.
The output should look like as
Employee_id, Salary, Pre_row_salary
-----------------------------------
10, 1000, Null
20, 2000, 1000
30, 3000, 2000
40, 5000, 3000
4. Consider the following employees table as source
Department_no, Employee_name
----------------------------
20, R
10, A
10, D
20, P
10, B
10, C
20, Q
20, S
Q1. Design a mapping to load a target table with the following values from the above source?
Department_no, Employee_list
----------------------------
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, A,B,C,D,P
20, A,B,C,D,P,Q
20, A,B,C,D,P,Q,R
20, A,B,C,D,P,Q,R,S
Q2. Design a mapping to load a target table with the following values from the above source?
Department_no, Employee_list
----------------------------
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, P
20, P,Q
20, P,Q,R
20, P,Q,R,S
Click Here to know the solutions for questions 3 and 4.
Recommended Posts:
Sorter Transformation Interview Questions
Transaction Control Transformation Interview Questions
Union Transformation Interview Questions
Aggregator Transformation Interview Questions
Filter Transformation Interview Questions
INFORMATICA SCENARIO BASED QUESTIONS - PART 3
1. Consider the following product types data as the source.
Product_id, product_type
10, video
10, Audio
20, Audio
30, Audio
40, Audio
50, Audio
10, Movie
20, Movie
30, Movie
40, Movie
50, Movie
60, Movie
Assume that only 3 product types are available in the source. The source contains 12 records, and you don't know how many products are available in each product type.
Q1. Design a mapping to select 9 products in such a way that 3 products should be selected from video, 3 products should be selected from Audio and the remaining 3 products should be selected from Movie.
Solution:
Step1: Use sorter transformation and sort the data using the key as product_type.
Step2: Connect the sorter transformation to an expression transformation. In the expression transformation, the ports will be
product_id
product_type
V_curr_prod_type=product_type
V_count = IIF(V_curr_prod_type = V_prev_prod_type,V_count+1,1)
V_prev_prod_type=product_type
O_count=V_count
Step3: Now connect the expression transformation to a filter transformation and specify the filter condition as O_count<=3. Pass the output of the filter to a target table.
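For reference, a sketch of the same top-3-per-type selection in Oracle SQL is shown below; the table name products and the tie-break order are assumptions.

SELECT product_id, product_type
FROM  (SELECT product_id, product_type,
              ROW_NUMBER() OVER (PARTITION BY product_type ORDER BY product_id) AS rn
       FROM   products)
WHERE  rn <= 3;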
Q2. In the above problem Q1, if the number of products in a particular product type is less than 3, then you won't get a total of 9 records in the target table. For example, see the video type in the source data. Now design a mapping in such a way that even if the number of products in a particular product type is less than 3, the shortfall is made up from the other product types. For example: if the number of products in video is 1, then the remaining 2 records should come from Audio or Movie. So, the total number of records in the target table should always be 9.
Solution:
The first two steps are the same as above.
Step3: Connect the expression transformation to a sorter transformation and sort the data using O_count as the key. The ports in the sorter transformation will be
product_id
product_type
O_count (sort key)
Step4: Discard the O_count port and connect the sorter transformation to an expression transformation. The ports in the expression transformation will be
product_id
product_type
V_count=V_count+1
O_prod_count=V_count
Step5: Connect the expression to a filter transformation and specify the filter condition as O_prod_count<=9. Connect the filter transformation to a target table.
2. Design a mapping to convert column data into row data without using the normalizer transformation.
The source data looks like
col1, col2, col3
a, b, c
d, e, f
The target table data should look like
Col
a
b
c
d
e
f
Solution:
Create three expression transformations with one port each. Connect col1 from Source Qualifier to port in first expression transformation. Connect col2 from Source Qualifier to port in second expression transformation. Connect col3 from source qualifier to port in third expression transformation. Create a union transformation with three input groups and each input group should have one port. Now connect the expression transformations to the input groups and connect the union transformation to the target table.
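For reference, a SQL sketch of the same column-to-row conversion is shown below; the source table name src is an assumption.

SELECT col1 AS col FROM src
UNION ALL
SELECT col2 FROM src
UNION ALL
SELECT col3 FROM src;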
3. Design a mapping to convert row data into column data.
The source data looks like
id, value
10, a
10, b
10, c
20, d
20, e
20, f
The target table data should look like
id, col1, col2, col3
10, a, b, c
20, d, e, f
Solution:
Step1: Use sorter transformation and sort the data using id port as the key. Then connect the sorter transformation to the expression transformation.
Step2: In the expression transformation, create the ports and assign the expressions as mentioned below.
id
value
V_curr_id=id
V_count= IIF(V_curr_id=V_prev_id,V_count+1,1)
V_prev_id=id
O_col1= IIF(V_count=1,value,NULL)
O_col2= IIF(V_count=2,value,NULL)
O_col3= IIF(V_count=3,value,NULL)
Step3: Connect the expression transformation to an aggregator transformation. In the aggregator transformation, create the ports and assign the expressions as mentioned below.
id (specify group by on this port)
O_col1
O_col2
O_col3
col1=MAX(O_col1)
col2=MAX(O_col2)
col3=MAX(O_col3)
Step4: Now connect the ports id, col1, col2, col3 from the aggregator transformation to the target table.
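For reference, a SQL sketch of the same row-to-column pivot is shown below; the table name src is an assumption, and the values within each id are ordered alphabetically as in the example.

SELECT id,
       MAX(DECODE(rn, 1, value)) AS col1,
       MAX(DECODE(rn, 2, value)) AS col2,
       MAX(DECODE(rn, 3, value)) AS col3
FROM  (SELECT id, value,
              ROW_NUMBER() OVER (PARTITION BY id ORDER BY value) AS rn
       FROM   src)
GROUP  BY id;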
For more scenario based questions visit
Part1
Part2
Part3
Part4
Part5