Tuesday, 24 November 2015

Performance Booster: BULK COLLECT / BULK BIND / CURRENT OF

While programming in PL/SQL, the following are common data-manipulation scenarios:


1. Selecting multiple records from a cursor or table (BULK COLLECT)
2. Inserting or updating multiple records in the database (BULK BIND with FORALL)
3. Updating the latest row fetched from a cursor (CURRENT OF)

 1. Selecting multiple records from a cursor or table (BULK COLLECT)

 Concept of Context Switching
=======================
Almost every PL/SQL developer writes both SQL and PL/SQL statements in code. SQL statements are executed by the SQL engine, and PL/SQL statements are executed by the PL/SQL engine. When the PL/SQL engine encounters a SQL statement, it passes control to the SQL engine; control returns to the PL/SQL engine when the next PL/SQL statement is reached.
This is called context switching.

The following procedure accepts a department ID and a salary-increase percentage and applies the increase to each employee in that department.

It uses a cursor FOR loop to fetch the employees for the given department ID and then updates the employee table with the increased salary.


increase_salary procedure with FOR loop

PROCEDURE increase_salary (
   department_id_in   IN employees.department_id%TYPE,
   increase_pct_in    IN NUMBER)
IS
BEGIN
   FOR employee_rec
      IN (SELECT employee_id
            FROM employees
           WHERE department_id =
                    increase_salary.department_id_in)
   LOOP
      UPDATE employees emp
         SET emp.salary = emp.salary + 
             emp.salary * increase_salary.increase_pct_in
       WHERE emp.employee_id = employee_rec.employee_id;
   END LOOP;
END increase_salary;


Suppose there are 100 employees in department 15. When I execute this block,

BEGIN
   increase_salary (15, .10);
END;
 

Executing this procedure causes 100 context switches between the SQL and PL/SQL engines, one per row. This row-by-row switching is a performance overhead.




Simplified increase_salary procedure without FOR loop

PROCEDURE increase_salary (
   department_id_in   IN employees.department_id%TYPE,
   increase_pct_in    IN NUMBER)
IS
BEGIN
   UPDATE employees emp
      SET emp.salary =
               emp.salary
             + emp.salary * increase_salary.increase_pct_in
    WHERE emp.department_id = 
             increase_salary.department_id_in;
END increase_salary;



All the work is done with a single context switch to execute the UPDATE statement. A plain UPDATE statement is by nature a bulk (set-based) operation.


In real applications the code is rarely this simple; several data-manipulation steps may be needed before updating the data. Suppose, for example, that in the increase_salary procedure I need to check each employee's eligibility for the salary increase and, if an employee is ineligible, send an e-mail notification. My procedure might then look like this:


PROCEDURE increase_salary (
   department_id_in   IN employees.department_id%TYPE,
   increase_pct_in    IN NUMBER)
IS
   l_eligible   BOOLEAN;
BEGIN
   FOR employee_rec
      IN (SELECT employee_id
            FROM employees
           WHERE department_id =
                    increase_salary.department_id_in)
   LOOP
      check_eligibility (employee_rec.employee_id,
                         increase_pct_in,
                         l_eligible);

      IF l_eligible
      THEN
         UPDATE employees emp
            SET emp.salary =
                     emp.salary
                   +   emp.salary
                     * increase_salary.increase_pct_in
          WHERE emp.employee_id = employee_rec.employee_id;
      END IF;
   END LOOP;
END increase_salary;

Now the work is no longer done in a single context switch; we are back to row-by-row processing.


Bulk Processing in PL/SQL

The bulk processing features of PL/SQL are designed specifically to reduce the number of context switches required to communicate from the PL/SQL engine to the SQL engine.

Use the BULK COLLECT clause to fetch multiple rows into one or more collections with a single context switch.

Use the FORALL statement when you need to execute the same DML statement repeatedly for different bind variable values. The UPDATE statement in the increase_salary procedure fits this scenario; the only thing that changes with each new execution of the statement is the employee ID.



CREATE OR REPLACE PROCEDURE increase_salary (
   department_id_in   IN employees.department_id%TYPE,
   increase_pct_in    IN NUMBER)
IS
   TYPE employee_ids_t IS TABLE OF employees.employee_id%TYPE
           INDEX BY PLS_INTEGER;
   l_employee_ids   employee_ids_t;
   l_eligible_ids   employee_ids_t;

   l_eligible       BOOLEAN;
BEGIN
   SELECT employee_id
     BULK COLLECT INTO l_employee_ids
     FROM employees
    WHERE department_id = increase_salary.department_id_in;

   FOR indx IN 1 .. l_employee_ids.COUNT
   LOOP
      check_eligibility (l_employee_ids (indx),
                         increase_pct_in,
                         l_eligible);

      IF l_eligible
      THEN
         l_eligible_ids (l_eligible_ids.COUNT + 1) :=
            l_employee_ids (indx);
      END IF;
   END LOOP;

   FORALL indx IN 1 .. l_eligible_ids.COUNT
      UPDATE employees emp
         SET emp.salary =
                  emp.salary
                + emp.salary * increase_salary.increase_pct_in
       WHERE emp.employee_id = l_eligible_ids (indx);
END increase_salary;



The BULK COLLECT query fetches all the employee IDs into the l_employee_ids collection in a single context switch.

The FORALL statement performs the bulk update: rather than moving back and forth between the PL/SQL and SQL engines, FORALL gathers all the updates and passes them to the SQL engine in a single context switch.
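The same batching idea exists outside PL/SQL. As an illustrative sketch only (Python with the stdlib sqlite3 module, against a made-up employees table), the DB-API executemany call plays the role FORALL plays here: one statement, many bind-value sets, far fewer round trips than issuing one UPDATE per row:

```python
import sqlite3

# Build a tiny in-memory table (all names and data invented for the sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (employee_id INTEGER, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [(1, 1000.0), (2, 2000.0), (3, 3000.0)],
)

# Analogue of FORALL: one UPDATE statement, a list of bind-value sets.
eligible = [(0.10, 1), (0.10, 3)]  # (increase_pct, employee_id)
conn.executemany(
    "UPDATE employees SET salary = salary + salary * ? "
    "WHERE employee_id = ?",
    eligible,
)

rows = conn.execute(
    "SELECT employee_id, salary FROM employees ORDER BY employee_id"
).fetchall()
print(rows)  # -> [(1, 1100.0), (2, 2000.0), (3, 3300.0)]
```

Only the eligible employees (1 and 3) are updated, and the driver sends the whole batch in one call.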


IMPORTANT THINGS to know when starting to take advantage of BULK COLLECT

The trade-off: BULK COLLECT runs faster but consumes more memory.

 There are two types of memory: the SGA (System Global Area) and the PGA (Program Global Area).
The SGA is shared by all sessions connected to the database, while a PGA is allocated for each session. Memory for collections comes from the PGA.

Thus, if a program requires 5MB of memory to populate a collection and there are 100 simultaneous connections, that program causes the consumption of 500MB of PGA memory, in addition to the memory allocated to the SGA.


PL/SQL makes the developer's life easier with the LIMIT clause on BULK COLLECT, which controls the amount of memory used:

  FETCH employees_cur 
            BULK COLLECT INTO l_employees LIMIT limit_in;


 With LIMIT, each fetch retrieves at most the number of records specified in the limit_in parameter. PL/SQL reuses the same memory for each subsequent fetch, so even as the table grows, PGA consumption stays constant.

When you use BULK COLLECT and collections to fetch data from a cursor, you should never rely on the cursor attributes to decide whether to terminate your loop and data processing. Instead, test the collection itself:

EXIT WHEN
l_table_with_227_rows.COUNT = 0;




Generally, you should keep all of the following in mind when working with BULK COLLECT: 

  1.  The collection is always filled sequentially, starting from index value 1.
  2.  It is always safe (that is, you will never raise a NO_DATA_FOUND exception) to iterate through a collection from 1 to collection.COUNT when it has been filled with BULK COLLECT.
  3.  The collection is empty when no rows are fetched.
  4.  Always check the contents of the collection (with the COUNT method) to see if there are more rows to process.
  5.  Ignore the values returned by the cursor attributes, especially %NOTFOUND.
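These rules can be illustrated with a small sketch (Python here; the fetch function and its 227 rows are invented for the example, echoing l_table_with_227_rows above). It shows why the batch's element count, not a not-found flag, is the safe termination test:

```python
def make_cursor(rows):
    """A made-up 'cursor' that fetches at most `limit` rows per call,
    mimicking FETCH ... BULK COLLECT ... LIMIT."""
    state = {"pos": 0}

    def fetch(limit):
        batch = rows[state["pos"]:state["pos"] + limit]
        state["pos"] += len(batch)
        notfound = len(batch) < limit  # analogue of %NOTFOUND
        return batch, notfound

    return fetch

fetch = make_cursor(list(range(227)))
processed = 0
while True:
    batch, notfound = fetch(100)
    if len(batch) == 0:       # correct test: is the batch empty?
        break
    processed += len(batch)   # the last, partial batch of 27 rows is
                              # processed even though notfound was
                              # already True when it was fetched
print(processed)  # -> 227
```

Exiting on the not-found flag instead of the empty-batch test would silently skip the final 27 rows.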




Saturday, 21 November 2015

Leading Ranks and Lagging Percentages: Analytic Functions

This post explains 7 analytic functions that change the way you display results:

1. DENSE_RANK()  over (partition by department_id order by salary desc)
2. RANK()  over (partition by department_id order by salary desc)

3. FIRST_VALUE(column)  over (partition by department_id order by salary desc)
4. LAST_VALUE(column)  over (partition by department_id order by salary desc)

5. LEAD(column | expression, offset, default)  over (partition by department_id order by salary desc)
6. LAG(column | expression, offset, default)  over (partition by department_id order by salary desc)
   offset: how many rows back (LAG) or ahead (LEAD) to look

7. RATIO_TO_REPORT(column | expression)  over ()


1. Rank the data

        Display the three employees with the highest salaries by department.

 A query that retrieves the top or bottom N rows from the database that satisfy a certain condition is referred to as a Top-N query.
  Business requirement: who are the most highly paid employees, or which department has the lowest sales figures?


Example :

select department_id, last_name, first_name, salary,
          DENSE_RANK() over (partition by department_id
                                 order by salary desc) dense_ranking
      from employee
      order by department_id, salary desc, last_name, first_name;
 
=========================================================================================
 
 
 
 
 
DEPARTMENT_ID LAST_NAME    FIRST_NAME                    SALARY DENSE_RANKING
————————————— ———————————  —————————————————————————     —————— —————————————
           10 Dovichi      Lori                                             1
           10 Eckhardt     Emily                         100000             2
           10 Newton       Donald                         80000             3
           10 Michaels     Matthew                        70000             4
           10 Friedli      Roger                          60000             5
           10 James        Betsy                          60000             5
           20 peterson     michael                        90000             1
           20 leblanc      mark                           65000             2
           30 Jeffrey      Thomas                        300000             1
           30 Wong         Theresa                        70000             2
              Newton       Frances                        75000             1

11 rows selected.
 
 This result reveals an interesting aspect of the DENSE_RANK analytic function. When a query
uses a descending order, a NULL value can affect the outcome of the analytic function.

By default, with a descending sort, SQL views NULL as being higher than any other value.

In the result, the record for Dovichi, Lori has a NULL salary, and the DENSE_RANK analytic function
assigns it the highest rank, 1, in department 10.

-- You can eliminate the NULL rows by adding the where clause: salary is not null.

 Alternatively, you can use the NULLS LAST extension to the ORDER BY clause:
==========================================================================================
 
select department_id, last_name, first_name, salary,
        DENSE_RANK() over (partition by department_id
                               order by salary desc NULLS LAST) dense_ranking
      from employee
    order by department_id, salary desc, last_name, first_name;

DEPARTMENT_ID LAST_NAME    FIRST_NAME                    SALARY DENSE_RANKING
————————————— ———————————  —————————————————————————     —————— —————————————
           10 Dovichi      Lori                                             5
           10 Eckhardt     Emily                         100000             1
           10 Newton       Donald                         80000             2
           10 Michaels     Matthew                        70000             3
           10 Friedli      Roger                          60000             4
           10 James        Betsy                          60000             4
           20 peterson     michael                        90000             1
           20 leblanc      mark                           65000             2
           30 Jeffrey      Thomas                        300000             1
           30 Wong         Theresa                        70000             2
              Newton       Frances                        75000             1

11 rows selected. 
 
 The NULL record still appears at the top because of the outer ORDER BY clause; only the ranking changed.
  
 --  Quick notes:

1. In the query output, two rows have the same salary
    and receive the same rank, 4.

2. The next record, with the next-lower ordering value, gets the next rank, 5.

  DENSE_RANK returns ranking numbers without any gaps, regardless of how many records
   have the same value for the expression in the ORDER BY clause.
   In contrast, the RANK analytic function also assigns the same rank to records with the same value,
    but the subsequent rank number accounts for the ties by skipping ahead.
 
 
 
 select department_id, last_name, first_name, salary,
           RANK() over (partition by department_id
                            order by salary desc NULLS LAST) regular_ranking
      from employee
    order by department_id, salary desc, last_name, first_name;

DEPARTMENT_ID LAST_NAME    FIRST_NAME                SALARY REGULAR_RANKING
————————————— ———————————  ———————————————————————   —————— ———————————————
           10 Dovichi      Lori                                           6
           10 Eckhardt     Emily                     100000               1
           10 Newton       Donald                     80000               2
           10 Michaels     Matthew                    70000               3
           10 Friedli      Roger                      60000               4
           10 James        Betsy                      60000               4
           20 peterson     michael                    90000               1
           20 leblanc      mark                       65000               2
           30 Jeffrey      Thomas                    300000               1
           30 Wong         Theresa                    70000               2
              Newton       Frances                    75000               1

11 rows selected.

  In department 10 the rows show the ranking difference between RANK and DENSE_RANK:
 the tied rows both get rank 4, but the subsequent record gets rank 6 with RANK instead of the 5 it gets with DENSE_RANK.
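A minimal sketch of the two ranking rules (Python, using the department 10 salaries from the result above plus a hypothetical 50000 row so the gap after the tie is visible):

```python
def ranks(values):
    """Compute (value, RANK, DENSE_RANK) for values ordered descending."""
    ordered = sorted(values, reverse=True)
    rank, dense = {}, {}
    for i, v in enumerate(ordered):
        if v not in rank:
            rank[v] = i + 1            # RANK: positional, so ties leave gaps
            dense[v] = len(dense) + 1  # DENSE_RANK: consecutive, no gaps
    return [(v, rank[v], dense[v]) for v in ordered]

# Department 10 salaries, with an invented 50000 appended for illustration
result = ranks([100000, 80000, 70000, 60000, 60000, 50000])
for row in result:
    print(row)
```

The two 60000 rows both come out as (60000, 4, 4), and the row after the tie comes out as (50000, 6, 5): RANK skips to 6, DENSE_RANK continues with 5.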


FINISHING FIRST OR LAST:

For reporting purposes it is occasionally useful to include the first value obtained in a particular group or window.
 For that you can use the FIRST_VALUE analytic function.

Display the first value returned per window, using FIRST_VALUE 
SQL> select last_name, first_name, department_id, hire_date, salary,
         FIRST_VALUE(salary)
         over (partition by department_id order by hire_date) first_sal_by_dept
     from employee
    order by department_id, hire_date;

LAST_NAME     FIRST_NAME   DEPARTMENT_ID HIRE_DATE  SALARY FIRST_SAL_BY_DEPT
————————— ——————————————  —————————————— ————————— ——————— —————————————————
Eckhardt      Emily                   10 07-JUL-04  100000            100000
Newton        Donald                  10 24-SEP-06   80000            100000
James         Betsy                   10 16-MAY-07   60000            100000
Friedli       Roger                   10 16-MAY-07   60000            100000
Michaels      Matthew                 10 16-MAY-07   70000            100000
Dovichi       Lori                    10 07-JUL-11                    100000
peterson      michael                 20 03-NOV-08   90000             90000
leblanc       mark                    20 06-MAR-09   65000             90000
Jeffrey       Thomas                  30 27-FEB-10  300000            300000
Wong          Theresa                 30 27-FEB-10   70000            300000
Newton        Frances                    14-SEP-05   75000             75000

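The windowing behaviour can be sketched outside SQL as well. This Python illustration (a hand-copied subset of the rows above, with ISO-formatted hire dates) attaches each partition's first salary to every row in the partition:

```python
from itertools import groupby
from operator import itemgetter

# (department_id, hire_date, salary) — subset of the result above
rows = [
    (10, "2004-07-07", 100000),
    (10, "2006-09-24", 80000),
    (10, "2007-05-16", 60000),
    (20, "2008-11-03", 90000),
    (20, "2009-03-06", 65000),
]

# FIRST_VALUE(salary) OVER (PARTITION BY department_id ORDER BY hire_date):
# every row in a partition carries the salary of the partition's first row.
out = []
for dept, grp in groupby(sorted(rows), key=itemgetter(0)):
    grp = list(grp)
    first_sal = grp[0][2]  # salary of the earliest hire in this department
    out.extend((dept, hire, sal, first_sal) for _, hire, sal in grp)

for row in out:
    print(row)
```

Every department 10 row carries 100000 (Eckhardt's salary) and every department 20 row carries 90000, matching the FIRST_SAL_BY_DEPT column.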

In the Lead and Lagging Behind

It is a common requirement to access the record that precedes or follows the current row. With the LEAD and LAG functions you can obtain a side-by-side view of the current row and its neighbour.

SQL> select last_name, first_name, department_id, hire_date,
         LAG(hire_date, 1, null) over (partition by department_id
                                 order by hire_date) prev_hire_date
     from employee
    order by department_id, hire_date, last_name, first_name;

LAST_NAME     FIRST_NAME              DEPARTMENT_ID HIRE_DATE PREV_HIRE
————————— —————————————— —————————————————————————— ————————— —————————
Eckhardt      Emily                              10 07-JUL-04
Newton        Donald                             10 24-SEP-06 07-JUL-04
Friedli       Roger                              10 16-MAY-07 24-SEP-06
James         Betsy                              10 16-MAY-07 16-MAY-07
Michaels      Matthew                            10 16-MAY-07 16-MAY-07
Dovichi       Lori                               10 07-JUL-11 16-MAY-07
peterson      michael                            20 03-NOV-08
leblanc       mark                               20 06-MAR-09 03-NOV-08
Jeffrey       Thomas                             30 27-FEB-10
Wong          Theresa                            30 27-FEB-10 27-FEB-10
Newton        Frances                               14-SEP-05

11 rows selected.

LAG(column | expression, offset, default)

Offset is a positive integer that defaults to a value of 1. This parameter tells the LAG function how many previous rows it should go back. A value of 1 means, “Look at the row immediately preceding the current row within the current window.” Default is the value you want to return if the offset value (index) is out of range for the current window. For the first row in a group, the default value will be returned.
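The offset and default behaviour can be illustrated with a small Python sketch of LAG over one ordered partition (the department 10 hire dates from the result above):

```python
def lag(seq, offset=1, default=None):
    """LAG(value, offset, default) over an ordered sequence:
    pair each element with the one `offset` rows earlier,
    or with `default` when the index falls outside the window."""
    return [
        (v, seq[i - offset] if i - offset >= 0 else default)
        for i, v in enumerate(seq)
    ]

hire_dates = ["07-JUL-04", "24-SEP-06", "16-MAY-07"]  # dept 10, ordered
result = lag(hire_dates)
print(result)
# -> [('07-JUL-04', None), ('24-SEP-06', '07-JUL-04'), ('16-MAY-07', '24-SEP-06')]
```

The first row of the partition gets the default (None here, like the blank PREV_HIRE cell for Eckhardt); every other row sees its predecessor.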

RATIO_TO_REPORT:
Business users often need to report on percentages: sales amounts, overall costs, and annual salaries. For example:
"What percentage of the total annual salary allotment does each employee receive?" The syntax for the RATIO_TO_REPORT analytic function is
RATIO_TO_REPORT( column | expression)

Use RATIO_TO_REPORT to obtain the percentage of salaries
SQL> select last_name, first_name, department_id, hire_date, salary,
         round(RATIO_TO_REPORT(salary) over ()*100, 2) sal_percentage
     from employee
    order by department_id, salary desc, last_name, first_name;

LAST_NAME      FIRST_NAME    DEPARTMENT_ID  HIRE_DATE  SALARY  SAL_PERCENTAGE
———————————  ————————————   —————————————— ——————————  ——————  ——————————————
Dovichi        Lori                     10  07-JUL-11
Eckhardt       Emily                    10  07-JUL-04  100000          10.31
Newton         Donald                   10  24-SEP-06   80000           8.25
Michaels       Matthew                  10  16-MAY-07   70000           7.22
Friedli        Roger                    10  16-MAY-07   60000           6.19
James          Betsy                    10  16-MAY-07   60000           6.19
peterson       michael                  20  03-NOV-08   90000           9.28
leblanc        mark                     20  06-MAR-09   65000            6.7
Jeffrey        Thomas                   30  27-FEB-10  300000          30.93
Wong           Theresa                  30  27-FEB-10   70000           7.22
Newton         Frances                      14-SEP-05   75000           7.73

 Note that the analytic function in this query treats the entire set of rows as the window, because OVER () doesn't specify any PARTITION BY, ORDER BY,

or additional windowing clause.
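The arithmetic behind RATIO_TO_REPORT is simply value divided by the sum over the window. A quick Python check using the ten non-NULL salaries from the result above reproduces the SAL_PERCENTAGE column (the NULL salary row is excluded from the total and gets no percentage, as in the SQL output):

```python
# The ten non-NULL salaries from the result above, in display order
salaries = [100000, 80000, 70000, 60000, 60000,
            90000, 65000, 300000, 70000, 75000]

total = sum(salaries)  # the whole result set is one window: OVER ()
pcts = [round(s / total * 100, 2) for s in salaries]

print(total)    # -> 970000
print(pcts[0])  # Eckhardt -> 10.31
print(pcts[7])  # Jeffrey  -> 30.93
```

These match the 10.31 and 30.93 figures in the listing, confirming that OVER () with no partition uses the grand total as the denominator.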

Use RATIO_TO_REPORT to obtain the percentage of salaries, by department 
SQL> select last_name, first_name, department_id, hire_date, salary,
         round(ratio_to_report(salary)
               over(partition by department_id)*100, 2) sal_dept_pct
     from employee
    order by department_id, salary desc, last_name, first_name;

LAST_NAME      FIRST_NAME    DEPARTMENT_ID  HIRE_DATE  SALARY  SAL_DEPT_PCT
——————————  —————————————  ———————————————  —————————  ——————  ————————————
Dovichi        Lori                     10  07-JUL-11
Eckhardt       Emily                    10  07-JUL-04  100000        27.03
Newton         Donald                   10  24-SEP-06   80000        21.62
Michaels       Matthew                  10  16-MAY-07   70000        18.92
Friedli        Roger                    10  16-MAY-07   60000        16.22
James          Betsy                    10  16-MAY-07   60000        16.22
peterson       michael                  20  03-NOV-08   90000        58.06
leblanc        mark                     20  06-MAR-09   65000        41.94
Jeffrey        Thomas                   30  27-FEB-10  300000        81.08
Wong           Theresa                  30  27-FEB-10   70000        18.92
Newton         Frances                      14-SEP-05   75000          100

11 rows selected.
 
 
 In this query the analytic function uses the row set from the window created on department_id:
OVER (PARTITION BY department_id) defines the window.
 
 




Saturday, 7 November 2015

Having Sums, Averages, and Other Grouped Data

When you strive to get an average

1. The business requirement: what is the current average salary for all employees?

select AVG(salary) from employee;
 
  The AVG aggregate function sums the salary values and then divides by the number of employee
  records that do not have a NULL salary.

  Thus the AVG function ignores NULL values.

  To get the answer the business requires, substitute a non-NULL value for NULL:
 
   
select AVG(NVL(salary,0)) from employee;
 
  This returns the exact average salary across all employees.
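The two behaviours can be checked with a quick sketch (Python, with a made-up three-row salary column where None stands in for NULL):

```python
salaries = [100000, None, 80000]  # one employee has a NULL salary

# AVG(salary): NULLs are ignored — divide by the count of non-NULL values
non_null = [s for s in salaries if s is not None]
avg_ignoring_nulls = sum(non_null) / len(non_null)

# AVG(NVL(salary, 0)): NULL treated as 0 — divide by the total row count
avg_with_nvl = sum(s if s is not None else 0 for s in salaries) / len(salaries)

print(avg_ignoring_nulls)  # -> 90000.0
print(avg_with_nvl)        # -> 60000.0
```

The business usually wants the second number when "all employees" includes those with no recorded salary.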
 
 
The difference between COUNT(*) and COUNT(column_name)


 COUNT(*) returns all the records that satisfy the query condition and does not ignore NULL values, whereas COUNT(column_name) excludes the records where that column is NULL.
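A quick sketch of the distinction (Python, None standing in for NULL):

```python
rows = [100000, None, 80000]  # salary column with one NULL

count_star = len(rows)                              # COUNT(*): all rows
count_col = sum(1 for s in rows if s is not None)   # COUNT(salary): NULLs skipped

print(count_star, count_col)  # -> 3 2
```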


Categorization and aggregation of data

  The GROUP BY clause enables you to collect data from multiple records and group it by one or more columns.
 Aggregate functions and the GROUP BY clause are used in tandem to determine the aggregate value for every group.

 count of employees in each department

select COUNT(employee_id), department_id
    from employee
    GROUP BY department_id
    ORDER BY department_id;
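The grouping idea can be sketched in Python (made-up rows; None stands in for an employee with no department), mirroring the query above:

```python
from collections import Counter

# (employee_id, department_id) rows — invented sample data
employees = [(1, 10), (2, 10), (3, 20), (4, 30), (5, None)]

# GROUP BY department_id with COUNT(employee_id)
per_dept = Counter(dept for _, dept in employees)

# ORDER BY department_id (None printed last, like a NULL department)
for dept in sorted(per_dept, key=lambda d: (d is None, d)):
    print(dept, per_dept[dept])
```

Each distinct department becomes one group, and the aggregate (the count) is computed per group.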


When a query uses GROUP BY, every non-aggregate column in the SELECT list must also appear in the GROUP BY clause; otherwise the query raises an error.
  Similarly, the ORDER BY clause can reference only grouped columns or aggregate expressions.

-- ASC, DESC, NULLS FIRST, and NULLS LAST control how null values are placed in an ORDER BY clause; by default Oracle sorts nulls last for an ascending order and first for a descending order.


HAVING the last word

 Just as the WHERE clause filters records from the result set before grouping, keeping only those that satisfy its condition,
  the HAVING clause filters the result of the GROUP BY clause (the categorized data).
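The distinction can be sketched in Python (made-up (employee_id, department_id) rows): WHERE filters rows before grouping, while HAVING filters the groups themselves:

```python
from collections import Counter

# invented sample rows: (employee_id, department_id)
employees = [(1, 10), (2, 10), (3, 20), (4, 30)]

# GROUP BY department_id with COUNT(employee_id)
counts = Counter(dept for _, dept in employees)

# "HAVING COUNT(employee_id) > 1": keep only groups with 2+ employees
big_depts = {dept: n for dept, n in counts.items() if n > 1}
print(big_depts)  # -> {10: 2}
```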
 

 

Friday, 6 November 2015

Purging the SOA Infra

This is a very useful and straightforward process for cleaning up the SOA database schema. In the real world, servers receive millions of requests a day, and keeping all this data as instances in the SOA Suite database schema is very costly. It can affect server performance to some extent. After a few days or months you will probably start receiving tablespace errors, as all the allotted tablespace has already been used by the instances created within the SOA database schema. For this reason you need to plan your tablespace accordingly; on a loaded server it should generally be between 50 GB and 80 GB. And the SOA database still requires regular purging of data.

What data does Oracle SOA Suite 11g (PS6 11.1.1.7) store?

Composite instances utilising the SOA Suite Service Engines (BPEL, mediator, human task, rules, BPM, OSB, EDN etc.) will write data to tables residing within the SOAINFRA schema. Each of the engines will either write data to specific engine tables (e.g. the CUBE_INSTANCE table is used solely by the BPEL engine) or common tables that are shared by the SOA Suite engines such as the AUDIT_TRAIL table.

Which data will be purged by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?

The purge script will delete composite instances that are in the following states:

Completed

Faulted
Terminated by user
Stale
Unknown
The purge script will NOT delete composite instances that are in the following states:

Running (in-flight)
Suspended
Pending Recovery

1. First of all you will require the Repository Creation Utility (RCU) for 11.1.1.4. This installable contains all the purge scripts provided by Oracle to purge the database schema. You can find the purge scripts at RCU_HOME/rcu/integration/soainfra/sql/soa_purge


2. In SQL*Plus, connect to the database AS SYSDBA.

3. Execute the following SQL commands:
                      GRANT EXECUTE ON DBMS_LOCK TO dev_soainfra;
                      GRANT CREATE ANY JOB TO dev_soainfra;

4. Run the purge scripts: RCU_HOME/rcu/integration/soainfra/sql/soa_purge/soa_purge_scripts.sql


5. Execute the SQL block below; a description of each variable is given first:

    min_creation_date : minimum creation date of the instances to purge
    max_creation_date : maximum creation date of the instances to purge
    batch_size : batch size used to loop the purge. The default value is 20000.
    max_runtime : expiration time at which the purge script exits the loop. The default value is 60, specified in minutes.
    retention_period : used only by the BPEL process service engine (in addition to the creation-time parameters). The default value is null.
    purge_partitioned_component : set to true to also delete partitioned data. The default value is false.


DECLARE
   MAX_CREATION_DATE   TIMESTAMP;
   MIN_CREATION_DATE   TIMESTAMP;
   batch_size          INTEGER;
   max_runtime         INTEGER;
   retention_period    TIMESTAMP;
BEGIN
   MIN_CREATION_DATE := to_timestamp('2011-06-23','YYYY-MM-DD');
   MAX_CREATION_DATE := to_timestamp('2011-07-03','YYYY-MM-DD');
   max_runtime       := 15;
   retention_period  := to_timestamp('2011-07-04','YYYY-MM-DD');
   batch_size        := 5000;

   soa.delete_instances(
      min_creation_date           => MIN_CREATION_DATE,
      max_creation_date           => MAX_CREATION_DATE,
      batch_size                  => batch_size,
      max_runtime                 => max_runtime,
      retention_period            => retention_period,
      purge_partitioned_component => false);
END;


 It is very important to note that this script deletes instances from the database schema, but it will not free up the space used by the tables/tablespace.

To free up the space, you can try the following on the affected tables:

ALTER TABLE <table_name> ENABLE ROW MOVEMENT;
ALTER TABLE <table_name> SHRINK SPACE;

JBO-27024: Failed to validate a row with key oracle.jbo.Key

It seems that the key you use is a composite key and the first attribute in this key is null, which is why the row cannot be retrieved.



JBO-27024: Failed to validate a row with key oracle.jbo.Key usually occurs when:
1. A mandatory attribute is not provided a value.
2. An attribute is given an incorrect value for its data type.
3. Any of your other validation rules failed on the EO.
4. The primary key is not based on a sequence and depends on the composite. The solution is to create a surrogate key. If you want to override the DB constraint, you must have the surrogate key populated programmatically every time a new row is created, so that the data is unique at all times. Proceed with a customized error message for each attribute, and make the surrogate key hidden and read-only.



The validation may also be failing because the id attribute is not getting created properly; the row cannot be validated while the id is null. Calling row.validateEntity() when you commit the record helps identify where exactly the error pops up.

If you want to suppress the validation, use skipValidation=true in the pageDef or set immediate to true on the component.

Use Thread.dumpStack(), which will let you know which attribute is being updated with a null value.

Monday, 22 June 2015

Java: One use-case for having a singleton class

Let's consider the following class:

class StringLengthComparator {
    public int compare(String s1, String s2) {
        return s1.length() - s2.length();
    }
}

The StringLengthComparator class is stateless: it has no fields, hence all instances of the class are functionally equivalent.
Thus it should be a singleton to save on unnecessary object creation costs:

class StringLengthComparator {
    private StringLengthComparator() {
    }

    public static final StringLengthComparator INSTANCE = new StringLengthComparator();

    public int compare(String s1, String s2) {
        return s1.length() - s2.length();
    }
}

Improvement:
-----------------
To be able to work with other comparison strategies, we go with an interface-based approach as follows:

// Strategy interface
public interface Comparator<T> {
    public int compare(T t1, T t2);
}

The Comparator interface is generic so that it's applicable to comparators for objects other than strings:

class StringLengthComparator implements Comparator<String> {
    ... // class body is identical to the one shown above
}
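The same idiom can be sketched in Python (an illustrative translation, not part of the original Java): the private constructor is approximated by caching the single instance in __new__:

```python
class StringLengthComparator:
    """Stateless comparator: one shared instance is enough."""
    _instance = None

    def __new__(cls):
        # Always hand back the same cached object
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def compare(self, s1, s2):
        return len(s1) - len(s2)

a = StringLengthComparator()
b = StringLengthComparator()
print(a is b)                   # -> True: both names refer to one object
print(a.compare("ab", "abcd"))  # -> -2
```

Because the class has no per-instance state, sharing one instance is safe and avoids needless object creation, which is the point of the Java version above.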

Saturday, 20 June 2015

Usecase of setting immediate property on UI components

Firstly, let's take a look at what immediate property implies with an example:
=======================================================
Suppose you have a page with a button and a field for entering the quantity of a book in a shopping cart.
If the immediate attributes of both the button and the field are set to true, the new value entered in the field will be available for any processing
associated with the event that is generated when the button is clicked.

The event associated with the button as well as the events, validation, and conversion associated with the field are all handled when request parameter values are applied.
If the button's immediate attribute is set to true but the field's immediate attribute is set to false, the event associated with the button is processed
without updating the field's local value to the model layer.

The reason is that any events, conversion, and validation associated with the field occur after request parameter values are applied.

Now, let's take the following use-case:

The quantity field for each book (see below) does not set the immediate attribute, so the value is false (the default).

<h:inputText id="quantity"
size="4"
value="#{item.quantity}"
title="#{bundle.ItemQuantity}">
<f:validateLongRange minimum="0"/>
...
</h:inputText>

The immediate attribute of the Continue Shopping hyperlink is set to true, while the immediate attribute of the Update Quantities hyperlink is set to false:

<h:commandLink id="continue"
action="bookcatalog"
immediate="true">
<h:outputText value="#{bundle.ContinueShopping}"/>
</h:commandLink>
...
<h:commandLink id="update"
action="#{showcart.update}"
immediate="false">
<h:outputText value="#{bundle.UpdateQuantities}"/>
</h:commandLink>

If we click the Continue Shopping hyperlink, none of the changes entered into the quantity input fields will be processed.
If we click the Update Quantities hyperlink, the values in the quantity fields will be updated in the shopping cart.

Monday, 1 June 2015

Unix: Default permissions on files and folders

First of all let's get familiar with the 3 levels of permission and their octal representation in Unix:

read:         100 (4 in decimal)
write:        010 (2 in decimal)
execute:    001 (1 in decimal)

Therefore,
only read => 4
read/write => 6
read/write/execute => 7


Now, in Unix, if we run the following command:
umask
we'll find the output as 0022
The first '0' says that it's an octal number and '022' represents the mask to be applied.

Let's see how it gets applied to set default permissions on files and folders in Unix.

Files:
====
Initial permission after file creation: 666 (read/write for owner, owner's group and others)
Now,
666 - 022 = 644
leading to =>
for owner                  : 6 (read/write)
for owner's group     : 4 (read)
for others                  : 4 (read)


Folders:
======
Initial permission after folder creation: 777 (read/write/execute for owner, owner's group and others)
Now,
777 - 022 = 755
leading to =>
for owner                   : 7 (read/write/execute)
for owner's group      : 5 (read/execute)
for others                   : 5 (read/execute)
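A note on the arithmetic above: strictly speaking, the umask clears permission bits rather than being subtracted, i.e. the default permission is initial & ~umask; plain subtraction just happens to give the same answer for common masks like 022. A minimal Java sketch of the real operation (class and method names are my own):

```java
// Sketch of how the umask is applied: a bitwise AND-NOT, which clears
// exactly the bits that are set in the mask.
public class UmaskDemo {
    static int applyUmask(int initial, int umask) {
        return initial & ~umask;
    }

    public static void main(String[] args) {
        // Octal literals in Java start with a leading 0.
        System.out.printf("file:   %o%n", applyUmask(0666, 0022)); // 644
        System.out.printf("folder: %o%n", applyUmask(0777, 0022)); // 755
    }
}
```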


Comprehensive ADF Life Cycle Demo: Post I

This post is intended to fix the ADF life cycle firmly in your mind.

Phase I. Restore View Phase
=============================
During this phase, the framework builds the component tree of the view, wires the validators, converters and listeners to the components in the view, and stores the view in the ADFFacesContext object.

Two types of request:
 1. Initial Request: The framework creates an empty view and the life cycle advances to the Render Response phase.
 2. Postback Request: During this request the view already exists in the ADFFacesContext object, so the framework restores it from the client or the server.




Phase II. Apply Request Values
=======================
After the component tree is restored during a postback request, each component in the tree extracts its new value from the request parameters using its decode method. The value is then stored locally on each component.

If immediate=true is set on a component, then the validation, conversion and events associated with that component are processed in this phase.

At this point, if the application needs to redirect to a different web application resource or generate a response that does not contain any JavaServer Faces components, it can call the FacesContext.responseComplete method.




Phase III. Process Validations Phase
==========================

During this phase, the framework processes all validators registered on the components in the tree (via their processValidators method). It examines the component attributes that specify the rules for validation and compares those rules against the local value stored in the component; conversions on the components are also completed in this phase. If a local value is invalid or a conversion fails, the life cycle advances directly to the Render Response phase to render the error messages.

At this point, if the application needs to redirect to a different web application resource or generate a response that does not contain any JavaServer Faces components, it can call the FacesContext.responseComplete method.

Phase IV. Update Model Values
====================
During this phase, once the implementation determines that the data is valid, it traverses the component tree and sets the corresponding server-side object properties to the components' local values.

It updates only the bean properties pointed at by an input component's value attribute.

If the local data cannot be converted to the types specified by the bean properties, the life cycle advances directly to the Render Response phase so that the page is re-rendered with errors displayed.

At this point, if the application needs to redirect to a different web application resource or generate a response that does not contain any JavaServer Faces components, it can call the FacesContext.responseComplete method.


Phase V. Invoke Application
=========================
During this phase, the implementation handles any application-level events, such as submitting a form or linking to another page.

At this point, if the application needs to redirect to a different web application resource or generate a response that does not contain any JavaServer Faces components, it can call the FacesContext.responseComplete method.

Phase VI. Render Response Phase
==========================
This phase builds the view and delegates the task of rendering to the appropriate resource.

If this is an initial request, the components that are represented on the page will be added to the component tree (ADFFacesContext). If this is not an initial request, the components are already added to the tree and need not be added again. 

If the request is a post-back and errors were encountered during the Apply Request Values phase, Process Validations phase, or Update Model Values phase, the original page is rendered again during this phase. If the pages contain h:message or h:messages tags, any queued error messages are displayed on the page.

After the content of the view is rendered, the state of the response is saved so that
subsequent requests can access it. The saved state is available to the Restore View
phase.


 What happens when you click on the Reset button?



1. Going through the life cycle, the validation on the Name and Date fields fires in the Process Validations phase.
2. The fields are not cleared.

What do we expect when we click on the Reset button?
 -- The validation on the Name and Date fields should not be triggered; that is, we want to skip some of the life-cycle phases (Process Validations and Update Model Values).
 -- The Name and Date fields should be cleared.


What about immediate="true"? What is it? What will happen?
Immediate=true
    An immediate command executes its action and action listeners in Phase II (Apply Request Values) and then jumps to Phase VI (Render Response), skipping the Process Validations, Update Model Values and Invoke Application phases.
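The phase-skipping behaviour can be modelled with a small toy sketch (this is not the real JSF/ADF API, just an illustration of which phases run for an immediate versus a non-immediate command):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the life cycle: an immediate command fires its action in
// Apply Request Values and jumps straight to Render Response.
public class LifeCycleSketch {
    static List<String> phasesRun(boolean immediate) {
        List<String> run = new ArrayList<>();
        run.add("Restore View");
        run.add("Apply Request Values");   // immediate actions fire here
        if (!immediate) {
            run.add("Process Validations");
            run.add("Update Model Values");
            run.add("Invoke Application");
        }
        run.add("Render Response");
        return run;
    }

    public static void main(String[] args) {
        System.out.println(phasesRun(true));  // skips validation/update/invoke
        System.out.println(phasesRun(false)); // full life cycle
    }
}
```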

 <af:commandButton text="Reset" id="cb2" immediate="true"
                   actionListener="#{}"/>


And the action listener code:

  public void reset(ActionEvent event) {
      setName(null);
      setDate(null);
      setHelloMessage(null);
  }


Now what will happen when I click on the Reset button?

And here is the output:






  When executing an immediate command, the UI components do not re-evaluate their underlying bindings, so they keep showing the stale values.

 There are three options to solve this problem:

1. Use af:resetActionListener
2. oracle.adf.view.rich.util.ResetUtils.reset()
3. UIComponent.resetValue()

Recommended: af:resetActionListener does NOT reset child regions; use ResetUtils.reset() instead.

  Continue to the post Comprehensive ADF Life Cycle Demo: Post II