Use LOV without af:selectOneChoice

A question on the JDev & ADF forum caught my attention. A user asked how to get the attribute value from a list of values (LOV) without using an af:selectOneChoice component. To make the use case clear, let's look at a list view built from the Departments table of the HR DB schema.

This will produce a very rudimentary output like

Selection_030

This doesn't look very appealing. OK, we can change this to something more meaningful like

Selection_031

But we still see only the key values instead of meaningful attribute values like we get if we use an af:selectOneChoice component.

To get the output of an af:selectOneChoice we need to define a list of values (LOV) on the attribute in the view object, DepartmentsView in this case:

Now, when we drag the DepartmentsView onto a page and drop it as a form or table, we get the af:selectOneChoice component. However, if we create the list view again, nothing changes. JDev uses af:outputText components in this case.

To show the manager's name behind the ManagerId, we could e.g. add another attribute to the view object and get the manager name via a join in the SQL query.

Or we put an af:selectOneChoice into the list view cell, like we get for a cell in a table. This would look like

 <af:panelGroupLayout id="pgl3" layout="horizontal">
   <af:outputFormatted value="ID: #{item.bindings.ManagerId.inputValue} Name:" id="of2"/>
   <af:selectOneChoice value="#{item.bindings.ManagerId.inputValue}" label="#{row.bindings.ManagerId.label}"
     required="#{bindings.DepartmentsView1.hints.ManagerId.mandatory}"
     shortDesc="#{bindings.DepartmentsView1.hints.ManagerId.tooltip}" id="soc3" disabled="true">
     <f:selectItems value="#{item.bindings.ManagerId.items}" id="si3"/>
     <f:validator binding="#{item.bindings.ManagerId.validator}"/>
   </af:selectOneChoice>
 </af:panelGroupLayout>

and generates

Selection_039

The gray rectangle is there because we have set the disabled property to true to disable the component. To get a better look we can set the readOnly property instead and get

Selection_040

which looks much better. However, to get this result we have to add a lot of tags to the page.

The final solution is to use the data which is already present in the model to show the display attribute instead of the key value, like it's done by the framework for af:selectOneChoice. For this we only need one af:outputFormatted tag like

 <af:outputFormatted value="ID: #{item.bindings.ManagerId.inputValue} Name: #{item.bindings.ManagerId.items[item.bindings.ManagerId.inputValue].label}"
 id="of1"/>

This will generate

Selection_041.png

The magic is in the expression language

#{item.bindings.ManagerId.items[item.bindings.ManagerId.inputValue].label}

which uses the items defined for the selectOneChoice and locates the right display attribute in the collection using the attribute value.

You can download the sample application, which was built with JDev 12.2.1.2 and uses the HR DB schema, from GitHub: BlogShowLOVattributeWithoutLOV

JDev12c: Searching an af:tree

On the JDev & ADF OTN space I got a question on how to search an af:tree and select and disclose the nodes found matching the search criteria.

Problem description

We want to search an af:tree component for string values, and if we find the value we want to select the node where we found the search string. If the node where we found the string is a child node we disclose its parent nodes to make it visible.

Final sample Application

I started by building a sample application and show the final result here:

selection_935

We see a tree, a check box and a search field. The check box is used to decide whether to search only the data visible in the tree or the whole data model the tree is built on. The difference is that you build the tree from view objects which can hold more attributes than you want to show in the tree node. This is the case with the sample tree, as we see when we search for e.g. 'sa' in the visible data

selection_936

When we uncheck the check box and repeat the search we get

selection_937

As you see, we found another node '2900 1739 Geneva' which doesn't show the search string 'sa'. A look into the data model at the row behind this node shows

selection_938

We see that the street address, which we don't show in the node, contains the search string. To show that the search works for every node we set the search field to '2' and get hits on different levels

selection_939

The sample application can be downloaded from GitHub. For details on this see the end of this blog.

Implementation

Now that we have seen the final application running, let's look at how to implement it. We start by creating a small ADF Fusion Web Application. If you like, you can start by following the steps given in Why and how to write reproducible test cases.

Model Layer

Once the base application is created we set up the data model we use to build the tree. For this sample we use 'Regions', 'Countries' and 'Locations' of the HR DB schema. To build the model we can use the 'Create Business Components from Tables' wizard and end up with

selection_942

As you see, I've renamed the views. The names now show what you'll see when you use them. We only have one top level view object 'RegionsView' which will be the root of our tree in the UI. The child views are used to show detail data.

View Controller

For the view controller layer we start with a simple page from the 'Quick Layout' section

selection_943

Now we add a title and an af:panelSplitter to the content area. Here we set the width of the first facet to 250 px to have enough room for the search field. We start building the af:tree from the data control by dragging the 'RegionsView' from the data control onto the content area and dropping it as af:tree.

Here we don't select all available attributes to be shown, but only a few. Later we'll see that we can search the whole data model and not just the visible data. Finally we bind the tree to a bean attribute to have access to the tree from the bean when we have searched it. This is pure convenience; we could search the component tree each time we need the component to avoid binding it to a bean attribute. When we create the bean we name it 'TreeSelectionBean' and set its scope to 'request'. The bean ends up in the adfc-config.xml

selection_950
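The bean itself needs little more than the component binding for the tree. A minimal skeleton could look like the following sketch (the package name is an assumption, not taken from the sample):

package de.hahn.blog.treeselection.view.beans; // assumption: package name not shown in the sample

import oracle.adf.share.logging.ADFLogger;
import oracle.adf.view.rich.component.rich.data.RichTree;

public class TreeSelectionBean {
    private static ADFLogger _logger = ADFLogger.createADFLogger(TreeSelectionBean.class);
    // component binding for the af:tree on the page
    private RichTree tree;

    public void setTree(RichTree tree) {
        this.tree = tree;
    }

    public RichTree getTree() {
        return tree;
    }
}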

The final code for the af:tree looks like

<af:tree value="#{bindings.RegionsView.treeModel}" var="node"
         selectionListener="#{bindings.RegionsView.treeModel.makeCurrent}"
         rowSelection="single" id="t1"
         binding="#{TreeSelectionBean.tree}">
  <f:facet name="nodeStamp">
    <af:outputText value="#{node}" id="ot2"/>
  </f:facet>
</af:tree>

Now we create two pageDef variables as java.lang.String to hold the search string and the selection for the check box. If you need more information on how to create pageDef variables see Creating Variables and Attribute Bindings to Store Values Temporarily in the PageDef.

selection_949
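In the pageDef this results in something like the following (a sketch; the generated ids may differ):

<executables>
    <variableIterator id="variables">
        <variable Type="java.lang.String" Name="mySearchString"/>
        <variable Type="java.lang.String" Name="myNodeOnly"/>
    </variableIterator>
    ...
</executables>
<bindings>
    <attributeValues IterBinding="variables" id="mySearchString1">
        <AttrNames>
            <Item Value="mySearchString"/>
        </AttrNames>
    </attributeValues>
    <attributeValues IterBinding="variables" id="myNodeOnly1">
        <AttrNames>
            <Item Value="myNodeOnly"/>
        </AttrNames>
    </attributeValues>
    ...
</bindings>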

In the first facet we add a check box and an af:inputText inside an af:panelGroupLayout and bind the value properties to the pageDef variables as

<af:panelGroupLayout id="pgl2" layout="vertical">
  <af:selectBooleanCheckbox text="node only" label="Search" id="sbc1"
                            value="#{bindings.myNodeOnly1.inputValue}"/>
  <af:inputText label="Search for" id="it1" value="#{bindings.mySearchString1.inputValue}"/>
  <af:button text="Select" id="b1"
             actionListener="#{TreeSelectionBean.onSelection}"/>
</af:panelGroupLayout>

The final thing to do is to wire the button to a bean method which does all the hard work. In the code above this is done with an actionListener which is pointing to the same bean created for the tree binding.


public void onSelection(ActionEvent actionEvent) {
    JUCtrlHierBinding treeBinding = null;
    // get the binding container
    BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
    // get an ADF attributevalue from the ADF page definitions
    AttributeBinding attr = (AttributeBinding) bindings.getControlBinding("mySearchString1");
    String node = (String) attr.getInputValue();

    // nothing to search: clear the disclosed nodes
    if (node == null || node.isEmpty()) {
        RichTree tree = getTree();
        RowKeySet rks = new RowKeySetImpl();
        tree.setDisclosedRowKeys(rks);
        // refresh the tree after the search
        AdfFacesContext.getCurrentInstance().addPartialTarget(getTree());
        return;
    }

    // get an ADF attributevalue from the ADF page definitions
    AttributeBinding attrNodeOnly = (AttributeBinding) bindings.getControlBinding("myNodeOnly1");
    String strNodeOnly = (String) attrNodeOnly.getInputValue();
    // if not initialized, set it to false!
    if (strNodeOnly == null) {
        strNodeOnly = "false";
    }
    _logger.info("Information: search node only: " + strNodeOnly);

    // get the JUCtrlHierBinding reference from the PageDef
    // for JDev 12c use the next two lines to get the tree binding
    TreeModel tmodel = (TreeModel) getTree().getValue();
    treeBinding = (JUCtrlHierBinding) tmodel.getWrappedData();
    // for JDev 11g use the next two lines to get the tree binding
    // CollectionModel collectionModel = (CollectionModel) getTree().getValue();
    // treeBinding = (JUCtrlHierBinding) collectionModel.getWrappedData();
    _logger.info("Information tree value:" + treeBinding);

    // define a node to search in. In this example, the root node is used
    JUCtrlHierNodeBinding root = treeBinding.getRootNodeBinding();
    // however, if the user used the "Show as Top" context menu option to
    // shorten the tree display, then we only search starting from this top node
    List topNode = (List) getTree().getFocusRowKey();
    if (topNode != null) {
        // make the top node the root node for the search
        root = treeBinding.findNodeByKeyPath(topNode);
    }
    RichTree tree = getTree();
    RowKeySet rks = searchTreeNode(root, node.toString(), strNodeOnly);
    tree.setSelectedRowKeys(rks);
    // define the row key set that determines the nodes to disclose
    RowKeySet disclosedRowKeySet = buildDiscloseRowKeySet(treeBinding, rks);
    tree.setDisclosedRowKeys(disclosedRowKeySet);
    // refresh the tree after the search
    AdfFacesContext.getCurrentInstance().addPartialTarget(tree);
}

First we get the value the user entered into the search field from its attribute binding. Then we check whether the user has given a search string at all. If not, we clear the tree by creating a new empty RowKeySet, setting it as the disclosed row keys and refreshing the tree.

If we got a search string we check whether we should search the visible data only or the whole data model. This is done by reading the value of the check box from its attribute binding. Next we get the tree binding (JUCtrlHierBinding) from the value of the tree component.

One thing we have to check before starting the search is whether the user has used the 'show as top' feature of the tree. In this case we only search beginning from the current top node down.

The search is done in a method

private RowKeySet searchTreeNode(JUCtrlHierNodeBinding node, String searchString, String nodeOnly)

To this method we pass the start node, the search string and a flag telling it whether to search the whole data model or only the visible part. The method returns a RowKeySet containing the keys of the rows containing the search string (a sketch of a possible implementation is shown after the helper method below). This set of row keys we set on the tree as the selected rows. As we would like to disclose all rows which we have found, we have to do one more step. This step takes each row key and traverses upward in the tree, adding all parent nodes until the node where we started the search is reached. This is necessary as you only see a disclosed child node in a tree if its parent node is disclosed too. For this we use the helper method buildDiscloseRowKeySet below and set the resulting row keys as the disclosed rows in the tree.


 /**
  * Helper method that returns a list of parent node keys for the RowKeySet
  * passed as the keys argument. The RowKeySet can be used to disclose
  * the folders in which the keys reside. Note that to disclose a full
  * branch, all row keys that are in the path must be defined
  *
  * @param treeBinding ADF tree binding instance read from the PageDef file
  * @param keys RowKeySet containing List entries of oracle.jbo.Key
  * @return RowKeySet of parent keys to disclose
  */
 private RowKeySet buildDiscloseRowKeySet(JUCtrlHierBinding treeBinding, RowKeySet keys) {
     RowKeySetImpl discloseRowKeySet = new RowKeySetImpl();
     Iterator iter = keys.iterator();
     while (iter.hasNext()) {
         List keyPath = (List) iter.next();
         JUCtrlHierNodeBinding node = treeBinding.findNodeByKeyPath(keyPath);
         if (node != null && node.getParent() != null && !node.getParent().getKeyPath().isEmpty()) {
             // store the parent path
             discloseRowKeySet.add(node.getParent().getKeyPath());
             // call the method recursively until no parents are found
             RowKeySetImpl parentKeySet = new RowKeySetImpl();
             parentKeySet.add(node.getParent().getKeyPath());
             RowKeySet rks = buildDiscloseRowKeySet(treeBinding, parentKeySet);
             discloseRowKeySet.addAll(rks);
         }
     }
     return discloseRowKeySet;
 }
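The searchTreeNode method itself is not part of the listings above. A possible implementation is sketched below; it is only a sketch which assumes that the node-only search can use the node's display string (node.toString()) while the full search inspects all attribute values of the row behind the node:

 private RowKeySet searchTreeNode(JUCtrlHierNodeBinding node, String searchString, String nodeOnly) {
     RowKeySetImpl resultKeys = new RowKeySetImpl();
     if (node == null || searchString == null) {
         return resultKeys;
     }
     String search = searchString.toLowerCase();
     boolean found = false;
     if ("true".equalsIgnoreCase(nodeOnly)) {
         // search only the data visible in the node label (assumption: the label is the node's toString())
         String label = node.toString();
         found = (label != null && label.toLowerCase().contains(search));
     } else if (node.getRow() != null) {
         // search all attributes of the row behind the node
         for (Object val : node.getRow().getAttributeValues()) {
             if (val != null && val.toString().toLowerCase().contains(search)) {
                 found = true;
                 break;
             }
         }
     }
     if (found) {
         // remember the key path of the matching node
         resultKeys.add(node.getKeyPath());
     }
     // recurse into the child nodes
     List children = node.getChildren();
     if (children != null) {
         for (Object child : children) {
             resultKeys.addAll(searchTreeNode((JUCtrlHierNodeBinding) child, searchString, nodeOnly));
         }
     }
     return resultKeys;
 }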

This concludes the implementation of the search in a tree.

Download

The sample application uses the HR DB schema and can be downloaded from GitHub

The sample was built using JDev 12.2.1.2.

 

Undo Reorder of Columns in af:table

A question on OTN asked how a reorder of columns in an af:table can be undone. In this blog I show how to undo such a reorder and show the columns of an af:table in their natural order again.
The natural order is defined when you create the table. You can move the attributes in the create dialog or delete attributes you don’t want to see in the UI from the table.

In the image above we see the dialog after we drop a VO as a table onto a page. To change the order of the columns in the table you can use the arrows on the right (in the red rectangle). Once you save the table you can reorder the columns in the property editor of the af:table.

img00003

The order of the columns you see in the dialog or the property editor is what is called the default order of the columns. This default order can be different from the order of the attributes in the query the VO is based on.
The page we drop the af:table on is very simple. It is built from a quick layout and has a header for the page title and a panelCollection which holds the table.

img00008

We can reorder the columns in the UI by dragging a column and dropping it at a different location.

The question now is how to undo this manual reorder without refreshing the browser window.

To understand how this is implemented, we need to look at how the reorder is done in the first place. A table is built from one or more columns. Each of the columns describes the data to be shown in the column, the header to show and the display index, which is the order of the columns shown in the UI. If the display index is less than zero (e.g. -1) the default order is used. Any other positive number is used to show the columns in ascending order of the display index.
To undo any reorder of the columns in an af:table we simply have to get to each column and set its display index to -1.

public class UndoColumnReorderBean {
    private static ADFLogger _logger = ADFLogger.createADFLogger(UndoColumnReorderBean.class);
    private RichTable table;

    public UndoColumnReorderBean() {
    }

    public void undoColumnReorder(ActionEvent actionEvent) {
        _logger.info("Undo reorder...");
        // get the tables child components
        List<UIComponent> children = this.table.getChildren();
        for (UIComponent comp : children) {
            // check if the child is a column
            if (comp instanceof RichColumn) {
                RichColumn col = (RichColumn) comp;
                // if the display index is 0 or greater, set it to -1
                if (col.getDisplayIndex() >= 0) {
                    _logger.info("...unset column "+col);
                    col.setDisplayIndex(-1);
                }
            }
        }
        _logger.info("... done!");
    }

    public void setTable(RichTable table) {
        this.table = table;
    }

    public RichTable getTable() {
        return table;
    }
}

The bean above has a method undoColumnReorder which is an action event listener triggered by clicking the 'Undo Column Reorder' button. This method uses the af:table component which is bound to the bean as a property. It iterates over the child components of the table, checking if the child is a RichColumn (an af:column in the UI), and if yes sets its display index to -1.
To show the change in the UI, we have to ppr the table by adding the button as a partialTrigger to the table, as sketched below.

img00007
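In the page markup this could look roughly like the following sketch (the component ids, the view object name, the managed bean name 'UndoColumnReorderBean' and the relative partialTriggers path are assumptions, not taken from the sample):

<af:button text="Undo Column Reorder" id="b1"
           actionListener="#{UndoColumnReorderBean.undoColumnReorder}"/>
...
<af:table value="#{bindings.EmployeesView1.collectionModel}" var="row" id="t1"
          binding="#{UndoColumnReorderBean.table}" partialTriggers="::b1">
    <!-- af:column definitions omitted -->
</af:table>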

After clicking the button in the UI the table again looks like

img00004

so the default order of the columns is shown again.

You can download the application from GitHub BlogUndoColumnReorder. The sample is built using JDev 12.2.1.2, but you can do the same with any other JDev 11g or 12c version you use. It uses the HR DB schema.

Navigating an af:table in pagination mode from a bean

A question on the JDeveloper and ADF OTN forum asked how to navigate to a specific page of an af:table in pagination mode. As of JDeveloper 11.1.1.7.0, ADF tables can be rendered in scroll mode or in pagination mode, where only a specific number of rows is visible in the table.

af:table in pagination mode

To navigate the pages there is a small navigation toolbar below the table which allows you to enter a page number or to navigate to the previous, next, first or last page.

The problem to solve is how to navigate the paginated table from within a Java bean.

The table doesn't offer any navigation listeners or methods you can bind bean methods to. Luckily there is the RangeChangeEvent, one of the FacesEvents, which can be used to notify a component that a change in the range has taken place.

All we have to do to navigate the table in pagination mode is to calculate the needed parameters

  • oldStart: The previous start of this UIComponent’s selected range, inclusive
  • oldEnd: The previous end of this UIComponent’s selected range, exclusive
  • newStart: The new start of this UIComponent’s selected range, inclusive
  • newEnd: The new end of this UIComponent’s selected range, exclusive

We add an input field to the page which allows us to enter a page number, and a button which we use to call an action listener in a bean, as sketched below.
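A sketch of that part of the page could look like this (component ids and the managed bean name 'PaginationBean' are assumptions; the value binding points to the pageDef variable read in the listener further down):

<af:inputText label="Go to page" id="it1" value="#{bindings.gotopage1.inputValue}"/>
<af:button text="Go to page" id="b1" actionListener="#{PaginationBean.onGotoPage}"/>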

The running application looks like

Running application

Another button is used to calculate the index of the selected row in the whole rowset, the index on the page and the page number. The row index and the index of the row on the page are zero based, page numbers start with 1. Let’s look at the code:

public void onGotoPage(ActionEvent actionEvent) {
    BindingContainer bindingContainer = BindingContext.getCurrent().getCurrentBindingsEntry();
    // get number of page to go to
    AttributeBinding attr = (AttributeBinding) bindingContainer.getControlBinding("gotopage1");
    Integer newPage = (Integer) attr.getInputValue();
    if (newPage == null) {
        return;
    }
    // page one starts at index 0, so subtract 1 from the page number
    newPage--;
    DCIteratorBinding iter = (DCIteratorBinding) bindingContainer.get("EmployeesView1Iterator");
    // calculate the old and new ranges for the RangeChangeEvent
    int range = iter.getRangeSize(); // note: both the table and this code take the page size from the iterator's rangeSize
    int oldStart = iter.getRangeStart();
    int oldEnd = oldStart + range;
    int newStart = newPage * range;
    int newEnd = newStart + range;
    // find the table
    UIViewRoot iViewRoot = FacesContext.getCurrentInstance().getViewRoot();
    UIComponent table = iViewRoot.findComponent("t1");
    // build the event and fire it
    RangeChangeEvent event = new RangeChangeEvent(table, oldStart, oldEnd, newStart, newEnd);
    ((RichTable) table).broadcast(event);
    // update the table
    AdfFacesContext.getCurrentInstance().addPartialTarget(table);
}

First we read the new page number we want to navigate to from the attribute binding and subtract 1 from it, as the page is zero based internally. Then we get the iterator binding, which we need to read the range size and the start of the current range. These values give us oldStart and oldEnd. The new start range is the page to go to multiplied by the range size, and newEnd is newStart plus the range size.
Next we look up the table component on the page, create the RangeChangeEvent and broadcast the event to the table component. Finally we ppr the table to see the change in the UI.

To show how to calculate the other way around, from the selected row in a table to the index on the page, the page number and the index in the rowset, we added another button 'GetPageOfSelectedRow' which calls a listener in the same bean that builds a string with the needed information.

public void onGetCurrentPage(ActionEvent actionEvent) {
    BindingContainer bindingContainer = BindingContext.getCurrent().getCurrentBindingsEntry();
    DCIteratorBinding iter = (DCIteratorBinding) bindingContainer.get("EmployeesView1Iterator");
    // calculate index and page number. Index is zero based!
    int currentRowIndex = iter.getRowSetIterator().getCurrentRowIndex();
    _logger.info("CurrentRowIndex: " + currentRowIndex);
    int currentPage = currentRowIndex / iter.getRangeSize();
    currentPage++;
    _logger.info("Current Page:" + currentPage);
    int indexOnPage = (currentRowIndex % iter.getRangeSize());
    _logger.info("Current index on Page:" + indexOnPage);
    // get an ADF attributevalue from the ADF page definitions
    AttributeBinding attr = (AttributeBinding) bindingContainer.getControlBinding("selectedRow1");
    StringBuffer sb = new StringBuffer();
    sb.append("row index overall: ");
    sb.append(currentRowIndex);
    sb.append(" row index on page: ");
    sb.append(indexOnPage);
    sb.append(" Page: ");
    sb.append(currentPage);
    attr.setInputValue(sb.toString());
}

To get the index of the selected row in the whole rowset we need the iterator and get the RowSetIterator from it. The RowSetIterator method getCurrentRowIndex() returns the index of the current row. The current page is calculated by dividing the current index by the range size (and adding 1, as page numbers start with 1). The final information is the index of the selected row on the page, which is calculated as the current index modulo the range size. The rest of the listener builds a string out of this information and writes it to a pageDef variable which is referenced in an output field on the page.

<af:outputText value="#{bindings.selectedRow1.inputValue}" id="ot8" partialTriggers="b2"/>

Here are some images from the sample application.

The sample application is built using JDev 12.1.3 and uses the HR DB schema. The sample can be downloaded from GitHub.

JDev 12.2.1: Remote Task Flows in Action

The new JDeveloper version 12.2.1 is just out and has a lot of new features to investigate. In this post we see how remote task flows work. Yes, they are finally here and they are working. At least if you install a patch available from support.oracle.com.
The downloadable version of JDev 12.2.1 has a small bug which prevents you from running remote task flows (refer to https://community.oracle.com/thread/3816032). Support and the dev team quickly delivered a patch for this. To get the patch, open a service request and ask for a patch for bug 22132843.

Let's start. We need two applications to show how remote task flows are implemented. One is the remote task flow producer, one consumes the remote task flow. An application can be both producer and consumer. For this sample we keep it simple and define one app as producer and one as consumer.

Producer Application
This application is really simple as it consists of only one page and one task flow which shows the departments and their employees of the HR DB schema.

Remote Task Flow Producer Application


The image above shows the running application stand alone. The single page has the header and a simple task flow beneath it to show the departments and their employees.


There are two properties to set in the task flow:
1) it must be remotely invocable
2) the transaction must be isolated

Next we have to make the application aware that it should be a remote task flow producer. For this we edit the projects properties and select the ‘ADF Task Flow’ node.

Project Properties for Producer Application



Please note that the second checkbox is selected, which allows anonymous users to access the remote task flow. This should not be used in a production environment, as it would allow anybody to access the task flow. The doc shows how to secure the access to a remote task flow (see the link below).

These settings will add a special servlet and a servlet filter to the web.xml file of the application.

There are more things to consider which you find in the docs at How to Configure an Application to Render Remote Regions

That’s it for the simple producer application.

Consumer Application
The second application is simple too. Here we use a single page which again uses the HR DB schema to show the departments as an editable table in a panel splitter. On the right of this we show the remote task flow of the producer application.

Consumer Application



In the image above the remote task flow isn’t visible as it is not added at the moment.
To make the remote task flow available we need to run the producer application. Here we have to be careful if we try this out using the embedded WebLogic Server. As only one application can be started in debug mode, we need to start the producer application as a normal application.
Run Producer Application


In the consumer application we set the project properties for the ADF Task Flow to allow it to consume remote task flows

Consumer Application Project Properties


Now we create a remote task flow connection. Open the resource palette and select to create a ‘Remote Region Producer…’ from the IDE connections.
Here we fill in the needed info like the path to the remote producer servlet which will get us the names of all remote task flows the application holds. To access the remote task flow we define the URL endpoint


The details about what to fill in are again from the doc.

In the consumer application we now open the one page, drag the remote task flow from the resource palette onto the page and drop it into the right hand splitter facet.

Drop Remote Region in Consumer Application



This will give us the familiar image in design mode, as if we had used a normal region
Consumer Page



We are ready to run the consumer application and get
Running Consumer Application


Nice!

You can download the sample application from GitHub:
Consumer Application
Producer Application
Both applications use the HR DB schema. Make sure to adjust the DB connection to point to your DB server.

Fasten your seat belts: Flying the Oracle Development Cloud Service (3 – Take Off – V1)

In part three of the series about the Oracle Developer Cloud we start working on a project as a member of a team in the developer cloud.

Before starting a new project some basic ground has to be covered: which architecture and technology should the project use, and which package path to use. For the technology the decision is easy, as we want to use ADF. For the architecture we can choose one of the patterns outlined at ‘Angles in the architecture’.

A good starting point for every ADF project, regardless of the architectural pattern, is a framework extension project (see ‘Extending a Helping Hand’). So we start with this too.

As a developer can’t create a new repository in a cloud project, we have to do this as a user with admin rights.


The first thing to note is that you should create an empty repository (uncheck the ‘Initialize repository with README file’ option). If you initialize the repository with a README file, the developer can't later just push his initial local version of the JDev workspace into the remote repository. The local repository would have to be updated with the README file first.

Now that the remote, empty repository exists we switch roles and work as a developer. For this we use a different login as a user who only has developer rights in the Oracle Developer Cloud.
Before the developer uses the new repository he creates a new workspace or project. We create a workspace for the framework extension library.


Next we add the ‘ADF Model Runtime’ library to the project and then the framework extension classes to the workspace.

Right now we don’t need to add or change any of the code in the created classes. If we later need to add some global functionality we come back to these classes. The next thing to do is to create an ADF library from the created classes

To make the new library available for other projects you can create a new file system connection using the same path we specified in the deployment descriptor

Later we come back to this step, as we'll see that we have to change it a bit to make it work in the cloud. Right now we leave it as is, as this shows how you would normally do this in a normal project.
The next thing to do is to initialize a local GIT repository and push this to the Oracle Development Cloud repository as the initial master

and then push the local master branch to the Oracle Developer Cloud repository. For this we use the repository URL we get when we log into the cloud as the developer
Copy git repository address



Using this URL we push the local repository to the remote one

to finally see the changes in the cloud

More Decisions
With the basics covered we have to make another decision:
How do we define the workflow for changes made to the project sources?
Should all team members work on the trunk (called the mainline) or should each member use a branch to work on (called a feature branch)? Both of these practices have their supporters and, naturally, opponents. The first is more CI-like per definition. Feature branches, on the other hand, are not CI by definition, as the code is not continuously integrated into the mainline. This dispute is not for this post and maybe not for this blog. Anyway, let's start with feature branching.
This allows us to show a feature of the Oracle Developer Cloud: code reviews, which are mostly used when you work with feature branches, but can be used for the other practice too.

Feature: add build files
The feature we implement is to set up a build system for our framework extension project. We name the feature ‘feature-setup-build’


We learned in part 1 that the Oracle Developer Cloud provides a Continuous Integration (CI) server. We plan to use this CI server to build our library whenever the code changes. For this we need to use ANT or Maven as the build system. For this project we choose ANT and can now generate the needed build.xml files from the project.
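Stripped down to the essentials, such a build file could look like the sketch below (illustrative only; the files JDeveloper generates are more elaborate and read their settings from build.properties):

<?xml version="1.0" encoding="UTF-8"?>
<project name="BlogFrameworkExtension" default="all" basedir=".">
    <!-- illustrative property; the generated build.properties defines the real values -->
    <property name="output.dir" value="classes"/>
    <target name="init">
        <mkdir dir="${output.dir}"/>
    </target>
    <target name="compile" depends="init" description="Compile the project sources">
        <!-- the generated file also puts the ADF Model Runtime jars on the classpath -->
        <javac srcdir="src" destdir="${output.dir}"/>
    </target>
    <target name="all" depends="compile" description="Build the whole project"/>
</project>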

To finish this part we add the new files to our local repository and then push them to the remote as a new branch.

We push the local changes to the remote repository in the cloud using the same branch name

We have not looked into the created build.xml or build.properties files; we just had them generated and pushed them into the repository. The question is, will they work?
Let’s try it on the local machine first.

Now we can run the ANT target ‘all’ which is the default one.
Well, as JDev 12.1.3 has somehow eliminated the ANT toolbar buttons, running ANT on a project is a bit cumbersome (hopefully the ANT build buttons are back with the next release).

OK, this works like a charm.

As this post is already very long, we split the take off into two parts, V1 and ROTATE. This concludes part V1. Next time we make the necessary changes to the build files to integrate them into the cloud's build system and start the CI process.

Note: for those who wonder about the terms V1 and ROTATE:
– V1 is the maximum speed at which an aircraft pilot may abort a takeoff without causing a runway overrun
– ROTATE or Vr is the speed of an aircraft at which the pilot initiates rotation to obtain the scheduled takeoff performance

JDeveloper: Delete Row from POJO based Table

A question on the JDeveloper & ADF OTN forum about removing rows from a table which is based on a list of POJOs provides the reason for this blog post. The implementation, as simple as it is, holds a surprise.

The sample application built for this post shows the POJO and the list of POJOs built from it. The list is lazily initialized the first time it's accessed (see https://tompeez.wordpress.com/2014/10/18/lazy-initalizing-beans/ for more info on this technique).
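The lazy initialization could look roughly like the following sketch in the TableHandlerBean (the PhoneInfoDTO constructor and the sample values are assumptions, not taken from the sample):

    private List<PhoneInfoDTO> phoneInfoList;

    public List<PhoneInfoDTO> getPhoneInfoList() {
        // build the list only when it's accessed for the first time
        if (phoneInfoList == null) {
            phoneInfoList = new ArrayList<PhoneInfoDTO>();
            // fill the list with some demo data (values and constructor are made up for this sketch)
            phoneInfoList.add(new PhoneInfoDTO(1, "mobile", "555-0100"));
            phoneInfoList.add(new PhoneInfoDTO(2, "office", "555-0101"));
        }
        return phoneInfoList;
    }

On the only page a table is built from this list: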

             <af:table value="#{viewScope.TableHandlerBean.phoneInfoList}" var="row" rowBandingInterval="0" id="t1" columnStretching="multiple"
                        rowSelection="single" varStatus="rowStatus" styleClass="AFStretchWidth">
...

In the last column we add a button which should remove the row.

...
                <af:column sortable="false" headerText="Remove" id="c4">
                  <af:commandButton text="Remove" id="cb1" actionListener="#{viewScope.TableHandlerBean.onRemoveAction}">
                    <af:setPropertyListener from="#{rowStatus.index}" to="#{viewScope.TableHandlerBean.currentSelectedIndex}" type="action"/>
                  </af:commandButton>
                </af:column>
...

As the table is built on a list, we can't use the default selection listener to get the selected row. Instead we use a setPropertyListener to pass the selected row index to a viewScope variable.

   public void onRemoveAction(ActionEvent actionEvent) {
        Integer currentIndex = getCurrentSelectedIndex();
        logger.info("Removing with index : >> " + currentIndex);
        logger.info("Removing with size : >> " + phoneInfoList.size());
        logger.info("Value in List ** "+phoneInfoList.get(currentIndex).getSequence()+" phone "+phoneInfoList.get(currentIndex).getPhoneType());
        phoneInfoList.remove(currentIndex);
        logger.info("size after removing : >> " + phoneInfoList.size());
        UIComponent ui = (UIComponent) actionEvent.getSource();
        AdfFacesContext.getCurrentInstance().addPartialTarget(ui.getParent().getParent());
    }

   public Integer getCurrentSelectedIndex() {
        return currentSelectedIndex;
    }

The actionListener we use for the remove button picks up the row index and uses the remove() method of the List interface to remove the row from the list.
Finally we send a ppr to the table to visualize the change.
The surprise comes when we run the code and find that removing the selected row doesn’t work.

We don’t get an error message in the log. This is surprising as a simple delete of an element from an ArrayList should not be a problem.

The question is: what happened and why?
Do you spot the problem in the code above?
A look at the Javadoc API for the List interface

JavaDoc of List interface (part)



shows two remove methods. We passed the index of the row, so it should have worked!
But wait, a closer look at the JavaDoc shows that one of the remove methods takes an object as parameter. Well, the return type of the getCurrentSelectedIndex() method is an Integer object!
Now it's clear. As we passed an object to the remove method, the list implementation looks for an object which equals the passed object. Not finding one, it doesn't remove anything from the list and doesn't give an error message, as this can be a normal result of this operation.
The only hint we could have gotten is that this remove method returns false instead of the removed element when nothing was removed. This we did not check.
Anyway, changing the code to

    /**
     * Listener for the remove button in a table row
     * @param actionEvent trigger of the event
     */
    public void onRemoveAction(ActionEvent actionEvent) {
        // get the index to remove
        Integer currentIndex = getCurrentSelectedIndex();
        logger.info("Removing with index : >> " + currentIndex);
        logger.info("Removing with size : >> " + phoneInfoList.size());
        logger.info("Value in List ** " + phoneInfoList.get(currentIndex).getSequence() + " phone " + phoneInfoList.get(currentIndex).getPhoneType());
        // remove the index. Here we need to pass the int value, as the Integer would be interpreted as the object to delete.
        // As this object can't be found, the list would stay as is. There wouldn't even be an error message.
        // To find out if the object was removed you have to check the return value: if it's not null it is the element which has been removed.
        PhoneInfoDTO dTO = phoneInfoList.remove(currentIndex.intValue());
        logger.info("size after removing : >> " + phoneInfoList.size());
        if (dTO == null) {
            logger.warning("Element with index " + currentIndex + " not found in list!");
        }
        // get the source and trigger a ppr on the container the table resides in
        UIComponent ui = (UIComponent) actionEvent.getSource();
        AdfFacesContext.getCurrentInstance().addPartialTarget(ui.getParent().getParent().getParent());
    }

makes the application run as desired. The trick is to pass the int value of the currentIndex Integer object.

You can download the sample (the final working one) from GitHub: BlogPoJoTableDeleteRow.zip. The sample runs without a DB connection.

Export to Excel enhancements in JDeveloper 11.1.1.9.0 and JDeveloper 12.1.3

In the current JDeveloper versions 11.1.1.9.0 and 12.1.3 the af:exportCollectionActionListener got enhanced by options to filter the data to export.

Enhanced options of exportCollectionListener



The option this blog talks about is the one marked, the FilterMethod. The documentation at 12.12 Exporting Data from Table, Tree, or Tree Table does not reveal too much about how to use this FilterMethod.
The sample we build in this blog entry shows how the FilterMethod can be used to filter the data to be exported to Excel.
In older versions of JDev you had to use a trick to filter the data which was downloaded from a table, see Validate Data before Export via af:exportCollectionActionListener or af:fileDownloadActionListener. The new property of the af:exportCollectionActionListener allows filtering the data without using this trick.
The sample just loads the employees table from the HR DB schema and shows it in a table on the screen. In the toolbar we add a button which has the af:exportCollectionActionListener attached.
Running application



Below is the page code of the toolbar holding the export button:

                            <f:facet name="toolbar">
                                <af:toolbar id="t2">
                                    <af:button text="Export to Excel" id="b1">
                                        <af:exportCollectionActionListener type="excelHTML" exportedId="t1" filename="emp.xsl" title="Export"
                                                                           filterMethod="#{ExportToExcelBean.exportCollectionFilter}"/>
                                    </af:button>
                                </af:toolbar>
                            </f:facet>

The filterMethod of the af:exportCollectionActionListener points to a bean method exportCollectionFilter in a request scoped bean ExportToExcelBean. The method gets called for each cell of the table which gets exported.

    /**
     * This method gets called for each cell which is to be exported.
     * It can be used to filter data to be exported. In this case salary values > 6000 are not exported
     * @param uIComponent component of the cell which is to be exported
     * @param exportContext context of the exported data (holds e.g. file name, character set...)
     * @param formatHandler format to be exported
     * @return true if cell value is exported, false if not
     */
    public Boolean exportCollectionFilter(UIComponent uIComponent, ExportContext exportContext, FormatHandler formatHandler) {
        if (exportContext.isFirstInRow()) {
            count++;
            _logger.info("Start a new Row " + count);
        }
        _logger.info("Export Collection UIComponent: " + uIComponent.getId());
        if (uIComponent instanceof RichOutputText) {
            RichOutputText rot = (RichOutputText) uIComponent;
            Object val = rot.getValue();
            String headerText = "";
            UIComponent component = rot.getParent();
            if (component instanceof RichColumn) {
                RichColumn col = (RichColumn) component;
                headerText = col.getHeaderText();
            }
            StringBuilder sb = new StringBuilder();
            sb.append("Name: ");
            sb.append(headerText);
            sb.append(" Value: ");
            sb.append(val);
            _logger.info(sb.toString());
            // check if the salary is greater than 6000
            if ("Salary".equals(headerText)) {
                if (((BigDecimal) val).intValue() > 6000) {
                    // if yes, return false so that the value isn't exported
                    _logger.info("Skip Vals > 6000");
                    return false;
                }
            }
        }

        return true;
    }

The method gets the uIComponent which represents the current cell to be exported, the ExportContext and the FormatHandler for the export. The ExportContext holds information about the filename, title, the used character set and status information about the row and cells currently exported. The status can be used to find out if a new row has just started to be exported or if a cell is part of a span of cells. In the sample we use this information to print a log message for each row exported.
The FormatHandler is used to generate the document to be exported and the data in it. I did not find a way to use my own handler and there is no documentation about how to use another handler, so we leave this as is for the moment.
In the sample method we want to filter the employee data so that salaries greater than 6000 are not exported to the resulting file. As the method is called for each cell, the first thing to find out is which cell is currently being exported. We use the current UIComponent and its parent column to find out which column we are in. Then we check for the Salary column. In case the salary value is greater than 6000 we return false, which triggers that the cell value is not exported. If the salary is below or equal to 6000 we return true and the cell value is exported.
Below we see the result we get if we export the table without the filterMethod set:

Exported table without filter



and the result with the filter method set:
Exported table with filter


You can download the sample application which was build using JDeveloper 12.1.3 and the HR DB schema from GitHub.

dvt:treemap showing node detail in popup

This post describes how to implement a dvt:treemap which shows an af:popup when the user clicks on a detail node in the map.
The documentation of the dvt:treemap component tells us that the dvt:treemapNode supports the af:showPopupBehavior tag and reacts on the 'click' and 'mouseHover' events.
This is part of the solution and allows us to begin implementing the use case. We add an af:showPopupBehavior to the nodes we want to show detail information for.

After creating a default Fusion Web Application which uses the HR DB schema, we begin with creating the data model for the model project. For this small sample the departments and employees tables will be sufficient.


The views are named according to their usage to make it easier to understand the model. This is all we need for the model.

Let's start with the UI, which only consists of a single page. The page has a header part and a center part. In the center area we build the treemap by dragging the Departments from the data controls onto the page and dropping it as treemap. After that, in the dialog we specify the first level of the map to be the departmentId (which shows the department name as the label), and for the second level we choose the employeeId (which shows the last name of the employee as label) from the employees. The whole process is shown in the gallery below.


The resulting treemap is very basic in its features, e.g. there is no legend, as you see later.
In the next step we create an af:popup to show the node's detail information. This process is outlined in the next gallery. We drag the popup component onto the page below the af:treemap component.

One thing to take note of are the properties of the popup. First we set the content delivery to 'lazyUncached', which makes sure that the data is loaded every time the popup is opened. Otherwise we'll see only the data from the first time the popup has been opened. The second change is to set the launcherVar to 'source'. This is the variable name we later use to access the node data. The third change is to set the event context to 'launcher'. This means that events delivered by the popup and its descendants are delivered in the context of the launch source. A sketch of the resulting markup is shown below.
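In markup the popup with these properties could look roughly like this (the ids are assumptions; the popupFetchListener is added in a later step, and the dialog content is described further down):

<af:popup id="p1" contentDelivery="lazyUncached" launcherVar="source" eventContext="launcher">
    <af:dialog id="d2" title="Node Detail" type="ok">
        <af:outputText value="#{bindings.nodeInfo1.inputValue}" id="ot3"/>
    </af:dialog>
</af:popup>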

For the treemap, for example, when an event is delivered 'in context', the data for the clicked node is made 'current' before the event listener is called. So if getRowData() is called on the collectionModel in the event listener, it returns the data of the node that triggered the event. This is exactly what we need.

Finally we add a popupFetchListener to the popup which we use to get the data from the current node to a variable in the bindings. In the sample this variable ‘nodeInfo’ is defined in the variable iterator of the page and an attribute binding ‘nodeInfo1’ is added. More info on this can be found here.


The code below shows the popupFetchListener:

package de.hahn.blog.treemappopup.view.beans;

import javax.el.ELContext;
import javax.el.ExpressionFactory;

import javax.faces.application.Application;
import javax.faces.context.FacesContext;

import oracle.adf.model.BindingContext;
import oracle.adf.share.logging.ADFLogger;
import oracle.adf.view.rich.event.PopupFetchEvent;

import oracle.binding.AttributeBinding;
import oracle.binding.BindingContainer;


/**
 * Treemap handler bean
 * @author Timo Hahn
 */
public class TreemapBean {
    private static ADFLogger logger = ADFLogger.createADFLogger(TreemapBean.class);

    public TreemapBean() {
    }

    /**
     * listen to popup fetch.
     * @param popupFetchEvent event triggerd the fetch
     */
    public void fetchListener(PopupFetchEvent popupFetchEvent) {
        // retrieve node information 
        String lastName = (String) getValueFromExpression("#{source.currentRowData.lastName}");
        Integer id = (Integer) getValueFromExpression("#{source.currentRowData.EmployeeId}");
        //build info string
        String res = lastName + " id: " + id;
        logger.info("Information: " + res);
        // get the binding container
        BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();

        // get an ADF attributevalue from the ADF page definitions
        AttributeBinding attr = (AttributeBinding) bindings.getControlBinding("nodeInfo1");
        //set the value to it
        attr.setInputValue(res);
    }

    // get a value as object from an expression
    private Object getValueFromExpression(String name) {
        FacesContext facesCtx = FacesContext.getCurrentInstance();
        Application app = facesCtx.getApplication();
        ExpressionFactory elFactory = app.getExpressionFactory();
        ELContext elContext = facesCtx.getELContext();
        Object obj = elFactory.createValueExpression(elContext, name, Object.class).getValue(elContext);
        return obj;
    }
}

Finally we have to design the popup to show the node info from the attribute binding ‘nodeInfo1’. The popup uses a dialog with an af:outputText like

Show the node info in the popup



and we set an af:showPopupBehavior on the node showing the employees, roughly like the sketch below.
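A sketch of the node markup (the ids and the relative popupId are assumptions):

<dvt:treemapNode id="tn2" label="#{node.LastName}">
    <!-- open the popup when the user clicks an employee node -->
    <af:showPopupBehavior popupId="::p1" triggerType="click"/>
</dvt:treemapNode>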

Running the finished application brings up the treemap, not pretty but enough to see this use case working. If we click on an employee node we see the popup with the last name of the employee and the employee id, the primary key of the selected row in the employees iterator.

You can download the sample application which was build using JDeveloper 12.1.3 and the HR DB schema from GitHub.

Pitfalls when using libraries of newer version than shipped with JDeveloper or WebLogic Server

A question on the JDeveloper & ADF OTN forum caught my attention. A user wanted to use a method of the Apache Commons-IO library named FileUtils.getTempDirectory(), but got an error when he tried to use code completion or when he tried to compile the code. The problem was that the compiler (or code completion) did not pick up the right Java class from the library, even though it was installed in the project as a library.
As the original code belongs to one of my samples, I was interested in finding the reason for this behavior, as I could see no obvious cause for it.

An inspection of a provided test case quickly revealed the problem, and a solution was found too. This blog is about the problem and the solution to it. Let's start with building a test case:


The test case had a model project which used a couple of libraries which we add too to make this sample as close as possible to the test case.
 Model Project Properties



There is no code whatsoever in the model project; just the libraries are defined!

To make use of the FileUtils.getTempDirectory() method we first have to download Apache Commons-IO in a version higher than 2.0. The current version is 2.4, which you get from the given link. Once you unzip the zip (or tar.gz) to a directory of your choice, we create a new library for JDeveloper (Tools->Manage Libraries…).

We add this new library to the view controller project.


Next we create a Java bean where we try to use the FileUtils.getTempDirectory() method.

Here we see the problem mentioned in the OTN question. The FileUtils.getTempDirectory() method does not show up at all. The JavaDoc of the Apache Commons-IO 2.4 package shows that the method is available since version 2.0.
JavaDoc of FileUtils Class



If we try to compile the code we get an compilation error as seen in the last image.

What is the problem?
Well, it looks like there is another version of the Apache Commons-IO library already in the classpath which gets loaded first. Once a library or class is loaded, another version of the same class will not overwrite the existing one.
The first thing we can try is to move the new commons-io library to the top of the list of libraries.
In the test case presented here, this doesn't work. We still get the same error. So there are libraries loaded before the view controller project libraries come into play.
Remember that we added some libraries to the model project even though there is no code in the project at all?
Because the view controller project has a dependency defined on the model project when we create a Fusion Web Application, by default the libraries of the model project are loaded before the view controller project's.
We can solve the problem in multiple ways:
1. remove the dependency to the model project. This is not recommended as it would mean that we have to build the model project ourselves if we change something in the model and want to run the application.
2. find the library which loads the FileUtils class and see if we can remove it (not all libraries are needed).
3. add the new Apache Commons-IO library to the model project and move it up front. This should load the newer version of the FileUtils class before any other.

Solution 1 isn’t really one. Solution 2 is possible and I’ll blog about it later. For this blog we use solution 3.

Solution
All we have to do is to add the Apache Commons-IO 2.4 library to the model project and move it to the top of the list.

Model Project Properties with Commons-IO



If we now rebuild the workspace we see that the error is gone.
No Compilation Error



The code completion still shows the method underlined in red. This is a bug in JDeveloper, which doesn't pick up the right library. Anyway, the compiler will use the right library and we can compile the application.

Now we add another method to the FileBean which returns the path to the temporary directory. This we use in a page index.jsf to show it in the UI. A sketch of the bean is shown below.
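A minimal version of this bean, matching the stack trace shown further down, could look like this (a sketch, not the exact sample code):

package de.hahn.blog.preferredpackages.view.beans;

import org.apache.commons.io.FileUtils;

public class FileBean {

    /**
     * @return the path of the system's temporary directory as provided by Commons-IO
     */
    public String getTempDir() {
        // FileUtils.getTempDirectoryPath() is only available since Commons-IO 2.0
        return FileUtils.getTempDirectoryPath();
    }
}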

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html>
<f:view xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <af:document title="index.jsf" id="d1">
    <af:form id="f1">
      <af:panelGridLayout id="pgl1">
        <af:gridRow height="50px" id="gr2">
          <af:gridCell width="100%" halign="stretch" valign="stretch" id="gc1">
            <!-- Header -->
            <af:outputText value="Preferred Package Test" id="ot1" inlineStyle="font-size:x-large;"/>
          </af:gridCell>
        </af:gridRow>
        <af:gridRow height="100%" id="gr1">
          <af:gridCell width="100%" halign="stretch" valign="stretch" id="gc2">
            <!-- Content -->
            <af:outputText value="Tempdir path = #{FileBean.tempDir}" id="ot2"/>
          </af:gridCell>
        </af:gridRow>
      </af:panelGridLayout>
    </af:form>
  </af:document>
</f:view>

When we run the application we get an exception

    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)
Caused by: java.lang.NoSuchMethodError: org.apache.commons.io.FileUtils.getTempDirectoryPath()Ljava/lang/String;
    at de.hahn.blog.preferredpackages.view.beans.FileBean.getTempDir(FileBean.java:16)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)

Why’s that?
The application compiled without an error and we still get a NoSuchMethodError. The reason for this is that when we start the WebLogic Server, the older version of the Apache Commons-IO jar is loaded first, blocking the loading of the newer version we need to get to the FileUtils.getTempDirectoryPath() method.
To allow the server to load our newer version of the jar we need to change a descriptor named weblogic-application.xml, which is specific to WebLogic Server. For other servers there exist other descriptors allowing the same.
In this descriptor we add a preferred package for the org.apache.commons.io package. Open the weblogic-application.xml descriptor and select the 'Class Loading…' node.

Application Descriptors: weblogic-application.xml



Here we enter the package name org.apache.commons.io to the ‘Application Preferred Libraries’ section.

to get this result in the source view of the descriptor:

    <prefer-application-packages>
        <package-name>org.apache.commons.io</package-name>
    </prefer-application-packages>

After restarting the application the index.jsf page shows up OK.

Running Test Page


You can download the sample application which was build using JDeveloper 12.1.3 from GitHub.