JDev12c: Searching an af:tree

On the JDev & ADF OTN space I got a question about how to search an af:tree and select and disclose the nodes matching the search criteria.

Problem description

We want to search an af:tree component for a string value and, when we find a match, select the node containing it. If the matching node is a child node, we disclose its parent nodes to make it visible.

Final sample Application

I started by building a sample application; here is the final result:

selection_935

We see a tree, a check box and a search field. The check box controls whether we search only the data visible in the tree or the whole data model the tree is built on. The difference is that the tree is built from view objects which can hold more attributes than you want to show in the tree nodes. This is the case with the sample tree, as we see when we search for e.g. ‘sa’ in the visible data

selection_936

When we uncheck the check box and repeat the search we get

selection_937

As you see, we found another node ‘2900 1739 Geneva’ which doesn’t contain the search string ‘sa’. A look into the data model at the row behind this node shows

selection_938

We see that the street address, which we don’t show in the node, contains the search string. To show that the search works for every node we set the search field to ‘2’ and get hits on different levels

selection_939

The sample application can be downloaded from GitHub. For details on this see the end of this blog.

Implementation

Now that we have seen the final application running, let’s look at how to implement it. We start by creating a small ADF Fusion Web Application. If you like, you can start by following the steps given in Why and how to write reproducible test cases.

Model Layer

Once the base application is created we set up the data model we use to build the tree. For this sample we use ‘Regions’, ‘Countries’ and ‘Locations’ of the HR DB schema. To build the model we can use the ‘Create Business Components from Table’ wizard and end up with

selection_942

As you see I’ve renamed the views. The names now show what you’ll see when you use them. We only have one top level view object, ‘RegionsView’, which will be the root of our tree in the UI. The child views are used to show detail data.

View Controller

For the view controller layer we start with a simple page from the ‘Quick Layout’ section

selection_943

Now we add a title and an af:splitter to the content area. We set the width of the first facet to 250 px to have enough room for the search field. We start building the af:tree by dragging the ‘RegionsView’ from the data control onto the content area and dropping it as an af:tree

Here we don’t select all available attributes but only a few. Later we will see that we can search the whole data model and not just the visible data. Finally we bind the tree to a bean attribute to have access to the tree from the bean after we have searched it. This is pure convenience; we could avoid the binding to a bean attribute by searching the component tree each time we need the component. When we create the bean we name it ‘TreeSelectionBean’ and set its scope to ‘Request’. The bean will end up in the adfc-config.xml

selection_950
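
A minimal skeleton of this request-scoped bean might look like the following sketch (the class and property names follow the ones used above; the search listener shown later is added to this class as well):

// Sketch of the request-scoped bean holding the component binding for the af:tree.
import oracle.adf.view.rich.component.rich.data.RichTree;

public class TreeSelectionBean {
    private RichTree tree; // bound via binding="#{TreeSelectionBean.tree}"

    public void setTree(RichTree tree) {
        this.tree = tree;
    }

    public RichTree getTree() {
        return tree;
    }
}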

The final code for the af:tree looks like this:

<af:tree value="#{bindings.RegionsView.treeModel}" var="node"
         selectionListener="#{bindings.RegionsView.treeModel.makeCurrent}"
         rowSelection="single" id="t1"
         binding="#{TreeSelectionBean.tree}">
  <f:facet name="nodeStamp">
    <af:outputText value="#{node}" id="ot2"/>
  </f:facet>
</af:tree>

Now we create two pageDef variables of type java.lang.String to hold the search string and the selection of the check box. If you need more information on how to create pageDef variables see Creating Variables and Attribute Bindings to Store Values Temporarily in the PageDef.

selection_949

In the first facet we add a check box and an af:inputText inside an af:panelGroupLayout and bind the value properties to the pageDef variables as

<af:panelGroupLayout id="pgl2" layout="vertical">
  <af:selectBooleanCheckbox text="node only" label="Search" id="sbc1"
                            value="#{bindings.myNodeOnly1.inputValue}"/>
  <af:inputText label="Search for" id="it1" value="#{bindings.mySearchString1.inputValue}"/>
  <af:button text="Select" id="b1"
             actionListener="#{TreeSelectionBean.onSelection}"/>
</af:panelGroupLayout>

The final thing to do is to wire the button to a bean method which does all the hard work. In the code above this is done with an actionListener pointing to the same bean we created for the tree binding.


public void onSelection(ActionEvent actionEvent) {
    JUCtrlHierBinding treeBinding = null;
    // get the binding container
    BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
    // get the ADF attribute value of the search string from the page definition
    AttributeBinding attr = (AttributeBinding) bindings.getControlBinding("mySearchString1");
    String node = (String) attr.getInputValue();

    // nothing to search: clear the disclosed nodes and return
    if (node == null || node.isEmpty()) {
        RichTree tree = getTree();
        RowKeySet rks = new RowKeySetImpl();
        tree.setDisclosedRowKeys(rks);
        // refresh the tree after the search
        AdfFacesContext.getCurrentInstance().addPartialTarget(getTree());
        return;
    }

    // get the ADF attribute value of the check box from the page definition
    AttributeBinding attrNodeOnly = (AttributeBinding) bindings.getControlBinding("myNodeOnly1");
    String strNodeOnly = (String) attrNodeOnly.getInputValue();
    // if not initialized, set it to false!
    if (strNodeOnly == null) {
        strNodeOnly = "false";
    }
    _logger.info("Information: search node only: " + strNodeOnly);

    // get the JUCtrlHierBinding reference from the PageDef
    // for JDev 12c use the next two lines to get the tree binding
    TreeModel tmodel = (TreeModel) getTree().getValue();
    treeBinding = (JUCtrlHierBinding) tmodel.getWrappedData();
    // for JDev 11g use the next two lines to get the tree binding
    // CollectionModel collectionModel = (CollectionModel) getTree().getValue();
    // treeBinding = (JUCtrlHierBinding) collectionModel.getWrappedData();
    _logger.info("Information tree value:" + treeBinding);

    // define a node to search in; in this example the root node is used.
    JUCtrlHierNodeBinding root = treeBinding.getRootNodeBinding();
    // however, if the user used the "Show as Top" context menu option to
    // shorten the tree display, we only search starting from this top node
    List topNode = (List) getTree().getFocusRowKey();
    if (topNode != null) {
        // make the top node the root node for the search
        root = treeBinding.findNodeByKeyPath(topNode);
    }
    RichTree tree = getTree();
    RowKeySet rks = searchTreeNode(root, node.toString(), strNodeOnly);
    tree.setSelectedRowKeys(rks);
    // define the row key set that determines the nodes to disclose
    RowKeySet disclosedRowKeySet = buildDiscloseRowKeySet(treeBinding, rks);
    tree.setDisclosedRowKeys(disclosedRowKeySet);
    // refresh the tree after the search
    AdfFacesContext.getCurrentInstance().addPartialTarget(tree);
}

First we get the value the user entered into the search field from its attribute binding in the page definition. If no search string was given, we clear the tree by creating a new empty RowKeySet, setting it as the tree's disclosed row keys, refreshing the tree via PPR and returning.

If we got a search string, we check whether we should search only the visible data or the whole data model by reading the value of the check box binding. Next we get the tree binding (the JUCtrlHierBinding) from the value of the tree component; the code shows how to do this for JDev 12c and, commented out, for JDev 11g.

One thing we have to check before starting the search is whether the user has used the ‘show as top’ feature of the tree. In that case we only search from the current top node downwards.

The search is done in a method

private RowKeySet searchTreeNode(JUCtrlHierNodeBinding node, String searchString, String nodeOnly)

To this method we pass the start node, the search string and a flag telling it whether to search the whole data model or only the visible part. The method returns a RowKeySet containing the keys of the rows which contain the search string. This set of row keys we set on the tree as the selected rows. As we would like to disclose all rows we have found, we have to do one more step: for each found row key we traverse upwards in the tree and add all parent nodes until we reach the node where the search started. This is necessary because a child node is only visible in the tree if its parent node is disclosed too. For this we use the helper method buildDiscloseRowKeySet shown below and set the resulting row keys as the disclosed rows of the tree.
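
The implementation of searchTreeNode itself is part of the sample on GitHub and not repeated here. The following is only a minimal sketch of how such a recursive search could look. It assumes that the ‘node only’ search checks just the attributes exposed by the node’s hier type binding, while the full search inspects every attribute of the underlying oracle.jbo.Row, so the real sample may differ in detail.

// Sketch only: recursively collect the key paths of all nodes whose data
// contains the search string (assumes imports for oracle.jbo.Row,
// java.util.List and the Trinidad RowKeySet classes in the bean).
private RowKeySet searchTreeNode(JUCtrlHierNodeBinding node, String searchString, String nodeOnly) {
    RowKeySetImpl hits = new RowKeySetImpl();
    if (node == null) {
        return hits;
    }
    boolean match = false;
    Row row = node.getRow();
    if (row != null) {
        // assumption: 'node only' checks the attributes defined for the tree node,
        // otherwise every attribute of the underlying row is checked
        String[] attrNames = "true".equalsIgnoreCase(nodeOnly)
                             ? node.getHierTypeBinding().getAttributeNames()
                             : row.getAttributeNames();
        for (String attrName : attrNames) {
            Object value = row.getAttribute(attrName);
            if (value != null && value.toString().contains(searchString)) {
                match = true;
                break;
            }
        }
    }
    if (match) {
        hits.add(node.getKeyPath());
    }
    // recurse into the child nodes
    List children = node.getChildren();
    if (children != null) {
        for (Object child : children) {
            hits.addAll(searchTreeNode((JUCtrlHierNodeBinding) child, searchString, nodeOnly));
        }
    }
    return hits;
}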


/**
 * Helper method that returns a list of parent nodes for the RowKeySet
 * passed as the keys argument. The returned RowKeySet can be used to disclose
 * the folders in which the keys reside. Note that to disclose a full
 * branch, all row keys that are in the path must be defined.
 *
 * @param treeBinding ADF tree binding instance read from the PageDef file
 * @param keys RowKeySet containing List entries of oracle.jbo.Key
 * @return RowKeySet of parent keys to disclose
 */
private RowKeySet buildDiscloseRowKeySet(JUCtrlHierBinding treeBinding, RowKeySet keys) {
    RowKeySetImpl discloseRowKeySet = new RowKeySetImpl();
    Iterator iter = keys.iterator();
    while (iter.hasNext()) {
        List keyPath = (List) iter.next();
        JUCtrlHierNodeBinding node = treeBinding.findNodeByKeyPath(keyPath);
        if (node != null && node.getParent() != null && !node.getParent().getKeyPath().isEmpty()) {
            // store the parent path
            discloseRowKeySet.add(node.getParent().getKeyPath());
            // call the method recursively until no more parents are found
            RowKeySetImpl parentKeySet = new RowKeySetImpl();
            parentKeySet.add(node.getParent().getKeyPath());
            RowKeySet rks = buildDiscloseRowKeySet(treeBinding, parentKeySet);
            discloseRowKeySet.addAll(rks);
        }
    }
    return discloseRowKeySet;
}

This concludes the implementation of the search in a tree.

Download

The sample application uses the HR DB schema and can be downloaded from GitHub.

The sample was built using JDev 12.2.1.2.


Quo vadis ADF?

Last week I attended the DOAG Konferenz & Ausstellung in Nürnberg, Germany. The DOAG (Deutsche ORACLE-Anwendergruppe e.V.) is the biggest German Oracle user group. The conference covers all Oracle products and technologies, way too much to name them all.

As my personal center of gravity is middleware, and here ADF and the surrounding technologies, I attended lots of sessions about middleware, cloud, ADF, MAF and JET. The big picture of Oracle becoming a cloud company is getting clearer.

The way developers currently working on premises with their products will migrate to the cloud is getting clearer too. There were about 4-5 sessions which gave explicit advice on when to use which technology and what problems might arise when mixing them. I’ll cover the main three here.

Frank Nimphius started with the session ‘The Future of Application Development Welcome to your new Job’, where he summarized the areas of future application development as

  • “Server-less” deployment
  • [Micro] [Cloud] Services
  • REST & JSON
  • Mobile centric
  • API first
  • Multi channel
  • Artificial Intelligence
  • Cloud Native Development
  • JavaScript
Future Application Development Summary 1

Future Application Development Summary 2

and defined different job roles around these, like

  • Citizen (Low Code) Developer
  • Mobile Developer
  • Service Developer
  • Architect
  • Line of Business Manager

Each role uses different technologies to fulfill its tasks. This should open up room for new and old developers.

Mobile Job Roles

Duncan Mills tackled the bear from a different perspective. In his session ‘Standing at Crossroads’ (Oracle ADF and Oracle JET) he pointed out the differences between ADF and JET

Oracle ADF vs. Oracle JET:

  • Support: ADF follows the 5 + 3 + unlimited model with no backport limitations; JET has a major release every 6 months with backports only to the previous version
  • API stability: ADF APIs are stable; JET gives no guarantee of API stability
  • Deployment: ADF runs in the cloud or on premises; JET targets the cloud
  • Focus: ADF is metadata focused; JET is code focused
  • Scope: ADF is a full stack solution; JET is a client only solution
  • Page ownership: ADF has to „own“ the page; JET can be used „anywhere“

However, there are things both have in common, as Duncan states:

“Don’t assume that you have to go to JET to look ‘modern'”

“Don’t assume that JET will automatically be more performant”

There are more things you have to take into account before making a decision between ADF and JET like

  • Transaction and Services: here you have to check if your services and data model can support a stateless model. The same goes for your UI, which handles the interaction with the user. One thing to note is that using JET will produce less client-server traffic.
  • Need to shape the services for the convenience of the UI: paging data, pre-computation, attribute reduction and mega endpoints

If you plan to mix ADF and JET there are a couple of things which should make you think twice:

  1. No session sharing between ADF and JET
  2. ADF and JET can’t use the same cache
  3. No shared transaction
  4. Separate timeouts
  5. Geometry management
  6. Drag & drop not possible between ADF and JET
  7. Different maintenance and different libraries
  8. Different popups and glass panes

The summary is that there are plenty of reasons not to mix ADF and JET. If you want to mix them in a project, you should stick to the module level and not mix them on one page.

duncan_doag5

The decision for ADF or  JET should take these points into account.

Shay Shmeltzer attended the German Oracle (ADF) Developer Community meeting at the DOAG, and we asked him to talk about the topic ‘The Future of Developer Frameworks’.

shay2_doag1

Shay started by giving a main difference between ADF and JET:

“ADF is a framework, JET is a toolkit”

meaning that ADF allows development in all tiers (MVC) whereas JET is a client-only technology. Using JET you still have to have a back end which provides the needed REST services. Here ADF comes into the picture again.

“ADF hides the complexity of the technology from the developer” 

True, building a REST service from an existing ADFbc model is very easy and allows shaping the service too. Besides ORDS (Oracle REST Data Services, a tool which allows you to develop modern REST interfaces for relational data in the Oracle Database), this is the easiest way I know.

During the Q&A of his talk we specifically asked him how Oracle sees the future of ADF, as there are rumors that ADF is dead. Shay answered (loud and clear):

“ADF isn’t dead!”

Oracle is using ADF heavily in its SaaS products and will go on doing so. There are areas where building the UI with JET is preferred (not in SaaS), but there the points mentioned by Duncan Mills are always considered.

My personal opinion is that ADF is alive and will be used in the future, but there are options now which allow developers to choose different technologies in certain areas. Using ADF in the model layer when working with relational databases and creating REST or SOAP services with ease is a big plus. For the UI there are use cases where JET will be used, but ADF has its share too.

Undo Reorder of Columns in af:table

A question on OTN asked how a reorder of columns in an af:table can be undone. In this blog I show how to undo such a reorder and show the columns of an af:table in their natural order.
The natural order is defined when you create the table. You can move the attributes in the create dialog or delete attributes you don’t want to see in the UI from the table.

In the image above we see the dialog after we drop a VO as a table onto a page. To change the order of the columns in the table you can use the arrows on the right (in the red rectangle). Once you save the table you can reorder the columns in the property editor of the af:table.

img00003

The order of the columns you see in the dialog or in the property editor is the so-called default order of the columns. This default order can be different from the order of the attributes in the query the VO is based on.
The page we drop the af:table on is very simple. It is built from a quick layout and has a header for the page title and a panelCollection which holds the table.

img00008

We can reorder the columns in the UI by dragging a column and dropping it at a different location.

The question now is how to undo this manual reorder without refreshing the browser window.

To understand how this is implemented, we need to look at how the reorder is done in the first place. A table is built from one or more columns. Each column describes the data to be shown, the header to show, and the display index, which is the position of the column in the UI. If the display index is less than zero (e.g. -1) the default order is used. Any other value (zero or positive) puts the columns in ascending order of their display index.
To undo any reorder of the columns in an af:table we simply have to get each column and set its display index to -1.

import java.util.List;

import javax.faces.component.UIComponent;
import javax.faces.event.ActionEvent;

import oracle.adf.share.logging.ADFLogger;
import oracle.adf.view.rich.component.rich.data.RichColumn;
import oracle.adf.view.rich.component.rich.data.RichTable;

public class UndoColumnReorderBean {
    private static ADFLogger _logger = ADFLogger.createADFLogger(UndoColumnReorderBean.class);
    private RichTable table;

    public UndoColumnReorderBean() {
    }

    public void undoColumnReorder(ActionEvent actionEvent) {
        _logger.info("Undo reorder...");
        // get the table's child components
        List<UIComponent> children = this.table.getChildren();
        for (UIComponent comp : children) {
            // check if the child is a column
            if (comp instanceof RichColumn) {
                RichColumn col = (RichColumn) comp;
                // if the display index is 0 or greater, reset it to -1
                if (col.getDisplayIndex() >= 0) {
                    _logger.info("...unset column " + col);
                    col.setDisplayIndex(-1);
                }
            }
        }
        _logger.info("... done!");
    }

    public void setTable(RichTable table) {
        this.table = table;
    }

    public RichTable getTable() {
        return table;
    }
}

The bean above has a method undoColumnReorder which is an action event listener triggered by clicking the ‘Undo Column Reorder’ button. This method uses the af:table component which is bound to a property of the bean. It iterates over the child components of the table, checks whether a child is a RichColumn (an af:column in the UI), and if so sets its display index to -1.
To show the change in the UI, we have to PPR the table by adding the button as a partial trigger of the table

img00007
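
Purely as a hypothetical alternative (not how the sample does it), the table could also be refreshed programmatically instead of declaring the button as a partial trigger. A minimal sketch of this variant:

// Sketch of a programmatic refresh instead of the declarative partialTriggers:
// after resetting the display indexes, add the table as a partial target.
import javax.faces.event.ActionEvent;

import oracle.adf.view.rich.component.rich.data.RichTable;
import oracle.adf.view.rich.context.AdfFacesContext;

public class UndoColumnReorderPprVariant {
    private RichTable table; // bound via the table's binding attribute, as above

    public void undoColumnReorder(ActionEvent actionEvent) {
        // ... reset the display index of every column as shown above ...
        // then re-render the table without declaring a partial trigger
        AdfFacesContext.getCurrentInstance().addPartialTarget(table);
    }

    public void setTable(RichTable table) {
        this.table = table;
    }

    public RichTable getTable() {
        return table;
    }
}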

After clicking the button in the UI the table again looks like

img00004

so the default order of the columns is shown again.

You can download the application from GitHub: BlogUndoColumnReorder. The sample was built using JDev 12.2.1.2, but you can do the same with any other JDev version, 11g or 12c. It uses the HR DB schema.

Reset Table Filter when Navigating to Page

This blog is a continuation of an older blog about how to reset the filters of an af:table component from a bean (How to reset a filter on an af:table the 12c way). In the older blog I described the technique to reset the filters defined in the FilterableQueryDescriptor of a filterable af:table.

Now users on the OTN JDev & ADF space asked for a small variation of the use case: the filter should reset whenever a navigation to the page which holds the af:table takes place. No button should have to be clicked to reset the filter values.

As the original technique can still be used, I don’t go into detail about how to do this. It’s described in the other blog for JDev versions 12c. The same technique can be applied to 11g but different Java code has to be used (see How to reset a filter on an af:table). I changed the sample application, which you can download (see link at the end of the blog), so that the query panel with the af:table has an additional button to navigate to a different page.

Run through

After starting the application we see the page with an empty table, as no search was done yet. Clicking the search button will give us

selection_910

The ‘Navigate’ button simply navigates to another view which holds two buttons that let you navigate back to the original page.

selection_911

The ‘back without clear filter’ just navigates back to the page, whereas the ‘back with clear filter’ navigates to a method in the task-flow which prepares the af:table for reset. This is the bounded task flow:

selection_912

The EmpQueryPanel holds the af:query with the result table as shown in the first image. The view is marked as the default activity in the task flow. When you first run the application (page RTFQPTest.jsf) the task flow is added as a region to the page, showing the query panel with the result table.

When you hit the search button on the page the table shows all employees. Now we can filter the results, e.g. ‘FirstName’ contains ‘s’ and ‘LastName’ contains ‘k’

selection_913

Now if we hit the ‘Navigate’ button we go to the page shown in image 2 with the two buttons. If we click ‘back without clear filter’ we come back to the page as shown above. The filter values are still present!

If we click on the ‘back with clear filter’ we see

selection_914

so the filter values are cleared. So, how is it done?

Implementation

In the original sample we had a button which triggered a method that gets the FilterableQueryDescriptor from the table. This descriptor holds the filter values, which are cleared by looping over all ConjunctionCriterion entries. Here is the full method for 12c

/**
 * Method to reset filter attributes on an af:table
 * @param actionEvent event which triggers the method
 */
public void resetTableFilter12c(ActionEvent actionEvent) {
    FilterableQueryDescriptor queryDescriptor = (FilterableQueryDescriptor) getEmpTable().getFilterModel();
    if (queryDescriptor != null && queryDescriptor.getFilterConjunctionCriterion() != null) {
        logger.info("Filter found...");
        ConjunctionCriterion cc = queryDescriptor.getFilterConjunctionCriterion();
        List<Criterion> lc = cc.getCriterionList();
        if (!lc.isEmpty()) {
            logger.info("...iterating criterions...");
        }
        for (Criterion c : lc) {
            if (c instanceof AttributeCriterion) {
                AttributeCriterion ac = (AttributeCriterion) c;
                Object object = ac.getValue();
                logger.info("...found " + ac.getAttribute().getName() + " value: " + object);
                if (object != null) {
                    ac.setValue(null);
                    logger.info("...reset...");
                }
            }
        }
        getEmpTable().queueEvent(new QueryEvent(getEmpTable(), queryDescriptor));
    }
}

public void setEmpTable(RichTable empTable) {
    this.empTable = empTable;
}

public RichTable getEmpTable() {
    return empTable;
}

A look into the log after clicking ‘back with clear filter’ shows

selection_915

We see that the for loop caught all filters and reset every filter value to null.

The interesting part is how we trigger the call of the method resetTableFilter12c. As there is no button or other action event involved, we use a trick: we add an EL expression to the ‘ShortDesc’ property of the af:table which points to a bean method

selection_916

Now, whenever the af:table is rendered, it calls the bean method asking for the text of the short description. We use the call of this method as the trigger to reset the filters. As this method is called multiple times during the JSF lifecycle, we need some kind of flag which tells us that the reset operation has already been done. Otherwise we would spend lots of time calling the reset method without need.

public void setShortDescription(String shortDescription) {
    logger.info("Set ShortDescription called");
    this.shortDescription = shortDescription;
}

public String getShortDescription() {
    logger.info("get ShortDescription called");
    AdfFacesContext adfFacesCtx = AdfFacesContext.getCurrentInstance();

    // get the pageFlowScope parameters
    Map<String, Object> scopePageFlowScopeVar = adfFacesCtx.getPageFlowScope();
    Boolean reset = (Boolean) scopePageFlowScopeVar.getOrDefault("resetFilter", Boolean.FALSE);
    boolean flip = reset.booleanValue();
    if (flip) {
        logger.info("ResetTable Filter!");
        resetTableFilter12c(null);
        scopePageFlowScopeVar.put("resetFilter", Boolean.FALSE);
        logger.info("Unset filter reset flag!");
    }

    return shortDescription;
}

As there are cases where the short description is asked for which we don’t want to use as a trigger to clear the filters, we need another flag we can check. For this we set a flag named ‘resetFilter’ in the pageFlowScope of the bounded task flow. In the getter we read this flag from the pageFlowScope, and only when it is set to true do we call the resetTableFilter12c method and reset the flag to false.

The only thing left to do is to set the flag in the pageFlowScope when we want the filters to be cleared on navigation to the page. For this we use the method call ‘resetTableFilter’ which is defined in the task flow. This method call points to a bean method

selection_917

which puts the flag ‘resetFilter’ with a value of ‘Boolean.TRUE’ into the pageFlowScope:

public void setRestFlag() {
    AdfFacesContext adfFacesCtx = AdfFacesContext.getCurrentInstance();
    // get the pageFlowScope parameters
    Map<String, Object> scopePageFlowScopeVar = adfFacesCtx.getPageFlowScope();
    scopePageFlowScopeVar.put("resetFilter", Boolean.TRUE);
    logger.info("Set filter reset flag!");
}

Resources

You can download the sample application from GitHub:  BlogResetTableFilter12c

The sample uses JDev 12.2.1.2.0 and the HR DB schema.

Why and how to write reproducible test cases

We all have been in situations where the application we were developing or maintaining did not do what we expected it to do. This happens all the time. No big deal you think, but after hours of trying to figure out the problem, you are still stuck.

In my experience it helps to step back a bit and try to look at the problem from a different angle. There are different ways to do this. One is to talk to someone else and try to explain the problem. Sometimes I see the problem while explaining it, sometimes when I explain what I have already tried to resolve it. The interesting part is that the person you talk to does not even have to be a programmer.

In my other post I tried to give some rules on how to ask questions on OTN, so I assume you have done everything in this direction and still don’t have a working solution. Some problems are not solved even after this. This is the point where you should think about writing a reproducible test case. The rest of this post is about how to write such a test case in a way that you get the most out of it and make it easy for others to help you solve the problem.

A test case should fulfill some requirements:

  1. it must reproduce the problem
  2. it must be as easy as possible in doing this
  3. it should be easy to use by other users who don’t work in your environment
  4. it should use a data model which is known by other users (without the need to study table definitions, triggers, …)
  5. if a known data model can’t be used the model must be described in great detail
  6. if possible it should not be necessary to change  or insert data
  7. if data has to be changed or inserted you should include scripts or steps to get the original data back
  8. it should not take longer than 15-20 minutes to set up everything

You might think that it will cost a lot of your time to create such a test case. You are right, it will cost you some time, but this time is well spent:

  1. chances are that you’ll find the problem and its solution while creating the test case. Then you are done and can be proud of yourself.
  2. creating the test case forces you to get a clear picture of the problem. This helps to describe the problem to others.
  3. providing a test case to the community drastically increases your chance that somebody finds the problem and helps you fix it.
  4. if nobody can help you and you ask support.oracle.com for help, they will ask you for a reproducible test case too. They normally work faster if you can provide one together with a good description of the problem.

Now that you know the reasons and requirements for a test case, let’s talk about how to create one. I always use the HR DB schema provided with every Oracle DB (or you can get it from Oracle Sample Model and Scripts). Almost every developer has access to a running HR DB instance. Depending on the problem and the relation to the original data I choose only the minimum number of tables needed to create the test case.

Let’s say the problem in your app happens in a data entry form which consists of master-detail-detail data. Then I look at the HR model and use Regions, Countries and Locations as the data model for the test case. Depending on the data you can choose different tables, e.g. Departments, Employees and Jobs.

Once the data model (or the tables) is known, I create a simple ‘ADF Fusion Web Application’ which will create a workspace with two projects. The model project I create by using the ‘Business Components from Table…’ option you get by right-clicking the model project and then selecting ‘New’. As the whole process is too long to describe, I made screenshots of every dialog. The sample application is created with JDev version 12.2.1.2.0, so you might not see exactly the same images.

Creating the workspace:

Creating the model project:

Checking the model project:

After this the basic workspace for the test case is finished and can be used to add the problem-specific parts. Here you start with the business logic and add whatever you need to show the problem in the UI.

If your problem is related to cascading selectOneChoice in a form, it’s a good idea to add the configuration to the model project and test this using the application module tester. Once the business logic is running you continue with setting up the UI. Here you don’t have to design a pretty looking UI. Make it as simple as you can. Concentrate on showing the problem.

Here are some specific things you should take into account:

  1. if your problem needs a fancy design, e.g. you have a layout problem or a skin problem, then you surely have to add all resources needed to reproduce the problem.
  2. if your problem needs specific data in the DB, you should create DDL scripts to create the tables and to insert data into them. Avoid adding tablespace names and other storage information to the scripts, as other users likely don’t have those tablespaces. A description of the data model is mandatory!
  3. if your case needs third party libraries you must provide the exact version of the libraries and the location where we can download them! Smaller libraries you should add to the test case zip file.
  4. Write a detailed description on how to start the application (which page to run) and what to do to get to the problem.
  5. Format your code! I hate reading unformatted code. It costs time to interpret things which I would like to spend on solving the problem, not by trying to figure out which part of code I’m looking at.
  6. Comment your code! The better the code is commented, the easier we can understand what you have tried to do. In a perfect world I would like to understand the code by just reading the comments of the methods, without having to go into the details of the implementation. Once the problem area is clear I start looking into the implementation. One thing I do is to read the comment of a method and check it against its implementation.

Finally you zip everything together and make the zip file available to the public. Make sure that you delete unnecessary files and folders from the workspace (like .data and .classes). These folders are huge and are recreated automatically when you compile the application. Pay special attention to library files (jars), which are sometimes huge. As you have given us the version and the location where to download them (and instructions where to put them!), they can be omitted. You can use Google Drive or Dropbox, which are well known to other developers. If you use an unknown file hosting service you risk that nobody downloads the test case.

One final word of advice

We are all trying to help but have our normal work to do too. Creating a test case, even after you are asked to provide one, doesn’t mean that you get help from other users. There is no service level agreement (SLA) attached!

If you need urgent support, ask Oracle Support for help. The test case will help you there too, as support then doesn’t need weeks to understand and reproduce the problem.


JDeveloper 12.2.1.2 is out


Today, October 19th 2016, JDeveloper 12.2.1.2 was released. From a first look at it, it’s only a maintenance release. There is currently no ‘What’s new’ document, only the release notes are available.

The release notes show only some bug fixes and some deprecations. Noteworthy are some changes in the REST runtime. One of them is that ADF REST HTTP PUT is deprecated functionality. From the doc:

ADF REST HTTP PUT is deprecated functionality

Oracle has deprecated the functionality for executing HTTP PUT methods on ADF REST resource requests. In the current release, the describe for ADF REST resources continues to display PUT actions when the backing view object has the Update operation enabled (the operation enables both PUT and PATCH methods); however, ADF REST service clients should avoid making PUT requests (replace all items of the view row) as this functionality will be desupported in a future release

Another change in the REST department is that adf date and datetime attributes are no longer described as string but as date and datetime. Interesting if you work with ADFbc and Oracle JET.

There are some other small bug fixes and deprecations of oracle.domain data types and of dvt:stockGraph. You should use dvt:stockChart instead.

Let’s wait and see if Oracle releases a ‘What’s new’ document in the near (?) future which will spare us some time searching for new stuff 🙂

How to Ask Questions in OTN Spaces

You have a problem which you can’t figure out yourself and want to ask some other users for help. We all have been in this situation!

Even the veteran users (like myself) had run into such problems. We all are glad that the OTN spaces exists where we can ask other users for their input and help.

Sure it’s easiest for you to just open a question in OTN or support.oracle.com (MOSC) and just ask

“I have a problem fitting my QTY_X  into the pivot”

Then you’ll get plenty of fire from other users who don’t really know what you are talking about. It will take some questions and answers, and after one or two days, when the thread is already 8-10 posts long, the others have a basic understanding of the problem. This is what I call a waste of time.

It would have been easier if you had given the full use case at the beginning. This would have cost you a couple of minutes of work (max. 30 min) but would have saved 24 hours of questions and answers. So, here are some basic rules on how to ask questions:

  1. search the forum to see if your question has been asked before (and answered). The forum’s search isn’t that bad 🙂
  2. search again using Google. Don’t give up after reading the first hit
  3. give information about your environment, like the versions of the software you use. This is essential, as versions change and other users might run into a similar problem, never knowing if the thread they read was about their version
  4. give a full, understandable use case of the problem. The process of formulating a good forum question will force you to think more clearly about the question yourself. Sometimes, in the middle of writing the question, the answer comes to you because you’ve restated the problem in terms an outsider can understand
  5. tell us what you have already tried to solve the problem. Help the others to get the big picture and show that you don’t just dump your work on the forum users
  6. give information about the technologies you use in your application. If you e.g. use POI to generate native Excel files and this is related to your problem, we should know. If you use PL/SQL to make changes in the DB we should know that too
  7. provide code snippets and any other information which you think helps us to understand the problem or what you have tried to solve it
  8. Screenshots help to understand visual problems. It’s hard to describe problems which are only visual, like “my fields are not aligned”. Make screenshots and add them to your post.
  9. Provide stack traces as text if you are asked to provide one. This way we can look at each part.
  10. Format the code you provide. This makes it readable, regardless of whether it’s Java, PL/SQL or the source of a .jsff page.

The list above is not complete but a starting point. Giving this information will help you getting an answer to your question.

Please remember that all users have normal jobs to do to get their bills paid. Phrases like ‘ASAP’ or ‘urgent’ have no meaning at all. There is no ‘Service Level Agreement’ attached to the OTN Spaces. If you need urgent help you should use support.oracle.com, the paid support Oracle offers.

OTN Appreciation Day: Developer Cloud Service

Tim Hall had the great idea to introduce the ‘OTN Appreciation Day’, where bloggers write a short blog about their favourite Oracle feature. As the OTN is a great network which I use every day, I like to add a blog about the feature I currently like best, the ‘Oracle Developer Cloud Service’.

So, why is the Developer Cloud Service my favourite feature?

I’m a consultant, coach and architect helping customers to bring their Oracle related projects to a good start, sometimes back on track, helping to migrate projects to current versions and coaching developers. There are many questions where customers ask to see how something is going to work or how to set something up. Before the DCS, setting up an environment similar to the customer’s was a time consuming task.

Most often I had to install software like JDeveloper, WebLogic Server, SOA Suite and a database before getting to the real task. Coaching developers in new techniques like using git or automating software deployment (continuous integration) needs software too.

The DCS offers a ready-to-use environment to develop software and deploy it to a WebLogic Server already set up with the needed packages like ADF and/or SOA Suite. You can plug in your own DB or use the one running in the cloud.

Agile development can be done with the DCS too. You can use an integrated bug tracker, use agile boards and create tasks you assign to the developers of your team. Code reviews can be done, and the Hudson server is used to build a new artifact including the last reviewed changes. Once it has passed the automated tests, the new version can be deployed to the server automatically. The new version of the software is then ready to be accessed by everybody. You get a fully integrated DevOps platform!

The DCS makes my life as a consultant and coach a lot easier. Modern techniques can be shown and taught to customers. I also use the DCS for trainings the German ADF Community holds on multiple occasions. The DCS has evolved since the end of 2015; more features have been added and more are in the pipeline to make it even more productive in the future.

To find out more about the Developer Cloud Service visit my other blog posts about the DCS or Oracle Developer Cloud Service.

If you want to try the DCS for yourself, you can get a free 30 day trial.


Summary of Day 4 at the Oracle Open World 2016

Late, but not forgotten, here is the summary of day four. It was too late yesterday, after the appreciation event, to write it all down.

Wednesday was a somewhat slow day for me as I attended only two sessions. Most of the day was reserved for meetings around my other activities in the OTN network, like moderation and the German ADF Community, which will soon relaunch its community page on OTN.

The first session was about testing web applications with Selenium, ‘Testing Java Web Applications with Selenium: A Cookbook‘ by Jorge Hidalgo and Vicente Gonzalez Arellano, over at JavaOne. It turned out that the Selenium WebDriver for JDev ADF works better than the one shown in the demo in this session. The JDev WebDriver abstracts all the tricky stuff, like waiting for ajax calls or finding the right component, away from the developer. This makes the job really easy. Summary: nothing new learned.


After a nice working lunch with my peer OTN moderators and the Queen of Moderators, I attended a session about developing applications with Oracle JET and ADFbc REST services, ‘Oracle Application Development Framework and Oracle JavaScript Extension Toolkit in the Cloud‘ by Sherry Yu, Shray Bansal and Abhinav Shroff. This session was interesting as it used REST services generated from ADFbc. This kind of REST service offers many usages

ADFbc REST Services Usage

and allows some very nice features out of the box, like pageable collections, a rich set of metadata, lists of values, attribute types and validation, and resource discovery

ADFbc REST Services Functions

At run time you can tailor the payload by retrieving only the attributes you need, execute batch transactions, sort the results, and you get built-in security.

ADFbc Run Time Features

Simple queries can be added to the REST calls. These work like ‘Query by Example’ in ADF tables. This set of features allows for many different use cases

ADFbc REST Use Cases

like a back end for OracleJET based applications, mobile friendly UIs, integration with other services, and a REST solution for SaaS.


The remaining part of the day I spent on multiple events like the OTN Blogger Meetup, the OTN Happy Hour and finally the Oracle Appreciation Event featuring Sting and Gwen Stefani.


Summary of Day 5 at Oracle Open World 2016

I started the day with a session on Alta UI, ‘Implementing Oracle’s New Alta UI Features’ by Richard Wright. Richard started by giving some reasoning about why Oracle developed the Alta UI. It was mainly because users demanded a more mobile friendly UI. The biggest change which came with the Alta UI was that the UI has to be built by thinking ‘mobile first’ and by designing the flow of operations around personas. Only then do you gain the full advantage of the Alta UI.

Transforming an older (legacy) application into a modern application using the Alta UI is not just migrating the skin. You have to redo the UI and design it for mobile first. This means that you have to think about different device sizes, which in the end means that you have to design the application in a responsive manner.

In a legacy application the page typically stretches on the device. This mostly doesn’t work on small devices, as it forces the user to zoom into the right section to see the information. Because of the limited size, mobile friendly means that you try to visualize the information instead of e.g. showing the user a table. An image gives information a human can take in more easily than data in a table.

For a developer this means that using a list view should be preferred over using a table. A list view allows better responsive design.

Summary is that you should

  • Leverage major UI updates as an opportunity
  • Verify actual users versus previously targeted users
  • Target UI for preferred user devices
  • Understand their most important artifacts and tasks

The next session of the day was ‘Cloud-Native Application Development with Oracle Application Container Cloud‘ by Shaun Smith, Anand Kothari and Eric Jacobsen. This session was about the Oracle Application Container Cloud Service, which lets you run native Java SE applications or Node.js based applications in the cloud.

I already mentioned the ‘Cloud Native Architecture’ on day 2.

img_bqnk2w


and the demands on the application development

and the tools to use to make this architecture work, from Oracle’s point of view

The Application Container Cloud should allow you to do such development simply by

  • Develop
  • Zip
  • Deploy

your application. This can be done on a polyglot platform using Java, PHP, Node.js and later even Ruby and Java EE. It’s an open platform allowing you to run many applications. Oracle provides a Linux system and you can bring whatever you like.

All this runs on Docker containers. The only constraint is that the applications must be stateless, as the containers are brought up and shut down on the fly to load balance your application. This is done automatically without you needing to interfere.

Once your application runs, monitoring the JVM or the performance of the application is done via the cloud services. Patching, if needed, is done for you too. It’s not that you don’t know about it, but it’s just a click on a button. If you don’t like a patch because it breaks your application, you can easily roll it back.


The final session of the day and of OOW 2016 for me was ‘Using Docker with Continuous Delivery in Oracle Cloud‘ by Greg Stachnick and Mike Raab. This session talked about how Docker is used in the Oracle Container Cloud Service to allow agile, containerized development in the cloud.

The first part was about the Developer Cloud, which was covered in almost every session about the cloud.

img_20160922_132523

The second part was about the Container Cloud Service and its base implementation StackEngine (a company Oracle bought at the end of last year).

IMG_20160922_132704.jpg

Key features of the Container Cloud are shown in the image below:

IMG_20160922_132757.jpg

When setting up a service in a docker container the UI looks like

IMG_20160922_134200.jpg

Changes made in the UI are reflected directly in a docker run script (which you can get on the same page). Spinning up a new container is a matter of two clicks:

Stacks are the equivalent of Docker Compose but have some additions, like being able to add parameters to the containers.

In the end the Container Cloud Service is a flexible ‘bring your own container’ offering: you bring your container and run it in the cloud. Don’t forget to bring the needed licences too 🙂

The product will be available within the next 12 months!


That was the OOW2016 for me. See you next year!