Monday, March 31, 2014

Responsible Entities - Population and Validation

I've completely neglected this piece due to a tightly scheduled phase of a project. That phase is over, and so is the neglect.


In part 1 & part 2 we envisioned an architectural model for data entities that exhibit data-related behavior without using JPA, and explained how to track and report changes. In this part, we'll look into how we can populate and validate these objects.

Data Population

Read-Only entities

By now we can guess that read-only entities should only be populated directly from the database. The most reliable and least intrusive way to do this is to start from the root entity, which exposes a public static factory method that accepts live database cursors from which the entity and its dependencies read their data. An example should clarify. Here we are looking at a root read-only entity called "DashboardEntity" and its collection member of messages (obviously read-only) called "MessagesEntity".

import java.sql.ResultSet;
import java.sql.SQLException;
import javax.xml.bind.annotation.XmlElement;

public final class DashboardEntity extends BaseEntity {
  private static final long serialVersionUID = 5074611836312001925L;

  @XmlElement
  private int uniqueTrackingId;

  @XmlElement
  private MessagesEntity messages;

  private DashboardEntity() {
      uniqueTrackingId = 0;
      messages = new MessagesEntity();
  }

  /**
   * Factory method that creates an instance from live cursors. The entity is
   * not responsible for the cursors' life cycle.
   * 
   * @param dashboardRowSet cursor for the dashboard row
   * @param msgRowSet cursor for the message rows
   * 
   * @return an instance of DashboardEntity
   */
  public static DashboardEntity CreateInstance(
          final ResultSet dashboardRowSet, final ResultSet msgRowSet)
          throws SQLException {
      DashboardEntity obj = new DashboardEntity();
      obj.loadData(dashboardRowSet);
      obj.messages.loadData(msgRowSet);
      return obj;
  }

  /**
   * Loads its data from a live cursor as a MEMBER class as opposed to a ROOT
   * class
   * 
   * @param dashboardRowSet cursor positioned before the dashboard row
   * @throws SQLException
   */
  private void loadData(final ResultSet dashboardRowSet) throws SQLException {
      if (dashboardRowSet.next()) {
          uniqueTrackingId = dashboardRowSet.getInt("uniqueTrackingId");
      }
  }
}

Snippet 1: Root read-only entity that accepts all live cursors needed to populate itself and its members

import java.sql.ResultSet;
import java.sql.SQLException;

public final class MessagesEntity extends
      CollectionBaseEntity<MessageEntity> {

  private static final long serialVersionUID = 1124863903497528641L;

  MessagesEntity() {
  }

  /**
   * Loads its data from a live cursor
   * 
   * @param msgRowSet cursor over the message rows
   * @throws SQLException
   */
  void loadData(final ResultSet msgRowSet) throws SQLException {
      while (msgRowSet.next()) {
          MessageEntity obj = new MessageEntity();
          obj.loadData(msgRowSet);
          rows.add(obj);
      }
  }
}

Snippet 2: Member read-only collection entity creating its members by giving them the cursor

Please take note of the following:
  1. MessagesEntity's constructor is package-scoped. It could have been made private, with the entity exposing a static factory method to which DashboardEntity would pass the corresponding cursor. But usually you'll be controlling the entire stack of entities, and that level of complication isn't necessary.
  2. Please note that MessagesEntity isn't responsible for loading MessageEntity members. It simply passes the cursor to them to load themselves. It should be easy to figure out what MessageEntity looks like (hint: it has a "loadData" method similar to DashboardEntity's).
  3. Entities aren't responsible for the life cycle of these cursors. That's controlled by the mediator that received them from the database (e.g. a DAO).
  4. While these entities are loaded, the cursors are kept alive. Any interruption would leave a live, useless resource lingering in the database for a while, although it would be released eventually. In some databases, such as Oracle, you could read the data into a user-defined type (defined in the database schema) and return that as a disconnected structure. That solution, however, requires more maintenance and more space. Moreover, it's less flexible and less amenable to optimization: you have to read all the data out anyway, whereas with a cursor you have the option of not reading it in the application.
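Following the hint in point 2, a minimal MessageEntity might look like the sketch below. The "id" and "text" columns and the getters are my assumptions, and the BaseEntity plumbing (serialVersionUID, XML annotations) is omitted. Note that, unlike DashboardEntity's loadData, it must not call next() itself, since MessagesEntity already drives the cursor:

```java
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical single read-only member entity (BaseEntity plumbing omitted).
final class MessageEntity {

    private int id;
    private String text;

    MessageEntity() {
    }

    /** Loads its data from the cursor's CURRENT row; the owning collection iterates. */
    void loadData(final ResultSet msgRowSet) throws SQLException {
        id = msgRowSet.getInt("id");
        text = msgRowSet.getString("text");
    }

    int getId() { return id; }
    String getText() { return text; }
}
```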

Editable entities

Depending on your view of how frequently data changes in a particular application, editable entities can be loaded either from a read-only entity (optimistic) or directly from the database (pessimistic). In other words, to update the data you can either populate an editable entity from a database cursor in the backend, much like the read-only entities (the entity is then presented to the front-end to be edited and sent back to be persisted), or populate it from a read-only entity and trust that the data hasn't been changed since. The latter can be improved by introducing a time element: an editable entity built from a read-only one is only valid for a certain amount of time before it's persisted.
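The time element can be sketched as a timestamp recorded when population finishes and checked before persisting. The class and method names below are my assumptions, not part of the model:

```java
import java.util.concurrent.TimeUnit;

// Sketch: an editable entity built from a read-only one is trusted only
// for a bounded window; a stale entity must be reloaded before persisting.
class TimeBoundedEditable {

    private static final long VALIDITY_MILLIS = TimeUnit.MINUTES.toMillis(5);

    private long loadedAtMillis;

    /** Called when population from the read-only source finishes. */
    public void markLoaded() {
        loadedAtMillis = System.currentTimeMillis();
    }

    /** Checked by the persisting code; false means "reload before saving". */
    public boolean isStillTrusted() {
        return System.currentTimeMillis() - loadedAtMillis <= VALIDITY_MILLIS;
    }
}
```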

Since the code in both cases is similar to snippets 1 and 2, I won't provide an example. However, bear in mind that updating editable entities implicitly invokes the aspect code woven into your entities. To prevent that from happening while the entity is being populated, you can introduce a volatile flag in BaseEditableEntity that the entity sets when data population begins and clears when it ends. The aspect then reads this flag to decide whether or not it should proceed with its execution.
Another issue to watch out for is the isNew flag (have a look at BaseEditableEntity's default constructor). Basically, when a new editable object is created, isNew becomes true, indicating the object is new; when asked whether it has changed (perhaps to decide if it should be persisted), it will say yes. However, an editable entity built from a read-only one (or a database cursor) isn't new, so when loading finishes you'll have to flag the entity as "old".
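Both points can be sketched in BaseEditableEntity roughly as follows; beginLoad and endLoad are hypothetical names for the hooks the entity would call around its loadData:

```java
// Sketch of BaseEditableEntity from part 2, extended with the population
// guard and the "old" flag described above. Method names are assumptions.
class BaseEditableEntity {

    protected boolean isNew = false;

    // volatile: woven aspect code may observe this flag from another thread
    private volatile boolean skipInterceptors = false;

    protected BaseEditableEntity() {
        isNew = true;
    }

    public final boolean getSkipInterceptors() {
        return skipInterceptors;
    }

    /** Entity calls this before reading from the cursor / read-only source. */
    protected final void beginLoad() {
        skipInterceptors = true;
    }

    /** Entity calls this when loading finishes: aspects resume, object is "old". */
    protected final void endLoad() {
        skipInterceptors = false;
        isNew = false;
    }

    public final boolean isNew() {
        return isNew;
    }
}
```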

Validation

The ultimate goal of validation is to ensure that the values of an entity's data attributes comply with certain predefined rules before the entity is persisted[1]. This simply means that validation only applies to single[2] editable entities.

We won't try to reinvent the wheel, since JSR-303 has already standardized validation. What we'd like to do, however, is find a way for entities to report their state of validity when asked. The solution should be obvious by now: introduce an aspect that validates any field annotated with annotations from "javax.validation.constraints" when it is changed, and records the "broken rule" in the entity itself.

The following snippet shows the basic validation aspect.

import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.Signature;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class AspectValidation {

    @Around("set(@(javax.validation.constraints..*) * *) && this(entity) && args(newVal)")
    public void aroundSetField(ProceedingJoinPoint jp, BaseEditableEntity entity, Object newVal)
            throws Throwable {

        jp.proceed(); // let the assignment happen; validate the new value afterwards

        if (entity.getSkipInterceptors())
            return;

        Signature signature = jp.getSignature();
        String fieldName = signature.getName();

        // build default validator factory
        Validator validator = Validation.buildDefaultValidatorFactory()
                .getValidator();

        Set<ConstraintViolation<BaseEditableEntity>> constraintViolations = validator
                .validateProperty(entity, fieldName);
        for (ConstraintViolation<BaseEditableEntity> violation : constraintViolations) {
            entity.brokenRules.add(BrokenRule.createInstance(
                    fieldName, violation.getMessage()));
        }
    }
}
Snippet 3: Validation aspect

The following points require clarification:

  1. getSkipInterceptors refers to what I pointed out in the previous section regarding the population of editable entities.
  2. brokenRules is a List of BrokenRule in the BaseEditableEntity.
  3. BrokenRule is simply a class with two fields: the name of the entity field with which a validation violation is associated, and the violation message itself.
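Point 3's BrokenRule could look roughly like this; the getters are my assumptions, and the factory method matches the createInstance call the aspect makes:

```java
// Sketch: ties a validation violation message to the offending field.
final class BrokenRule {

    private String fieldName;
    private String message;

    private BrokenRule() {
    }

    /** Factory method, mirroring the style of the entities' factory methods. */
    static BrokenRule createInstance(final String fieldName, final String message) {
        BrokenRule obj = new BrokenRule();
        obj.fieldName = fieldName;
        obj.message = message;
        return obj;
    }

    String getFieldName() { return fieldName; }
    String getMessage() { return message; }
}
```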


This concludes the "Responsible Entities" series. Please feel free to take it apart and point out the shortcomings.

[1] Entities that are loaded from the database and not changed since are assumed to be valid (in terms of rules), simply because the records in the database are.
[2] The assumption we made in part 2 about collection entities being solely containers still stands. If that's not the case (i.e. they have data attributes that need validation), the approach explained here for validating single entities applies to them as well.

Monday, August 19, 2013

Responsible Entities - Tracking and reporting changes

In the introductory episode, I wrote about an idea for an architectural model to create data entities that exhibit data-related behaviors, particularly Population, Validation, and Tracking and Reporting Changes, without using JPA. In this episode we take a closer look at the model and at the implementation of "tracking and reporting changes".

Basics

In this model, entities are divided into two categories: those that are not going to be persisted back to the database and are usually used to transfer data to higher layers of the application stack (e.g. to display a list), and those that will potentially be persisted back to the database. In other words, "Read-only" and "Editable" ("Persistable") entities. Immutability is one of the key characteristics of read-only entities. A "search result" is a good example of a read-only entity. When an item within the search result is selected to be modified (and saved), an editable entity is used. It isn't yet clear from this description how an item in a read-only entity (a search result) can be edited and persisted, which brings me to the other aspects of this model:
  1. In the search result example, the result is a "collection" of items, each of which is a read-only entity as well. That's essentially another way of categorizing entities: "Collection" and "Single". Table 1 summarizes the categories.
  2. In order to persist information (using entities), one should always use editable entities. In the search result example, when an item is selected to be modified, an editable entity can be created either by making a copy of the selected read-only entity (in the presentation tier) or by reloading the selected item's data from the database (using some sort of unique identifier) into an editable object. The approach depends on the design and the frequency of updates: we are either "optimistic" that the data hasn't changed since the search result was created or, well, we aren't. Either way, the point I'm trying to make is that an entity's persistence characteristics don't change. A read-only entity will never become editable, nor can editable entities be aggregated into read-only collection entities.

Table 1: Categorising Data Entities
|                       | Read-only Singles             | Editable Singles              |
| Read-only Collections | Comprised of / Aggregate into | Not allowed                   |
| Editable Collections  | Not allowed                   | Comprised of / Aggregate into |

Putting it all in a class diagram, we have the following two figures.

Fig.1: Base classes for Editable entities
Fig.1 depicts the two abstract base classes for individual (single) and collection editable entities, implementing (realizing) the "Editable Entity" interface with which we mandate the basic necessities of an editable class: tracking changes and validation.

Fig.2: Base classes for Read-only entities

Fig.2 shows the read-only counterparts.

Upon first glance, you might notice the following aspects of the model (the "WHATs"):

  1. These 4 classes are good candidates for being abstract (which they are).
  2. Single editable entities track changes differently from collection editable entities. Single entities have a built-in attribute called "changes" in which changes to other attributes are recorded (I'll explain the "HOW" shortly), whereas collection entities track added and removed items.
  3. Collection classes are generic, with BaseEditableEntity members (single editable entities) for editable collections and BaseEntity members (single read-only entities) for read-only collections.

Tracking and Reporting Changes

As mentioned in the previous part, tracking changes in data entities has proven useful for implementing UI interactions as well as for addressing concurrency issues. For instance, in the search result example, if the user chooses to edit and save an item, you might want to know whether something was actually changed. Or you might want to persist only the modified fields in order to mitigate the risk of concurrency issues (if you are using an optimistic concurrency control method). It goes without saying that tracking changes only applies to editable entities.

Single Editable Entities

Single editable entities are either created from scratch and persisted (new objects) or loaded (from a read-only entity or the database), edited, and then persisted (existing objects).
A new object doesn't need much tracking, except that it should be marked "new". Hence BaseEditableEntity has an "isNew" attribute, which is false by default and is set to true in its protected constructor. BaseEditableEntity's constructor is then called from single editable entities' constructors.

package ca.amir.entities;

import java.io.Serializable;

public abstract class BaseEditableEntity implements Serializable, EditableEntity { 

 protected boolean isNew = false;

 protected BaseEditableEntity() {
  isNew = true;
 }
}
Snippet 1: The isNew attribute is used to flag a new object built from scratch

As for existing objects, single entities inherit an internal Map<String, Object> collection called "changes" in which the key is the name of the changed attribute and the value is its new value.

To populate the collection as the entity's attribute values change, the first method that comes to mind is a function to which we pass the field name and the new value. Although workable, this approach requires calling the function everywhere an attribute is set. I prefer using Aspect-Oriented Programming instead: seamlessly intercepting the calls that set the value of an attribute and, if necessary, registering the change with the "changes" Map. We may also want the flexibility of tracking changes only in certain attributes of an entity. Hence, we demarcate the attributes we'd like to track with a custom annotation and have our change-tracking aspect intercept only the calls that set the annotated attributes. The following code snippets show how the annotation and AOP (utilizing AspectJ) work together for this purpose.
package ca.amir.aspects;

import org.aspectj.lang.Signature;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.ProceedingJoinPoint;

import ca.amir.entities.BaseEditableEntity;
import java.lang.reflect.Field;

@Aspect
public class ChangeTrackingAspect {

 @Around("set(@ca.amir.entities.ChangesTraced * *) && target(t) && args(newVal)")
 public void aroundSetField(ProceedingJoinPoint jp, Object t, Object newVal)
   throws Throwable {
  Signature signature = jp.getSignature();
  String fieldName = signature.getName();
  Field field = t.getClass().getDeclaredField(fieldName);
  field.setAccessible(true);
  Object oldVal = field.get(t);

  if ((oldVal == null ^ newVal == null)
    || (oldVal != null && !oldVal.equals(newVal))) {
   ((BaseEditableEntity) t).objectChanged(fieldName, newVal);
  }

  jp.proceed();
 }
}
Snippet 2: The aspect that intercepts calls

The @Around annotation marks the "aroundSetField" function of the aspect class to run when the value of an attribute of any name and any type annotated with the "ChangesTraced" annotation (Snippet 3) is set ( set(@ca.amir.entities.ChangesTraced * *) ). The function compares the attribute's old and new values and, if they differ, calls the "objectChanged" method of BaseEditableEntity (Snippet 4), which registers the change.

package ca.amir.entities;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

/**
 * Custom annotation used to demarcate all attributes
 * of an editable entity for which changes are traced
 */
@Retention(value=RetentionPolicy.RUNTIME)
public @interface ChangesTraced {
}
Snippet 3: "ChangesTraced" custom annotation

package ca.amir.entities;

import java.io.Serializable;
import java.util.Map;
import java.util.HashMap;

public abstract class BaseEditableEntity implements Serializable, EditableEntity { 

 protected Map<String, Object> changes;
 protected boolean isNew = false;

 protected BaseEditableEntity() {
  isNew = true;
  changes = new HashMap<String, Object>();
 }

 public final void objectChanged(String fieldName, Object newValue) {
  changes.put(fieldName, newValue);
 }
}
Snippet 4: "objectChanged" function registers the change


There is room for a few improvements here, which I'll point out and leave the implementation to readers:
  1. This data structure doesn't take an undo function into consideration. For instance, in a multistage wizard-like form, the user may go back a few steps and undo a particular change before submitting the entire form. In that case we'd want to revert the corresponding recording.
  2. The "objectChanged" function, which is invoked by the aspect when a change occurs, is generic and isn't aware of the context. If your tracking requirements extend to the editable entities themselves, this method won't suffice. That is, if a context-specific action is required in addition to recording the change, this centralized function isn't the right place to implement it. Hint: Observer pattern.
  3. Different layers in the application stack will most likely want to know about the changes registered in an entity: the presentation tier for interaction purposes, the data tier for persistence purposes. Usually a generic way of reporting changes (e.g. an iterator method) suffices for all layers. However, there are times when different layers require different views of those changes (as if the entity maintained a different contract with each layer). For instance, there might be an optional write-only data attribute in the entity which obviously shouldn't be reported among the changes when the presentation tier asks, yet should be reported when changes are being persisted to the database. This level of sophistication is only required when you don't control all the layers.
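The generic reporting method from point 3 could be as simple as a read-only iterator over the "changes" Map, so no layer can tamper with the recorded changes. The method name is my assumption:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch of BaseEditableEntity with a read-only change-reporting method.
class BaseEditableEntity {

    protected Map<String, Object> changes = new HashMap<String, Object>();

    public final void objectChanged(String fieldName, Object newValue) {
        changes.put(fieldName, newValue);
    }

    /** Generic, read-only view of the recorded changes for any layer. */
    public final Iterator<Map.Entry<String, Object>> changeIterator() {
        return Collections.unmodifiableMap(changes).entrySet().iterator();
    }
}
```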

A word on reuse

You might encounter single editable entities with very similar read-only counterparts; for example, a read-only entity to display profile information (fetched from the database) and an editable one to edit and update the profile (back to the database). It's very tempting to invent ways to share the data attributes of the two entities, for instance by aggregating a third class containing the shared attributes into both entities. Although this enables us to reuse the attributes, it couples the read-only and editable entities. Coupling isn't a problem as long as it doesn't affect the fundamentals of the model; using a third class, however, does. Attributes of single editable entities that are tracked for changes must be annotated with the custom annotation, which isn't the case in read-only entities. Even though read-only entities will ignore the annotation and this contradiction will work for the time being, it should be avoided, since it indicates an extensibility issue in the model.

Collection Editable Entities

Here I've made the assumption that collection editable entities are containers with no data attributes that need tracking. An example should clarify this.

Fig.3: "OrderLines" collection is merely a container
Fig.3: "OrderLines" collection is merely a container

In this example, "OrderLines" is merely a container with no data attributes of its own which we'd want to track. However, if you have collection entities with data attributes for which you'd want to track changes, then in addition to the change tracking method I'll explain in this section you need to incorporate the method explained in previous section (using AOP and annotation) into your collection entities, as well.

Collection entities with such characteristics only need to track "removed" items, because collection members track their own status when they are "new" or "modified". Hence, in addition to the inherited "rows" collection in which members are stored, each collection editable entity has a "removed" collection. Removed items are simply moved from "rows" to "removed" (the "remove" functions) and back when a remove is undone (as with single entities, the undo function is currently missing).
package ca.amir.entities;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public abstract class CollectionBaseEditableEntity<T extends BaseEditableEntity> implements Serializable, EditableEntity {

    protected List<T> rows = new ArrayList<T>();
    protected List<T> removed = new ArrayList<T>();

    public final T remove(int index) {
        T obj = rows.remove(index);
        removed.add(obj);
        return obj;
    }

    public final boolean remove(T itemToRemove) {
        if (rows.remove(itemToRemove))
            return removed.add(itemToRemove);

        return false;
    }
}
Snippet 5: Tracking removed items
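The missing undo would simply move an item back from "removed" to "rows". A self-contained sketch (the class is a simplified stand-in for CollectionBaseEditableEntity, and the undoRemove name is my assumption):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for CollectionBaseEditableEntity, just to show undo.
class CollectionSketch<T> {

    protected List<T> rows = new ArrayList<T>();
    protected List<T> removed = new ArrayList<T>();

    public final boolean remove(T itemToRemove) {
        if (rows.remove(itemToRemove))
            return removed.add(itemToRemove);
        return false;
    }

    /** Undoes a remove: the item goes back into the live rows. */
    public final boolean undoRemove(T itemToUndo) {
        if (removed.remove(itemToUndo))
            return rows.add(itemToUndo);
        return false;
    }
}
```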

XML Serialization

All entities are serializable. However, if XML serialization is one of your requirements (e.g.: JAX-WS) then you need to demarcate the entities with proper annotations. Please see @XmlType, @XmlAccessorType and @XmlElement for more information.
Also, since the collection entities' internal collection attributes ("rows" and "removed") are generic, their actual type is lost during serialization (type erasure). To get around this issue, CollectionBaseEditableEntity has "transient" getter methods that have to be overridden by entities and annotated with the actual type.
package ca.amir.entities;

import java.io.Serializable;
import java.util.List;
import javax.xml.bind.annotation.XmlType;
import javax.xml.bind.annotation.XmlTransient;

@XmlType(name = "CollectionBaseEditableEntity", namespace="http://entities.amir.ca")
public abstract class CollectionBaseEditableEntity<T extends BaseEditableEntity> implements Serializable, EditableEntity {

    protected List<T> rows;
    protected List<T> removed;
 
    @XmlTransient
    public List<T> getElements() {
     return rows;
    }
}
Snippet 6: Transient getter method for "rows" to be overridden by entities

package ca.amir.entities;

import java.util.List;
import javax.xml.bind.annotation.XmlType;
import javax.xml.bind.annotation.XmlElement;

@XmlType(name = "OrderLines", namespace="http://entities.amir.ca")
public class OrderLines extends CollectionBaseEditableEntity<OrderLine> {

    @Override
    @XmlElement(type=OrderLine.class)
    public List<OrderLine> getElements() {
     return rows;
    }
}
Snippet 7: Overridden getter method for "rows"

Snippets 6 & 7 depict how "getElements" is overridden and annotated with the actual type of the members ("OrderLine") in a child collection ("OrderLines").

Mapping the Map

java.util.Map doesn't naturally map to an XML representation. That is, annotating it as an XmlElement won't produce an XML form from which you can build the same object back (unmarshal or deserialize). One way around this is custom marshalling by means of a custom XmlAdapter. However, since I use quite a few different types of Map, I decided to create a generic XmlAdapter to minimize the amount of code written for each one. The next three snippets show the generic adapter, its MapType class, and the member (Entry) class of the MapType class.

import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import javax.xml.bind.annotation.adapters.XmlAdapter;

public class GenericMapAdapter<K, V> extends
        XmlAdapter<MapType<K, V>, Map<K, V>> {

    @Override
    public final MapType<K, V> marshal(final Map<K, V> v) throws Exception {
        MapType<K, V> obj = new MapType<K, V>();
        for (Entry<K, V> entry : v.entrySet())
            obj.entry.add(MapTypeEntry.createInstance(entry.getKey(),
                    entry.getValue()));
        return obj;
    }

    @Override
    public final Map<K, V> unmarshal(final MapType<K, V> v) throws Exception {
        Map<K, V> obj = new HashMap<K, V>();
        for (MapTypeEntry<K, V> typeEntry : v.entry)
            obj.put(typeEntry.key, typeEntry.value);
        return obj;
    }
}
Snippet 8: Generic XmlAdapter


import java.util.ArrayList;
import java.util.List;

import javax.xml.bind.annotation.XmlElement;

public final class MapType<K, V> {

    @XmlElement
    final List<MapTypeEntry<K, V>> entry = new ArrayList<MapTypeEntry<K, V>>();
}
Snippet 9: Generic XmlAdapter's MapType class


import javax.xml.bind.annotation.XmlElement;

public final class MapTypeEntry<K, V> {
    @XmlElement
    public K key;

    @XmlElement
    public V value;

    private MapTypeEntry() {
    }

    public static final <K, V> MapTypeEntry<K, V> createInstance(final K k, final V v) {
        MapTypeEntry<K, V> obj = new MapTypeEntry<K, V>();
        obj.key = k;
        obj.value = v;

        return obj;
    }
}
Snippet 10: Member (Entry) class of the MapType class


What's left to do is to use the custom adapter in conjunction with XmlJavaTypeAdapter to annotate a Map-typed attribute. You won't, however, be able to use the generic XmlAdapter directly, simply because you can't get a class literal from a generic type:

@XmlJavaTypeAdapter(GenericMapAdapter<String,Object>.class)
                                                    ^
illegal start of expression
Map<String, Object> changes;

The solution is, you guessed it, a subclass for each Map type.

public final class StringObjectMapAdapter extends
        GenericMapAdapter<String, Object> {
}


...................


@XmlJavaTypeAdapter(StringObjectMapAdapter.class)
Map<String, Object> changes;
Snippet 11: StringObjectMapAdapter - An XmlAdapter for a Map<String,Object>


Using Maven

If you use Maven to build and package, you'll need AspectJ's Maven plugin to compile the aspects. By default, however, the plugin will pick up all .java and .aj files in the project's source directories. To avoid this, and also to keep Maven's compiler plugin from compiling the aspects, we configure the two plugins to pick up the appropriate sources: the AspectJ plugin compiles the aspects and the compiler plugin compiles the non-aspect classes.

I do this sort of configuration in a company / project wide super POM. The following code shows the configuration.


<pluginManagement>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.4</version>
            <configuration>
                <source>1.6</source>
                <target>1.6</target>
                <encoding>UTF-8</encoding>
                <excludes>
                    <exclude>**/aspects/*</exclude>
                </excludes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>aspectj-maven-plugin</artifactId>
            <version>1.3</version>
            <configuration>
                <verbose>true</verbose>
                <complianceLevel>1.6</complianceLevel>
                <showWeaveInfo>true</showWeaveInfo>
                <!-- a directory, not a glob; adjust to your aspects' location -->
                <aspectDirectory>src/main/java/ca/amir/aspects</aspectDirectory>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>test-compile</goal>
                    </goals>
                </execution>
            </executions>
            <dependencies>
                <dependency>
                    <groupId>org.aspectj</groupId>
                    <artifactId>aspectjrt</artifactId>
                    <version>1.6.10</version>
                </dependency>
                <dependency>
                    <groupId>org.aspectj</groupId>
                    <artifactId>aspectjtools</artifactId>
                    <version>1.6.10</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</pluginManagement>
Snippet 12: Configuration of maven plugins

This configuration instructs the compiler plugin to ignore any Java file under the "aspects" directory, and the AspectJ plugin to attach itself to the "compile" and "test-compile" goals and build only the aspects directory.

In the next part we'll discuss validation and population.

Monday, July 01, 2013

Responsible Entities - Introduction

When C# is my technology of choice for designing a middle tier (and I have the chance to work it out from scratch), I often end up using an open-source architectural framework called CSLA.NET. (This post isn't about C# or CSLA, though. Indulge me a few moments as I try to set the context.) Created by Rockford Lhotka, CSLA.NET essentially helps you build an object-oriented business layer by standardizing the implementation of business objects. I usually build a hierarchy of CSLA-based business objects for each domain entity, in which I encapsulate business logic (business rules, validations, authorization rules if I use any, etc.) and data. CSLA is a lot more than a series of base classes from which you inherit your business objects (its capabilities fill a number of books). Suffice it to say that it offers distribution within the middle tier in an easy and configurable way and facilitates UI interactions (e.g. data binding, sorting, N-level undo) within the boundaries of Microsoft technologies (e.g. ASP.NET MVC).
Mapping CSLA to EJB 3.0, each CSLA business object is the equivalent of a Session Bean (a distributed business object that doesn't contain data) plus an Entity (a POJO with no business logic). To a lot of developers who work with EJB 3, creating data entities means using JPA. And that, to some extent, is understandable: JPA introduced a much leaner approach to data persistence than Entity Beans, and it's usable within Java SE environments as well as within Java EE.
But then the question arises: what if one can't, or doesn't want to, use JPA? For instance, you might be required to adhere to the Java EE 5 specification, where JPA was initially introduced and thus lacks features you might need.
Based on the lessons learned from CSLA, I'd like to build a model for entities in which:
  1. Similar to JPA, entities are annotated POJOs.
  2. Unlike JPA, the core data-related tasks (Population, Validation, and Tracking and Reporting Changes) are delegated to the entities themselves (validation was introduced in Java EE 6 with JSR 303). The advantage of this approach is that you don't have to annotate the entities with details of data-layer objects such as table or procedure names. Also, entities that track and report their changes not only fulfill data-persistence requirements but also prove useful in implementing UI interactions.
In the next few posts, I'll explain the implementation details of the different aspects (Tracking Changes, Validation, and Data Population) of the model.

Friday, May 31, 2013

Preferences Need No Inferences

I decided to write this post when I came across a website with a background image on its homepage and in the header of each internal page! And not just any image, but children's hand-drawn ones (a skeuomorph of a child's drawing notebook). Although this trend is out of fashion, I'd like to use the opportunity to explain why it isn't such a good idea from both usability and technical points of view. In rare cases the background image actually is the content; those cases are not factored into this post.

Usability

A research paper titled "Attention web designers: You have 50 milliseconds to make a good first impression!"1 describes three studies conducted to find out how quickly people form an opinion about a web page's visual appeal.

    In the first study, participants twice rated the visual appeal of web homepages presented for 500ms each. The second study replicated the first, but participants also rated each web page on seven specific design dimensions. Visual appeal was found to be closely related to most of these. Study 3 again replicated the 500ms condition as well as adding a 50ms condition using the same stimuli to determine whether the first impression may be interpreted as a 'mere exposure effect' (Zajonc 1980). Throughout, visual appeal ratings were highly correlated from one phase to the next as were the correlations between the 50ms and 500ms conditions. Thus, visual appeal can be assessed within 50ms, suggesting that web designers have about 50 ms to make a good first impression.

Although visual design and a good first impression are highly correlated, in order to get the visual design right you need to think about what the visual design is actually presenting. Visual design is where the arrangement of interface elements such as content and navigation (the skeleton) is presented. And the skeleton presents a much more abstract aspect of your website: interaction design. Interaction design, I believe, is best explained by Jesse James Garrett in his excellent book "The Elements of User Experience".

    Any time a person uses a product, a sort of dance goes on between the two of them. The user moves around, and the system responds. Then the user moves in response to the system, and so the dance goes on.
....
But every dancer will tell you that for the dance to really work, each participant must anticipate the moves of the other.

Putting this in the context of the 50 ms rule, the visual design ought to be clear enough for the user to quickly figure out the first or the next move. How to achieve that is beyond this post. Suffice it to say that typography, choice of colors, consistency, and contrast are the elements of visual design to which you need to pay close attention.

Using a background image limits your choice of colors throughout the page to a great degree. It also distracts the user's attention from the major tasks made available to them (e.g.: the navigation menu, call-to-action boxes) when figuring out the next move. In other words, it decreases the contrast of those tasks. However, if the image is not in the background, it can be cleverly placed and used as a means to increase the contrast of major tasks with far less distraction. Let's consider Skype as an example.

Skype's homepage in a PC browser
Skype - An example of using image to increase the contrast of major tasks
The image on Skype's homepage isn't designed to captivate the user's attention but to redirect it to the important elements of the page:

  1. Navigation bar on top with major tasks for users who have used Skype before
  2. The "Join" circle (with the help of big blue circle) to, I guess, emphasize an important goal and increase the chance of conversion
  3. And the element immediately below the image ("Learn about Skype") for users who have an idea what Skype is but haven't used it before and want more information. In this case, the page behind the "Learn about Skype" link does it justice. 
Moreover, the image is toned well and is not distracting.

The picture above shows Skype's homepage in a PC browser. The image below, however, shows how it looks on tablets and smartphones.

Skype's homepage in a tablet browser
Skype uses Responsive Web Design
Note the design uniformity and the emphasis on the "account creation" goal. Interestingly, “Buy Skype Credit” is chosen to be one of the visible elements in this mode. Perhaps their analytics data shows smartphone users are more interested in making long distance phone calls (Skype To Go) or using Skype’s Wi-Fi hotspots, both of which require Skype credit.


Technical

If you are using HTML5 along with scalable vector graphics, you are probably fine. Looking at IE's HTML5 scorecard (IE scoring the lowest and thus being the least supportive of HTML5), it can be seen that SVG is supported.
But that's a big "if". Using HTML5 limits your browser and device support. So if you have to support a broader (not necessarily older) set of browsers2, it's likely that you can't use HTML5. As a result, you need to select an image big enough that it looks relatively sharp in all possible (or acceptable) resolutions, which means heavier pages. It also has to look right in different orientations.



1. GITTE LINDGAARD, GARY FERNANDES, CATHY DUDEK and J. BROWN - Human Oriented Technology Lab, Carleton University, Ottawa, Canada - Behaviour & Information Technology, Vol. 25, No. 2
2. Your Mental Model (or any derived artifact) could clue you in on how IT-savvy your users are and, consequently, what sort of devices or browsers they might use. Or you might've already profiled the existing users (e.g.: Google Analytics).

Saturday, April 20, 2013

Microsoft hurt itself with Windows 8 (or did it?)

IDC's press release on April 10th suggests a decline of almost 14% in PC shipments in the first quarter of 2013 compared to the same quarter in 2012 and, at the same time, the increase in the sale of tablets and smart phones. It also suggests that even the introduction of Windows 8 hasn't made any difference and, on the contrary, has slowed the market. The report continues to explain why:

The radical changes to the UI, removal of the familiar Start button, and the costs associated with touch have made PCs a less attractive alternative to dedicated tablets and other competitive devices. Microsoft will have to make some very tough decisions moving forward if it wants to help reinvigorate the PC market.

Now, those of you who follow Adaptive Path's UX Week might have come across "Story of Windows 8" by +Jensen Harris, director of program management for the Windows user experience team. His presentation is about the design principles behind Windows 8. The core presentation starts with a key question that product managers of Windows asked themselves back in 2009: "is familiarity always the element that keeps a product relevant; a winner?" (paraphrasing, of course), while admitting that Windows is arguably the most familiar experience in the world. The presentation goes on to demonstrate examples that suggest otherwise.

It doesn't take a market research expert to connect the dots in this case. Microsoft, I believe, had realized that the PC market was (and would be) challenged by emerging markets, and that in order to remain a major player it had to recognize the differences between experiences such as those of PC and tablet users. Whether they predicted this loss is not known to me. I'm only going to guess that they did, and that they have been working with their partners to innovate further, though maybe not so much in the PC market.

I personally like what Microsoft has done with Windows 8. Although the tactile experience is missing on PCs with traditional means of input (keyboard and mouse), the craftsmanship as well as the efficiency of Windows 8 is enough for me to have at least one copy at home.

Saturday, April 13, 2013

Mental Models

One of the companies I'm associated with is going through a major website redesign project. My responsibilities, as an architect, are divided between back-end duties (e.g.: design reviews of enterprise components) and participating in review sessions for the many artifacts of Information Architecture and Content Strategy delivered by an external vendor. I realized then that it might be worthwhile to write a brief for project members about the origin of these artifacts and some of the decisions around user experience: the Mental Model.
Two books were used as primary sources for this brief, both of which I definitely recommend to every information architect, web designer, or project manager: Mental Models by Indi Young and The Elements of User Experience by Jesse James Garrett.

"The deepest form of understanding another person is empathy which involves a shift from observing how people seem on the outside to imagining what it feels like to be them on the inside".1

Empathy with a person is distinct from studying how a person uses something. In the context of application design, empathy extends to knowing what the person wants to accomplish regardless of whether he/she is aware of the solution.
A mental model gives one a deep understanding of people's motivations and thought processes, along with the emotional and philosophical landscape in which they operate. It's simply an "affinity diagram"2 of behaviors, made from data gathered from audience representatives.
Most research techniques can be categorized into three groups (Table 1). The mental model is a generative research technique that allows the researcher to create a framework based on data from participants. This framework can be used to guide information architecture and interaction design. Aligning the functionality of a proposed solution with the mental model reveals gaps and shows how well features and functions support users in achieving their goals.


Preference (opinions, likes, desires)
  • Techniques used: Survey, Focus Group, Mood Boards, Preference Interviews, Card Sort, Customer Feedback
  • What is it good for? Visual Design, Branding, Market Analysis, Advertising Campaigns
Evaluative (what is understood or accomplished with a tool)
  • Techniques used: Usability Test, Log Analysis, Search Analytics, Card Sort, Customer Feedback
  • What is it good for? Interaction Functionality, Screen Layout, Nomenclature, Information Architecture
Generative (mental environment in which things get done)
  • Techniques used: Non-directed Interviews, Contextual Inquiry, Mental Model, Ethnography, Diary
  • What is it good for? Navigation and Flow, Interaction Design, Alignment and Gap Analysis, Contextual Information, Contextual Marketing
Table 1 - Research Types matrix

To create a mental model, one needs to collect actual users' perspective and vocabulary. Essentially, you interview users and analyze the conversations to create a diagram called Mental Model Diagram; a process of Interview-Comb-Group in which audience representatives3 are interviewed, interviews are combed for tasks4, and eventually grouped5 to form a diagram.

Why use mental models?
Three main reasons: Confidence in your design, Clarity in direction, and Continuity of strategy (3 Cs).

Confidence in your design
A mental model gives your team the confidence that what they design is founded on research. Management knows that the product of that design will be a success. And since it respects some of the users' philosophies and emotions, it'll make sense to them.
A mental model can be used to validate ideas and requests for change. If a change request doesn't match a behavior in the mental model, it can be adjusted or respectfully rejected. A mental model also helps in avoiding politics. It can be used as an impartial evaluator in discussions over design decisions; solid data replacing one's circumstantial interpretation of problems.
Moreover, mental models represent the entirety of each audience segment's environment. That is, the mental model becomes a means to distinguish among the solutions required to provide good enough coverage and support for those segments. If audience segments have a lot in common, a single solution might suffice. On the other hand, a distinctly different segment demands its own solution.

Clarity in direction
While designing a solution or product, you should not only care about user experience but also align design decisions with the organization's business strategies. In other words, a potential design idea can't evolve in isolation. Thus, decisions about user experience ought to be part of a bigger scheme: the "Whole Experience". Essentially, a design decision should be assessed for its impact on all the ways an organization interacts with its users. Jesse James Garrett describes the phrase Experience Strategy accordingly: Experience Strategy = Business Strategy + User Experience6. A mental model helps you discover the gaps in the existing user experience given your business strategy and, vice versa, find out what your business strategy looks like given the existing user experience.

Continuity of strategy
Since a mental model provides a clear direction, it naturally becomes a means to prioritize solutions. Once you know what your business strategy should look like to support users better and sustain itself7 (or what the gaps are in users' whole experience with your organization), new ideas will begin to emerge, and some solutions will no longer make sense or will be pushed further down the solution stack. In summary, a mental model with which solutions are aligned becomes a strategy roadmap.
Furthermore, a mental model becomes a place where decision history and rationale are recorded. It helps you preserve internal knowledge and becomes a foundation for decisions to come.


1. "Difficult Conversations" by Douglas Stone, Bruce Patton, and Sheila Heen
2. http://en.wikipedia.org/wiki/Affinity_diagram
3. You'll have to start by finding what are called "Task-Based Audience Segments". The process involves finding groups of people in your audience who do similar things. From each group, depending on the research project's scope and stakeholders' priorities, a few types of personalities are chosen to represent it. This is followed by a series of recruitments to find actual people who meet the criteria and are articulate enough for interviews.
4. Finding tasks is not as simple as finding verbs in sentences. People aren't always specific in conversation and use things like tone of voice and gesture to depict a meaning. In this context, "task" refers to everything that comes up when a person accomplishes something; actions, thoughts, feelings, philosophies, and motivations.
5. Tasks are then grouped into towers, and towers are grouped into the mental segments of a mental model diagram.
6. http://blog.jjg.net
7. Sustainability may not always be a very important criterion. However, the cost of support for an organization without clear experience strategy could bring it to its knees. In the context of application development and maintenance the cost of support includes frustration and job satisfaction rate of staff at all levels and consequently innovation rate. For example, in public sector related businesses, lack of innovation and use of modern technology is recognized as an important issue (Citizen Experience: Designing a New Relationship with Government).

Tuesday, October 07, 2008

Fact checking of an Agile development exercise

Recently I have been working with a development team on various sub-projects of a bigger project. New features were being introduced in the form of new sub-projects, and the team was eager to adopt Agile development. So we decided to apply it in one of those sub-projects in order to get the team familiar with the concept and, hopefully, apply it to all future development projects.
I’ll briefly describe the conditions in which the development of the Agile sub-project and a similar non-Agile project were carried out, list their outcomes, and leave the judgment to you. 


Project A:
  • Scope is small.
  • Requirements are captured in the form of a detailed written specification provided by the Product Management team (it took them one week to finish). This was the usual practice in the entire company; that is, no development starts without a written spec.
  • The development team goes through the document (which is at least 50 pages) to find out the areas that are not very clear and clarify them with the PM team. They also estimate the development time for the features requested in the sub-project and negotiate the features with the stakeholders (A lot of buffer is used in estimates to mitigate risks).
  • The development team shares the requirements with the Q.A team and starts development. Whenever a major feature is developed (iteratively), the development team releases the unit-tested code to the Q.A team. This continues until the sub-project is complete. No change request is accepted while development is ongoing.
Project B:
  • Scope is small.
  • Requirements are captured in the form of sketches and domain models along with a list of identified functions (the preliminary copy was ready in three days). The development team and Q.A team have a couple of meetings with the Product Management team afterward to make sure they understand the models and the priority of the features. They then start designing the overall architecture with the help of the Architecture team, finding the reusable components and developing the required base code, while the Product Management team works on the details of some of the high priority features. The Q.A team develops the test plan and required test cases. These three teams constantly update each other during this process.
  • Product managers evaluate the prototype and come up with a list of adjustments. In the meantime, Dev and Q.A teams study the detail document of the high priority features provided by the PM team to come up with more accurate estimate for high priority features.
  • Teams discuss the changes, the detail document and the estimates.
  • Dev team continues the development to solidify the architecture and frequently releases the developed and unit-tested code to the Q.A team to integrate and test. When all the bugs are resolved for each build, it will be released to the product managers for evaluation and assessment.
  • Product managers can ask for changes before the architecture is frozen, which usually happens after the second release. In return, the Dev team has the opportunity to negotiate features.
Outcome of Project A:
  • Some features could not make the deadline, and some were not accepted by product managers as they had been misunderstood by the Dev team.
Outcomes of Project B:
  • Estimates were ready earlier and were more accurate, too, as they were based on the actual development work the Dev team had already done.
  • More angles of the requirements were clear to the Dev and PM teams because of the presence of the Q.A team in all the discussions from the beginning. Amazingly, in some cases Q.A team members knew more about the requirements than even the PMs. That’s mainly because they had been testing other sub-projects and were thus familiar with the overall requirements. Besides, they always ask the best questions. I believe that’s because they think about test cases and scenarios in advance.
  • The PM team was very happy about the outcome, since all the important, high priority features that could be finished in the given time and were agreed upon were present and entirely tested. They were also happy about the fact that clients could start adopting the new features earlier.
I don’t think I need to explain which project was more successful. 

Before closing this post, I'd like to highlight a few of the pitfalls of agile development that I see or hear frequently.
  • Pure modeling and sketching is not always a suitable way of gathering requirements. If you have distributed teams, challenging stakeholders, or very complex requirements, then you might want to spend a bit more time digging and documenting.
  • Using TDD (Test Driven Development) doesn’t mean that you don’t need a Q.A team or downstream black-box testing, no matter how many unit tests developers write or how much white-box testing they run.
  • Change adoption doesn’t mean that after every release stakeholders can come up with change requests. Change can be accepted and handled before the architecture is stable. So it’s the development team’s responsibility to release the code as frequently as possible in the form of an executable application to the stakeholders, and it’s the stakeholders’ responsibility to assess the released application and come up with change requests as early as possible. That means, if you have a stakeholder who has no idea what he wants (and believe me, there are such stakeholders), you need to spend more time building prototypes and exploring requirements before you stabilize the architecture.
  • The project chosen for the team to exercise Agile development was relatively small. That’s because it was their first time doing iterative development in an Agile manner and I wanted to mitigate the potential risks imposed by failure of the project. Also, they had an agile practitioner on their team; me! Don't do this without a coach.