# Friday, 06 September 2013

Not to be confused with "rapper" class.

The concept of a "wrapper class" occurs frequently in Force.com development. Object oriented design encourages the use of classes to encapsulate data and behaviors that may mutate the data, or interact with other classes.

The goals of an Apex wrapper class are:

  • Decouple data (records) from behaviors (methods)
  • Validate inputs
  • Minimize side effects
  • Enable unit testing
  • Handle exceptions

This article provides an example wrapper class template to address these goals and explores each aspect of the wrapper class individually.

Register for Dreamforce 13 and attend my session on "Leap: An Apex Development Framework" for training on how to use the Leap framework to generate wrapper classes, triggers, and more. Follow @codewithleap on Twitter for updates on Apex design patterns.

All Apex wrapper classes share a few common properties and methods. To avoid repeatedly cut-n-pasting the same common code into each wrapper class, a base class is created for all wrapper classes to inherit through the "extends" keyword.

public abstract class WrapperClassBase {
     public Id id;  
     public boolean success = true;
     public List<String> errors = new List<String>();
     public boolean hasErrors(){ return errors.size() > 0;}
     public void clearErrors() { success = true; errors.clear();}
}

Now, with a base class available, the simple wrapper class looks like the following:

public with sharing class Order extends WrapperClassBase {
     public Order__c record   = null;
     public static final String SFIELDS = 'Id, Name'; // ... add other fields here
     
     public Order(){}
     
     public Order withSObject(Order__c orderRecord){
          this.record = orderRecord;
          this.id = orderRecord.Id;
          return this;
     }
     
     public Order withId(ID recordId){
          id = recordId;
          record = Database.query('SELECT ' + Order.SFIELDS + ' FROM Order__c WHERE Id=:id LIMIT 1');
          return this;
     }
     
     public Order validate(){         
          /*
          Validate fields here. Set this.success = false and populate this.errors.add('err message');
          */
          return this;
     }
     
     public Order doSomething(){
          return this;
     }
     
     private Map<ID,LineItem> m_lineItems = null;
     public Map<Id,LineItem> lineItems{
          get{
               if(m_lineItems == null){
                    m_lineItems = LineItem.fromRecords( this.lineItemRecords );
               }
               return m_lineItems;
          }
     }
     
     private List<OrderLineItem__c> m_lineItemRecords = null;
     public List<OrderLineItem__c> lineItemRecords{
          get{
               if(m_lineItemRecords == null){
                    m_lineItemRecords = Database.query('SELECT ' + LineItem.SFIELDS + ' FROM OrderLineItem__c WHERE Order__c=:id');
               }
               return m_lineItemRecords;
          }
     }
     
     public static Map<ID,Order> fromRecords(List<Order__c> records){
          Map<ID,Order> orders = new Map<ID,Order>();
          for(Order__c o : records){
               orders.put(o.Id, new Order().withSObject(o));
          }
          return orders;
     }
     
     public String toJSON(){
          Map<String, String> r = new Map<String, String>();
          List<String> fieldNames = Order.SFIELDS.split(',');
          for(String fName : fieldNames){
               String f = fName.trim(); // SFIELDS entries are comma-and-space delimited
               r.put(f, String.valueOf( this.record.get(f) ));
          }
          return JSON.serialize(r);
     }
}

Inheritance

As of this writing, Apex does not allow inheriting the core SObject class (which would be ideal).

Record encapsulation uses the strongly typed Order__c record, rather than the abstractly typed SObject, in order to use the benefits of strongly typed objects in the Force IDE, such as auto-complete of the fields. Moving the record to the base class would require constantly casting SObject to the strong type.

Class Name

For custom objects, it's common to remove the namespace prefix and __c suffix from the wrapper class name. For standard objects, always prefix the class name with "SFDC", or some other naming convention, to avoid conflicts.

Wrapping Standard Objects

Creating wrapper classes with the same name as standard objects, although possible, is discouraged. Class names supersede standard object names, such that if the intent is to create a standard Account object, but a class named 'Account' already exists, the code will not compile because the compiler is trying to create an instance of the wrapper class and not the standard object.

To get around this, use a standard naming convention, such as SFDCAccount, SFDCContact, or SFDCLead, to differentiate the wrapper class names from their respective standard objects.

Construction

Wrapper classes are constructed in one of 2 contexts:

  • withSObject(SObject record): The record has already been retrieved via a SOQL statement and the data just needs to be wrapped in a class container.
  • withId(ID recordId): The ID of a record is known, but has not yet been retrieved from the database.
The actual class constructor accepts no arguments. The builder pattern is used to construct the class and kick off a fluent chain of subsequent methods in a single line.

Once constructed, SObject fields are accessed directly through the public 'record' property, as in:

new Order().withID(someid).record.Custom_Field__c

This convention is chosen over creating getCustomField() and setCustomField() properties for brevity and to make use of code auto-complete features. However, if mutability of the SObject record, or its fields, is a concern, then the public accessor can be changed to 'private' and corresponding get/set properties added.
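For example, a minimal sketch of that private-record variant might look like the following (the 'name' property is illustrative and not part of the template above):

public with sharing class Order extends WrapperClassBase {
     private Order__c record = null;

     // Expose only the fields callers are allowed to read or mutate
     public String name{
          get{ return record.Name; }
          set{ record.Name = value; }
     }
}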

SFIELDS

Each wrapper class exposes a static public string named SFIELDS for use in selecting all fields for a record. This is equivalent to writing "SELECT * FROM TABLE_NAME" in traditional SQL syntax.

The SFIELDS string can be periodically auto-generated from a Leap task to keep wrapper class field definitions in sync with the data model, or manually updated with just a subset of fields to be used by the wrapper class.
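For example, typical usage of SFIELDS outside the wrapper class, assuming the Order wrapper defined above, might look like:

String soql = 'SELECT ' + Order.SFIELDS + ' FROM Order__c WHERE CreatedDate = TODAY';
Map<Id, Order> orders = Order.fromRecords( (List<Order__c>)Database.query(soql) );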

Builder Pattern

The real magic in the wrapper class template is its ability to chain several methods together in a single line, commonly referred to as the Builder Pattern and discussed in a previous article, Developing Fluent Interfaces With Apex.

Using the Order__c wrapper class example above, the following is possible:

     Order o = new Order().withSObject(objectList.get(0)).doSomething();
     if(o.validate().hasErrors()){
          //handle exceptions
     }

The return type of each builder method must be the wrapper class type itself, and each builder method returns 'this' to allow method chaining. The builder pattern is useful in the early stages of development, when the exact method behaviors and system architecture are not entirely known (see 40-70 Rule to Technical Architecture), and allows a compositional flow to development, incrementally adding new features without significant refactoring effort.

Child Object Relationships

A wrapper class represents a single instance of a Salesforce record. Depending on how lookup relationships are defined, wrapper classes will usually be either a parent (master) or child (detail) of some other records, which also have wrapper classes defined.

The "fromRecords" utility method is provided to easily construct collections of child objects retrieved from SOQL queries. Collections of child wrapper classes are stored as Maps that support the quick lookup of child wrapper classes by their record ID.

Properties and Side Effects

The #1 cause of software entropy in Apex development is unwanted "side effects": dependencies on class variables that can be modified by other methods.

The wrapper class template encourages lazy initialization of properties to protect access to member variables. Lazy initialization also avoids repeated queries for the same records, which is a common cause of exceeding governor limits.

Java has not yet evolved to support class properties, but Apex has, and wrapper classes are an opportunity to use them. For the sake of brevity, properties are preferred over methods whenever possible. This Microsoft .NET article on choosing between properties and methods is very applicable to Apex.
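As a sketch, a lazily initialized property on a hypothetical Account wrapper (one that extends WrapperClassBase, so the inherited id member is available) might look like:

private List<Contact> m_contacts = null;
public List<Contact> contacts{
     get{
          if(m_contacts == null){
               // the query runs once; subsequent reads reuse the cached list
               m_contacts = [SELECT Id, FirstName, LastName, Email FROM Contact WHERE AccountId = :id];
          }
          return m_contacts;
     }
}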

For Developers doing a lot of client-side JavaScript development in the UI, the use of server-side Apex properties closely approximates the associative array behavior of JavaScript objects, and maintains a consistent coding style across code bases.

Unit Testing

Wrapper classes provide a clean interface for unit testing behaviors on objects. The Winter '14 release requires that unit tests be managed in a separate file from the wrapper class. Common convention is to always create a unit test file for each wrapper class with a suffix of 'Tests' in the class name.
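A minimal test class for the Order wrapper, following that convention, might look like the sketch below (field values are illustrative and assume no other required fields on Order__c):

@isTest
private class OrderTests {
     static testMethod void withSObjectWrapsRecord(){
          Order__c record = new Order__c(Name = 'Test Order');
          insert record;

          Order o = new Order().withSObject(record).validate();
          System.assertEquals(record.Id, o.id);
          System.assertEquals(false, o.hasErrors());
     }
}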

Exception Handling

Without a clear exception handling strategy, it can be confusing for Developers to know how a class handles exceptions: does it consistently let them bubble up, or catch them all? There is no equivalent to the Java 'throws' keyword in Apex. To remedy this, the wrapper class template base provides a boolean 'success' flag that can be set by any method at any time.

When setting success=false, the exception handling code should also add a message to the 'errors' list, explaining what went wrong in the transaction. It is the responsibility of calling objects/methods to check success or hasErrors() after any transaction.
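For example, a hypothetical submit() method on the Order wrapper might catch DML failures and report them through the base class members (someOrderId is a placeholder):

public Order submit(){
     try{
          update record;
     }
     catch(DmlException e){
          success = false;
          errors.add('Order submit failed: ' + e.getMessage());
     }
     return this;
}

// calling code
Order o = new Order().withId(someOrderId).submit();
if(o.hasErrors()){
     // surface o.errors to the user or log them
}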

JSON Serialization

Wrapper classes can be serialized to JSON and returned to requesting clients for use in UI binding. The toJSON() method is provided in the wrapper class template and can be customized to serialize the class.
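Usage is a one-liner once a wrapper has been constructed (someOrderId is a placeholder):

String payload = new Order().withId(someOrderId).toJSON(); // e.g. {"Id":"a0B...","Name":"Acme Order"}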

Friday, 06 September 2013 10:46:42 (Pacific Daylight Time, UTC-07:00)
# Monday, 22 October 2012

I frequently use the FizzBuzz interview question when interviewing Salesforce developer candidates.

The original FizzBuzz interview question goes something like this:

Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".

The output from the first 15 numbers should look like this:

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz

It's a great question because the interviewer can evolve the requirements and take the discussion in many different directions.

A good interview is a lot like auditioning a drummer for a rock band. You want to start off with something easy, then "riff" on an idea to get a sense of the candidate's listening skills and ability to create variations on a theme (aka refactoring).

Unfortunately, most interviews have the intimidating premise of pass/fail, so the key to an effective interview is setting up the question so that the candidate understands it is okay to constantly change and revise their answers, and that the interview will evolve around a central concept: FizzBuzz.

The questions below gradually get harder by design, and at some point the candidate may not have an answer. That's okay. As an interviewer, you need to know:

a) How does the candidate respond when asked to do something they don't understand?
b) If we hired this person, what is the correct onboarding and mentoring plan for this candidate to help them be successful?

I'll drop hints during the question setup, using buzzwords like "TDD" (test-driven development), "unit testing", and "object oriented design", hoping the candidate might ask clarifying questions before jumping into code, like "Oh, you want to do TDD. Should I write the unit test first?"

So, on to the code. The fundamental logic for FizzBuzz requires a basic understanding of the modulo operator; which, in all fairness, is not a particularly valuable thing to know on a daily basis, but is often the minimum bar for testing whether a candidate meets the "Computer Science or Related 4 Year Degree" requirement in many job descriptions, since it's universally taught in almost all academic curricula.

After the first round, the basic logic for FizzBuzz should look something like this:

function doFizzBuzz(){
     for(integer i=1; i <= 100; i++){
          String output = '';
          if( i % 3 == 0 ){
               output += 'Fizz';
          }
          if( i % 5 == 0 ){
               output += 'Buzz';
          }
          if(output == ''){
               output = string.valueOf(i);
          }
          System.debug(output);
     }
}

Some things interviewers will be looking for:

  • Use and understanding of the Modulo operator
  • Efficiency. Is mod calculated twice for each value to meet the compound "FizzBuzz" requirement?
  • Using a 0 based loop index and printing numbers 0-99 instead of 1-100
  • Unclear coding blocks or control flow (confusing use of parentheses or indenting)

Even if the candidate misses one of these points, they can usually get over the hurdle quickly with a bit of coaching.

"So, let's evolve this function into an Apex class."

For experienced Salesforce Developers, you can start gauging familiarity with Apex syntax, but be flexible. More experienced Developers/Architects will probably think faster in pseudo code, and Java Developers (if you're gauging potential to become a Force.com Developer) will want to use their own syntax.

Refactoring the basic logic above into an Apex class might look something like this:

public class FizzBuzz {
     public void run(){
          for(integer i=1; i <= 100; i++){
               String output = '';
               if(math.mod(i, 3) == 0){
                    output += 'Fizz';
               }
               if(math.mod(i, 5) == 0){
                    output += 'Buzz';
               }
               if(output == ''){
                    output = string.valueOf(i);
               }
               System.debug(output);
          }
     }
}

"Okay. How would you test that? Let's write a unit test".

If the candidate is not familiar with Force.com Development, now might be a good opportunity to explain that 75% minimum test coverage is required to deploy code.

A basic unit test should look something like:

     public static testMethod void mainTests(){    
          FizzBuzz fb = new FizzBuzz();
          fb.run();
     }    

The test runner will report 100% unit test coverage by virtue of executing the entire run() method within a testMethod. But is this really aligned with the true spirit and principle of unit testing? Not really.

A more precise follow-up question might be: "How would you Assert the expected output of FizzBuzz?"

In its current state, FizzBuzz is just emitting strings. Does the candidate attempt to parse and make assertions on the string output?

At this point, it's helpful to start thinking in terms of TDD, or Test Driven Development, and attempt to write a unit test before writing code. One possible approach is the Extract Method refactoring: create methods for isFizz() and isBuzz(), then write tests asserting those methods work correctly.

public class FizzBuzz {    

     private void run(){    
          for(integer i=1; i <= 100; i++){
               String output = '';              
               if( isFizz(i) ){
                    output += 'Fizz';
               }
               if( isBuzz(i) ){
                    output += 'Buzz';
               }
               if(output == ''){
                    output = string.valueOf(i);
               }
               System.debug(output);
          }
     }
   
     static final integer FIZZ_MULTIPLE = 3;
     private boolean isFizz(integer n){
          return ( math.mod(n, FIZZ_MULTIPLE) == 0);
     }

     static final integer BUZZ_MULTIPLE = 5;
     private boolean isBuzz(integer n){
          return ( math.mod(n, BUZZ_MULTIPLE) == 0);
     }

     public static testmethod void fizzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isFizz(1));
          System.assertEquals(false, fb.isFizz(2));
          System.assertEquals(true,  fb.isFizz(3));
          System.assertEquals(false, fb.isFizz(4));
          System.assertEquals(false, fb.isFizz(5));
     }
    
     public static testmethod void buzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isBuzz(1));
          System.assertEquals(false, fb.isBuzz(2));
          System.assertEquals(false, fb.isBuzz(3));
          System.assertEquals(false, fb.isBuzz(4));
          System.assertEquals(true,  fb.isBuzz(5));
     }   

     public static testmethod void fizzBuzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(true, fb.isFizz(15));
          System.assertEquals(true, fb.isBuzz(15));
     }
}

This is a considerable improvement, but the test coverage is now only at 40%. The run() method is still leaving some technical debt behind to be refactored.

I may drop the candidate a hint about Model-View-Controller and ask how they might deconstruct this class into its constituent parts.

There are no DML or objects to access, so effectively there is no Model.

But the run() method is currently overloaded with FizzBuzz logic (controller) and printing the output (view). We can further extract the logic into a List of strings to be rendered in any form by the run() method.

public class FizzBuzz { 

     private void run(){    
          for(String element : this.getFizzBuzzList()){
               system.debug(element);
          }
     }
   
     private List<string> getFizzBuzzList(){
          List<string> fizzBuzzList = new List<string>();
          for(integer i=1; i <= 100; i++){
               string listElement = '';

               if( isFizz(i) ){
                    listElement = 'Fizz';
               }
               if( isBuzz(i) ){
                    listElement += 'Buzz';
               }
               if(listElement == ''){
                    listElement = string.valueOf(i);
               }

               fizzBuzzList.add(listElement);
          }
          return fizzBuzzList;
     }
    
     static final integer FIZZ_MULTIPLE = 3;
     private boolean isFizz(integer n){
          return ( math.mod(n, FIZZ_MULTIPLE) == 0);
     }
    
     static final integer BUZZ_MULTIPLE = 5;
     private boolean isBuzz(integer n){
          return ( math.mod(n, BUZZ_MULTIPLE) == 0);
     }
    
     public static testmethod void fizzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isFizz(1));
          System.assertEquals(true,  fb.isFizz(3));
          System.assertEquals(false, fb.isFizz(5));
     }
    
     public static testmethod void buzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isBuzz(1));
          System.assertEquals(false, fb.isBuzz(3));
          System.assertEquals(true,  fb.isBuzz(5));
     }
    
     public static testmethod void fizzBuzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(true, fb.isFizz(15));
          System.assertEquals(true, fb.isBuzz(15));
     }
    
     public static testmethod void fizzBuzzListTests(){
          FizzBuzz fb = new FizzBuzz();
          //0 based offsets.
          System.assertEquals(100, fb.getFizzBuzzList().size() );
          System.assertEquals('1', fb.getFizzBuzzList().get(0) );
          System.assertEquals('Fizz', fb.getFizzBuzzList().get(2) );
          System.assertEquals('4', fb.getFizzBuzzList().get(3) );
          System.assertEquals('Buzz', fb.getFizzBuzzList().get(4) );
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(14) );
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(29) );
     }
}

Test coverage is now at 90% after extracting the run() print logic into a unit testable method that returns a list. The last 10% can be easily covered by calling run() anywhere inside a testMethod.
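For example, one more inline test method along these lines would cover it; no assertions are needed here because output correctness is already asserted through getFizzBuzzList():

     public static testmethod void runTests(){
          new FizzBuzz().run();
     }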

If there's time remaining in the interview, a good enhancement is to add dynamic ranges. Instead of printing 1-100, modify the class to support any range of numbers. Basically, this is just testing the candidate's ability to manage class constructor arguments.

public class FizzBuzz {    
     private final integer floor;
     private final integer ceiling;    

     public FizzBuzz(){
          floor   = 1;
          ceiling = 100;
     }
    
     public FizzBuzz(integer input_floor, integer input_ceiling){
          floor = input_floor;
          ceiling = input_ceiling;
     }
    
     private void run(){    
          for(String element : this.getFizzBuzzList()){
               system.debug(element);
          }
     }
    
     private List<string> getFizzBuzzList(){
          List<string> fizzBuzzList = new List<string>();
          for(integer i=floor; i <= ceiling; i++){
               string listElement = '';
               if( isFizz(i) ){
                    listElement = 'Fizz';
               }
               if( isBuzz(i) ){
                    listElement += 'Buzz';
               }
               if(listElement == ''){
                    listElement = string.valueOf(i);
               }
               fizzBuzzList.add(listElement);
          }

          return fizzBuzzList;
     }

    
     static final integer FIZZ_MULTIPLE = 3;
     private boolean isFizz(integer n){
          return ( math.mod(n, FIZZ_MULTIPLE) == 0);
     }
     
     static final integer BUZZ_MULTIPLE = 5;
     private boolean isBuzz(integer n){
          return ( math.mod(n, BUZZ_MULTIPLE) == 0);
     }

    
     public static testmethod void fizzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isFizz(1));         
          System.assertEquals(true,  fb.isFizz(3));
          System.assertEquals(false, fb.isFizz(5));
     }
    
     public static testmethod void buzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isBuzz(1));         
          System.assertEquals(false, fb.isBuzz(3));         
          System.assertEquals(true,  fb.isBuzz(5));
     }
    
     public static testmethod void fizzBuzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(true, fb.isFizz(15));
          System.assertEquals(true, fb.isBuzz(15));
     }
   
     public static testmethod void fizzBuzzListTests(){
          //Use a 0 based index range to make fetching/testing list offsets easier.
          FizzBuzz fb = new FizzBuzz(0, 100);
          System.assertEquals(101, fb.getFizzBuzzList().size() );
          System.assertEquals('1', fb.getFizzBuzzList().get(1) );
          System.assertEquals('2', fb.getFizzBuzzList().get(2) );
          System.assertEquals('Fizz', fb.getFizzBuzzList().get(3) );
          System.assertEquals('4', fb.getFizzBuzzList().get(4) );
          System.assertEquals('Buzz', fb.getFizzBuzzList().get(5) );
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(15) );
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(30) );
     }
}

I will usually follow up this question with questions about boundary checking and programmatic validation rules.

"Should FizzBuzz be allowed to accept negative numbers?"

"Should the ceiling value always be greater than the floor?"

If yes to either of these, then how would the candidate implement validation rules and boundary checks? This very quickly gets into writing more methods and more unit tests, but mirrors the reality of day-to-day Force.com development.
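One possible sketch uses a custom exception so the guard itself is unit testable (the exception name and message are illustrative):

public class FizzBuzz {
     public class BoundaryException extends Exception {}

     public FizzBuzz(integer input_floor, integer input_ceiling){
          if(input_ceiling < input_floor){
               throw new BoundaryException('ceiling must be greater than or equal to floor');
          }
          floor = input_floor;
          ceiling = input_ceiling;
     }
     // ...remaining members unchanged
}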

Once variables get introduced at class scope, it's a good opportunity to have discussions about side effects and immutability.

"What happens 6 months later when another Developer comes along and tries to modify the ceiling or floor variables in new methods?"

"How can you prevent this from happening?"

"What are the benefits of initializing variables only once and declaring them 'final'?"

An experienced Developer will likely have a grasp of functional programming techniques and the long-term benefits of minimizing side-effects and keeping classes immutable.

And finally, these unit tests are all written inline. How would the candidate separate tests from production classes?
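A separate test class might look like the sketch below; note that getFizzBuzzList() and friends would need to be made public (or otherwise exposed to tests) once the tests move out of the production class:

@isTest
private class FizzBuzzTests {
     static testMethod void listContainsFizzBuzzAtFifteen(){
          FizzBuzz fb = new FizzBuzz(1, 15);
          System.assertEquals(15, fb.getFizzBuzzList().size());
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(14));
     }
}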

For more information about refactoring patterns, check out this list of patterns or read Martin Fowler's brilliant book Refactoring: Improving the Design of Existing Code.

Monday, 22 October 2012 16:53:22 (Pacific Daylight Time, UTC-07:00)
# Sunday, 30 October 2011

Integrating CRM with ERP/Financial systems can be a challenge, particularly if the systems are from two different vendors, which is often the case when using Salesforce.com CRM.

At Facebook, we've gone through several iterations of integrating Salesforce with Oracle Financials and the team has arrived at a fairly stable and reliable integration process (kudos to Kumar, Suresh, Gopal, Trevor, and Sunil for making this all work).

Here is the basic flow (see diagram below):

1) The point at which Salesforce CRM needs to pass information to Oracle is typically once an Opportunity has been closed/won and an order or contract has been signed.

2) Salesforce is configured to send an outbound message containing the Opportunity ID to an enterprise service bus (ESB) that is configured to listen for specific SOAP messages from Salesforce.

3) The ESB accepts the outbound message (now technically an inbound message on the receiver side) and applies any needed security policies, such as whitelisting the source of the message.

4) This is the interesting part. Because the Salesforce outbound message wizard only allows exporting fields from a single object, the ESB must call back to retrieve additional information about the order, such as the Opportunity line items, Account, and Contacts associated with the Order.

In Enterprise Application Integration (EAI) speak, this is referred to as a Content Enrichment pattern.

5) An apex web service on Salesforce receives the enrichment request, queries all the additional order details, and returns a canonical XML message back to the ESB (a minimal sketch of such a service appears after this list of steps).

6) The ESB receives the enriched message and begins processing validation and de-duplication rules, then transforms the message into an object that can be consumed by Oracle.

7) The ESB then inserts the Order into Oracle.

8) The Oracle apps API inserts/updates the various physical tables for the order and throws any exceptions.
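A stripped-down version of the enrichment service in step 5 might look like the following Apex sketch; the object and field selections are illustrative, and a production service would build out the full canonical schema:

global with sharing class OrderEnrichmentService {
     // Called back by the ESB with the Opportunity ID from the outbound message
     webservice static String getOrderDetails(Id opportunityId){
          Opportunity opp = [SELECT Id, Name, Amount, Account.Name,
                                    (SELECT Quantity, UnitPrice FROM OpportunityLineItems)
                             FROM Opportunity
                             WHERE Id = :opportunityId];

          // Build a simple canonical XML payload for the ESB
          Dom.Document doc = new Dom.Document();
          Dom.XmlNode orderNode = doc.createRootElement('Order', null, null);
          orderNode.addChildElement('OpportunityId', null, null).addTextNode(opp.Id);
          orderNode.addChildElement('OpportunityName', null, null).addTextNode(opp.Name);
          orderNode.addChildElement('AccountName', null, null).addTextNode(opp.Account.Name);
          for(OpportunityLineItem item : opp.OpportunityLineItems){
               Dom.XmlNode line = orderNode.addChildElement('LineItem', null, null);
               line.addChildElement('Quantity', null, null).addTextNode(String.valueOf(item.Quantity));
               line.addChildElement('UnitPrice', null, null).addTextNode(String.valueOf(item.UnitPrice));
          }
          return doc.toXmlString();
     }
}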

Sunday, 30 October 2011 17:15:23 (Pacific Standard Time, UTC-08:00)
# Sunday, 23 January 2011
Salesforce Administrators and Developers are routinely required to manipulate large amounts of data in a single task.

Examples of batch processes include:
  • Deleting all Leads that are older than 2 years
  • Replacing all occurrences of an old value with a new value
  • Updating user records with a new company name

The tools and options available are:
  1. Browser-based Admin features / Execute anonymous in System Log
  2. Data loader
  3. Generalized Batch Apex (Online documentation)
  4. Specialized Batch Apex
Option 1 (Admin Settings) represents the category of features and tools available when directly logging into Salesforce via a web browser. The transactions are typically synchronous and subject to Governor Limits.

Option 2 (Data Loader) provides Admins with an Excel-like approach: download data using the Apex Data Loader, manipulate it on a local PC, then upload it back to Salesforce. It is slightly more powerful than browser-based tools, doesn't require programming skills, and is subject to the web service API governor limits (which are more generous), but it also requires slightly more manual effort and introduces the possibility of human error when mass updating records.

Option 3 (Generalized Batch Apex) introduces the option of asynchronous batch processes that can manipulate up to 50 million records in a single batch. Doesn't require programming (if using the 3 utility classes provided below in this blog post) and can be executed directly through the web browser; but limited to the general use cases supported by the utility classes. Some general purpose batch Apex utility classes are provided at the end of this article.

Option 4 (Specialized Batch Apex) requires Apex programming and provides the most control of batch processing of records (such as updating several object types within a batch or applying complex data enrichment before updating fields).

Batch Apex Class Structure:

The basic structure of a batch apex class looks like:

global class BatchVerbNoun implements Database.Batchable<sObject>{
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query); //May return up to 50 Million records
    }
  
    global void execute(Database.BatchableContext BC, List<sObject> scope){       
        //Batch gets broken down into several smaller chunks
        //This method gets called for each chunk of work, passing in the scope of records to be processed
    }
   
    global void finish(Database.BatchableContext BC){   
        //This method gets called once when the entire batch is finished
    }
}
An Apex Developer simply fills in the blanks. The start() and finish() methods are both executed once, while the execute() method gets called 1-N times, depending on the number of batches.

Batch Apex Lifecycle

The Database.executeBatch() method is used to start a batch process. This method takes 2 parameters: instance of the batch class and scope.

BatchUpdateFoo batch = new BatchUpdateFoo();
Database.executeBatch(batch, 200);
The scope parameter defines the max number of records to be processed in each batch. For example, if the start() method returns 150,000 records and scope is defined as 200, then the overall batch will be broken down into 150,000/200 = 750 batches. In this scenario, the execute() method would be called 750 times, each time passed 200 records.

A note on batch sizes: Even though batch processes have significantly more access to system resources, governor limits still apply. A batch that executes a single DML operation may shoot for a batch scope of 500+. Batch executions that initiate a cascade of trigger operations will need to use a smaller scope. 200 is a good general starting point.

The start() method is called to determine the size of the batch then the batch is put into a queue. There is no guarantee that the batch process will start when executeBatch() is called, but 90% of the time the batch will start processing within 1 minute.

You can login to Settings/Monitor/Apex Jobs to view batch progress.
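Progress can also be checked programmatically. Database.executeBatch() returns the job Id, which can be queried from AsyncApexJob (BatchUpdateFoo is the same placeholder class used above):

ID jobId = Database.executeBatch(new BatchUpdateFoo(), 200);
AsyncApexJob job = [SELECT Status, JobItemsProcessed, TotalJobItems, NumberOfErrors
                    FROM AsyncApexJob WHERE Id = :jobId];
System.debug(job.Status + ': ' + job.JobItemsProcessed + '/' + job.TotalJobItems + ' batches, ' + job.NumberOfErrors + ' errors');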


Unit Testing Batch Apex:
The asynchronous nature of batch apex makes it notoriously difficult to unit test and debug. At Facebook, we use a general Logger utility that logs debug info to a custom object (adding to the governor limit footprint). The online documentation for batch apex provides some unit test examples, but the util methods in this post use a short hand approach to achieving test coverage.

Batch Apex Best Practices:
  • Use extreme care if you are planning to invoke a batch job from a trigger. You must be able to guarantee that the trigger will not add more batch jobs than the five that are allowed. In particular, consider API bulk updates, import wizards, mass record changes through the user interface, and all cases where more than one record can be updated at a time.
  • When you call Database.executeBatch, Salesforce.com only places the job in the queue at the scheduled time. Actual execution may be delayed based on service availability.
  • When testing your batch Apex, you can test only one execution of the execute method. You can use the scope parameter of the executeBatch method to limit the number of records passed into the execute method to ensure that you aren't running into governor limits.
  • The executeBatch method starts an asynchronous process. This means that when you test batch Apex, you must make certain that the batch job is finished before testing against the results. Use the Test methods startTest and stopTest around the executeBatch method to ensure it finishes before continuing your test.
  • Use Database.Stateful with the class definition if you want to share variables or data across job transactions. Otherwise, all instance variables are reset to their initial state at the start of each transaction.
  • Methods declared as future are not allowed in classes that implement the Database.Batchable interface.
  • Methods declared as future cannot be called from a batch Apex class.
  • You cannot call the Database.executeBatch method from within any batch Apex method.
  • You cannot use the getContent and getContentAsPDF PageReference methods in a batch job.
  • In the event of a catastrophic failure such as a service outage, any operations in progress are marked as Failed. You should run the batch job again to correct any errors.
  • When a batch Apex job is run, email notifications are sent either to the user who submitted the batch job, or, if the code is included in a managed package and the subscribing organization is running the batch job, the email is sent to the recipient listed in the Apex Exception Notification Recipient field.
  • Each method execution uses the standard governor limits anonymous block, Visualforce controller, or WSDL method.
  • Each batch Apex invocation creates an AsyncApexJob record. Use the ID of this record to construct a SOQL query to retrieve the job’s status, number of errors, progress, and submitter. For more information about the AsyncApexJob object, see AsyncApexJob in the Web Services API Developer's Guide.
  • All methods in the class must be defined as global.
  • For a sharing recalculation, Salesforce.com recommends that the execute method delete and then re-create all Apex managed sharing for the records in the batch. This ensures the sharing is accurate and complete.
  • If in the course of developing a batch apex class you discover a bug during a batch execution, Don't Panic. Simply login to the admin console to monitor Apex Jobs and abort the running batch.


Utility Batch Apex Classes:

The following batch Apex classes can be copied and pasted into any Salesforce org and called from the System Log (or Apex) using the "Execute Anonymous" feature. The general structure of these utility classes is:
  • Accept task-specific input parameters
  • Execute the batch
  • Email the admin with batch results once complete
To execute these utility batch Apex classes:
1. Open the System Log

2. Click on the Execute Anonymous input text field.

3. Paste any of the following batch apex classes (along with corresponding input parameters) into the Execute Anonymous textarea, then click "Execute".


BatchUpdateField.cls
/*
Run this batch from Execute Anonymous tab in Eclipse Force IDE or System Log using the following

string query = 'select Id, CompanyName from User';
BatchUpdateField batch = new BatchUpdateField(query, 'CompanyName', 'Bedrock Quarry');
Database.executeBatch(batch, 100); //Make sure to execute in batch sizes of 100 to avoid DML limit error
*/
global class BatchUpdateField implements Database.Batchable<sObject>{
    global final String Query;
    global final String Field;
    global final String Value;
   
    global BatchUpdateField(String q, String f, String v){
        Query = q;
        Field = f;
        Value = v;
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){   
        for(sobject s : scope){
            s.put(Field,Value);
         }
        update scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchUpdateField ' + a.Status);
        mail.setPlainTextBody(message);
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, CompanyName from User';
        BatchUpdateField batch = new BatchUpdateField(query, 'CompanyName', 'Bedrock Quarry');
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
BatchSearchReplace.cls
/*
Run this batch from Execute Anonymous tab in Eclipse Force IDE or System Log using the following

string query = 'select Id, Company from Lead';
BatchSearchReplace batch = new BatchSearchReplace(query, 'Company', 'Sun', 'Oracle');
Database.executeBatch(batch, 100); //Make sure to execute in batch sizes of 100 to avoid DML limit error
*/
global class BatchSearchReplace implements Database.Batchable<sObject>{
    global final String Query;
    global final String Field;
    global final String SearchValue;
    global final String ReplaceValue;
   
    global BatchSearchReplace(String q, String f, String sValue, String rValue){
        Query = q;
        Field = f;
        SearchValue = sValue;
        ReplaceValue = rValue;
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){   
        for(sobject s : scope){
            string currentValue = String.valueof( s.get(Field) );
            if(currentValue != null && currentValue == SearchValue){
                s.put(Field, ReplaceValue);
            }
         }
        update scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchSearchReplace ' + a.Status);
        mail.setPlainTextBody(message);
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, Company from Lead';
        BatchSearchReplace batch = new BatchSearchReplace(query, 'Company', 'Foo', 'Bar');
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
BatchRecordDelete.cls:
/*
Run this batch from Execute Anonymous tab in Eclipse Force IDE or System Log using the following

string query = 'select Id from ObjectName where field=criteria';
BatchRecordDelete batch = new BatchRecordDelete(query);
Database.executeBatch(batch, 200); //Make sure to execute in batch sizes of 200 to avoid DML limit error
*/
global class BatchRecordDelete implements Database.Batchable<sObject>{
    global final String Query;
   
    global BatchRecordDelete(String q){
        Query = q;   
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){       
        delete scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchRecordDelete ' + a.Status);
        mail.setPlainTextBody(message);
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, CompanyName from User where CompanyName=\'foo\'';
        BatchRecordDelete batch = new BatchRecordDelete(query);
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
Sunday, 23 January 2011 12:07:02 (Pacific Standard Time, UTC-08:00)
# Thursday, 08 July 2010

No, not this Trigger... keep reading...

Trigger development (apologies to Roy Rogers' horse) is not done on a daily basis by a typical Force.com Developer.

In my case, Trigger development is similar to using regular expressions (regex) in that I often rely on documentation and previously developed code examples to refresh my memory, do the coding, then put it aside for several weeks/months.

I decided to create a more fluent Trigger template to address the following challenges and prevent me from repeatedly making the same mistakes:

  • Bulkification best practices not provisioned by the Trigger creation wizard
  • Use of the 7 boolean context variables in code (isInsert, isBefore, etc...) greatly impairs readability and long-term maintainability
  • Trigger.old and Trigger.new collections are not available in certain contexts
  • Asynchronous trigger support not natively built-in

The solution was to create a mega-Trigger that handles all events and delegates them accordingly to an Apex trigger handler class.

You may want to customize this template to your own style. Here are some design considerations and assumptions in this template:

  • Use of traditional event method names on the handler class (OnBeforeInsert, OnAfterInsert)
  • Maps are used where they are most relevant
  • Objects in map collections cannot be modified; however, there is nothing in the compiler to prevent you from trying. Remove the maps whenever they are not needed.
  • Maps are most useful when triggers modify other records by IDs, so they're included in update and delete triggers
  • Encourage use of asynchronous trigger processing by providing pre-built @future methods.
  • @future methods only support collections of primitive types, so passing sets of record IDs is the preferred style.
  • Avoid use of before/after if not relevant. Example: OnUndelete is simpler than OnAfterUndelete (before undelete is not supported)
  • Provide boolean properties for determining trigger context (Is it a Trigger or VF/WebService call?)
  • There are no return values. Handler methods are assumed to assert validation rules using addError() to prevent commit.

References:
Apex Developers Guide - Triggers
Steve Anderson - Two interesting ways to architect Apex triggers

AccountTrigger.trigger

trigger AccountTrigger on Account (after delete, after insert, after undelete, 
after update, before delete, before insert, before update) {
	AccountTriggerHandler handler = new AccountTriggerHandler(Trigger.isExecuting, Trigger.size);
	
	if(Trigger.isInsert && Trigger.isBefore){
		handler.OnBeforeInsert(Trigger.new);
	}
	else if(Trigger.isInsert && Trigger.isAfter){
		handler.OnAfterInsert(Trigger.new);
		AccountTriggerHandler.OnAfterInsertAsync(Trigger.newMap.keySet());
	}
	
	else if(Trigger.isUpdate && Trigger.isBefore){
		handler.OnBeforeUpdate(Trigger.old, Trigger.new, Trigger.newMap);
	}
	else if(Trigger.isUpdate && Trigger.isAfter){
		handler.OnAfterUpdate(Trigger.old, Trigger.new, Trigger.newMap);
		AccountTriggerHandler.OnAfterUpdateAsync(Trigger.newMap.keySet());
	}
	
	else if(Trigger.isDelete && Trigger.isBefore){
		handler.OnBeforeDelete(Trigger.old, Trigger.oldMap);
	}
	else if(Trigger.isDelete && Trigger.isAfter){
		handler.OnAfterDelete(Trigger.old, Trigger.oldMap);
		AccountTriggerHandler.OnAfterDeleteAsync(Trigger.oldMap.keySet());
	}
	
	else if(Trigger.isUnDelete){
		handler.OnUndelete(Trigger.new);	
	}
}

AccountTriggerHandler.cls

 
public with sharing class AccountTriggerHandler {
	private boolean m_isExecuting = false;
	private integer BatchSize = 0;
	
	public AccountTriggerHandler(boolean isExecuting, integer size){
		m_isExecuting = isExecuting;
		BatchSize = size;
	}
		
	public void OnBeforeInsert(Account[] newAccounts){
		//Example usage
		for(Account newAccount : newAccounts){
			if(newAccount.AnnualRevenue == null){
				newAccount.AnnualRevenue.addError('Missing annual revenue');
			}
		}
	}
	
	public void OnAfterInsert(Account[] newAccounts){
		
	}
	
	@future public static void OnAfterInsertAsync(Set<ID> newAccountIDs){
		//Example usage
		List<Account> newAccounts = [select Id, Name from Account where Id IN :newAccountIDs];
	}
	
	public void OnBeforeUpdate(Account[] oldAccounts, Account[] updatedAccounts, Map<ID, Account> accountMap){
		//Example Map usage
		Map<ID, Contact> contacts = new Map<ID, Contact>( [select Id, FirstName, LastName, Email from Contact where AccountId IN :accountMap.keySet()] );
	}
	
	public void OnAfterUpdate(Account[] oldAccounts, Account[] updatedAccounts, Map<ID, Account> accountMap){
		
	}
	
	@future public static void OnAfterUpdateAsync(Set<ID> updatedAccountIDs){
		List<Account> updatedAccounts = [select Id, Name from Account where Id IN :updatedAccountIDs];
	}
	
	public void OnBeforeDelete(Account[] accountsToDelete, Map<ID, Account> accountMap){
		
	}
	
	public void OnAfterDelete(Account[] deletedAccounts, Map<ID, Account> accountMap){
		
	}
	
	@future public static void OnAfterDeleteAsync(Set<ID> deletedAccountIDs){
		
	}
	
	public void OnUndelete(Account[] restoredAccounts){
		
	}
	
	public boolean IsTriggerContext{
		get{ return m_isExecuting;}
	}
	
	public boolean IsVisualforcePageContext{
		get{ return !IsTriggerContext;}
	}
	
	public boolean IsWebServiceContext{
		get{ return !IsTriggerContext;}
	}
	
	public boolean IsExecuteAnonymousContext{
		get{ return !IsTriggerContext;}
	}
}
Thursday, 08 July 2010 14:16:12 (Pacific Daylight Time, UTC-07:00)
# Thursday, 24 June 2010

Chatter Developer Challenge / Hackathon 2010 Roundup

The Chatter Developer Challenge sponsored by Salesforce encouraged Developers to create a wide variety of applications that demonstrate the new Salesforce Chatter API.

The challenge culminated in a Hackathon event on June 22nd, 2010 at the San Jose Convention Center where prizes were awarded for various applications.

My entry, Chatter Bot, demonstrated the use of Chatter within a Facility Management application that captured physical world events and moved them to the cloud to produce Chatter feed posts.

Chatter Bot is a system comprised of 4 major components:

  • Arduino board with motion and light sensors (C/C++)
  • Proxy Service (Java Processing.org Environment)
  • Salesforce Sites HTTP Listener (Visualforce/Apex)
  • Facility Management App (Force.com database and web forms)
(Source code to all components available at the bottom of this post)

I was elated to learn a few days before the hackathon that Chatter Bot had been selected as a finalist and I was strongly encouraged to attend. So I packed up Chatter Bot to take the 2 hour flight from Portland to San Jose.

It wasn't until I arrived at the airport that it suddenly dawned on me how much Chatter Bot bears a striking resemblance to a poorly assembled explosive device. Apparently the TSA agent handling the X-Ray machine thought so too and I was taken aside for the full bomb sniffing and search routine.

It crossed my mind to add a bit of levity to the situation by making some kind of remark, but I quickly assessed that I was probably one misinterpreted comment away from being whisked off in handcuffs to some TSA lockup room. Ironically, I had no problem with security in San Jose coming back. They must be accustomed to these types of devices in Silicon Valley.

Upon arriving in San Jose, I setup Chatter Bot and configured the San Jose Convention Center (SJCC) as a Building Facility (Custom object) to be monitored.

Several assets were created to represent some rooms within the SJCC.

Finally, the Chatter Bot was associated with a particular room (Asset) through an intersection object called AssetSensors that relates a device ID (usually a MAC address) and an Asset.
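For example, creating that binding in Apex looks like the following (the Asset Id variable is a placeholder; the device ID matches the Arduino sketch below):

AssetSensor__c binding = new AssetSensor__c(
     Name = 'SJCC Hackathon Room',
     DeviceID__c = '007DEADBEEF0',
     Asset__c = hackathonRoomAssetId
);
insert binding;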

Within minutes the motion and light sensors were communicating to the cloud via my laptop and reporting on activity in the Hackathon room.

Given the high quality and functionality of fellow competitors' apps, such as the very cool Chatter for Android app by Jeff Douglas, and observations from the public voting, I thought Chatter Bot might be a little too "out of the box" to take a prize. It was a genuinely surreal and surprising moment when I learned Chatter Bot received the grand prize.

Thank you Salesforce for hosting such a great event and thank you to the coop-etition for the encouraging exchange of ideas and feedback during the challenge!

Arduino Sensor

/////////////////////////////
//VARS
//the time when the sensor outputs a low impulse
long unsigned int lowIn;

//the amount of milliseconds the sensor has to be low 
//before we assume all motion has stopped
long unsigned int pause = 5000;

boolean lockLow = true;
boolean takeLowTime;  

int LDR_PIN = 2;    // the analog pin for reading the LDR (Light Dependent Resistor)
int PIR_PIN = 3;    // the digital pin connected to the PIR sensor's output
int LED_PIN = 13;

byte LIGHT_ON    = 1;
byte LIGHT_OFF   = 0;
byte previousLightState  = LIGHT_ON;
unsigned long lightLastChangeTimestamp = 0;   // millis() timestamp of the last light state change
unsigned int LIGHT_ON_MINIMUM_THRESHOLD = 1015;
unsigned long lastListStateChange = 0; //Used to de-bounce borderline transitions.

// Messages
int SENSOR_MOTION = 1;
int SENSOR_LIGHT  = 2;

/////////////////////////////
//SETUP
void setup(){  
  //PIR initialization
  pinMode(PIR_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
  digitalWrite(PIR_PIN, LOW);
  
  Serial.begin(9600);
  
  InitializeLED();
  InitializeLightSensor();
  InitializeMotionSensor();
}

////////////////////////////
//LOOP
void loop(){
  
  if(digitalRead(PIR_PIN) == HIGH){
    digitalWrite(LED_PIN, HIGH);   //the led visualizes the sensors output pin state
    if(lockLow){
      //makes sure we wait for a transition to LOW before any further output is made:
      lockLow = false;
      writeMeasure(SENSOR_MOTION, HIGH);
      delay(50);
      digitalWrite(LED_PIN, LOW);   //the led visualizes the sensors output pin state
    }
    takeLowTime = true;
  }

  if(digitalRead(PIR_PIN) == LOW){
    digitalWrite(LED_PIN, LOW);  //the led visualizes the sensors output pin state
    if(takeLowTime){
      lowIn = millis();          //save the time of the transition from high to LOW
      takeLowTime = false;       //make sure this is only done at the start of a LOW phase
    }
    
    //if the sensor is low for more than the given pause, 
    //we assume that no more motion is going to happen
    if(!lockLow && millis() - lowIn > pause){
      //makes sure this block of code is only executed again after 
      //a new motion sequence has been detected
      lockLow = true;
      writeMeasure(SENSOR_MOTION, LOW);
      delay(50);
    }
  }
  
  ProcessLightSensor();
}

void InitializeLED(){
  Serial.println("INIT: Initializing LED (should see 3 blinks)... ");
  for(int i=0; i < 3; i++){
    digitalWrite(LED_PIN, HIGH);
    delay(500);
    digitalWrite(LED_PIN, LOW);
    delay(500);
  }
}

//the time we give the motion sensor to calibrate (10-60 secs according to the datasheet)
int calibrationTime = 10;

void InitializeMotionSensor(){
  //give the sensor some time to calibrate
  Serial.print("INIT: Calibrating motion sensor (this takes about ");
  Serial.print(calibrationTime);
  Serial.print(" seconds) ");
  for(int i = 0; i < calibrationTime; i++){
    Serial.print(".");
    delay(1000);
  }
  Serial.println(" done");
  Serial.println("INIT: SENSOR ACTIVE");
  delay(50);
}

void InitializeLightSensor(){
  Serial.print("INIT: Initializing light sensor. Light on threashold set to ");
  Serial.println(LIGHT_ON_MINIMUM_THRESHOLD);
  Serial.println("INIT: 20 samples follow...");
  for(int i = 0; i < 20; i++){
    int lightLevelValue = analogRead(LDR_PIN);
    Serial.print("INIT: ");
    Serial.println(lightLevelValue);
  }
}

boolean ProcessLightSensor(){
  byte currentState = previousLightState;
  int lightLevelValue = analogRead(LDR_PIN);  // returns value 0-1023. 0=max light. 1,023 means no light detected.
  
  if(lightLevelValue < LIGHT_ON_MINIMUM_THRESHOLD){
     currentState = LIGHT_ON;
  }
  else{
     currentState = LIGHT_OFF;
  }
  
  if(LightStateHasChanged(currentState) && !LightStateIsBouncing() ){
    previousLightState = currentState; 
    
    if(currentState == LIGHT_ON){
      writeMeasure(SENSOR_LIGHT, HIGH);
    }
    else{
      writeMeasure(SENSOR_LIGHT, LOW);
    }
    
    delay(2000);
    lightLastChangeTimestamp = millis();
    
    return true;
  }
  else{
    return false; 
  }
}

boolean LightStateHasChanged(byte currentState){
   return currentState != previousLightState; 
}

//De-bounce LDR readings in case light switch is being quickly turned on/off
unsigned int MIN_TIME_BETWEEN_LIGHT_CHANGES = 5000;
boolean LightStateIsBouncing(){
   if(millis() - lightLastChangeTimestamp < MIN_TIME_BETWEEN_LIGHT_CHANGES){
      return true; 
   }
   else{
      return false; 
   }
}

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; 
char deviceID[ ] = "007DEADBEEF0";
//Format MEASURE|version|DeviceID|Sensor Type|State (on/off)
void writeMeasure(int sensorType, int state){
  Serial.print("MEASURE|v1|");
  
  Serial.print(deviceID);
  Serial.print("|");
  
  if(sensorType == SENSOR_MOTION)
    Serial.print("motion|");
  else if(sensorType == SENSOR_LIGHT)
    Serial.print("light|");
  else
    Serial.print("unknown|");
  
  if(state == HIGH)
    Serial.print("on");
  else if(state == LOW)
    Serial.print("off");
  else
    Serial.print("unknown");
  
  Serial.println("");
}

Chatter Bot Proxy (Processing.org Environment)

import processing.serial.*;

Serial port;
String buffer = "";

void setup()
{
    size(255,255);
    println(Serial.list());
    port = new Serial(this, "COM7", 9600);
}

void draw()
{
  if(port.available() > 0){
    int inByte = port.read();
    print( char(inByte) );
    if(inByte != 10){ //check newline
      buffer = buffer + char(inByte);
    }
    else{
       if(buffer.length() > 1){
          if(IsMeasurement(buffer)){
              postToForce(buffer);
          }
          buffer = "";
          port.clear();
       }
    }
  }
}

boolean IsMeasurement(String message){
  return message.indexOf("MEASURE") > -1;
}

void postToForce(String message){
  String[] results = null;
  try
  {
    URL url= new URL("http://listener-developer-edition.na7.force.com/api/measure?data=" + message);
    URLConnection connection = url.openConnection();

    connection.setRequestProperty("User-Agent",  "Mozilla/5.0 (Processing)" );
    connection.setRequestProperty("Accept",  "text/plain,text/html,application/xhtml+xml,application/xml" );
    connection.setRequestProperty("Accept-Language",  "en-us,en" );
    connection.setRequestProperty("Accept-Charset",  "utf-8" );
    connection.setRequestProperty("Keep-Alive",  "300" );
    connection.setRequestProperty("Connection",  "keep-alive" );
    
    results = loadStrings(connection.getInputStream());  
  }
  catch (Exception e) // MalformedURL, IO
  {
    e.printStackTrace();
  }

  if (results != null)
  {
    for(int i=0; i < results.length; i++){
      println( results[i] );
    }
  }
}

Visualforce Site Chatter Listener

<apex:page controller="measureController" action="{!processRequest}" 
contentType="text/plain; charset=utf-8" showHeader="false" 
standardStylesheets="false" sidebar="false">
{!Result}
</apex:page>

Controller

public with sharing class measureController {
	public void processRequest(){
    	if(Data != null){
    		system.debug('data= ' + Data);
    	}
    	
    	CreateFeedPosts();
    }
    
    //Post a Chatter FeedPost to every Asset bound to the reporting device
    private void CreateFeedPosts(){
    	if(AssetDeviceBindings.size() == 0)
    		return;
    	
    	for(AssetSensor__c binding : AssetDeviceBindings){
	    	FeedPost newFeedPost = new FeedPost();
	    	newFeedPost.parentId = binding.Asset__c;
			newFeedPost.Type = 'TextPost';
	        newFeedPost.Body = FeedPostMessage();
	        insert newFeedPost;
    	}
    }
    
    private string FeedPostMessage(){
    	if(AssetDeviceBindings.size() == 0)
    		return '';
    	
    	if(SensorType == 'motion'){
    		if(State == 'on')
    			return 'Motion detected';
    		else
    			return 'Motion stopped';
    	}
    	else if(SensorType == 'light'){
    		return 'Lights turned ' + State;
    	}
    	else
    		return 'Unknown sensor event';
    }
    
    private List<AssetSensor__c> m_assetSensor = null;
    public List<AssetSensor__c> AssetDeviceBindings{
    	get{
    		if(m_assetSensor == null){
    			m_assetSensor = new List<AssetSensor__c>();
    			if(DeviceID != null){
    				m_assetSensor = [select Id, Name, Asset__c, DeviceID__c from AssetSensor__c where DeviceID__c=:DeviceID limit 500];
    			}
    		}
    		return m_assetSensor;
    	}
    }
    
    private integer EXPECTED_MESSAGE_PARTS = 5;
    private integer DATA_MESSAGE_TYPE = 0;
    private integer DATA_VERSION	= 1;
    private integer DATA_DEVICEID	= 2;
    private integer DATA_SENSOR_TYPE= 3;
    private integer DATA_STATE		= 4;
    
    private List<string> m_dataParts = null;
    public List<string> DataParts{
    	get{
    		if(m_dataParts == null && Data != null){
    			m_dataParts = Data.split('\\|');
    		}
    		return m_dataParts;
    	}
    }
    
    public string Version{
    	get{
    		if(Data != null && DataParts.size() >= EXPECTED_MESSAGE_PARTS){
    			return DataParts[DATA_VERSION];
    		}
    		else
    			return null;
    	}
    }
    
    public string DeviceID{
    	get{
    		if(Data != null && DataParts.size() >= EXPECTED_MESSAGE_PARTS){
    			return DataParts[DATA_DEVICEID];
    		}
    		else
    			return null;
    	}
    }
    
    public string SensorType{
    	get{
    		if(Data != null && DataParts.size() >= EXPECTED_MESSAGE_PARTS){
    			return DataParts[DATA_SENSOR_TYPE];
    		}
    		else
    			return null;
    	}
    }
    
    public string State{
    	get{
    		if(Data != null && DataParts.size() >= EXPECTED_MESSAGE_PARTS){
    			return DataParts[DATA_STATE];
    		}
    		else
    			return null;
    	}
    } 
    
    private string m_data = null;
    public string Data{
    	get{
    		if(m_data == null && ApexPages.currentPage().getParameters().get('data') != null){
    			m_data = ApexPages.currentPage().getParameters().get('data');
    		}
    		return m_data;
    	}
    }
    
    private string m_result = '';
    public String Result{
    	get{
    		return 'ok';
    	}
    }
    
    public static testMethod void tests(){
    	Asset testAsset = new Asset();
    	testAsset.Name = 'Test Asset';
    	testAsset.AccountID = [select Id from Account order by CreatedDate desc limit 1].Id;
    	insert testAsset;
    	
    	AssetSensor__c binding = new AssetSensor__c();
    	binding.Name = 'Test Binding';
    	binding.DeviceID__c = '007DEADBEEF9';
    	binding.Asset__c = testAsset.Id;
    	insert binding;
    	
    	measureController controller = new measureController();
    	controller.processRequest();
    	system.assert(controller.Data == null);
    	system.assert(controller.DataParts == null);
    	system.assert(controller.Version == null);
    	system.assert(controller.DeviceID == null);
    	system.assert(controller.SensorType == null);
    	system.assert(controller.State == null);
    	
    	string TEST_MEASURE = 'MEASURE|v1|007DEADBEEF9|motion|on';
    	ApexPages.currentPage().getParameters().put('data', TEST_MEASURE);
    	controller = new measureController();
    	controller.processRequest();
    	system.assert(controller.Data == TEST_MEASURE);
    	system.assert(controller.DataParts != null);
    	system.assert(controller.DataParts.size() == 5);
    	system.assert(controller.Version == 'v1');
    	system.assert(controller.DeviceID == '007DEADBEEF9');
    	system.assert(controller.SensorType == 'motion');
    	system.assert(controller.State == 'on');
    	
    	system.assert(controller.AssetDeviceBindings != null);
    	system.assert(controller.AssetDeviceBindings.size() == 1);
    	system.assertEquals('007DEADBEEF9', controller.AssetDeviceBindings[0].DeviceID__c);
    	system.assertEquals(testAsset.Id, controller.AssetDeviceBindings[0].Asset__c);
    	
    	system.assert(controller.Result == 'ok');
    }
}
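Assuming the Site exposes this page at the same listener URL used by the proxy above, the listener can be smoke-tested from a browser with a request such as http://listener-developer-edition.na7.force.com/api/measure?data=MEASURE|v1|007DEADBEEF9|motion|on, which should produce a "Motion detected" post on the feed of any Asset bound to device 007DEADBEEF9.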
Thursday, 24 June 2010 18:03:10 (Pacific Daylight Time, UTC-07:00)
# Friday, 28 May 2010
A true REST interface with full support for HTTP Verbs, status codes, and URIs is currently not available on the Salesforce.com platform. However, a simple REST-like interface for getting objects can be developed using Salesforce Sites, Visualforce, and Apex.

This example uses a free Developer Edition with a Site named 'api' that uses only 2 Visualforce pages named 'rest' and 'error'. The rest page accepts a single parameter named 'soql', executes the SOQL query, and returns a JSON formatted response.



The error page is also used to generically handle all 40x and 50x errors.



The body of the error page returns a simple JSON message that the api is unavailable.
<apex:page contenttype="application/x-JavaScript; charset=utf-8" 
showheader="false" standardstylesheets="false" sidebar="false">
{"status": 500, "error": "api currently not available"}
</apex:page>

The rest Visualforce page (full source at bottom of this post) accepts a SOQL parameter and returns JSON results. To get a list of all Leads with their First and Last names, you'd use the SOQL

select Id, FirstName, LastName from Lead
and pass this query to the REST interface in a GET format such as (example here)
http://cubic-compass-developer-edition.na7.force.com/api?soql=select%20Id,%20FirstName,%20LastName%20from%20Lead

Note that the rest page is defined as the default handler for the site named 'api', so it's not required in the URL.
This simple interface supports any flavor of SOQL, including the WHERE and LIMIT keywords, so you can pass queries like

select Id, FirstName, LastName from Lead where LastName='Smith' limit 20
REST interfaces often assume the unique ID of an object is the last portion of the URL request. This can similarly be achieved with a query like (example here)
select Id, FirstName, LastName from Lead where Id='00QA00000019xkpMAA' limit 1

All of these example queries will only return the Id field by default. To fix this, update the Sites Public Access Settings and grant Read access to the Lead object.
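
Once field access is granted, the response built by processGetQuery (below) contains a status value and a records array holding each record's id plus the queried fields. A hypothetical response to the Lead query above would look roughly like this (field values are illustrative):

{"status": 200, "records": [{"id": "00QA00000019xkpMAA", "FirstName": "Fred", "LastName": "Flintstone"}]}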





The new URL rewriting feature in Summer '10 provides additional options and the necessary means to implement a RESTful interface with full support for object URIs and linking.

Visualforce Source Code for rest.page
<apex:page controller="RESTController" action="{!processRequest}" 
contentType="application/x-JavaScript; charset=utf-8" showHeader="false" 
standardStylesheets="false" sidebar="false">
{!JSONResult}
</apex:page>
Apex Source Code for RESTController.cls
public with sharing class RESTController {
	public void processRequest(){
		validateRequest();		
    	if( HasError )
    		return;
    	
    	//Add support for other types of verbs here
    	processGetQuery();
    }
    
    static final string ERROR_MISSING_SOQL_PARAM = 'Bad Request. Missing soql parameter';
    static final string ERROR_SOBJECT_MISSING	 = 'Bad Request. Could not parse SObject name from SOQL';
    static final string ERROR_FROM_MISSING		 = 'Bad request. SOQL missing FROM keyword';
    public void validateRequest(){
    	if(Query == null){
    		errorResponse(400, ERROR_MISSING_SOQL_PARAM);
    	}
    	else if(sObjectName == null){
    		//Force a get of object name property.
    		//Detailed error response should already be logged by sObjectName parser
    	}
    }
    
    public boolean HasError = False;
    private void errorResponse(integer errorCode, string errorMessage){
    	JSONResponse.putOpt('status', new JSONObject.value(errorCode));
    	JSONResponse.putOpt('error', new JSONObject.value(errorMessage));
    	HasError = True;
    }
        
    //Execute the SOQL query and copy each record's queried fields into the JSON response
    public void processGetQuery(){
    	Map<String, Schema.SObjectField> fieldMap = Schema.getGlobalDescribe().get(SObjectName).getDescribe().fields.getMap();
    	List<JSONObject.value> objectValues = new List<JSONObject.value>();
    	List<sObject> resultList = Database.query(Query);
 		
    	for(sObject obj : resultList){
    		JSONObject json = new JSONObject();
    		json.putOpt('id', new JSONObject.value( obj.Id ));
    		for(SObjectField field : fieldMap.values() ){
    			try{
    				string f = field.getDescribe().getName();
    				string v = String.valueOf( obj.get(field) );
    				json.putOpt(f, new JSONObject.value( v ));
    			}
    			catch(Exception ex){
    				//Ignore. Field not included in query
    			}
    		}
			objectValues.add(new JSONObject.value(json));
    	}
    	JSONResponse.putOpt('status', new JSONObject.value(200));
    	JSONResponse.putOpt('records', new JSONObject.value(objectValues));
    }
    
    private string m_query = null;
    public string Query{
    	get{
    		if(m_query == null && ApexPages.currentPage().getParameters().get('soql') != null){
    			m_query = ApexPages.currentPage().getParameters().get('soql');
    		}
    		return m_query;
    	}
    }

	static final string SOQL_FROM_TOKEN = 'from ';    
    private string m_sObject = null;
    public string sObjectName{
    	get{
    		if(m_sObject == null && Query != null){
    			string soql = Query.toLowerCase();
    			
    			integer sObjectStartToken = soql.indexOf(SOQL_FROM_TOKEN);
    			if(sObjectStartToken == -1){
    				errorResponse(400, ERROR_FROM_MISSING);
    				return null;
    			}
    			sObjectStartToken += SOQL_FROM_TOKEN.length(); 
    			
    			integer sObjectEndToken = soql.indexOf(' ', sObjectStartToken);
    			if(sObjectEndToken == -1)
    				sObjectEndToken = soql.length();
    			
    			m_sObject = Query.substring(sObjectStartToken, sObjectEndToken);
    			m_sObject = m_sObject.trim();
    			system.debug('m_sObject = ' + m_sObject);
    		}
    		return m_sObject;
    	}
    }
    
    private JsonObject m_jsonResponse = null;
    public JSONObject JSONResponse{
    	get{
    		if(m_jsonResponse == null)
    			m_jsonResponse = new JSONObject();
    		return m_jsonResponse;
		}
		set{ m_jsonResponse = value;}
	}
    
	public String getJSONResult() {
    	return JSONResponse.valueToString();
	}
	
	public static testMethod void unitTests(){
		RESTController controller = new RESTController();
		controller.processRequest();
		system.assertEquals(True, controller.HasError);
		system.assertEquals(True, controller.JSONResponse.has('status'));
		system.assertEquals(400, controller.JSONResponse.getValue('status').num);
		system.assertEquals(True, controller.JSONResponse.has('error'));
		system.assertEquals(ERROR_MISSING_SOQL_PARAM, controller.JSONResponse.getValue('error').str);
		
		controller = new RESTController();
		ApexPages.currentPage().getParameters().put('soql', 'select Id fro Lead');
		controller.processRequest();
		system.assertEquals(True, controller.HasError);
		system.assertEquals(True, controller.JSONResponse.has('status'));
		system.assertEquals(400, controller.JSONResponse.getValue('status').num);
		system.assertEquals(ERROR_FROM_MISSING, controller.JSONResponse.getValue('error').str);
		
		controller = new RESTController();
		ApexPages.currentPage().getParameters().put('soql', 'select Id from Lead');
		controller.processRequest();
		system.assertEquals(False, controller.HasError);
		system.assertEquals('Lead', controller.sObjectName);
		
		Lead testLead = new Lead(FirstName = 'test', LastName = 'lead', Company='Bedrock', Email='fred@flintstone.com');
        insert testLead;
        
        controller = new RESTController();
		ApexPages.currentPage().getParameters().put('soql', 'select Id from Lead where email=\'fred@flintstone.com\'');
		controller.processRequest();
		system.assertEquals(False, controller.HasError);
		system.assertEquals('Lead', controller.sObjectName);
		system.assertEquals(True, controller.JSONResponse.has('records'));
	}
}
Friday, 28 May 2010 12:34:08 (Pacific Daylight Time, UTC-07:00)
# Thursday, 29 April 2010

The VMForce value proposition:
  1. Download Eclipse and SpringSource
  2. Signup for a Salesforce Development account and define your data model
  3. Write your Java app using objects that serialize to Salesforce
  4. Drag and drop your app onto a VMware-hosted service to deploy it to Force.com
The partnership breaks down as:
  1. VMWare hosts your app
  2. Salesforce hosts your database
The 2 are seamlessly integrated so that Java Developers can effectively manage the persistence layer as a black box in the cloud without worrying about setting up an Oracle or MySql database, writing stored procedures, or managing database performance and I/O.

This is all great news for Java Developers. It's yet another storage option on the VMWare cloud (I'm assuming VMWare remains fairly agnostic beyond this relationship and Force.com becomes one of many persistence options to Spring Source developers).

For larger organizations already using Salesforce but developing their custom Java apps, this opens up some new and attractive options.

Existing Salesforce Developers may have wondered if Java would replace Apex and Visualforce, prompting a Salesforce blog post aptly titled "In case you were wondering...". In short, "no". Apex and Visualforce will continue to evolve and be the primary platform for developing Salesforce native apps. I personally will continue to use Apex and Visualforce for all development when the data is stored in Salesforce unless compelling requirements come along to use VMForce (most likely that have particular DNS, bandwidth, or uptime needs).

So why the partnership between VMWare and Salesforce? When Salesforce announced Apex back in 2007 it was met with broad acceptance, but some common criticisms were:
  • Why another DSL (Domain Specific Language)?
  • Why can't I leverage my existing Java skills to write business apps?
  • Salesforce is written in Java. Can I upload my existing Java apps to the cloud?
These criticisms were coupled with some looming 800 pound Gorillas in the room (Amazon and VMWare) pushing virtualization as the basis for cloud computing while Salesforce promoted the non-virtualized, multi-tenant path to cloud computing.

They can't both be right. Or can they? CIOs are being bombarded with virtualization as a viable cloud computing solution, so I think Salesforce has wisely taken a step back and taken a position that says "We do declarative, hosted databases better than anyone else. Go ahead and pursue the virtualization path for your apps and leverage our strength in data management as the back end".

Over time, the bet is that VMForce customers will also discover that the declarative configuration tools for form-based CRUD (Create/Read/Update/Delete) apps can meet the rapid prototyping and development needs of almost any line-of-business app.

For object-oriented developers, Salesforce provides a persistence layer that meets or exceeds any ORM (Object Relational Mapping) or NoSQL solution. The impedance mismatch between objects and relational databases is widely known, and VMForce solves this problem very elegantly.

I think other ORM Developer communities, such as Ruby on Rails Developers, will appreciate what is being offered with VMForce, prompting some to correctly draw parallels between VMForce and Engine Yard.

In my experience working with Azure, I cannot emphasize enough how difficult it was to work through the database and storage aspects of even the simplest application design. Sure, C# is a dream to work with (compared to both Java and Apex) and ASP.NET works well enough for most applications, but Microsoft leaves so many data modeling and storage decisions to the Developer in the name of flexibility, which ultimately means sacrificing simplicity, reliability, and in some cases scalability.

Some final thoughts, observations and questions on VMForce:
  • Are there any debugging improvements when using VMForce relative to Apex/VF?
  • The connection between VMWare and Salesforce is presumably via webservices and not natively hosted in the same datacenter. Does this imply some performance and latency tradeoffs when using VMForce? (Update: No. Per the comment from David Schach, the app VM is running in the same datacenter as the Force.com DB)
  • Licensing: No idea what the pricing will be. Will there be a single biller or will Developers get separate invoices from VMWare and Salesforce for bandwidth/computing and storage?
  • It strikes me as quite simple to develop Customer/Partner portals or eCommerce solutions in Java that skirt the limitations of some Salesforce license models when supporting large named-user/low authentication audiences. Will Salesforce limit the types and numbers of native objects that can be serialized through VMForce?
  • Will VMForce apps be validated and listed on the AppExchange? If so, will they be considered hybrid or native? What security review processes will be enforced?
  • Why only the teaser? Ending a great demo with "and it should be available sometime later this year" just seemed deflating. I think Business Users and Developers respond to this kind of promotion much differently. It would be far better to leave Developers with some early release tools in hand immediately after the announcement and capitalize on the moment. Business Users, however, can be shown Chatter, and other future release features, to satiate their long term interests.

Update: Jesper Joergensen has an excellent blog post that answers many of these questions. Thanks Jesper!

Thursday, 29 April 2010 14:38:34 (Pacific Daylight Time, UTC-07:00)
# Sunday, 21 February 2010

There are several bank specific rules for validating credit cards, but all card numbers can be validated using the Luhn algorithm. The Luhn algorithm, aka the Luhn formula or "modulus 10" algorithm, is a simple checksum formula used to validate a variety of identification numbers.

It is not intended to be a cryptographically secure hash function; it was designed to protect against accidental errors, not malicious attacks. Most credit cards and many government identification numbers use the algorithm as a simple method of distinguishing valid numbers from collections of random digits.

Programmers accustomed to using '%' as a modulus operator need only to shift gears slightly and use the Math.mod(int1, int2) library function when working with Apex.

public static boolean isValidCheckSum(List<integer> numbers){
	integer sum = 0;
	integer len = numbers.size();
	
	for (integer i = len - 1; i >= 0; i--)
	{
		if (math.mod(i , 2) == math.mod(len,  2) )
		{
			integer n = numbers[i] * 2;
			sum += (n / 10) + ( math.mod(n, 10));
		}
		else
			sum += numbers[i];
	}
	return ( math.mod( sum, 10) == 0 );
}
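
For completeness, a minimal caller sketch (the method name and placement are illustrative; it assumes the card number arrives as a digits-only string and lives in the same class as isValidCheckSum):

public static boolean isValidCardNumber(string cardNumber){
	//Convert each character of the card number into an integer digit
	List<integer> digits = new List<integer>();
	for(integer i = 0; i < cardNumber.length(); i++){
		digits.add(integer.valueOf(cardNumber.substring(i, i + 1)));
	}
	return isValidCheckSum(digits);
}

Calling isValidCardNumber('4111111111111111'), the well-known Visa test number, returns true.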
Sunday, 21 February 2010 12:15:00 (Pacific Standard Time, UTC-08:00)
# Tuesday, 16 February 2010

I spent a few hours tackling this problem, so hopefully this blog post will spare someone else the difficulty of using Salesforce Spring '10 features before it is broadly released.

Spring '10, aka version 18, is a big release for Salesforce Developers. The current version of Eclipse Force IDE defaults to using the version 16 API.

Once Spring '10 was installed on my org (NA1) I committed to using the v18 features on a couple new projects. Unfortunately the Spring '10 release hit a snag, so now the release is being staggered over 30-45 days and the new Force IDE release has been put on hold.

Fortunately, there are a couple workarounds to using the v18 API and features today.

Option 1)
a) Create an Apex class as you normally would in Eclipse and accept version 16 as the default.
b) Eclipse will create a second file next to the class with a 'meta.xml' extension.
c) Edit the -meta.xml file by changing the version to 18 then save.
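
For reference, the -meta.xml edited in step (c) is a small XML file; after the change it should look roughly like this (the status element may differ):

<?xml version="1.0" encoding="UTF-8"?>
<ApexClass xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>18.0</apiVersion>
    <status>Active</status>
</ApexClass>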

Option 2)
a) Create an Apex class through the Salesforce browser interface but do not accept the default version 18. Instead, select version 16.
b) Open the Salesforce project in Eclipse and confirm the Apex class is available in the IDE (the current IDE apparently won't automatically sync with classes > v16. At least that was my experience).
c) Back in the Salesforce browser, change the version from 16 to 18.

Finally, depending on which approach you used, you'll need to synchronize the Eclipse IDE with Salesforce servers using either the "Deploy to" or "Refresh from" server options by right-clicking on the class and selecting from the Force.com menu. (Option 1 requires "Deploy to"; Option 2, "Refresh from".)

Hope this helps!

Tuesday, 16 February 2010 18:28:19 (Pacific Standard Time, UTC-08:00)
# Tuesday, 02 February 2010

Every once in a while I run into a requirement when using Apex where I think there must be a real obvious way to do something, only to hit a wall and decide to develop the functionality myself.

When this happens, the result either ends up being:
a) I later discover there actually is a faster/better way to do something and I feel stupid for taking the NIH path (Not Invented Here).
b) The functionality really is missing and the effort invested turns out to be well spent.

Let's just hope the time spent on this wrapper class is in the latter category :-)

In this use case, a custom database object is wrapped in an Apex class that abstracts the underlying object properties and provides some methods. One of the properties (a database custom field) is a multipicklist that is stored in semicolon delimited format, but is better represented in object-oriented terms as a List. Methods for Adding and Removing items from the list are also needed.

public class Foo{
	private Foo__c m_record = null;
	public Foo(Foo__c record){
		m_record = record;
	}

	private MultiSelectProperty m_categories = new MultiSelectProperty();
	public List<string> Categories{
		get{return m_categories.ToList(m_record.Category__c);}
	}
	public void AddCategory(string category){
		m_record.Category__c = m_categories.Add(m_record.Category__c, category);
	}
	public void RemoveCategory(string category){
		m_record.Category__c = m_categories.Remove(m_record.Category__c, category);
	}
}
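
A quick usage sketch (the record query and category values are illustrative):

Foo__c record = [select Id, Category__c from Foo__c limit 1];
Foo foo = new Foo(record);

foo.AddCategory('Red');       //appends 'Red' to the stored Category__c value if not already present
foo.RemoveCategory('Blue');   //removes 'Blue' from the stored value, if present
system.debug(foo.Categories); //List<string> view of the multipicklist

update record;                //the wrapper mutates the underlying record, so persist it when done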

The MultiSelectProperty class is defined here. The unit tests should help tell the story on how it can be used and the assumptions it makes.

//Manages semicolon-delimited multipicklist data. 
global class MultiSelectProperty {
	
	public List<string> ToList(string storedValue){
		List<string> values = new List<string>();
		if(storedValue == null || storedValue == '')
			return values;
		else{			
			string[] splitValues = storedValue.split(';');
			for(string v : splitValues){
				if(v != null && v != '')
					values.add(v);
			}
			return values;
		}
	}
	
	public boolean Contains(string field, string value){
		List<string> values = this.ToList(field);
		for(string v : values){
			if(v==value){
				return true;
			}
		}
		return false;
	}
	
	public string Add(string field, string value){
		if(field == null)
			field = '';
		
		if(value == null || value == '')
			return '';
		
		if(this.Contains(field, value))
			return field;
		
		if(field.endsWith(';')){
			return field + value;
		}
		else
			return field + ';' + value;
	}
	
	public string Remove(string field, string value){
		if(field == null || field == '')
			return '';
		
		if(value == null || value == '')
			return field;
		
		if(!this.Contains(field, value))
			return field;
		
		List<string> values = this.ToList(field);
		string formattedFields = '';
		for(string v : values){
			if(v == value)
				continue;
			formattedFields += v + ';'; 
		}
		return formattedFields;		
	}
	
	@IsTest public static void tests(){
		final string SINGLE_VALUE_NULL = null;
		final string SINGLE_VALUE_EMPTY = '';
		final string SINGLE_VALUE = 'first option;';
		final string SINGLE_VALUE_EMPTY_TRAIL = 'first option';
		final string DOUBLE_VALUE = 'first option;second option;';
		final string TRIPLE_VALUE = 'first option;second option;third option;';
		
		MultiSelectProperty categories = new MultiSelectProperty();
		System.assert(categories.ToList(SINGLE_VALUE_NULL).size() == 0);
		System.assert(categories.ToList(SINGLE_VALUE_EMPTY).size() == 0);
		System.assert(categories.ToList(SINGLE_VALUE).size() == 1);
		System.assert(categories.ToList(SINGLE_VALUE_EMPTY_TRAIL).size() == 1);
		System.assert(categories.ToList(DOUBLE_VALUE).size() == 2);
		System.assert(categories.ToList(TRIPLE_VALUE).size() == 3);
		
		System.assert(categories.Contains(SINGLE_VALUE, 'first option') == true);
		System.assert(categories.Contains(SINGLE_VALUE, 'second option') == false);
		System.assert(categories.Contains(DOUBLE_VALUE, 'second option') == true);
		System.assert(categories.Contains(DOUBLE_VALUE, 'third option') == false);
		System.assert(categories.Contains(TRIPLE_VALUE, 'first option') == true);
		System.assert(categories.Contains(TRIPLE_VALUE, 'second option') == true);
		System.assert(categories.Contains(TRIPLE_VALUE, 'third option') == true);
		
		string fields = categories.Add(SINGLE_VALUE, null);	
		System.assert(categories.ToList(fields).size() == 0);
		fields = categories.Add(null, null);
		System.assert(categories.ToList(fields).size() == 0);
		fields = categories.Add(null, '');
		System.assert(categories.ToList(fields).size() == 0);
		fields = categories.Add(SINGLE_VALUE, '');
		System.assert(categories.ToList(fields).size() == 0);
		
		fields = categories.Add(SINGLE_VALUE, 'second option');
		System.assert(categories.ToList(fields).size() == 2);
		System.assert(categories.Contains(fields, 'second option') == true);
		
		fields = categories.Add(SINGLE_VALUE_EMPTY_TRAIL, 'second option');
		System.assert(categories.ToList(fields).size() == 2);
		System.assert(categories.Contains(fields, 'second option') == true);
		
		fields = categories.Add(DOUBLE_VALUE, 'second option');
		System.assert(categories.ToList(fields).size() == 2);
		System.assert(categories.Contains(fields, 'second option') == true);
		
		fields = categories.Remove(null, '');
		System.assert(categories.ToList(fields).size() == 0);
		
		fields = categories.Remove(null, null);
		System.assert(categories.ToList(fields).size() == 0);
		
		fields = categories.Remove(SINGLE_VALUE, '');		
		System.assert(categories.ToList(fields).size() == 1);
		
		fields = categories.Remove(SINGLE_VALUE, null);		
		System.assert(categories.ToList(fields).size() == 1);
		
		fields = categories.Remove(SINGLE_VALUE, 'second option');
		System.assert(categories.ToList(fields).size() == 1);
		System.assert(categories.Contains(fields, 'second option') == false);
		
		fields = categories.Remove(SINGLE_VALUE, 'first option');
		System.assert(categories.ToList(fields).size() == 0);
		System.assert(categories.Contains(fields, 'first option') == false);
		
		fields = categories.Remove(TRIPLE_VALUE, 'first option');
		System.debug('results from remove ' + fields);
		System.assert(categories.ToList(fields).size() == 2);
		System.assert(categories.Contains(fields, 'first option') == false);
		System.assert(categories.Contains(fields, 'second option') == true);
		System.assert(categories.Contains(fields, 'third option') == true);
	}
}
Tuesday, 02 February 2010 12:23:21 (Pacific Standard Time, UTC-08:00)
# Monday, 18 January 2010

Apex is a programming language supported by the Force.com platform.

A fluent interface is a way of implementing an object oriented API in a way that aims to provide for more readable code.

An example of a fluent interface looks like:

Contact c = new Contact();
ITransaction trx = new FluentTransaction()
		.Initialize(c, 5.0)
		.Validate()
		.Execute()
		.Log()
		.Finish();

Transactions are a good use case for implementing a fluent interface because they often execute the repeated steps of initializing and validating input values, executing, and logging data. An API designer can expressly define these methods in an interface using Apex (very similar to Java and C# interfaces). When implementing a transaction object, the compiler will check to ensure the proper interfaces are implemented.

A fluent interface is normally implemented using method chaining to relay the instruction context to the subsequent call. Generally, the context is carried in the return value of each called method; by returning a self reference (this), the new context is the same object as the last one.

public class MethodChaining {
	
	public interface ITransaction{		
		ITransaction Initialize(Contact c, double d);
		ITransaction Validate();
		ITransaction Execute();		
		ITransaction Log();
		ITransaction Finish();
		boolean IsValid();
		boolean IsSuccessful();
	}
	
	public class FluentTransaction implements ITransaction{
		private Contact m_contact;
		private double m_amount;
		private boolean m_isSuccessful = false;		
		private boolean m_isValid = false;
		
		public ITransaction Initialize(Contact c, double d){
			m_contact = c;
			m_amount = d;
			return this;
		}
		
		public ITransaction Validate(){
			//Validate the m_contact and m_amount inputs
			//Setting this to false will instruct other execution chain methods to halt
			m_isValid = true; 
			return this;
		}
		
		public ITransaction Execute(){
			if(IsValid()){
				// Execute the transaction here
				m_isSuccessful = true;
			}
			return this;
		}
				
		public ITransaction Log(){
			if(IsValid()){
				//Add any transaction audit logging here
			}
			return this;
		}
		
		public ITransaction Finish(){
			if(IsValid()){
				//Finalize method. Manage any required cleanup
			}
			return this;
		}
		
		public boolean IsValid(){
			return m_isValid;
		}
				
		public boolean IsSuccessful(){
			return m_isSuccessful;
		}		
	}
	
	public MethodChaining(){
		Contact c = new Contact();
		ITransaction trx = new FluentTransaction()
			.Initialize(c, 5.0)
			.Validate()
			.Execute()
			.Log()
			.Finish();
		
		if(trx.IsSuccessful()){
			//Do something
		}
		else{
			//Do something else
		}
	}
}

Monday, 18 January 2010 15:06:41 (Pacific Standard Time, UTC-08:00)
# Saturday, 09 January 2010
With the economy being what it is, I can think of no better resolution for a small business owner/entrepreneur than to focus on creating jobs.
Ideally these jobs will be created in the Portland, OR metro area, but we're accustomed to operating with distributed teams and are happy to continue doing so.

Web Application Design / Development
With our shift away from .NET and onto the Force.com platform, a person in this role will need to be competent with:
  • Salesforce API and custom/native objects
  • Apex Controllers and Classes / Object modeling
  • Visualforce Pages / Eclipse
  • Javascript / JQuery / AJAX
Possible job creation triggers:
We're actively involved in re-architecting the i-Dialogue platform to run on Force using the above technologies. This product will be launched on the AppExchange with a Freemium business model. As we start to see adoption, the long-term strategy of the product will be to develop and offer add-on premium application modules.
Demand for custom module development, such as custom shopping carts or product configurators, will also spark growth.

Professional Services Developer / Project Manager
The Pro Service Developer is competent with what's available "out of the box" from Salesforce and our core products, and offers last mile customization and development services in direct response to customer and project requirements.

Responsibilities include:
  • Custom Visualforce template design and branding
  • Custom layout
  • Configuration and implementation services
  • Weekly progress meetings. Customer facing support and requirements gathering.
Desired skills:
  • JQuery UI / Themeroller / Custom Themes
  • HTML / Javascript
  • Visualforce (UI emphasis - no custom controller development)
Possible job creation triggers:
  • Clients and customers requiring custom branding of Salesforce Sites and Portal templates.
  • Custom website, landing page, and portal development on Force.com
  • Multi-channel campaigns executed across email and landing page sites
  • Integration projects involving Sites, Customer/Partner Portal, and Ideas

Account / Sales Manager
The AppExchange, coupled with our concept of a Site Gallery, will automate much of the Sales and Marketing process; however, an Account/Sales Manager will be required to help current and prospective clients assemble the right mix of software and services for their solution and to provide quotes.

Possible job creation triggers:
Once the free version is released, we'll shift to offering premium add-on modules within 2 months.


Risks in 2010
The freemium model is somewhat new to us. We're giving away significant value and functionality in the free version, so it's very likely that only 5-20% of all customers will require our premium services, which in turn will enable new job growth. However, this model leverages the scalability and distribution of the cloud and requires no additional effort on our part to provision new computing resources.

The Web Application Design and Development position requires a Software Engineering approach to using Javascript. This is a common skill found in Silicon Valley, but not so much in Portland, OR. It may be difficult to find just the right person(s) for this role.

Most support functions are handled by Pro Service and Account Managers today, but there may be a need for a specific support role in the future.


Conclusion

The actual number of jobs in each position may vary, but these are the 3 primary job functions we'll seek to create in 2010. The products and features we have planned in 2010 are embracing the cloud in ways unimaginable a couple years ago and I'm very excited to wake up each day in pursuit of these solutions. For me, software development has always been about the journey, and surrounding myself with creative, innovative, and passionate individuals on this journey is important.

If past success is any indicator of the future, then I think our new direction will be successful. Of the 60,000+ customers on Salesforce, many are always seeking to gain more value from their CRM investment by deploying Sites and Portals. By running natively and offering services directly on the Force.com platform, rather than at arms length in a hybrid configuration, we're now able to offer much richer applications and solutions.

If you know of anyone in the Portland, OR area that has these particular skill sets, please have them contact me at mike@cubiccompass.com.

Saturday, 09 January 2010 14:11:55 (Pacific Standard Time, UTC-08:00)
# Sunday, 13 December 2009
Update Jan 2012: This article is way out of date. I've since moved to a MacPro for development, using VMWare Fusion to run some Windows apps.

What would be your ultimate machine for developing applications in the cloud?

I've been mulling over this question for a few weeks and finally got around to putting a solution together over a weekend. There are both hardware and software components:

Hardware
  • Something in between a netbook and laptop, around 4 pounds.
  • 8+ hour battery life
  • 1" thin (Easy to tote)
  • SSD (Solid State Disk 256 GB+)
  • 4 GB RAM
  • 2 GHz CPU+
  • ~$700
I settled on a new Acer 4810 Timeline which met most of my requirements. The exceptions being the Acer has a 1.3 GHz CPU and doesn't have SSD. I wanted the SSD primarily for a speedy boot time, but some tuning of the Win7 sleep options along with the Acer's 8+ hour battery life means the laptop is rarely ever turned off and can quickly recover from sleep mode.

Software

Next up was the software required to fully embrace the cloud. My list of essential cloud tools is nowhere near as prolific as Scott Hanselman's tool list. But hey, this is a nascent craft and we're just getting started. These tools are essential, in my opinion, for doing development on the leading cloud platforms.

Windows 7
Whoa! I can already hear my Mac toting friends clicking away from this blog post. "WTF? Why Windows 7 for a cloud development machine?" Well, there are several reasons:
  • First of all, even if you do develop on a Mac you likely have a Windows VM or dual boot configured for Windows anyway.
  • Windows has been running on x86 architecture for years. Mac only made the jump a few years ago and is still playing catch up on peripheral device and driver support.
  • Even Google, a huge cloud player, consistently will develop, test, and release all versions of Chrome and Google App Engine tools on Windows before any other platform. Windows developers typically get access to these tools months in advance of Mac users.
  • Eclipse is another tool that is well supported on Windows, above all other platforms.
  • Silverlight support. This is my RIA environment of choice going forward.

Visual Studio 2010 Beta 2
By the time you read this blog, perhaps this version of Visual Studio will be outdated, but it is the first release of VS designed with complete support for Azure.

Windows Azure
You'll need an Azure account to upload your application to the cloud for hosting.

Azure SDK
Great article here for getting started on installing/configuring your machine for the latest Azure bits

Azure Storage Explorer
Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Azure cloud storage projects including the logs of your cloud-hosted applications.

Eclipse 
Eclipse is a Java-based IDE that requires the Java runtime

Rather than downloading the latest and greatest version of Eclipse, I recommend downloading whatever version is currently supported by the next essential cloud development tool (below)...

Force IDE
Eclipse Plug-in that supports development and management of Apex/Visualforce code on Salesforce.com's platform (aka Force.com)

Google App Engine
Currently, there are both Python and Java development environments for GAE. Like Azure, GAE supports development and debugging on localhost before uploading to the cloud, so the installation package provides a nice local cloud emulation environment.

SVN
I have a need for both the Windows Explorer Tortoise shell plug-in and the Eclipse plug-in. You may need only one or the other. All the Force.com open source projects are accessible via SVN.

Git on Windows (msysgit)
It seems gitHub is becoming the defacto standard for managing the source code for many web frameworks and projects. Excellent article here on how to install and configure Git on Windows

Amazon Web Services (AWS) Elastic Fox
Nice Firefox plug-in for managing EC2 and S3 instances on AWS. I mostly just use it for setting up temporary RDP whitelist rules on EC2 instances as I connect remotely from untrusted IP addresses (like airports/hotels/conferences).

I highly recommend using the browser based AWS console for all other provisioning and instance management. There's also an AWS management app for Android users called decaf.

Browsers
If you're doing real world web development, then you already know the drill. Download and install Internet Explorer 8, Firefox, and Chrome at minimum. In addition to the Elastic Fox plug-in (above) you'll want to install Firebug. IE 8 now has Firebug-like functionality built-in, called the Developer Toolbar. Just hit F12 to access it (see comparison with Firebug here).

I personally use JQuery for all web development that requires DOM manipulation to handle cross-browser incompatibilities and anomalies.

There are 6-10 other utilities and tools I installed on this machine that aren't specific to the cloud. Installing everything on this list took about 90 minutes (plus a couple hours to pull down all my SVN project folders for Passage and other related projects I manage).

Given that cloud development is all about distributing resources over servers and clients, I like to take a minimalist approach and reduce the surface area and drag of my local environment as much as possible. This improves OS and IDE boot time as well as eliminates a lot of common debugging issues as a result of version incompatibilities.

What about you? What hardware/tools/applications are essential to your cloud development projects?

Sunday, 13 December 2009 12:01:04 (Pacific Standard Time, UTC-08:00)
# Monday, 23 November 2009

I just returned from the 2009 Dreamforce conference last week. Prior to the conference, a few of us on Twitter were bouncing ideas around on how to track all the after hours events and arrange for meeting people while at the conference. Additionally, I was looking for something mobile-browser friendly since carrying a netbook around wasn't going to be practical.

I built the site http://www.dfbuzz.com using Force.com Sites as an excuse to try out a new UI library and address these requirements.

In the end, Twitter ended up being the best way to get realtime information on the conference 'buzz', but I did get some feedback from people that they discovered an event on DF Buzz, or that a comment left on an event was useful in making a decision on whether or not to attend.

Thanks to all the sponsors who posted their events. I have a laundry list of enhancements to make next year. :-)

Monday, 23 November 2009 14:22:00 (Pacific Standard Time, UTC-08:00)
# Saturday, 07 November 2009
Update 2/22/2010 See this article for a better way of using Ajax with Visualforce/Apex.

JQuery and JSON have become my tools of choice for designing and developing Single Page Applications (or SPA). Why?
  1. The user experience is better if the entire page is not refreshed when executing a single action
  2. There's a vast library of JQuery plug-ins and a Developer community to tap into.
  3. Using JSON as the standard object model and separating the UI from the database allows me to design a UI only once and port the application to other cloud platforms (for example, the VF code below can run on i-Dialogue, Google App Engine, or Azure by only changing a few lines of code).

Credit goes to Ron Hess for showing me this pattern. I've made very few changes to his Apex code example, except to add JQuery and to handle actionFunction rerender callbacks slightly differently.

The basic pattern I'm using for developing Single Page Applications for Visualforce is to call actionFunctions from JQuery, which in turn call Apex controller methods that construct JSON strings. The web page then handles the JSON formatted response in re-rendered outputPanels. The code below demonstrates a simple auto complete function that searches Salesforce Accounts that match a user entered input on each keypress. Clicking on an Account drills down to Account details by calling another actionFunction that retrieves specific Account information.

Some Caveats

  • The outputPanel scripts will execute the first time the page is loaded, so a check for null is required in the callback to prevent first-time execution
  • The dynamic re-rendering of script within an outputPanel makes it difficult to do true functional programming and create closures that elegantly handle the callbacks. More complex applications may have to utilize global state variables, increasing the mutability of the application (and potential for bugs)
  • The mutability of Apex controller properties requires a one-to-one relationship between actionFunction handlers and their corresponding response strings.
  • actionFunctions send the page ViewState in the AJAX XmlHttpRequest and return a blob of XML (apparently using the Sarissa open source library) in the response, which has a slightly slower performance than what you'd get using JQuery's native AJAX utility methods.

Source Code

<apex:page sidebar="false" showHeader="false" standardStylesheets="false" title="AJAX Development Harness" controller="exampleCon">
<style>
body {font-family: Verdana;}
.apexTable { width: 600px;}
.evenTableRow {background-color: #eee;}
.defaultHidden {display: none;}
</style>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
    <h3>Using JQuery and JSON with Visualforce</h3>
    <apex:form >
        <apex:actionFunction name="apex_searchAccounts" action="{!SearchAccounts}" immediate="true" rerender="searchResultsPanel">
            <apex:param name="searchTerm" value="" />
        </apex:actionFunction>
        <apex:actionFunction name="apex_getAccountDetails" action="{!GetAccountDetails}" immediate="true" rerender="accountDetailsPanel">
            <apex:param name="accountid" value="" />
        </apex:actionFunction>
    </apex:form>

    Enter Account Name: <input type=text id='accountInput' /> (type the letters 'uni' to test)<br/>
    <div id="accountsListView" class="view"></div>
    <div id="accountDetailsView" class="view defaultHidden"></div>

    <script type="text/javascript">
        $(function() {
            $("#accountInput")
                .focus()
                .keyup( function (e) {
                    var searchTerm = $("#accountInput").val();
                    if( searchTerm != ''){
                        apex_searchAccounts(searchTerm);
                    }
            });

            $(".accountDetailsLink").live('click', function() {
                apex_getAccountDetails( $(this).attr('id') );
            });
        });

        function clearAllViews(){
            $(".view").html('');
        }

        function renderAccountsListView(accounts){
            if(accounts.length === 0){
                $("#accountsListView").html("no accounts matching that query").fadeIn();
                return;
            }
            var tableHTML = '<table class="apexTable">';
            for(var i=0; i < accounts.length; i++){
                tableHTML += '<tr><td><a id="' + accounts[i].id + '" href="#" class="accountDetailsLink">' + accounts[i].name + '</a></td></tr>';
            }
            tableHTML += '</table>';
            $("#accountsListView").html(tableHTML).fadeIn();
            $("table tr:nth-child(even)").addClass("evenTableRow");
        }

        function renderAccountDetailView(account){
            var tableHTML = '<h3>Account Details for ' + account.name + '</h3>';
            tableHTML += '<table class="apexTable">';
            tableHTML += '<tr><td>Name</td><td>' + account.name + '</td></tr>';
            tableHTML += '<tr><td>Type</td><td>' + account.type + '</td></tr>';
            tableHTML += '<tr><td>Description</td><td>' + account.description + '</td></tr>';
            tableHTML += '<tr><td>Website</td><td><a href="' + account.website + '" target=_blank>' + account.website + '</a></td></tr>';
            tableHTML += '</table>';
            $("#accountDetailsView").hide().html(tableHTML).slideDown();
            $("table tr:nth-child(even)").addClass("evenTableRow");
        }
    </script>
    <apex:outputPanel id="searchResultsPanel" layout="block" rendered="true">
        <script>
        function apex_searchAccounts_callback(jsonResponse){
            if(jsonResponse === null || jsonResponse === ''){
                return;
            }

            var accounts;
            eval('accounts = ' + jsonResponse);
            if(accounts !== null){
                renderAccountsListView(accounts);
            }
        }
        clearAllViews();
        apex_searchAccounts_callback('{!JSONSearchAccounts}');
        </script>
    </apex:outputPanel>

    <apex:outputPanel id="accountDetailsPanel" layout="block" rendered="true">
        <script>
        function apex_getAccountDetails_callback(jsonResponse){
            if(jsonResponse === null || jsonResponse === ''){
                return;
            }

            var account;
            eval('account = ' + jsonResponse);
            if(account !== null){
                renderAccountDetailView(account);
            }
        }
        apex_getAccountDetails_callback('{!JSONAccountDetails}');
        </script>
    </apex:outputPanel>
</apex:page>

//Controller

public class exampleCon {
    public String JSONSearchAccounts{ get; set; }

    public PageReference SearchAccounts() {
        String searchTerm = Apexpages.currentPage().getParameters().get('searchTerm');
        String soql = 'SELECT Id, Name FROM Account WHERE Name like \'' + searchTerm + '%\'';
        List<Account> accountList = Database.query(soql);

        string json = '[';
        for(Account acct : accountList){
            json += '{"id": "' + acct.Id + '", ' +
            '"name": "' + acct.Name + '"},';
        }
        json += ']';
        json = json.replace('},]', '}]');

        JSONSearchAccounts = json.replace('\'', '');
        return null;
    }

    public String JSONAccountDetails{ get; set;}

    public PageReference GetAccountDetails() {
        String accountid = Apexpages.currentPage().getParameters().get('accountid');
        String soql = 'SELECT Id, Name, Type, AccountNumber, Description, Website FROM Account WHERE Id=\'' + accountid + '\'';
        List<Account> accountList = Database.query(soql);

        if(accountList.size() != 1){
            JSONAccountDetails = '{"error": "Expected only 1 account"}';
            return null;
        }

        string json = '{';
        json += '"id": "' + accountList[0].Id + '", ';
        json += '"name": "' + accountList[0].Name + '", ';
        json += '"type": "' + accountList[0].Type + '", ';
        json += '"accountNumber": "' + accountList[0].AccountNumber + '", ';
        json += '"description": "' + accountList[0].Description + '", ';
        json += '"website": "' + accountList[0].Website + '" ';
        json += '}';

        JSONAccountDetails = json.replace('\'', '');
        return null;
    }
}
Saturday, 07 November 2009 13:55:33 (Pacific Standard Time, UTC-08:00)
# Friday, 06 November 2009

Because the format of Salesforce record identifiers is deterministic, there's always a concern when using them in Site URLs that someone will 'guess' record IDs and gain access to records.

If the data you're publishing is public anyway, like Solutions, Events, or Press Releases, then maybe it's no big deal. But when Sites are being used as landing pages for sensitive data, like Opportunities or Partner Accounts, then some more effort is required to protect the records.

Since most records correlate with an Account or Contact, in a portal context it's simple enough to check the record relationship. Otherwise, URL encryption is really the only option for anonymous visitors.

A couple blog entries here and here discuss the use of MD5 hash for creating encrypted URLs.

These solutions use custom fields and triggers to correlate a web page request with an originating request record.

With the recent release of custom settings in Winter 10, there's now the option of storing private keys and using PKI encryption to sign record identifiers and decrypt them.

Unfortunately, the Crypto class documentation is a bit 'cryptic' (pun intended) on how to implement full PKI encryption, since it only demonstrates how to sign strings, and not decrypt them. Hopefully, someone can enlighten me on how this might work. I have a C# RC4 encryption engine that I'm considering converting to Apex, but I obviously prefer to use the native Crypto class, if possible.

(Sample code from Apex Developers Guide)
String algorithmName = 'RSA';
String key = 'pkcs8 format private key';
Blob privateKey = EncodingUtil.base64Decode(key);
Blob input = Blob.valueOf('12345qwerty');
Crypto.sign(algorithmName, input, privateKey);
Friday, 06 November 2009 00:34:25 (Pacific Standard Time, UTC-08:00)
# Wednesday, 04 November 2009

I'm hooked on Single Page Applications (or SPA for short). If you work in Google Apps everyday, then you know what I mean. Open up GMail, Calendar, or even a Google map, and there's just a single page load. All subsequent interactions occur asynchronously via AJAX, producing a fluid, responsive user interface.

Contrast this with a traditional web application where you fetch a page and are given a master list of records. Click on a record and a completely new page loads containing record details (aka Master-Detail pattern).

Visualforce and Dialogue Script come from a paradigm that mostly encouraged multi-page apps. I've spent the greater part of 2009 re-writing just about every single DScript app to be a SPA, and now that I'm developing applications for Force.com I'm going through the same exercise.

First, don't make the same mistake I did and assume that an AJAX application using Visualforce is as simple as using the AJAX Toolkit. It'll work as a VF page within Salesforce, but can't be published to Sites for security reasons. I found this fact most unfortunate since the Salesforce AJAX API is one of the best AJAX frameworks I've worked with as it encourages functional programming (FP) and embraces all the good parts about Javascript (yes, I am trying to shed my 15 years of Object-Oriented programming and learn FP... a blog entry for another day).

A SPA treatment can be given to a Visualforce page through the use of the <apex:actionFunction /> and <apex:outputPanel /> controls, which I'll cover in detail in Part 2.

The swim lanes diagram below (click here for the full version) demonstrates the lifecycle of a Visualforce SPA page.

1) The browser requests the web page which is rendered with default data.
2) Subsequent page clicks call Apex controller methods via actionFunction.
3) The controller methods construct JSON formatted strings. 4) When the request is complete, actionFunctions re-render outputPanels that contain script templates for processing the returned JSON arrays.

Wednesday, 04 November 2009 21:42:09 (Pacific Standard Time, UTC-08:00)