# Friday, 06 September 2013

Not to be confused with "rapper" class.

The concept of a "wrapper class" occurs frequently in Force.com development. Object-oriented design encourages the use of classes to encapsulate data along with the behaviors that mutate that data or interact with other classes.

The goals of an Apex wrapper class are:

  • Decouple data (records) from behaviors (methods)
  • Validate inputs
  • Minimize side effects
  • Enable unit testing
  • Handle exceptions

This article provides an example wrapper class template to address these goals and explores each aspect of the wrapper class individually.

Register for Dreamforce 13 and attend my session on "Leap: An Apex Development Framework" for training on how to use the Leap framework to generate wrapper classes, triggers, and more. Follow @codewithleap on Twitter for updates on Apex design patterns.

All Apex wrapper classes share a few common properties and methods. To avoid repeatedly copying the same boilerplate into each wrapper class, a base class is created that all wrapper classes inherit from via the "extends" keyword.

public abstract class WrapperClassBase {
     public Id id;  
     public boolean success = true;
     public List<String> errors = new List<String>();
     public boolean hasErrors(){ return errors.size() > 0;}
     public void clearErrors() { success = true; errors.clear();}
}

Now, with a base class available, the simple wrapper class looks like the following:

public with sharing class Order extends WrapperClassBase {
     public Order__c record   = null;
     public static final String SFIELDS = 'Id, Name'; // ... add other fields here
     
     public Order(){}
     
     public Order withSObject(Order__c orderRecord){
          this.record = orderRecord;
          this.id = orderRecord.Id;
          return this;
     }
     
     public Order withId(ID recordId){
          id = recordId;
          record = Database.query('SELECT ' + Order.SFIELDS + ' FROM Order__c WHERE Id=:id LIMIT 1');
          return this;
     }
     
     public Order validate(){         
          /*
          Validate fields here. Set this.success = false and populate this.errors.add('err message');
          */
          return this;
     }
     
     public Order doSomething(){
          return this;
     }
     
     private Map<ID,LineItem> m_lineItems = null;
     public Map<Id,LineItem> lineItems{
          get{
               if(m_lineItems == null){
                    m_lineItems = LineItem.fromRecords( this.lineItemRecords );
               }
               return m_lineItems;
          }
     }
     
     private List<OrderLineItem__c> m_lineItemRecords = null;
     public List<OrderLineItem__c> lineItemRecords{
          get{
               if(m_lineItemRecords == null){
                    m_lineItemRecords = Database.query('SELECT ' + LineItem.SFIELDS + ' FROM OrderLineItem__c WHERE Order__c=:id');
               }
               return m_lineItemRecords;
          }
     }
     
     public static Map<ID,Order> fromRecords(List<Order__c> records){
          Map<ID,Order> orders = new Map<ID,Order>();
          for(Order__c o : records){
               orders.put(o.Id, new Order().withSObject(o));
          }
          return orders;
     }
     
     public String toJSON(){
          Map<String, String> r = new Map<String, String>();
          // SFIELDS entries may carry whitespace after the commas, so trim each name
          for(String fName : Order.SFIELDS.split(',')){
               r.put(fName.trim(), String.valueOf( this.record.get(fName.trim()) ));
          }
          return JSON.serialize(r);
     }
}

Inheritance

As of this writing, Apex does not allow inheriting the core SObject class (which would be ideal).

Record encapsulation uses the strongly typed Order__c record, rather than the abstract SObject type, in order to take advantage of strong typing in the Force.com IDE, such as auto-completion of field names. Moving the record to the base class would require constantly casting SObject to the strong type.

Class Name

For custom objects, it's common to remove the namespace prefix and __c suffix from the wrapper class name. For standard objects, prefix the class name with "SFDC" (or adopt some other naming convention) to avoid conflicts.

Wrapping Standard Objects

Creating wrapper classes with the same name as standard objects, although possible, is discouraged. Class names supersede standard object names, such that if the intent is to create a standard Account object, but a class named 'Account' already exists, the code will not compile because the compiler resolves the name to the wrapper class instead of the standard object.

To get around this, use a consistent naming convention, such as SFDCAccount, SFDCContact, or SFDCLead, to differentiate the wrapper class names from their respective standard objects.

Construction

Wrapper classes are constructed in two contexts:

  • withSObject(SObject record): The record has already been retrieved via a SOQL statement and the data just needs to be wrapped in a class container.
  • withId(ID recordId): The ID of a record is known, but the record has not yet been retrieved from the database.

The actual class constructor accepts no arguments. The builder pattern is used to construct the class and kick off a fluent chain of subsequent methods in a single line.

Once constructed, SObject fields are accessed directly through the public 'record' property, as in:

new Order().withID(someid).record.Custom_Field__c

This convention is chosen over creating getCustomField() and setCustomField() properties for brevity and to make use of code auto-completion. However, if mutability of the SObject record, or its fields, is a concern, the public accessor can be changed to 'private' and corresponding get/set properties added.
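If mutability is a concern, one lightweight alternative (a sketch) is an Apex property with a private setter, which keeps the brevity of direct field reads while preventing external reassignment of the record:

     // Callers can still read fields, e.g. order.record.Custom_Field__c,
     // but only the wrapper's own methods can assign the record reference.
     public Order__c record { get; private set; }

Note this only protects the reference; individual fields on the record remain mutable unless per-field get/set properties are added.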

SFIELDS

Each wrapper class exposes a public static String named SFIELDS for use in selecting all fields for a record. This is roughly equivalent to writing "SELECT * FROM TABLE_NAME" in traditional SQL syntax, which SOQL does not support.

The SFIELDS string can be periodically auto-generated from a Leap task to keep wrapper class field definitions in sync with the data model, or manually updated with just a subset of fields to be used by the wrapper class.

Builder Pattern

The real magic in the wrapper class template is its ability to chain several methods together in a single line, commonly referred to as the builder pattern and discussed in a previous article, Developing Fluent Interfaces With Apex.

Using the Order__c wrapper class example above, the following is possible:

     Order o = new Order().withSObject(objectList.get(0)).doSomething();
     if(o.validate().hasErrors()){
          //handle exceptions
     }

The return type of each builder method must be the wrapper class type itself, and each builder method returns 'this' to allow method chaining. The builder pattern is useful in the early stages of development, when the exact method behaviors and system architecture are not entirely known (see 40-70 Rule to Technical Architecture), and allows a compositional flow to development, incrementally adding new features without significant refactoring effort.

Child Object Relationships

A wrapper class represents a single instance of a Salesforce record. Depending on how lookup relationships are defined, wrapper classes will usually be either a parent (master) or child (detail) of some other records, which also have wrapper classes defined.

The "fromRecords" utility method is provided to easily construct collections of child objects retrieved from SOQL queries. Collections of child wrapper classes are stored as Maps that support the quick lookup of child wrapper classes by their record ID.
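For example, building a collection of Order wrappers from a query might look like this (a sketch based on the Order class above):

     // Query once using SFIELDS, then wrap the results for ID-based lookup
     List<Order__c> records = Database.query('SELECT ' + Order.SFIELDS + ' FROM Order__c LIMIT 200');
     Map<ID, Order> orders = Order.fromRecords(records);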

Properties and Side Effects

The #1 cause of software entropy in Apex development is unwanted "side effects": dependencies on class variables that can be modified by other methods.

The wrapper class template encourages lazy initialization of properties to protect access to member variables. Lazy initialization also avoids repeated queries for the same records, a common cause of exceeding governor limits.

Java has not yet evolved to support class properties, but Apex does, and wrapper classes are an opportunity to use them. For the sake of brevity, properties are preferred over methods whenever possible. This Microsoft .NET article on choosing between properties and methods is very applicable to Apex.

For Developers doing a lot of client-side JavaScript development in the UI, the use of server-side Apex properties closely approximates the associative-array behavior of JavaScript objects and maintains a consistent coding style across code bases.

Unit Testing

Wrapper classes provide a clean interface for unit testing behaviors on objects. The Winter '14 release requires that unit tests be managed in a separate file from the wrapper class. Common convention is to always create a unit test file for each wrapper class with a suffix of 'Tests' in the class name.
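Following that convention, a minimal test class for the Order wrapper might look like the following sketch (the class and method names are illustrative):

     @isTest
     private class OrderTests {
          static testMethod void errorHandlingTests(){
               Order o = new Order();
               System.assertEquals(false, o.hasErrors());
               o.errors.add('sample error');
               System.assertEquals(true, o.hasErrors());
               o.clearErrors();
               System.assertEquals(false, o.hasErrors());
          }
     }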

Exception Handling

Without a clear exception handling strategy, it can be confusing for Developers to know how a class handles exceptions. Does it consistently bubble them up, or catch them all? There is no equivalent to the Java 'throws' keyword in Apex. To remedy this, the wrapper class template's base class provides a boolean 'success' flag that can be set by any method at any time.

When setting success=false, the exception handling code should also add an explanation of what went wrong in the transaction to the 'errors' list. It is the responsibility of calling objects/methods to check success or hasErrors() after any transaction.
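A validate() implementation following this strategy might look like the sketch below (the specific field checks are illustrative):

     public Order validate(){
          if(record == null){
               success = false;
               errors.add('Order record has not been loaded');
          } else if(record.Name == null){
               success = false;
               errors.add('Order Name is required');
          }
          return this;
     }

Callers then chain the check, e.g. order.validate().hasErrors(), before proceeding.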

JSON Serialization

Wrapper classes can be serialized to JSON and returned to requesting clients for use in UI binding. The toJSON() method is provided in the wrapper class template and can be customized to serialize the class.

Friday, 06 September 2013 10:46:42 (Pacific Daylight Time, UTC-07:00)
# Tuesday, 01 January 2013

Note: Apex can now be called from workflows as Invocable Methods using the Process Builder.

Here is a simple hack to call Apex classes from workflow rules.

Problem: Salesforce has a magnificently declarative environment for creating point-and-click applications and workflows, but one area that gets particularly gnarly is executing business rules in response to changes in state.

Given a problem like "When Opportunity stage equals 'Closed-Won', send the order to the back office system for processing", the Business Analyst has a good idea of "when" the business process should be executed. The Developer knows "how" the process should be executed.

The result is often the development of a trigger that includes both the "when" and "how" logic merged into a single class. The trigger ultimately ends up containing code to detect state changes; a task otherwise best left to workflow rule conditions.

Future enhancements to the business rules require the BA to submit a change request to the Developer, impairing the overall agility of the system.

(Some discussions of detecting record state in triggers can be found here, here, and here.)

The Solution: Calling Apex From Outbound Messages
Empower the System Administrator/BA to create workflow rules that call Apex classes in response to declarative conditions.

Create a workflow rule with an outbound message action that calls a message handler (hosted on Heroku), that in turn calls back to a Salesforce REST resource.

Components of the Outbound Message:

  1. The endpoint URL is hosted on Heroku. The outbound message handler receives the message and issues a callback to Salesforce using the path provided after the root URL.
  2. Pass the session ID to the endpoint (note: the 'User to send as' must have permissions to call and invoke the Apex REST web service).
  3. Pass only the Id of the object meeting the workflow condition. This gets passed back to the REST service as an "oid" parameter (object id).

Getting Started:

Download the Heroku outbound message handler from Github (link):

     git clone https://github.com/cubiccompass/sfdc-om-workflow

Build the solution and deploy to Heroku:

     mvn package
     git init
     git commit -am "Initial commit"
     heroku apps:create omhandler
     git push heroku master

Create a workflow rule with an outbound message action that calls the Heroku-hosted endpoint.

Create a Salesforce REST resource for handling the callback.
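The callback target can be implemented as an Apex REST resource keyed off the "oid" parameter described above. The sketch below assumes a hypothetical URL mapping and an Order__c object:

     @RestResource(urlMapping='/workflow/order')
     global with sharing class OrderWorkflowResource {
          // The Heroku handler POSTs back a body such as {"oid": "<record id>"}
          @HttpPost
          global static String execute(String oid){
               Order__c orderRecord = [SELECT Id, Name FROM Order__c WHERE Id = :oid LIMIT 1];
               // ... invoke the business process for this record here ...
               return 'OK';
          }
     }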

To see the workflow in action, view the Heroku web service logs while updating records in Salesforce that trigger the workflow rule:

     heroku logs --tail

Errata:

IWorkflowTask: In the real world, I'm evolving this design pattern to include an IWorkflowTask interface to clearly distinguish which business objects handle workflow actions. The execute() method takes a WorkflowContext object that includes more details from the outbound message.

Daisy Chaining Workflows: It's important that workflow tasks record or modify some state after executing a task in order to allow for triggering follow-up workflow actions. For example, an OrderProcessor workflow task might update an Order__c status field to "Processed". This allows System Administrators to create follow-up workflow rules/actions, such as sending out emails.

Security: Use HTTPS/SSL endpoints to ensure session information is not subject to man-in-the-middle attacks.

Idempotence: Salesforce does not guarantee that each outbound message will be sent only once (although it's mostly consistent with 1:1 messaging). REST resources should be developed to handle the rare instance where a message might be received twice. In the use case above, the code should be designed to defend against submitting the same order twice; possibly by checking a 'Processed' flag on a record before submitting to a back-office system.
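In Apex, that defensive check might look like the following sketch, where Processed__c is a hypothetical flag field and oid is the object id received by the REST resource:

     // Skip records already submitted to the back-office system
     Order__c orderRecord = [SELECT Id, Processed__c FROM Order__c WHERE Id = :oid LIMIT 1];
     if(orderRecord.Processed__c != true){
          // submitToBackOffice(orderRecord); // hypothetical callout to the back office
          orderRecord.Processed__c = true;
          update orderRecord;
     }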

Governor Limits: Workflow tasks are called asynchronously, so there's a decent amount of processing and execution freedom using this pattern.

Tuesday, 01 January 2013 11:57:10 (Pacific Standard Time, UTC-08:00)
# Saturday, 15 December 2012

Technical Architects make many tough decisions on a daily basis, often with incomplete information. Colin Powell's 40-70 rule is helpful when facing such situations.

He says that every time you face a tough decision you should have no less than forty percent and no more than seventy percent of the information you need to make the decision. If you make a decision with less than forty percent of the information you need, then you're shooting from the hip and will make too many mistakes.

The second part of the decision making rule is what surprises many leaders. They often think they need more than seventy percent of the information before they can make a decision. But in reality, if you get more than seventy percent of the information you need to make the decision, then the opportunity to add value has usually passed, or the competition has beaten you to the punch. And with today's agile development and continuous integration (CI) methodologies, you can afford to iterate on an architecture with incomplete information.

A key element that supports Powell’s rule is the notion that intuition is what separates great Technical Architects from average ones. Intuition is what allows us to make tough decisions well, but many of us ignore our gut. We want certainty that we are making the right decision, but that's not possible. People who want certainty in their decisions end up missing opportunities, not leading.

Making decisions with only 40%-70% of the information requires responsibly communicating the technical architecture + how changes will be implemented as more information becomes available.

Architecture + Continuous Integration Process = Agility.

Architecture alone is not a sufficient solution and can leave a solution inflexible to change. "Release early and often" is the new mantra in cloud development.

The best way to manage risk as a TA with 40-70% of the information is to constantly ask yourself 2 questions:
1) What is the simplest possible solution to the current problem?
2) How will future changes be implemented?

Within the realm of Salesforce, a Technical Architecture conducive to CI is achieved primarily through 3 design patterns:

  • Declarative configuration
  • Custom Settings
  • Hybrid apps / Web Tabs / Canvas

1) Declarative configuration. First and foremost, it's the obligation of a TA to maximize the point-and-click configuration of any solution. This is done by using as many out-of-the-box features as possible.
2) Custom settings: When coding is required, externalizing the behaviors and conditional branches to custom settings gives System Admins and Business Analysts the ability to fine tune a solution as more information becomes available. For example, rather than hardcoding a callout URL in a trigger, move the URL to a custom setting.
3) Hybrid / Web Tabs / Canvas: For ISVs and custom application development, an IFRAME wrapper to an app hosted on Heroku provides the greatest agility to pivot on a solution. Code changes can be pushed several times per day without having to go through the AppExchange package and review process. Matching the look and feel of Salesforce within a Hybrid or canvas app can provide the best of both worlds; a native Salesforce business application with code managed on a separate platform.
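As a sketch of the custom settings approach in item 2, assuming a hypothetical Integration_Settings__c hierarchy custom setting with a Callout_Url__c field:

     // System Admins can change the endpoint in Setup without a code deployment
     Integration_Settings__c settings = Integration_Settings__c.getOrgDefaults();
     HttpRequest req = new HttpRequest();
     req.setEndpoint(settings.Callout_Url__c);
     req.setMethod('POST');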

Saturday, 15 December 2012 13:44:55 (Pacific Standard Time, UTC-08:00)
# Tuesday, 13 November 2012

Salesforce recently posted this tantalizing brainteaser question "When do you think Salesforce will reach 1B transactions per day?"

Analyzing the recent numbers from trust.salesforce.com reveals some interesting patterns and trends:

  • Peaks tend to occur most often on a Tuesday
  • Weekend traffic drops to about 1/3rd of weekday activity
  • The average growth between peaks is 0.52% per week

Extrapolating these numbers out reveals that Tuesday, December 18th, 2012 is the highest probability date to hit 1B transactions. But that date is pretty close to the holidays, so it's a tough call.

What is your prediction?

Tuesday, 13 November 2012 22:41:35 (Pacific Standard Time, UTC-08:00)
# Monday, 22 October 2012

I frequently use the FizzBuzz interview question when interviewing Salesforce developer candidates.

The original FizzBuzz interview question goes something like this:

Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".

The output from the first 15 numbers should look like this:

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz

It's a great question because the interviewer can evolve the requirements and take the discussion in many different directions.

A good interview is a lot like auditioning a drummer for a rock band. You want to start off with something easy, then "riff" on an idea to get a sense of the candidate's listening skills and ability to create variations on a theme (aka refactoring).

Unfortunately, most interviews carry the intimidating premise of pass/fail, so the key to an effective interview is setting up the question so that the candidate understands it is okay to constantly change and revise their answers, and that the interview will evolve around a central concept: FizzBuzz.

The questions below gradually get harder by design, and at some point the candidate may not have an answer. That's okay. As an interviewer, you need to know:

a) How does the candidate respond when asked to do something they don't understand?
b) If we hired this person, what is the correct onboarding and mentoring plan for this candidate to help them be successful?

I'll drop hints during the question setup, using buzzwords like "TDD" (test-driven development), "unit testing", and "object oriented design", hoping the candidate might ask clarifying questions before jumping into code, like "Oh, you want to do TDD. Should I write the unit test first?"

So, on to the code. The fundamental logic for FizzBuzz requires a basic understanding of the modulo operator, which, in all fairness, is not a particularly valuable thing to know on a daily basis, but is often the minimum bar for testing whether a candidate meets the "Computer Science or Related 4 Year Degree" requirement in many job descriptions, since it's universally taught in almost all academic curricula.
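Note that Apex does not support a % operator; the modulo operation is exposed as Math.mod() for integers:

     System.debug(Math.mod(15, 3)); // 0 -> 15 is a multiple of 3
     System.debug(Math.mod(15, 5)); // 0 -> 15 is a multiple of 5
     System.debug(Math.mod(7, 3));  // 1 -> 7 is not a multiple of 3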

After the first round, the basic logic for FizzBuzz should look something like this:

function doFizzBuzz(){
     for(integer i=1; i <= 100; i++){
          String output = '';
          if( i % 3 == 0 ){
               output += 'Fizz';
          }
          if( i % 5 == 0 ){
               output += 'Buzz';
          }
          if(output == ''){
               output = string.valueOf(i);
          }
          System.debug(output);
     }
}

Some things interviewers will be looking for:

  • Use and understanding of the Modulo operator
  • Efficiency. Is mod calculated twice for each value to meet the compound "FizzBuzz" requirement?
  • Using a 0 based loop index and printing numbers 0-99 instead of 1-100
  • Unclear coding blocks or control flow (confusing use of parentheses or indenting)

Even if the candidate misses one of these points, they can usually get over the hurdle quickly with a bit of coaching.

"So, let's evolve this function into an Apex class."

For experienced Salesforce Developers, you can start gauging familiarity with Apex syntax; but be flexible. More experienced Developers/Architects will probably think faster in pseudo code, and Java Developers (if you're gauging potential to become a Force.com Developer) will want to use their syntax.

Refactoring the basic logic above into an Apex class might look something like this:

public class FizzBuzz {
     public void run(){
          for(integer i=1; i <= 100; i++){
               String output = '';
               if(math.mod(i, 3) == 0){
                    output += 'Fizz';
               }
               if(math.mod(i, 5) == 0){
                    output += 'Buzz';
               }
               if(output == ''){
                    output = string.valueOf(i);
               }
               System.debug(output);
          }
     }
}

"Okay. How would you test that? Let's write a unit test".

If the candidate is not familiar with Force.com Development, now might be a good opportunity to explain that 75% minimum test coverage is required to deploy code.

A basic unit test should look something like:

     public static testMethod void mainTests(){    
          FizzBuzz fb = new FizzBuzz();
          fb.run();
     }    

The test runner will report 100% unit test coverage by virtue of executing the entire run() method within a testMethod. But is this really aligned with the true spirit and principle of unit testing? Not really.

A more precise follow-up question might be: "How would you Assert the expected output of FizzBuzz?"

In its current state, FizzBuzz just emits strings. Does the candidate attempt to parse and make assertions on the string output?

At this point, it's helpful to start thinking in terms of TDD, or Test-Driven Development, and attempt to write a unit test before writing code. One possible approach is the Extract Method refactoring, creating methods for isFizz() and isBuzz(), then asserting those methods are working correctly.

public class FizzBuzz {    

     private void run(){    
          for(integer i=1; i <= 100; i++){
               String output = '';              
               if( isFizz(i) ){
                    output += 'Fizz';
               }
               if( isBuzz(i) ){
                    output += 'Buzz';
               }
               if(output == ''){
                    output = string.valueOf(i);
               }
               System.debug(output);
          }
     }
   
     static final integer FIZZ_MULTIPLE = 3;
     private boolean isFizz(integer n){
          return ( math.mod(n, FIZZ_MULTIPLE) == 0);
     }

     static final integer BUZZ_MULTIPLE = 5;
     private boolean isBuzz(integer n){
          return ( math.mod(n, BUZZ_MULTIPLE) == 0);
     }

     public static testmethod void fizzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isFizz(1));
          System.assertEquals(false, fb.isFizz(2));
          System.assertEquals(true,  fb.isFizz(3));
          System.assertEquals(false, fb.isFizz(4));
          System.assertEquals(false, fb.isFizz(5));
     }
    
     public static testmethod void buzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isBuzz(1));
          System.assertEquals(false, fb.isBuzz(2));
          System.assertEquals(false, fb.isBuzz(3));
          System.assertEquals(false, fb.isBuzz(4));
          System.assertEquals(true,  fb.isBuzz(5));
     }   

     public static testmethod void fizzBuzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(true, fb.isFizz(15));
          System.assertEquals(true, fb.isBuzz(15));
     }
}

This is a considerable improvement, but the test coverage is now only at 40%. The run() method is still leaving some technical debt behind to be refactored.

I may drop the candidate a hint about Model-View-Controller and ask how they might deconstruct this class into its constituent parts.

There are no DML or objects to access, so effectively there is no Model.

But the run() method is currently overloaded with FizzBuzz logic (controller) and printing the output (view). We can further extract the logic into a List of strings to be rendered in any form by the run() method.

public class FizzBuzz { 

     private void run(){    
          for(String element : this.getFizzBuzzList()){
               system.debug(element);
          }
     }
   
     private List<string> getFizzBuzzList(){
          List<string> fizzBuzzList = new List<string>();
          for(integer i=1; i <= 100; i++){
               string listElement = '';

               if( isFizz(i) ){
                    listElement = 'Fizz';
               }
               if( isBuzz(i) ){
                    listElement += 'Buzz';
               }
               if(listElement == ''){
                    listElement = string.valueOf(i);
               }

               fizzBuzzList.add(listElement);
          }
          return fizzBuzzList;
     }
    
     static final integer FIZZ_MULTIPLE = 3;
     private boolean isFizz(integer n){
          return ( math.mod(n, FIZZ_MULTIPLE) == 0);
     }
    
     static final integer BUZZ_MULTIPLE = 5;
     private boolean isBuzz(integer n){
          return ( math.mod(n, BUZZ_MULTIPLE) == 0);
     }
    
     public static testmethod void fizzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isFizz(1));
          System.assertEquals(true,  fb.isFizz(3));
          System.assertEquals(false, fb.isFizz(5));
     }
    
     public static testmethod void buzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isBuzz(1));
          System.assertEquals(false, fb.isBuzz(3));
          System.assertEquals(true,  fb.isBuzz(5));
     }
    
     public static testmethod void fizzBuzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(true, fb.isFizz(15));
          System.assertEquals(true, fb.isBuzz(15));
     }
    
     public static testmethod void fizzBuzzListTests(){
          FizzBuzz fb = new FizzBuzz();
          //0 based offsets.
          System.assertEquals(100, fb.getFizzBuzzList().size() );
          System.assertEquals('1', fb.getFizzBuzzList().get(0) );
          System.assertEquals('Fizz', fb.getFizzBuzzList().get(2) );
          System.assertEquals('4', fb.getFizzBuzzList().get(3) );
          System.assertEquals('Buzz', fb.getFizzBuzzList().get(4) );
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(14) );
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(29) );
     }
}

Test coverage is now at 90% after extracting the run() print logic into a unit testable method that returns a list. The last 10% can be easily covered by calling run() anywhere inside a testMethod.

If there's time remaining in the interview, a good enhancement is to add dynamic ranges. Instead of printing 1-100, modify the class to support any range of numbers. Basically, this is just testing the candidate's ability to manage class constructor arguments.

public class FizzBuzz {    
     private final integer floor;
     private final integer ceiling;    

     public FizzBuzz(){
          floor = 1;
          ceiling = 100;
     }
    
     public FizzBuzz(integer input_floor, integer input_ceiling){
          floor = input_floor;
          ceiling = input_ceiling;
     }
    
     private void run(){    
          for(String element : this.getFizzBuzzList()){
               system.debug(element);
          }
     }
    
     private List<string> getFizzBuzzList(){
          List<string> fizzBuzzList = new List<string>();
          for(integer i=floor; i <= ceiling; i++){
               string listElement = '';
               if( isFizz(i) ){
                    listElement = 'Fizz';
               }
               if( isBuzz(i) ){
                    listElement += 'Buzz';
               }
               if(listElement == ''){
                    listElement = string.valueOf(i);
               }
               fizzBuzzList.add(listElement);
          }

          return fizzBuzzList;
     }

    
     static final integer FIZZ_MULTIPLE = 3;
     private boolean isFizz(integer n){
          return ( math.mod(n, FIZZ_MULTIPLE) == 0);
     }
     
     static final integer BUZZ_MULTIPLE = 5;
     private boolean isBuzz(integer n){
          return ( math.mod(n, BUZZ_MULTIPLE) == 0);
     }

    
     public static testmethod void fizzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isFizz(1));         
          System.assertEquals(true,  fb.isFizz(3));
          System.assertEquals(false, fb.isFizz(5));
     }
    
     public static testmethod void buzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(false, fb.isBuzz(1));         
          System.assertEquals(false, fb.isBuzz(3));         
          System.assertEquals(true,  fb.isBuzz(5));
     }
    
     public static testmethod void fizzBuzzTests(){
          FizzBuzz fb = new FizzBuzz();
          System.assertEquals(true, fb.isFizz(15));
          System.assertEquals(true, fb.isBuzz(15));
     }
   
     public static testmethod void fizzBuzzListTests(){
          //Use a 0 based index range to make fetching/testing list offsets easier.
          FizzBuzz fb = new FizzBuzz(0, 100);
          System.assertEquals(101, fb.getFizzBuzzList().size() );
          System.assertEquals('1', fb.getFizzBuzzList().get(1) );
          System.assertEquals('2', fb.getFizzBuzzList().get(2) );
          System.assertEquals('Fizz', fb.getFizzBuzzList().get(3) );
          System.assertEquals('4', fb.getFizzBuzzList().get(4) );
          System.assertEquals('Buzz', fb.getFizzBuzzList().get(5) );
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(15) );
          System.assertEquals('FizzBuzz', fb.getFizzBuzzList().get(30) );
     }
}

I will usually follow-up this question with questions about boundary checking and programmatic validation rules.

"Should FizzBuzz be allowed to accept negative numbers?"

"Should the ceiling value always be greater than the floor?"

If yes to either of these, then how would the candidate implement validation rules and boundary checks? This very quickly gets into writing more methods and more unit tests, but mirrors the reality of day-to-day Force.com development.
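One way to enforce such rules is to validate in the constructor and throw a custom exception. A sketch (the exception class name is illustrative):

     public class FizzBuzzException extends Exception {}

     public FizzBuzz(integer input_floor, integer input_ceiling){
          if(input_ceiling < input_floor){
               throw new FizzBuzzException('Ceiling must be greater than or equal to floor');
          }
          floor = input_floor;
          ceiling = input_ceiling;
     }

Each new rule then gets its own negative unit test asserting that the exception is thrown.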

Once variables get introduced at class scope, then this is a good opportunity to have discussions about side-effects and immutability.

"What happens 6 months later when another Developer comes along and tries to modify the ceiling or floor variables in new methods?"

"How can you prevent this from happening?"

"What are the benefits of initializing variables only once and declaring them 'final'?"

An experienced Developer will likely have a grasp of functional programming techniques and the long-term benefits of minimizing side-effects and keeping classes immutable.

And finally, these unit tests are all written inline. How would the candidate separate tests from production classes?
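The immutability discussion above can be sketched in Apex. This is a minimal illustration only, assuming hypothetical floor/ceiling constructor parameters and a custom exception; a candidate's actual solution will vary:

```apex
public with sharing class FizzBuzz {
     public class BoundaryException extends Exception {}

     // 'final' variables may only be assigned once, in the declaration or
     // the constructor, which keeps the bounds immutable after construction.
     private final Integer floor;
     private final Integer ceiling;

     public FizzBuzz(Integer floorValue, Integer ceilingValue){
          // Programmatic validation rules: fail fast on bad boundaries.
          if (floorValue < 0){
               throw new BoundaryException('Negative numbers are not allowed');
          }
          if (ceilingValue <= floorValue){
               throw new BoundaryException('Ceiling must be greater than floor');
          }
          floor   = floorValue;
          ceiling = ceilingValue;
     }
}
```

A developer 6 months later can no longer quietly mutate floor or ceiling from a new method; the compiler rejects any reassignment of a final variable.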

For more information about refactoring patterns, check out this list of patterns or read Martin Fowler's brilliant book Refactoring: Improving the Design of Existing Code.

Monday, 22 October 2012 16:53:22 (Pacific Daylight Time, UTC-07:00)
# Sunday, 30 October 2011

Integrating CRM with ERP/Financial systems can be a challenge, particularly when the systems come from 2 different vendors, which is often the case when using Salesforce.com CRM.

At Facebook, we've gone through several iterations of integrating Salesforce with Oracle Financials and the team has arrived at a fairly stable and reliable integration process (kudos to Kumar, Suresh, Gopal, Trevor, and Sunil for making this all work).

Here is the basic flow (see diagram below):

1) The point at which Salesforce CRM needs to pass information to Oracle is typically once an Opportunity has been closed/won and an order or contract has been signed.

2) Salesforce is configured to send an outbound message containing the Opportunity ID to an enterprise service bus (ESB) that is configured to listen for specific SOAP messages from Salesforce.

3) The ESB accepts the outbound message (now technically an inbound message on the receiver side) and asserts any needed security policies, such as whitelisting the source of the message.

4) This is the interesting part. Because the Salesforce outbound message wizard only allows for the exporting of fields on a single object, the ESB must call back to retrieve additional information about the order, such as the Opportunity line items, Account, and Contacts associated with the Order.

In Enterprise Application Integration (EAI) speak, this is referred to as a Content Enrichment pattern.

5) An apex web service on Salesforce receives the enrichment request, queries all the additional order details, and returns a canonical XML message back to the ESB.

6) The ESB receives the enriched message and begins processing validation and de-duplication rules, then transforms the message into an object that can be consumed by Oracle.

7) The ESB then inserts the Order into Oracle.

8) The Oracle apps API inserts/updates the various physical tables for the order and throws any exceptions.
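Step 5 above can be sketched as an Apex web service. The class, method, and field names below are hypothetical, and a real canonical message would carry far more detail and map to an XML schema agreed upon with the ESB team:

```apex
global with sharing class OrderEnrichmentService {
     // Hypothetical canonical result returned to the ESB.
     global class OrderDetail {
          webservice Id opportunityId;
          webservice Account account;
          webservice List<OpportunityLineItem> lineItems;
     }

     // The ESB calls back with the Opportunity ID from the outbound message;
     // this service gathers the related records the message could not carry.
     webservice static OrderDetail enrichOrder(Id opportunityId){
          OrderDetail detail = new OrderDetail();
          detail.opportunityId = opportunityId;
          Opportunity opp = [SELECT Id, AccountId FROM Opportunity
                             WHERE Id = :opportunityId LIMIT 1];
          detail.account = [SELECT Id, Name FROM Account
                            WHERE Id = :opp.AccountId LIMIT 1];
          detail.lineItems = [SELECT Id, Quantity, UnitPrice FROM OpportunityLineItem
                              WHERE OpportunityId = :opportunityId];
          return detail;
     }
}
```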

Sunday, 30 October 2011 17:15:23 (Pacific Standard Time, UTC-08:00)
# Sunday, 21 August 2011

Dreamforce 11 is just around the corner, and fellow Facebook Engineer Mike Fullmore and I have been invited to speak at the following panel:

Enterprise Engineering
Friday, September 2
10:00 a.m. - 11:00 a.m.
Can you really develop at 5x a regular speed when you're at enterprise scale? In this session, a panel of enterprise technical engineers will discuss engineering best practices for the Sales Cloud, Service Cloud, Chatter and Force.com. Topics include security, sandbox, integration, Apex, and release management.

Speakers: Mike Leach, Facebook, Inc.; David Swidan, Seagate Technology LLC; Mike Fullmore, Facebook, Inc.

In case you're not able to attend, here are the high level points from our presentation. :-)

Moving Fast on Force.com

Facebook has been using Salesforce for several months to rapidly prototype, build, and deploy a number of line of business applications to meet the needs of a hyper-growth organization. This presentation shares some best practices that have evolved at Facebook to help develop on the Force.com platform.

People

Before sharing details about Facebook's processes, methodologies, and tools; it's important to point out that the people on the enterprise engineering team are what really make things happen. Each Engineer is able to work autonomously and carry a project through from design to deployment. Great people instinctively take great pride in their work and consistently take the initiative to deliver awesomeness. I would be remiss not to point them out here. All these Engineers operate at MVP levels.
The effort that goes into recruiting a great development team should not be underestimated. Recruiting an awesome team involves several people doing hundreds of phone screens and dozens of interviews. Facebook is in a unique situation in its history and we don't take it for granted that we have access to unprecedented resources and talent. It's actually very humbling to work with such a stellar team at such a great company.

("yes" we're still hiring)

Business Processes

Projects and applications generally fall into one of 9 major process buckets. Engineers at Facebook seeking to have a high impact will typically have either breadth or depth of knowledge. Some focus on the long-term intricate details and workflows of a single business process, while others move around and generally lead several concurrent, short-term development efforts in any business area.

Sandbox->Staging->Deploy

Each project has its own development sandbox. Additionally, each Engineer may also have their own personal sandbox. When code is ready to be deployed, it's packaged using the Ant migration tool format and typically tested in 2 sandboxes: a daily refreshed staging org to ensure all unit tests run and there are no metadata conflicts, and a full sandbox to give business managers an opportunity to test using real-world data.

Change sets are rarely used, but may be the best option for first time deployments of large applications that have too many metadata dependencies to reliably be identified by hand.

The online security scanner is used as a resource during deployment to identify any potential security issues. A spreadsheet is used for time-series analysis of scanner results to understand code quality trends.

Once a package has been reviewed, tested, and approved for deployment; a release Engineer deploys the package to production using Ant. This entire process is designed to support daily deployments. There are typically 3-5 incremental change deployments per week.

Obligatory Chaotic Process Diagram

"Agile" and "process" are 2 words that are not very complimentary. Agile teams must find an equilibrium of moving fast yet maintaining high quality code. Facebook trusts every Engineer will make the right decisions when pushing changes. When things go wrong, we conduct a post-mortem or retrospective against an "ideal" process to identify what trade-offs were made, why, and where improvements can be made.

All Engineers go through a 6 week orientation "bootcamp" to understand the various processes.

Typical Scrum Development Process

The development "lingua franca" within Silicon Valley, and for most Salesforce service providers, tends to be Scrum. Consultants and contractors provide statements of work and deliver progress reports around "sprints". Scrum training is readily available by a number of agile shops.

This industry standard has been adopted internally and keeps multiple projects and people in sync. Mike Fullmore developed a Force.com app named "Scrumbook" for cataloguing projects, sprints, and stories.


A basic Force.com project template with key milestones has been created to give Project Managers an idea of when certain activities should take place. Whenever possible we prefer to avoid a "waterfall" or "big bang" mentality; preferring to launch with minimal functionality, validate assumptions with end-users, then build on the app in subsequent sprints.


Manage The Meta(data)

The general line of demarcation within IT at Facebook is:
  • Admins own the data
  • Engineers own the metadata
The Salesforce Metadata API is a tremendously powerful resource for scaling an enterprise, yet remaining highly leveraged and lean. We've developed custom metadata tools to help us conduct security audits and compare snapshot changes.


(Credit to our Summer Intern, Austin Wang, who initiated the development of internal tools!)

Change Management

The advantage to using Salesforce is the ability to use declarative configuration and development techniques to produce functional applications, then use more powerful Apex and Visualforce tools to maximize apps around business core competencies. "Clicks over code" is a common mantra in Salesforce shops, and Facebook is no exception.

A change management matrix is a useful tool for determining when "clicks-over-code" is preferred over a more rigorous change management process.

Sunday, 21 August 2011 11:52:04 (Pacific Daylight Time, UTC-07:00)
# Sunday, 22 May 2011
DocuSign's Hackathon on May 15th and 16th, 2011 brought out some serious talent to DocuSign's new office in downtown San Francisco. For 2 days, developers took a shot at the $25K purse in 3 categories; consumer, enterprise, and mobile.

My app, SocialSign, was based on the T-Mobile Fave 5 campaign:
  • Select 5 Facebook friends
  • Sign an unlimited voice/text contract through DocuSign
  • Merchants manage the contract in Salesforce.
I admittedly spent about 75% of the hackathon working with the Facebook app canvas and Salesforce Sites APIs, which probably explains why SocialSign placed as runner-up in the consumer category (other developers got far more creative with the actual Docusign API).

In the end, it was a great opportunity to demonstrate the feasibility of conducting social commerce through Facebook using Salesforce CRM to manage contacts, orders, and contracts. Thank you DocuSign for hosting such a great event!

(Video demo of SocialSign available here and embedded below)

Sunday, 22 May 2011 20:24:38 (Pacific Daylight Time, UTC-07:00)
# Sunday, 27 March 2011
JAWS is "Just A Web Shell" framework developed by the Facebook IT group for running Force.com web applications on iOS (iPhone/iPad) devices.

JAWS is a "hybrid" mobile application framework that provides a native iOS application on the mobile desktop that instantiates a web browser to a Force.com web page.




Prerequisites:
The Force.com-Toolkit-JAWS-for-iOS project on GitHub includes all of the following steps and source code for building a mobile iOS app on Force.com.

1) Start in XCode by creating a new View-based iPad or iPhone application.



2) Define a UIWebView controller with a default URL of https://login.salesforce.com.
Append the retURL parameter to the URL to define which Visualforce page to load upon authentication.
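For example, assuming a hypothetical Visualforce page named MobileHome, the initial URL would look like:

```
https://login.salesforce.com/?retURL=/apex/MobileHome
```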


3) Launch the Interface Designer and create a simple toolbar with UIWebView to emulate a basic browser UI.



4) Build and launch the iOS simulator from XCode to view the Salesforce login page loaded within the UIWebView.



5) Upon login, the return URL (retURL) Visualforce page is loaded. In this case, using the jQuery Mobile framework to display a simple mobile UI.
 


That's it! The native web shell runs the Visualforce directly on iOS. With some crafty mobile-friendly UI design in HTML5, the end-user may not even detect the app is actually running entirely in the cloud on Force.com!
Sunday, 27 March 2011 10:38:30 (Pacific Standard Time, UTC-08:00)
# Saturday, 12 March 2011



One thing I really enjoy about Force.com development is the ability to learn something new everyday. I subscribe to the blogs of just about everyone on the inaugural Force.com MVPs list and make it a standard practice to follow what they are doing and learn new tricks. In addition to blogging, some of these Developers have written excellent books on Force.com and Google App Engine Development (I have them all on my shelf) or make significant contributions to the open source community. Congrats to all of them.

Thank you Salesforce for including me in this list. I'm very honored to call these guys my "peers".

Saturday, 12 March 2011 18:16:24 (Pacific Standard Time, UTC-08:00)
# Saturday, 26 February 2011
Hannes raises a great question about Change Sets in my previous blog post on Release Management.

"One thing I really like about the deployment connections are, that you have a deployment history right in your salesforce live-org. With comments and an insight into the packages.

Which pros do you see as for deploying with the ant script?"


We occasionally use change sets at Facebook. The decision to use Ant vs. Change Sets is akin to a Vi vs. Emacs religious war. They both accomplish the same thing, just in different ways.

General best practice guidelines:
  • Initial deployments of new custom applications are much easier using Change Sets
  • Incremental deployments are more easily managed using Ant
The decision to standardize on Ant packages is largely for compliance reasons. When there is a need to internally document who changed what and when, an SVN repository of all changes with author and reviewer notes satisfies most compliance requirements (such as Sarbanes-Oxley).

Some high level thoughts on the 2 approaches:
  • Some metadata simply is not packageable via Ant, therefore Change Sets must be used
  • The dependency checker in the Change Set packager, while not 100% perfect, is much more reliable than manually adding 100+ individual components
  • In the absence of an IT-Engineer who can create Ant packages, business process owners and non-technical users always have the fallback option of using change sets
  • Prior to Spring '11, invalid inbound change sets could not be deleted, resulting in an accumulation of cruft
  • The ability to delete deployed change sets in Spring '11 removes the ability to audit change history (a compliance violation)
  • The ephemeral nature of some sandboxes means constantly re-establishing deployment connections.
Saturday, 26 February 2011 10:07:06 (Pacific Standard Time, UTC-08:00)
# Monday, 21 February 2011
Managing the release of Force.com changes in a fast paced environment requires an agile development methodology with a focus on:
  1. Continuous Integration - Support daily deployments
  2. Minimalism - Incrementally deploy only what is needed, and nothing more
  3. Reliability - Changes should be 100% backwards compatible and not break existing functionality
The Force.com Migration Tool is a key component to a successful release management process.

Release Management Overview
  • Start the refresh of a sandbox named "staging"
  • Package a release using Eclipse
  • Create an ant package and validate the deployment against the staging sandbox
  • Deploy the changes to production
It's assumed that standard change approvals, code review, and user acceptance testing have already taken place prior to these steps.

Setting Up The Environment

(Note: The following examples all use the Mac Bash shell. The Windows shell counterparts for making directories and copying files are very similar.)

Download the Force.com Migration Tool and unzip to a temp directory.

Create a directory for managing ANT deployment packages named something like "changes". Copy the ant-salesforce.jar file to this directory. (It may be easier to manage changes from within the Eclipse workspace folder)

This directory will contain 2 files used by all releases.
~/Documents/workspace/changes/ant-salesforce.jar
~/Documents/workspace/changes/sforce.properties

Manually create the sforce.properties file from the following template.
# sforce.properties
#

# =================== Sandbox Org Credentials ===============
sandbox.username	= user@domain.com.staging
sandbox.password	= password
sandbox.url			= https://test.salesforce.com

# =================== Package Deployment Credentials ================
deploy.username		= user@domain.com
deploy.password		= password
deploy.url			= https://login.salesforce.com

# Use 'https://login.salesforce.com' for production or developer edition (the default if not specified).
# Use 'https://test.salesforce.com' for sandbox.
Creating the Deployment Package

The Force.com Migration Tool expects a structured directory that includes a package.xml file describing the changes to be deployed. You can manually create this structure but it's far easier to let Eclipse create the package for you.
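For reference, a minimal package.xml looks like the following. This sketch assumes a single Apex class named Foo and an API version from this era; Eclipse generates the real file for you:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Foo</members>
        <name>ApexClass</name>
    </types>
    <version>20.0</version>
</Package>
```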

1) Begin by opening Eclipse and starting a new Force.com project.


2) Give the project a change-specific name and provide connection credentials to the source development environment (not production).


3) When given the option to choose initial project contents, check the "Selected metadata components" option and click "Choose..."


4) Expand the component tree folders and select only the components to be deployed (see principle #2 on minimalism. "Deploy only what is needed, and nothing more")


5) You should now have a project in the structure required by the Force.com migration tool


6) Create a new sub-directory under the changes directory, in this case named CHG12345, and copy the 'src' directory to this folder.
mkdir ~/Documents/workspace/changes/CHG12345
cp -r ~/Documents/workspace/CHG12345/src ~/Documents/workspace/changes/CHG12345

7) Copy the following ant build.xml template to the change directory.

<project name="Salesforce Package Deployment Script" default="usage" basedir="." xmlns:sf="antlib:com.salesforce">
	<property file="../sforce.properties"/>

	<target name="usage">
		<echo>Usage: ant [task]
Task options:
ant validate: Simulates a deployment of the package and returns pass/fail debugging information
ant deploy: Deploys the package items defined in package/package.xml
ant undeploy: Rolls back deployed items (must follow ant deploy)
		</echo>
	</target>
	
	<target name="validate">
		<sf:deploy 
			username="${sandbox.username}" 
			password="${sandbox.password}" 
			serverurl="${sandbox.url}" 
			deployRoot="src" 
			logType="Detail" 
			checkOnly="true"
			runAllTests="true"
			pollWaitMillis="10000"
			maxPoll="200" />
	</target>
	
	<target name="deploy">
		<sf:deploy 
			username="${deploy.username}" 
			password="${deploy.password}" 
			serverurl="${deploy.url}" 
			deployRoot="src"
			pollWaitMillis="10000" 
			maxPoll="200">
		</sf:deploy>
	</target>
	
	<target name="undeploy">
		<sf:deploy username="${deploy.username}" password="${deploy.password}" serverurl="${deploy.url}" deployRoot="src"/>
	</target>
</project>


Some notes on the build.xml template
  • Notice that the environment variables are pulled in from sforce.properties in the parent directory.
  • The "validate" target uses a sandbox for testing the deployment
  • Winter '11 introduced the ability to refresh dev and config sandboxes daily. Validating a deployment against a recently refreshed sandbox will identify any potential production deployments upfront
  • The "deploy" target uses the production credentials
  • "undeploy" must be run immediately after a "deploy" to work. Undeploy has some quarky requirements and is not always known to work as expected
Deploying the Package

All that is left to do is run ant from the change directory and call the specific validate or deploy targets. If validation or deployment times out, adjust the polling attributes on the targets.

Note: It's common to check-in the package to a source control repository prior to deployment to production.

cd ~/Documents/workspace/changes/CHG12345
ant validate
ant deploy
Monday, 21 February 2011 21:00:00 (Pacific Standard Time, UTC-08:00)
# Sunday, 23 January 2011
Salesforce Administrators and Developers are routinely required to manipulate large amounts of data in a single task.

Examples of batch processes include:
  • Deleting all Leads that are older than 2 years
  • Replacing all occurrences of an old value with a new value
  • Updating user records with a new company name

The tools and options available are:
  1. Browser-based Admin features / Execute anonymous in System Log
  2. Data loader
  3. Generalized Batch Apex (Online documentation)
  4. Specialized Batch Apex
Option 1 (Admin Settings) represents the category of features and tools available when directly logging into Salesforce via a web browser. The transactions are typically synchronous and subject to Governor Limits.

Option 2 (Data Loader) provides Admins with an Excel-like approach: download data using the Apex Data Loader, manipulate the data on a local PC, then upload the data back to Salesforce. It's slightly more powerful than the browser-based tools, doesn't require programming skills, and is subject to the web service API governor limits (which are more generous); but it also requires slightly more manual effort and introduces the possibility of human error when mass updating records.

Option 3 (Generalized Batch Apex) introduces the option of asynchronous batch processes that can manipulate up to 50 million records in a single batch. It doesn't require programming (if using the 3 general purpose utility classes provided at the end of this article) and can be executed directly through the web browser, but it is limited to the general use cases supported by the utility classes.

Option 4 (Specialized Batch Apex) requires Apex programming and provides the most control of batch processing of records (such as updating several object types within a batch or applying complex data enrichment before updating fields).

Batch Apex Class Structure:

The basic structure of a batch apex class looks like:

global class BatchVerbNoun implements Database.Batchable<sObject>{
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query); //May return up to 50 Million records
    }
  
    global void execute(Database.BatchableContext BC, List<sObject> scope){       
        //Batch gets broken down into several smaller chunks
        //This method gets called for each chunk of work, passing in the scope of records to be processed
    }
   
    global void finish(Database.BatchableContext BC){   
        //This method gets called once when the entire batch is finished
    }
}
An Apex Developer simply fills in the blanks. The start() and finish() methods are both executed once, while the execute() method gets called 1-N times, depending on the number of batches.

Batch Apex Lifecycle

The Database.executeBatch() method is used to start a batch process. This method takes 2 parameters: instance of the batch class and scope.

BatchUpdateFoo batch = new BatchUpdateFoo();
Database.executeBatch(batch, 200);
The scope parameter defines the max number of records to be processed in each batch. For example, if the start() method returns 150,000 records and scope is defined as 200, then the overall batch will be broken down into 150,000/200 batches, which is 750. In this scenario, the execute() method would be called 750 times; and each time passed 200 records.

A note on batch sizes: Even though batch processes have significantly more access to system resources, governor limits still apply. A batch that executes a single DML operation may shoot for a batch scope of 500+. Batch executions that initiate a cascade of trigger operations will need to use a smaller scope. 200 is a good general starting point.

The start() method is called to determine the size of the batch then the batch is put into a queue. There is no guarantee that the batch process will start when executeBatch() is called, but 90% of the time the batch will start processing within 1 minute.

You can log in to Settings/Monitor/Apex Jobs to view batch progress.


Unit Testing Batch Apex:
The asynchronous nature of batch apex makes it notoriously difficult to unit test and debug. At Facebook, we use a general Logger utility that logs debug info to a custom object (adding to the governor limit footprint). The online documentation for batch apex provides some unit test examples, but the util methods in this post use a short hand approach to achieving test coverage.

Batch Apex Best Practices:
  • Use extreme care if you are planning to invoke a batch job from a trigger. You must be able to guarantee that the trigger will not add more batch jobs than the five that are allowed. In particular, consider API bulk updates, import wizards, mass record changes through the user interface, and all cases where more than one record can be updated at a time.
  • When you call Database.executeBatch, Salesforce.com only places the job in the queue at the scheduled time. Actual execution may be delayed based on service availability.
  • When testing your batch Apex, you can test only one execution of the execute method. You can use the scope parameter of the executeBatch method to limit the number of records passed into the execute method to ensure that you aren't running into governor limits.
  • The executeBatch method starts an asynchronous process. This means that when you test batch Apex, you must make certain that the batch job is finished before testing against the results. Use the Test methods startTest and stopTest around the executeBatch method to ensure it finishes before continuing your test.
  • Use Database.Stateful with the class definition if you want to share variables or data across job transactions. Otherwise, all instance variables are reset to their initial state at the start of each transaction.
  • Methods declared as future are not allowed in classes that implement the Database.Batchable interface.
  • Methods declared as future cannot be called from a batch Apex class.
  • You cannot call the Database.executeBatch method from within any batch Apex method.
  • You cannot use the getContent and getContentAsPDF PageReference methods in a batch job.
  • In the event of a catastrophic failure such as a service outage, any operations in progress are marked as Failed. You should run the batch job again to correct any errors.
  • When a batch Apex job is run, email notifications are sent either to the user who submitted the batch job, or, if the code is included in a managed package and the subscribing organization is running the batch job, the email is sent to the recipient listed in the Apex Exception Notification Recipient field.
  • Each method execution uses the standard governor limits for anonymous block, Visualforce controller, or WSDL method.
  • Each batch Apex invocation creates an AsyncApexJob record. Use the ID of this record to construct a SOQL query to retrieve the job’s status, number of errors, progress, and submitter. For more information about the AsyncApexJob object, see AsyncApexJob in the Web Services API Developer's Guide.
  • All methods in the class must be defined as global.
  • For a sharing recalculation, Salesforce.com recommends that the execute method delete and then re-create all Apex managed sharing for the records in the batch. This ensures the sharing is accurate and complete.
  • If in the course of developing a batch apex class you discover a bug during a batch execution, Don't Panic. Simply login to the admin console to monitor Apex Jobs and abort the running batch.
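The Database.Stateful point above can be sketched with a minimal batch class (the class name and query here are hypothetical). Without implementing Database.Stateful, the recordsProcessed counter would reset to its initial value at the start of every chunk:

```apex
global class BatchLeadCounter implements Database.Batchable<sObject>, Database.Stateful {
    // Preserved across execute() transactions because of Database.Stateful
    global Integer recordsProcessed = 0;

    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator('SELECT Id FROM Lead');
    }

    global void execute(Database.BatchableContext BC, List<sObject> scope){
        recordsProcessed += scope.size();
    }

    global void finish(Database.BatchableContext BC){
        // Reports the grand total across all chunks
        System.debug('Processed ' + recordsProcessed + ' records.');
    }
}
```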


Utility Batch Apex Classes:

The following batch Apex classes can be copied and pasted into any Salesforce org and called from the System Log (or Apex) using the "Execute Anonymous" feature. The general structure of these utility classes is:
  • Accept task-specific input parameters
  • Execute the batch
  • Email the admin with batch results once complete
To execute these utility batch apex classes:
1. Open the System Log

2. Click on the Execute Anonymous input text field.

3. Paste any of the following batch apex classes (along with corresponding input parameters) into the Execute Anonymous textarea, then click "Execute".


BatchUpdateField.cls
/*
Run this batch from Execute Anonymous tab in Eclipse Force IDE or System Log using the following

string query = 'select Id, CompanyName from User';
BatchUpdateField batch = new BatchUpdateField(query, 'CompanyName', 'Bedrock Quarry');
Database.executeBatch(batch, 100); //Make sure to execute in batch sizes of 100 to avoid DML limit error
*/
global class BatchUpdateField implements Database.Batchable<sObject>{
    global final String Query;
    global final String Field;
    global final String Value;
   
    global BatchUpdateField(String q, String f, String v){
        Query = q;
        Field = f;
        Value = v;
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){   
        for(sobject s : scope){
            s.put(Field,Value);
         }
        update scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchUpdateField ' + a.Status);
        mail.setPlainTextBody('The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, CompanyName from User';
        BatchUpdateField batch = new BatchUpdateField(query, 'CompanyName', 'Bedrock Quarry');
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
BatchSearchReplace.cls
/*
Run this batch from Execute Anonymous tab in Eclipse Force IDE or System Log using the following

string query = 'select Id, Company from Lead';
BatchSearchReplace batch = new BatchSearchReplace(query, 'Company', 'Sun', 'Oracle');
Database.executeBatch(batch, 100); //Make sure to execute in batch sizes of 100 to avoid DML limit error
*/
global class BatchSearchReplace implements Database.Batchable<sObject>{
    global final String Query;
    global final String Field;
    global final String SearchValue;
    global final String ReplaceValue;
   
    global BatchSearchReplace(String q, String f, String sValue, String rValue){
        Query = q;
        Field = f;
        SearchValue = sValue;
        ReplaceValue = rValue;
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){   
        for(sobject s : scope){
            string currentValue = String.valueof( s.get(Field) );
            if(currentValue != null && currentValue == SearchValue){
                s.put(Field, ReplaceValue);
            }
         }
        update scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchSearchReplace ' + a.Status);
        mail.setPlainTextBody('The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, Company from Lead';
        BatchSearchReplace batch = new BatchSearchReplace(query, 'Company', 'Foo', 'Bar');
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
BatchRecordDelete.cls:
/*
Run this batch from the Execute Anonymous tab in the Eclipse Force.com IDE or the System Log using the following:

string query = 'select Id from ObjectName where field=criteria';
BatchRecordDelete batch = new BatchRecordDelete(query);
Database.executeBatch(batch, 200); //Make sure to execute in batch sizes of 200 to avoid DML limit error
*/
global class BatchRecordDelete implements Database.Batchable<sObject>{
    global final String Query;
   
    global BatchRecordDelete(String q){
        Query = q;   
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){       
        delete scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchRecordDelete ' + a.Status);
        mail.setPlainTextBody(message);
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, CompanyName from User where CompanyName=\'foo\'';
        BatchRecordDelete batch = new BatchRecordDelete(query);
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
Sunday, 23 January 2011 12:07:02 (Pacific Standard Time, UTC-08:00)
# Tuesday, 14 December 2010
If you’re a Developer, then Dreamforce 2010 was a very good year. Perhaps there was a new killer business user feature announced in a Sales breakout session somewhere, but I unfortunately missed it. The conference kicked off with Cloudstock on Monday and each subsequent day brought about one announcement after another targeting cloud developers.

The ultimate in serendipitous geekery had to be the Node.js session with Ryan Dahl. One day I’m hacking away on node.js and the next I’m running into the core developer at CloudStock. I’m really hooked on this new recipe for the cloud of Linux+Node+NoSQL (is there a cool acronym for this stack yet? LinodeSQL?). Thread-based web server processing is starting to feel “old school” thanks to Ryan.

Database.com was the major announcement on Tuesday and, in my opinion, was way past due. The .NET open source toolkit co-launched with Salesforce in 2006 was built on the premise of using Salesforce as a language agnostic platform. Whether you are a Java, C#, Ruby, or PHP Developer should be irrelevant when using a database in the cloud that is accessible via web services (Given that ~50% of enterprise IT shops have Microsoft Developers on staff and C# adoption continues to grow, it seemed logical to win over this community with next generation tools and services that make them more productive in the cloud).

However, the launch of Apex and the AppExchange brought about a few years of obligatory Marketing and promotion of only native platform features while the language agnostic "hybrid" crowd sat patiently, admiring the work of Simon Fell's web services API and the potential for real-time integration between apps.

The “language agnosticism” of database.com was further reinforced with the announced acquisition of Heroku. Whether the Ruby community would have naturally gravitated to database.com on their own or the acquisition was necessary to accelerate and demonstrate the value will be perpetually debated.

But the Heroku acquisition somewhat makes sense to me. Back in April I wrote the following about VMForce:

"I think other ORM Developer communities, such as Ruby on Rails Developers, will appreciate what is being offered with VMForce, prompting some to correctly draw parallels between VMForce and Engine Yard."

Same concept, different Ruby hosting vendor (Engine Yard has the low-level levers necessary for enterprise development IMO). The ORM mentality of RoR Developers, who are simply tired of futzing around with relational DBs, indexes, and clusters, is a good D-Day beachhead from which Salesforce can launch their new platform message.

Salesforce Marketing will probably need to tread carefully around the message of “Twitter and Groupon use Ruby on Rails” to maintain credibility in this community. While these statements are technically true, Fail Whales galore prompted Twitter to massively rearchitect their platform, which resulted in the development of Flock DB and crap loads of memcache servers.

The fact remains that very few massively scaled cloud services run on a relational database. Twitter, Groupon, Facebook, and most other sites run on eventually consistent, massively scaled NoSQL (Not only SQL) architectures. Only Salesforce has developed the intellectual property, talent, and index optimizing algorithms to carry forward relational ACID transactions into the cloud.

The pricing and scalability of database.com appear to fit well for SMB apps or 1-3 month ephemeral large enterprise apps (campaigns or conference apps like Dreamforce.com).

REST API
The RESTful interface hack blogged back in May will soon be a fully supported feature in Spring ‘11.

SiteForce
SiteForce looked pretty impressive. I’m guessing SiteForce is the work of SiteMasher talent. Web Designers and Developers accustomed to using apps like Dreamweaver or Front Page will definitely want to check this out.

Governor Limits
Oh yeah, we all hate ‘em but understand they’re a necessary evil to keep rogue code from stealing valuable computing resources away from other tenants on the platform. The big news was that the number of governor limits will be dropping from ~55 down to 16 in the next major release by removing the trigger context limits (this brought serious applause from the crowd).

Platform State of the Union
The Developer platform state of the union was full of surprises. Shortly after being given a Developer Hero award for the Chatter Bot app developed earlier this year, Salesforce demonstrated full breakpoint/step-through debugging between Eclipse and a Salesforce Org.

This is a skunkworks-type project still in its infancy that will hopefully see the light of day. The demo definitely left me wondering “How’d he do that? Persistent UDP connections? Is that HTTP or some other protocol? Is a database connection being left open? What are the timeout limitations? How does Eclipse get a focus callback from a web browser?”

Permission Sets
Where were they? I was really hoping Salesforce would go all in and take their cloud database technology to the next level of access control by announcing granular permission management with the release of permission sets.

This is a subtle feature not widely appreciated by most Salesforce users or admins, but any Salesforce org with more than 100 users understands the need for permission sets.

Conclusion
The technology and features were great, but the real highlight of the conference was networking with people.

I really need to hang out with more Salesforce employees now that I live in the bay. Conversations with the Salesforce CIO, Evangelists, Engineers, and Product Managers were energizing.

To have our CIO and IT Team attend Dreamforce and be aligned on Force.com as a strategic platform is invigorating and makes Facebook an exciting place to work.

The family of Salesforce friends on Twitter continues to grow. It’s always great to meet online friends in person and hang out with existing friends. See you all at Dreamforce 2011!

Honored to receive one of three Developer Hero awards. Thank you Salesforce!
Tuesday, 14 December 2010 22:39:19 (Pacific Standard Time, UTC-08:00)
# Sunday, 21 November 2010

"How can I add a picklist field?"
"Can I modify this report?"
"Would you please update the Lead trigger to de-dupe by email address?"

Salesforce Administrators are faced with these and many more questions on a routine basis. Salesforce.com CRM provides sufficiently granular access permissions to control precisely what an end user can and cannot do. As end users click around Salesforce, the general rule is "If you can do it, you probably have permission". However, the same cannot be said for System and Delegated Administrators.

Once a user is given administrative access to application settings, then more training and monitoring must be imposed. In short, a "change management process" must be implemented.

There are a number of business drivers for a change management process:
  1. Compliance - Your industry may require that certain changes be reviewed, approved, documented, and deployed in a methodical way
  2. Productivity - The smallest change in a user interface can result in hours of lost productivity if users aren't given advance warning or training
  3. Reliability - Prevent the deployment of changes that may break existing workflows or processes

Change management processes attempt to answer all or some of the following questions:
  • Who approved and/or made the change?
  • Why was the change needed?
  • When was the change made?
  • How was the change deployed?

A change management matrix can help identify how each type of change should be managed.

Steps for creating a change management matrix:
  • Create a list of common Salesforce changes in a spreadsheet
  • Define a spectrum of change management categories (For example, red/yellow/green lights)
  • Periodically sit down with all Sys Admins, Developers and Approvers and review how the organization should respond to each type of change category
  • There will always be exceptions. Add an "Exceptions" column to the matrix and document them
  • Train admins on the use of built-in auditing tools to ensure compliance with the CM process

Sunday, 21 November 2010 13:07:15 (Pacific Standard Time, UTC-08:00)
# Tuesday, 26 October 2010

Career__c career = [SELECT Id, Name FROM Career__c WHERE Culture__c='Cool' AND Location__c='Palo Alto, CA' AND Description__c LIKE 'Salesforce%' AND PerkId__c IN (select Id from Perk__c) LIMIT 1];
system.assertEquals('Software Application Developer', career.Name);


Join an incredible team that is shaping the future of Facebook with the development of enterprise apps in the cloud.

Apply for this position

Force.com Application Developer
Palo Alto, CA

Description
Facebook is seeking an experienced application developer to join the IT team and participate in the development, integration and maintenance of Force.com applications supporting Facebook data centers' consigned inventory asset tracking. This is a full-time position based in our main office in Palo Alto.

Responsibilities:
•    Technical design, configuration, development and testing of Force.com custom applications, interfaces and reports;
•    Model, analyze and develop or extend persistent database structures which are non-intrusive to the base application code and which effectively and efficiently implement business requirements;
•    Integrate Force.com applications with other Facebook external or internal business applications and tools.
•    Develop UI and ACL tailored to Facebook employees and suppliers.
•    Apply sound release management and configuration management principles to ensure the stability of production environments;
•    Participate in all phases of software development/implementation life cycle including functional analysis, development of technical requirements, prototyping, coding, testing, deployment and support;
•    Participate in peer design and code review and analyze and troubleshoot issues in a complex applications environment, which include Salesforce.com, force.com, Oracle E-Business Application Suite R12 and custom built lamp stack based tools and systems. 
•    Research and understand force.com capabilities to recommend best design/implementation approaches and meet business requirements.
•    Plan and implement data migration and conversion activities;
•    Provide daily and 24x7 on-call support

Requirements:
•    Bachelor's in Computer Science, any engineering field, or closely related field, or foreign equivalent;
•    Passionate about Salesforce and building apps on the force.com platform
•    At least 6 years of design, configuration, customization and development experience with Salesforce in general and Force.com in particular
•    Strong development background with APEX and VisualForce.
•    Strong knowledge of Salesforce/Force.com API and toolkits for integration.
•    Strong understanding of RDBMS concepts and programming using SQL and SOQL;
•    Background in database modeling and performance tuning;
•    Knowledge of best practices around securing and auditing custom applications on Force.com
•    Background in design and development of web based solutions using Java, JavaScript, Ajax, SOAP, XML and web programming in general.
•    Strong experience with business process and workflow and translating them into system requirements.




Tuesday, 26 October 2010 08:15:01 (Pacific Daylight Time, UTC-07:00)
# Saturday, 10 July 2010


Facebook has nearly 500 million users (at the time of this writing) and is poised to transform how customer relationships are acquired, cultivated, and supported.

Consider the evolution of Contact management over the past 50 years:
  • Paper: Write names, addresses, birthdays, and other notes down on paper
  • Rolodex: Everyone trades business cards and keeps a local paper copy
  • PC Software: Rolodex moved to PC (Spreadsheets, Goldmine, Act!)
  • Client / Server Software: Many people in same office share common contact database (Siebel, MS CRM)
  • Software as a Service: Contacts moved to Internet hosted server. Accessible from anywhere (Salesforce.com)
  • Crowd sourced: 3rd parties pooling contact information to improve data quality, keep lists up to date (Plaxo, Jigsaw)
  • Social Networking: Contact maintains singular identity. Always up to date. Control of privacy and disclosure (LinkedIn, Facebook)
The consumer now has more power than ever. Consumers now generate more information about themselves than can be generated by 3rd parties.

Buying a list of leads that may be interested in buying camping gear will have nowhere near the effectiveness of publishing an ad on Facebook targeted at consumers with a self-identified interest in camping (see The Role of Advertising at Facebook).

Consumers will increasingly defer to their friends' recommendations on which restaurants to visit, which shoes to buy, and which cars to drive.

CRM systems no longer contain the master records for Contact information.

Businesses must evolve past collecting and cleansing Contacts and instead collect meta-information that refers to customer-managed online profiles, such as Facebook and LinkedIn, for these resources are now truly authoritative. When there is a conflict between CRM Contact information and an online social network profile, the social profile will be the master record.

Websites must evolve to become applications. Web forms soliciting contact information will become a thing of the past. Consumers will not have the patience to do anything more than a single click to identify interest in a product or service.

The video embedded in this blog post (and available here) demonstrates a Cool Sites application integrated with Facebook Connect and Salesforce CRM.
Saturday, 10 July 2010 16:31:06 (Pacific Daylight Time, UTC-07:00)
# Thursday, 08 July 2010

No, not this Trigger... keep reading...

Trigger development (apologies to Roy Rogers' horse) is not done on a daily basis by a typical Force.com Developer.

In my case, Trigger development is similar to using regular expressions (regex) in that I often rely on documentation and previously developed code examples to refresh my memory, do the coding, then put it aside for several weeks/months.

I decided to create a more fluent Trigger template to address the following challenges and prevent me from repeatedly making the same mistakes:

  • Bulkification best practices not provisioned by the Trigger creation wizard
  • Use of the 7 boolean context variables in code (isInsert, isBefore, etc...) greatly impairs readability and long-term maintainability
  • Trigger.old and Trigger.new collections are not available in certain contexts
  • Asynchronous trigger support not natively built-in

The solution was to create a mega-Trigger that handles all events and delegates them to an Apex trigger handler class.

You may want to customize this template to your own style. Here are some design considerations and assumptions in this template:

  • Use of traditional event method names on the handler class (OnBeforeInsert, OnAfterInsert)
  • Maps are used where they are most relevant
  • Objects in Trigger map collections cannot be modified; however, nothing in the compiler prevents you from trying. Remove the maps whenever they're not needed.
  • Maps are most useful when triggers modify other records by IDs, so they're included in update and delete triggers
  • Encourage use of asynchronous trigger processing by providing pre-built @future methods.
  • @future methods only support collections of primitive types, so passing a Set of record IDs is the preferred style.
  • Avoid use of before/after if not relevant. Example: OnUndelete is simpler than OnAfterUndelete (before undelete is not supported)
  • Provide boolean properties for determining trigger context (Is it a Trigger or VF/WebService call?)
  • There are no return values. Handler methods are assumed to assert validation rules using addError() to prevent commit.

References:
Apex Developers Guide - Triggers
Steve Anderson - Two interesting ways to architect Apex triggers

AccountTrigger.trigger

trigger AccountTrigger on Account (after delete, after insert, after undelete, 
after update, before delete, before insert, before update) {
	AccountTriggerHandler handler = new AccountTriggerHandler(Trigger.isExecuting, Trigger.size);
	
	if(Trigger.isInsert && Trigger.isBefore){
		handler.OnBeforeInsert(Trigger.new);
	}
	else if(Trigger.isInsert && Trigger.isAfter){
		handler.OnAfterInsert(Trigger.new);
		AccountTriggerHandler.OnAfterInsertAsync(Trigger.newMap.keySet());
	}
	
	else if(Trigger.isUpdate && Trigger.isBefore){
		handler.OnBeforeUpdate(Trigger.old, Trigger.new, Trigger.newMap);
	}
	else if(Trigger.isUpdate && Trigger.isAfter){
		handler.OnAfterUpdate(Trigger.old, Trigger.new, Trigger.newMap);
		AccountTriggerHandler.OnAfterUpdateAsync(Trigger.newMap.keySet());
	}
	
	else if(Trigger.isDelete && Trigger.isBefore){
		handler.OnBeforeDelete(Trigger.old, Trigger.oldMap);
	}
	else if(Trigger.isDelete && Trigger.isAfter){
		handler.OnAfterDelete(Trigger.old, Trigger.oldMap);
		AccountTriggerHandler.OnAfterDeleteAsync(Trigger.oldMap.keySet());
	}
	
	else if(Trigger.isUnDelete){
		handler.OnUndelete(Trigger.new);	
	}
}

AccountTriggerHandler.cls

 
public with sharing class AccountTriggerHandler {
	private boolean m_isExecuting = false;
	private integer BatchSize = 0;
	
	public AccountTriggerHandler(boolean isExecuting, integer size){
		m_isExecuting = isExecuting;
		BatchSize = size;
	}
		
	public void OnBeforeInsert(Account[] newAccounts){
		//Example usage
		for(Account newAccount : newAccounts){
			if(newAccount.AnnualRevenue == null){
				newAccount.AnnualRevenue.addError('Missing annual revenue');
			}
		}
	}
	
	public void OnAfterInsert(Account[] newAccounts){
		
	}
	
	@future public static void OnAfterInsertAsync(Set<ID> newAccountIDs){
		//Example usage
		List<Account> newAccounts = [select Id, Name from Account where Id IN :newAccountIDs];
	}
	
	public void OnBeforeUpdate(Account[] oldAccounts, Account[] updatedAccounts, Map<ID, Account> accountMap){
		//Example Map usage
		Map<ID, Contact> contacts = new Map<ID, Contact>( [select Id, FirstName, LastName, Email from Contact where AccountId IN :accountMap.keySet()] );
	}
	
	public void OnAfterUpdate(Account[] oldAccounts, Account[] updatedAccounts, Map<ID, Account> accountMap){
		
	}
	
	@future public static void OnAfterUpdateAsync(Set<ID> updatedAccountIDs){
		List<Account> updatedAccounts = [select Id, Name from Account where Id IN :updatedAccountIDs];
	}
	
	public void OnBeforeDelete(Account[] accountsToDelete, Map<ID, Account> accountMap){
		
	}
	
	public void OnAfterDelete(Account[] deletedAccounts, Map<ID, Account> accountMap){
		
	}
	
	@future public static void OnAfterDeleteAsync(Set<ID> deletedAccountIDs){
		
	}
	
	public void OnUndelete(Account[] restoredAccounts){
		
	}
	
	public boolean IsTriggerContext{
		get{ return m_isExecuting;}
	}
	
	public boolean IsVisualforcePageContext{
		get{ return !IsTriggerContext;}
	}
	
	public boolean IsWebServiceContext{
		get{ return !IsTriggerContext;}
	}
	
	public boolean IsExecuteAnonymousContext{
		get{ return !IsTriggerContext;}
	}
}
Thursday, 08 July 2010 14:16:12 (Pacific Daylight Time, UTC-07:00)
# Tuesday, 29 June 2010
There are really only 2 tech blogs that I read; TechCrunch.com and ReadWriteWeb.com (RWW). They both provide balanced coverage of the consumer and enterprise markets while remaining objective. You don't get the sense that site sponsors and advertisers are driving the stories. I admire their integrity.

I've been particularly impressed with RWW's coverage on the cloud and the Internet of Things, so I was absolutely thrilled when Alex Williams ran with a story yesterday about Chatter Bot titled "How to Connect an Office Building to an Activity Stream". Check it out! (A hint on the title of this blog :-) )

Tuesday, 29 June 2010 11:32:15 (Pacific Daylight Time, UTC-07:00)
# Thursday, 24 June 2010

Chatter Developer Challenge / Hackathon 2010 Roundup

The Chatter Developer Challenge sponsored by Salesforce encouraged Developers to create a wide variety of applications that demonstrate the new Salesforce Chatter API.

The challenge culminated in a Hackathon event on June 22nd, 2010 at the San Jose Convention Center where prizes were awarded for various applications.

My entry, Chatter Bot, demonstrated the use of Chatter within a Facility Management application that captured physical world events and moved them to the cloud to produce Chatter feed posts.

Chatter Bot is a system comprised of 4 major components:

  • Arduino board with motion and light sensors (C/C++)
  • Proxy Service (Java Processing.org Environment)
  • Salesforce Sites HTTP Listener (Visualforce/Apex)
  • Facility Management App (Force.com database and web forms)
(Source code to all components available at the bottom of this post)

I was elated to learn a few days before the hackathon that Chatter Bot had been selected as a finalist and I was strongly encouraged to attend. So I packed up Chatter Bot to take the 2-hour flight from Portland to San Jose.

It wasn't until I arrived at the airport that it suddenly dawned on me how much Chatter Bot bears a striking resemblance to a poorly assembled explosive device. Apparently the TSA agent handling the X-Ray machine thought so too, and I was taken aside for the full bomb-sniffing and search routine.

It crossed my mind to add a bit of levity to the situation by making some kind of remark, but I quickly assessed that I was probably one misinterpreted comment away from being whisked off in handcuffs to some TSA lockup room. Ironically, I had no problem with security in San Jose coming back. They must be accustomed to these types of devices in Silicon Valley.

Upon arriving in San Jose, I set up Chatter Bot and configured the San Jose Convention Center (SJCC) as a Building Facility (custom object) to be monitored.

Several assets were created to represent some rooms within the SJCC.

Finally, the Chatter Bot was associated with a particular room (Asset) through an intersection object called AssetSensors that relates a device ID (usually a MAC address) and an Asset.

Within minutes the motion and light sensors were communicating to the cloud via my laptop and reporting on activity in the Hackathon room.

Given the high quality and functionality of fellow competitors' apps, such as the very cool Chatter for Android app by Jeff Douglas, and observations from the public voting, I thought Chatter Bot might be a little too "out of the box" to take a prize. It was a genuinely surreal and surprising moment when I learned Chatter Bot received the grand prize.

Thank you Salesforce for hosting such a great event and thank you to the coop-etition for the encouraging exchange of ideas and feedback during the challenge!

Arduino Sensor

/////////////////////////////
//VARS
//the time when the sensor outputs a low impulse
long unsigned int lowIn;

//the amount of milliseconds the sensor has to be low 
//before we assume all motion has stopped
long unsigned int pause = 5000;

boolean lockLow = true;
boolean takeLowTime;  

int LDR_PIN = 2;    // the analog pin for reading the LDR (Light Dependent Resistor)
int PIR_PIN = 3;    // the digital pin connected to the PIR sensor's output
int LED_PIN = 13;

byte LIGHT_ON    = 1;
byte LIGHT_OFF   = 0;
byte previousLightState  = LIGHT_ON;
unsigned long lightLastChangeTimestamp = 0;  // compared against millis(), so must be unsigned long (int overflows on Arduino)
unsigned int LIGHT_ON_MINIMUM_THRESHOLD = 1015;
unsigned long lastListStateChange = 0; //Used to de-bounce borderline transitions.

// Messages
int SENSOR_MOTION = 1;
int SENSOR_LIGHT  = 2;

/////////////////////////////
//SETUP
void setup(){  
  //PIR initialization
  pinMode(PIR_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
  digitalWrite(PIR_PIN, LOW);
  
  Serial.begin(9600);
  
  InitializeLED();
  InitializeLightSensor();
  InitializeMotionSensor();
}

////////////////////////////
//LOOP
void loop(){
  
  if(digitalRead(PIR_PIN) == HIGH){
    digitalWrite(LED_PIN, HIGH);   //the led visualizes the sensors output pin state
    if(lockLow){
      //makes sure we wait for a transition to LOW before any further output is made:
      lockLow = false;
      writeMeasure(SENSOR_MOTION, HIGH);
      delay(50);
      digitalWrite(LED_PIN, LOW);   //the led visualizes the sensors output pin state
    }
    takeLowTime = true;
  }

  if(digitalRead(PIR_PIN) == LOW){
    digitalWrite(LED_PIN, LOW);  //the led visualizes the sensors output pin state
    if(takeLowTime){
      lowIn = millis();          //save the time of the transition from high to LOW
      takeLowTime = false;       //make sure this is only done at the start of a LOW phase
    }
    
    //if the sensor is low for more than the given pause, 
    //we assume that no more motion is going to happen
    if(!lockLow && millis() - lowIn > pause){
      //makes sure this block of code is only executed again after 
      //a new motion sequence has been detected
      lockLow = true;
      writeMeasure(SENSOR_MOTION, LOW);
      delay(50);
    }
  }
  
  ProcessLightSensor();
}

void InitializeLED(){
  Serial.println("INIT: Initializing LED (should see 3 blinks)... ");
  for(int i=0; i < 3; i++){
    digitalWrite(LED_PIN, HIGH);
    delay(500);
    digitalWrite(LED_PIN, LOW);
    delay(500);
  }
}

//the time we give the motion sensor to calibrate (10-60 secs according to the datasheet)
int calibrationTime = 10;

void InitializeMotionSensor(){
  //give the sensor some time to calibrate
  Serial.print("INIT: Calibrating motion sensor (this takes about ");
  Serial.print(calibrationTime);
  Serial.print(" seconds) ");
  for(int i = 0; i < calibrationTime; i++){
    Serial.print(".");
    delay(1000);
  }
  Serial.println(" done");
  Serial.println("INIT: SENSOR ACTIVE");
  delay(50);
}

void InitializeLightSensor(){
  Serial.print("INIT: Initializing light sensor. Light on threshold set to ");
  Serial.println(LIGHT_ON_MINIMUM_THRESHOLD);
  Serial.println("INIT: 20 samples follow...");
  for(int i = 0; i < 20; i++){
    int lightLevelValue = analogRead(LDR_PIN);
    Serial.print("INIT: ");
    Serial.println(lightLevelValue);
  }
}

boolean ProcessLightSensor(){
  byte currentState = previousLightState;
  int lightLevelValue = analogRead(LDR_PIN);  // returns value 0-1023. 0=max light. 1,023 means no light detected.
  
  if(lightLevelValue < LIGHT_ON_MINIMUM_THRESHOLD){
     currentState = LIGHT_ON;
  }
  else{
     currentState = LIGHT_OFF;
  }
  
  if(LightStateHasChanged(currentState) && !LightStateIsBouncing() ){
    previousLightState = currentState; 
    
    if(currentState == LIGHT_ON){
      writeMeasure(SENSOR_LIGHT, HIGH);
    }
    else{
      writeMeasure(SENSOR_LIGHT, LOW);
    }
    
    delay(2000);
    lightLastChangeTimestamp = millis();
    
    return true;
  }
  else{
    return false; 
  }
}

boolean LightStateHasChanged(byte currentState){
   return currentState != previousLightState; 
}

//De-bounce LDR readings in case light switch is being quickly turned on/off
unsigned int MIN_TIME_BETWEEN_LIGHT_CHANGES = 5000;
boolean LightStateIsBouncing(){
   if(millis() - lightLastChangeTimestamp < MIN_TIME_BETWEEN_LIGHT_CHANGES){
      return true; 
   }
   else{
      return false; 
   }
}

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; 
char deviceID[] = "007DEADBEEF0";
//Format MEASURE|version|DeviceID|Sensor Type|State (on/off)
void writeMeasure(int sensorType, int state){
  Serial.print("MEASURE|v1|");
  
  Serial.print(deviceID);
  Serial.print("|");
  
  if(sensorType == SENSOR_MOTION)
    Serial.print("motion|");
  else if(sensorType == SENSOR_LIGHT)
    Serial.print("light|");
  else
    Serial.print("unknown|");
  
  if(state == HIGH)
    Serial.print("on");
  else if(state == LOW)
    Serial.print("off");
  else
    Serial.print("unknown");
  
  Serial.println("");
}
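
The pipe-delimited line produced by writeMeasure() above (e.g. MEASURE|v1|007DEADBEEF0|motion|on) is ultimately split back into its five fields by the Apex listener controller further down. As an illustrative sketch only (the class and field names here are my own, not part of the original system), the same parsing could be expressed in Java:

```java
// Hypothetical parser for the "MEASURE|version|DeviceID|SensorType|State"
// protocol emitted by the Arduino sketch. Class/field names are assumptions.
public class MeasureMessage {
    public final String version;
    public final String deviceId;
    public final String sensorType;
    public final String state;

    private MeasureMessage(String version, String deviceId,
                           String sensorType, String state) {
        this.version = version;
        this.deviceId = deviceId;
        this.sensorType = sensorType;
        this.state = state;
    }

    // Returns null when the line is not a well-formed MEASURE message.
    public static MeasureMessage parse(String line) {
        // split() takes a regex, so the pipe delimiter must be escaped
        String[] parts = line.trim().split("\\|");
        if (parts.length != 5 || !"MEASURE".equals(parts[0])) {
            return null;
        }
        return new MeasureMessage(parts[1], parts[2], parts[3], parts[4]);
    }
}
```

The Apex controller performs the equivalent split on the Data query string parameter before looking up the AssetSensor__c binding by device ID.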

Chatter Bot Proxy (Processing.org Environment)

import processing.serial.*;

Serial port;
String buffer = "";

void setup()
{
    size(255,255);
    println(Serial.list());
    port = new Serial(this, "COM7", 9600);
}

void draw()
{
  if(port.available() > 0){
    int inByte = port.read();
    print( char(inByte) );
    if(inByte != 10){ //check newline
      buffer = buffer + char(inByte);
    }
    else{
       if(buffer.length() > 1){
          if(IsMeasurement(buffer)){
              postToForce(buffer);
          }
          buffer = "";
          port.clear();
       }
    }
  }
}

boolean IsMeasurement(String message){
  return message.indexOf("MEASURE") > -1;
}

void postToForce(String message){
  String[] results = null;
  try
  {
    URL url= new URL("http://listener-developer-edition.na7.force.com/api/measure?data=" + message);
    URLConnection connection = url.openConnection();

    connection.setRequestProperty("User-Agent",  "Mozilla/5.0 (Processing)" );
    connection.setRequestProperty("Accept",  "text/plain,text/html,application/xhtml+xml,application/xml" );
    connection.setRequestProperty("Accept-Language",  "en-us,en" );
    connection.setRequestProperty("Accept-Charset",  "utf-8" );
    connection.setRequestProperty("Keep-Alive",  "300" );
    connection.setRequestProperty("Connection",  "keep-alive" );
    
    results = loadStrings(connection.getInputStream());  
  }
  catch (Exception e) // MalformedURL, IO
  {
    e.printStackTrace();
  }

  if (results != null)
  {
    for(int i=0; i < results.length; i++){
      println( results[i] );
    }
  }
}
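A proxy like this trusts the loose substring check in IsMeasurement(); a stricter guard would verify the full five-part MEASURE|version|deviceID|sensorType|state framing before posting. A minimal Python sketch of that idea (function names are illustrative, not from the original code):

```python
# Hypothetical stricter validation a proxy could apply before posting.
# Mirrors the five-part MEASURE|version|deviceID|sensorType|state framing
# emitted by the Arduino sketch.

EXPECTED_PARTS = 5

def is_measurement(message: str) -> bool:
    """Loose check, equivalent to the Processing IsMeasurement()."""
    return "MEASURE" in message

def is_well_formed(message: str) -> bool:
    """Stricter check: MEASURE prefix and exactly five pipe-delimited parts."""
    parts = message.split("|")
    return len(parts) == EXPECTED_PARTS and parts[0] == "MEASURE"

print(is_well_formed("MEASURE|v1|007DEADBEEF9|motion|on"))  # True
print(is_well_formed("MEASURE|v1|007DEADBEEF9"))            # False
```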

Visualforce Site Chatter Listener

<apex:page controller="measureController" action="{!processRequest}" 
contentType="text/plain; charset=utf-8" showHeader="false" 
standardStylesheets="false" sidebar="false">
{!Result}
</apex:page>

Controller

public with sharing class measureController {
	public void processRequest(){
    	if(Data != null){
    		system.debug('data= ' + Data);
    	}
    	
    	CreateFeedPosts();
    }
    
    private void CreateFeedPosts(){
    	if(AssetDeviceBindings.size() == 0)
    		return;
    	
    	for(AssetSensor__c binding : AssetDeviceBindings){
	    	FeedPost newFeedPost = new FeedPost();
	    	newFeedPost.parentId = binding.Asset__c;
			newFeedPost.Type = 'TextPost';
	        newFeedPost.Body = FeedPostMessage();
	        insert newFeedPost;
    	}
    }
    
    private string FeedPostMessage(){
    	if(AssetDeviceBindings.size() == 0)
    		return '';
    	
    	if(SensorType == 'motion'){
    		if(State == 'on')
    			return 'Motion detected';
    		else
    			return 'Motion stopped';
    	}
    	else if(SensorType == 'light'){
    		return 'Lights turned ' + State;
    	}
    	else
    		return 'Unknown sensor event';
    }
    
    private List<AssetSensor__c> m_assetSensor = null;
    public List<AssetSensor__c> AssetDeviceBindings{
    	get{
    		if(m_assetSensor == null){
    			m_assetSensor = new List<AssetSensor__c>();
    			if(DeviceID != null){
    				m_assetSensor = [select Id, Name, Asset__c, DeviceID__c from AssetSensor__c where DeviceID__c=:DeviceID limit 500];
    			}
    		}
    		return m_assetSensor;
    	}
    }
    
    private integer EXPECTED_MESSAGE_PARTS = 5;
    private integer DATA_MESSAGE_TYPE = 0;
    private integer DATA_VERSION	= 1;
    private integer DATA_DEVICEID	= 2;
    private integer DATA_SENSOR_TYPE= 3;
    private integer DATA_STATE		= 4;
    
    private List<string> m_dataParts = null;
    public List<string> DataParts{
    	get{
    		if(m_dataParts == null && Data != null){
    			m_dataParts = Data.split('\\|');
    		}
    		return m_dataParts;
    	}
    }
    
    public string Version{
    	get{
    		if(Data != null && DataParts.size() >= EXPECTED_MESSAGE_PARTS){
    			return DataParts[DATA_VERSION];
    		}
    		else
    			return null;
    	}
    }
    
    public string DeviceID{
    	get{
    		if(Data != null && DataParts.size() >= EXPECTED_MESSAGE_PARTS){
    			return DataParts[DATA_DEVICEID];
    		}
    		else
    			return null;
    	}
    }
    
    public string SensorType{
    	get{
    		if(Data != null && DataParts.size() >= EXPECTED_MESSAGE_PARTS){
    			return DataParts[DATA_SENSOR_TYPE];
    		}
    		else
    			return null;
    	}
    }
    
    public string State{
    	get{
    		if(Data != null && DataParts.size() >= EXPECTED_MESSAGE_PARTS){
    			return DataParts[DATA_STATE];
    		}
    		else
    			return null;
    	}
    } 
    
    private string m_data = null;
    public string Data{
    	get{
    		if(m_data == null && ApexPages.currentPage().getParameters().get('data') != null){
    			m_data = ApexPages.currentPage().getParameters().get('data');
    		}
    		return m_data;
    	}
    }
    
    public String Result{
    	get{
    		return 'ok';
    	}
    }
    
    public static testMethod void tests(){
    	Asset testAsset = new Asset();
    	testAsset.Name = 'Test Asset';
    	testAsset.AccountID = [select Id from Account order by CreatedDate desc limit 1].Id;
    	insert testAsset;
    	
    	AssetSensor__c binding = new AssetSensor__c();
    	binding.Name = 'Test Binding';
    	binding.DeviceID__c = '007DEADBEEF9';
    	binding.Asset__c = testAsset.Id;
    	insert binding;
    	
    	measureController controller = new measureController();
    	controller.processRequest();
    	system.assert(controller.Data == null);
    	system.assert(controller.DataParts == null);
    	system.assert(controller.Version == null);
    	system.assert(controller.DeviceID == null);
    	system.assert(controller.SensorType == null);
    	system.assert(controller.State == null);
    	
    	string TEST_MEASURE = 'MEASURE|v1|007DEADBEEF9|motion|on';
    	ApexPages.currentPage().getParameters().put('data', TEST_MEASURE);
    	controller = new measureController();
    	controller.processRequest();
    	system.assert(controller.Data == TEST_MEASURE);
    	system.assert(controller.DataParts != null);
    	system.assert(controller.DataParts.size() == 5);
    	system.assert(controller.Version == 'v1');
    	system.assert(controller.DeviceID == '007DEADBEEF9');
    	system.assert(controller.SensorType == 'motion');
    	system.assert(controller.State == 'on');
    	
    	system.assert(controller.AssetDeviceBindings != null);
    	system.assert(controller.AssetDeviceBindings.size() == 1);
    	system.assertEquals('007DEADBEEF9', controller.AssetDeviceBindings[0].DeviceID__c);
    	system.assertEquals(testAsset.Id, controller.AssetDeviceBindings[0].Asset__c);
    	
    	system.assert(controller.Result == 'ok');
    }
}
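The controller's lazy properties all reduce to splitting the data parameter on "|" and indexing into the parts. A compact Python mirror of that parsing, with field names that follow the Apex index constants (the helper itself is hypothetical):

```python
# Mirrors measureController's DataParts/Version/DeviceID/SensorType/State
# parsing: split on "|" and index by position.

EXPECTED_MESSAGE_PARTS = 5
FIELDS = ("message_type", "version", "device_id", "sensor_type", "state")

def parse_measure(data):
    """Return a dict of the five MEASURE fields, or None if malformed."""
    parts = data.split("|")
    if len(parts) < EXPECTED_MESSAGE_PARTS:
        return None  # the Apex getters return null in this case
    return dict(zip(FIELDS, parts))

msg = parse_measure("MEASURE|v1|007DEADBEEF9|motion|on")
print(msg["device_id"], msg["sensor_type"], msg["state"])  # 007DEADBEEF9 motion on
```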
Thursday, 24 June 2010 18:03:10 (Pacific Daylight Time, UTC-07:00)
# Monday, 21 June 2010
I'm very happy to announce that Cool Sites was released this weekend. Cool Sites provides a gallery of pre-built page templates, plugins, and sites built on Salesforce Sites.

Basic web content management tools and workflows for creating navigation menus and web pages are included. Check it out! http://www.getcoolsites.com


Monday, 21 June 2010 15:43:39 (Pacific Daylight Time, UTC-07:00)
# Monday, 07 June 2010

Here's a fun video put together for my Salesforce Chatter Developer Challenge entry.

Monday, 07 June 2010 09:57:05 (Pacific Daylight Time, UTC-07:00)
# Friday, 28 May 2010
A true REST interface with full support for HTTP Verbs, status codes, and URIs is currently not available on the Salesforce.com platform. However, a simple REST-like interface for getting objects can be developed using Salesforce Sites, Visualforce, and Apex.

This example uses a free Developer Edition with a Site named 'api' that uses only 2 Visualforce pages named 'rest' and 'error'. The rest page accepts a single parameter named 'soql', executes the SOQL query, and returns a JSON formatted response.



The error page is also used to generically handle all 40x and 50x errors.



The body of the error page returns a simple JSON message that the api is unavailable.
<apex:page contenttype="application/x-JavaScript; charset=utf-8" 
showheader="false" standardstylesheets="false" sidebar="false">
{"status": 500, "error": "api currently not available"}
</apex:page>

The rest Visualforce page (full source at bottom of this post) accepts a SOQL parameter and returns JSON results. To get a list of all Leads with their First and Last names, you'd use the SOQL

select Id, FirstName, LastName from Lead
and pass this query to the REST interface in a GET format such as (example here)
http://cubic-compass-developer-edition.na7.force.com/api?soql=select%20Id,%20FirstName,%20LastName%20from%20Lead
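The percent-encoded URL above is just the raw SOQL run through standard URL encoding; a Python sketch of building it (endpoint copied from the post, helper name hypothetical):

```python
from urllib.parse import quote

ENDPOINT = "http://cubic-compass-developer-edition.na7.force.com/api"

def rest_url(soql: str) -> str:
    # Percent-encode the SOQL so spaces become %20; commas are left
    # literal to match the example URL in the post.
    return ENDPOINT + "?soql=" + quote(soql, safe=",")

print(rest_url("select Id, FirstName, LastName from Lead"))
```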

Note that the rest page is defined as the default handler for the site named 'api', so it's not required in the URL.
This simple interface supports any flavor of SOQL, including the WHERE and LIMIT keywords, so you can pass queries like

select Id, FirstName, LastName from Lead where LastName='Smith' limit 20
REST interfaces often assume the unique ID of an object is the last portion of the URL request. This can similarly be achieved with a query like (example here)
select Id, FirstName, LastName from Lead where Id='00QA00000019xkpMAA' limit 1

All of these example queries will only return the Id field by default. To fix this, update the Sites Public Access Settings and grant Read access to the Lead object.





The new URL rewriting feature in Summer '10 provides the necessary means to implement a RESTful interface with full support for object URIs and linking.

Visualforce Source Code for rest.page
<apex:page controller="RESTController" action="{!processRequest}" 
contentType="application/x-JavaScript; charset=utf-8" showHeader="false" 
standardStylesheets="false" sidebar="false">
{!JSONResult}
</apex:page>
Apex Source Code for RESTController.cls
public with sharing class RESTController {
	public void processRequest(){
		validateRequest();		
    	if( HasError )
    		return;
    	
    	//Add support for other types of verbs here
    	processGetQuery();
    }
    
    static final string ERROR_MISSING_SOQL_PARAM = 'Bad Request. Missing soql parameter';
    static final string ERROR_SOBJECT_MISSING	 = 'Bad Request. Could not parse SObject name from SOQL';
    static final string ERROR_FROM_MISSING		 = 'Bad request. SOQL missing FROM keyword';
    public void validateRequest(){
    	if(Query == null){
    		errorResponse(400, ERROR_MISSING_SOQL_PARAM);
    	}
    	else if(sObjectName == null){
    		//Force a get of object name property.
    		//Detailed error response should already be logged by sObjectName parser
    	}
    }
    
    public boolean HasError = False;
    private void errorResponse(integer errorCode, string errorMessage){
    	JSONResponse.putOpt('status', new JSONObject.value(errorCode));
    	JSONResponse.putOpt('error', new JSONObject.value(errorMessage));
    	HasError = True;
    }
        
    public void processGetQuery(){
    	Map<String, Schema.SObjectField> fieldMap = Schema.getGlobalDescribe().get(SObjectName).getDescribe().fields.getMap();
    	List<JSONObject.value> objectValues = new List<JSONObject.value>();
    	List<sObject> resultList = Database.query(Query);
 		
    	for(sObject obj : resultList){
    		JSONObject json = new JSONObject();
    		json.putOpt('id', new JSONObject.value( obj.Id ));
    		for(SObjectField field : fieldMap.values() ){
    			try{
    				string f = field.getDescribe().getName();
    				string v = String.valueOf( obj.get(field) );
    				json.putOpt(f, new JSONObject.value( v ));
    			}
    			catch(Exception ex){
    				//Ignore. Field not included in query
    			}
    		}
			objectValues.add(new JSONObject.value(json));
    	}
    	JSONResponse.putOpt('status', new JSONObject.value(200));
    	JSONResponse.putOpt('records', new JSONObject.value(objectValues));
    }
    
    private string m_query = null;
    public string Query{
    	get{
    		if(m_query == null && ApexPages.currentPage().getParameters().get('soql') != null){
    			m_query = ApexPages.currentPage().getParameters().get('soql');
    		}
    		return m_query;
    	}
    }

	static final string SOQL_FROM_TOKEN = 'from ';    
    private string m_sObject = null;
    public string sObjectName{
    	get{
    		if(m_sObject == null && Query != null){
    			string soql = Query.toLowerCase();
    			
    			integer sObjectStartToken = soql.indexOf(SOQL_FROM_TOKEN);
    			if(sObjectStartToken == -1){
    				errorResponse(400, ERROR_FROM_MISSING);
    				return null;
    			}
    			sObjectStartToken += SOQL_FROM_TOKEN.length(); 
    			
    			integer sObjectEndToken = soql.indexOf(' ', sObjectStartToken);
    			if(sObjectEndToken == -1)
    				sObjectEndToken = soql.length();
    			
    			m_sObject = Query.substring(sObjectStartToken, sObjectEndToken);
    			m_sObject = m_sObject.trim();
    			system.debug('m_sObject = ' + m_sObject);
    		}
    		return m_sObject;
    	}
    }
    
    private JSONObject m_jsonResponse = null;
    public JSONObject JSONResponse{
    	get{
    		if(m_jsonResponse == null)
    			m_jsonResponse = new JSONObject();
    		return m_jsonResponse;
		}
		set{ m_jsonResponse = value;}
	}
    
	public String getJSONResult() {
    	return JSONResponse.valueToString();
	}
	
	public static testMethod void unitTests(){
		RESTController controller = new RESTController();
		controller.processRequest();
		system.assertEquals(True, controller.HasError);
		system.assertEquals(True, controller.JSONResponse.has('status'));
		system.assertEquals(400, controller.JSONResponse.getValue('status').num);
		system.assertEquals(True, controller.JSONResponse.has('error'));
		system.assertEquals(ERROR_MISSING_SOQL_PARAM, controller.JSONResponse.getValue('error').str);
		
		controller = new RESTController();
		ApexPages.currentPage().getParameters().put('soql', 'select Id fro Lead');
		controller.processRequest();
		system.assertEquals(True, controller.HasError);
		system.assertEquals(True, controller.JSONResponse.has('status'));
		system.assertEquals(400, controller.JSONResponse.getValue('status').num);
		system.assertEquals(ERROR_FROM_MISSING, controller.JSONResponse.getValue('error').str);
		
		controller = new RESTController();
		ApexPages.currentPage().getParameters().put('soql', 'select Id from Lead');
		controller.processRequest();
		system.assertEquals(False, controller.HasError);
		system.assertEquals('Lead', controller.sObjectName);
		
		Lead testLead = new Lead(FirstName = 'test', LastName = 'lead', Company='Bedrock', Email='fred@flintstone.com');
        insert testLead;
        
        controller = new RESTController();
		ApexPages.currentPage().getParameters().put('soql', 'select Id from Lead where email=\'fred@flintstone.com\'');
		controller.processRequest();
		system.assertEquals(False, controller.HasError);
		system.assertEquals('Lead', controller.sObjectName);
		system.assertEquals(True, controller.JSONResponse.has('records'));
	}
}
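The sObjectName getter above is a plain string scan for the FROM keyword; the same logic, sketched in Python for illustration:

```python
# Same FROM-token scan as RESTController.sObjectName: lowercase the SOQL,
# find "from ", then take the word that follows it.

FROM_TOKEN = "from "

def sobject_name(soql):
    """Extract the object name after FROM, or None if FROM is missing."""
    lowered = soql.lower()
    start = lowered.find(FROM_TOKEN)
    if start == -1:
        return None  # the controller responds with a 400 in this case
    start += len(FROM_TOKEN)
    end = lowered.find(" ", start)
    if end == -1:
        end = len(soql)  # FROM clause runs to the end of the query
    return soql[start:end].strip()

print(sobject_name("select Id from Lead where LastName='Smith'"))  # Lead
print(sobject_name("select Id fro Lead"))                          # None
```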
Friday, 28 May 2010 12:34:08 (Pacific Daylight Time, UTC-07:00)
# Friday, 14 May 2010

What if the buildings you worked in could participate in Salesforce Chatter feeds? What if the products you shipped could automatically create Cases in Salesforce when they needed servicing? More objects are becoming embedded with sensors and gaining the ability to communicate. This is enabling the next major advancement in the cloud; the Internet of "Things".

Force.com provides an ideal platform for sensor data with the ability to relate information in the physical world to native or custom objects. My previous blog post on Chatter highlighted the Salesforce user experience and ability for people to interact in the cloud. This post demonstrates using Force.com and Chatter to capture information from objects in the physical world and posting to Chatter feeds using the Chatter web services API.

This application is a very basic Facility Management app. There is a single custom object named "Building" that is made up of many "Assets", such as Conference Room, Main Entry, Air Conditioner, and Heating System. (See this application in action in the video at the end of this blog post).

Sensors on these assets report their readings to Salesforce in the form of Chatter FeedPost records so that when someone walks into a conference room, the Asset record for that room is updated with Chatter information to the effect of "Motion detected in Conference Room".

The feed posts appear to be created by a Salesforce User named "Environment Bot". This is essentially an API user account for reporting environmental sensor activity.

People can comment on the bot's posts. For example, building maintenance personnel may notice the temperature increasing in some rooms and post comments like "Hey, is anyone on this? The AC appears to be broken on the 3rd floor". Or, a night watchman may notice movement around the Main Entry after business hours and log some comments about what he noticed on patrol. Chatter Bot also has the ability to post pictures and share them as links to Chatter Feeds.

Because the facility management application is utilizing the Chatter API, other Chatter enabled apps may be used to augment and enhance the application. For example, installing the Chatter Timelines application from Ron Hess provides a nice linear visualization of what sensor events occurred and when.

The Chatter Bot is built using an Arduino Duemilanove electronics prototyping platform with Ethernet shield and a Radio Shack breadboard with motion and light sensors.

The Arduino sketch source code just runs in a loop polling the sensors and then notifies Salesforce via a proxy service when environmental changes are detected.
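The essential idea in that loop is edge detection: notify only when a reading differs from the previous poll, rather than on every pass. A Python sketch of the pattern (names are illustrative, not from the sketch itself):

```python
def watch(readings, notify):
    """Report only state transitions, not every poll, like the Chatter Bot
    loop: readings yields successive sensor states; notify fires on change."""
    last = None
    for state in readings:
        if state != last:   # edge detected: state changed since last poll
            notify(state)
            last = state

events = []
watch(iter(["off", "off", "on", "on", "off"]), events.append)
print(events)  # ['off', 'on', 'off']
```

This keeps the Chatter feed to meaningful events instead of one post per polling interval.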

I initially designed the Chatter Bot to talk directly to the cloud, but later discovered there is more benefit in having bots communicate with Salesforce through a proxy service.

Industry applications
There are a number of possible industry applications that can leverage this framework:

  • Continuous Emissions Monitoring (CEM) Systems
  • Home / Business Alarm Systems
  • Shipment / Automobile location tracking
  • Environmental Control Systems
  • Healthcare Biosensors

If you'd like to learn more about interfacing Salesforce with the physical world via sensors, then please vote for my proposed session "The Chatter of Things" for Dreamforce in December 2010 to see Chatter Bot live and in action.

The number of objects far exceeds the number of people and there is great potential in using Force.com to enable the Internet of Things. There are many more enhancements I'll be making to this platform. I look forward to sharing them.

Video
This video provides a brief demonstration (4:44) of the Facility Management Chatter Bot in action.

Friday, 14 May 2010 09:46:00 (Pacific Daylight Time, UTC-07:00)
# Thursday, 29 April 2010

The VMForce value proposition:
  1. Download Eclipse and SpringSource
  2. Signup for a Salesforce Development account and define your data model
  3. Write your Java app using objects that serialize to Salesforce
  4. Drag and drop your app onto a VMWare hosted service connected to Force.com to deploy
The partnership breaks down as:
  1. VMWare hosts your app
  2. Salesforce hosts your database
The 2 are seamlessly integrated so that Java Developers can effectively manage the persistence layer as a black box in the cloud without worrying about setting up an Oracle or MySQL database, writing stored procedures, or managing database performance and I/O.

This is all great news for Java Developers. It's yet another storage option on the VMWare cloud (I'm assuming VMWare remains fairly agnostic beyond this relationship and Force.com becomes one of many persistence options to Spring Source developers).

For larger organizations already using Salesforce but developing their custom Java apps, this opens up some new and attractive options.

Existing Salesforce Developers may have wondered if Java would replace Apex and Visualforce, prompting a Salesforce blog post aptly titled "In case you were wondering...". In short, "no". Apex and Visualforce will continue to evolve and be the primary platform for developing Salesforce native apps. I personally will continue to use Apex and Visualforce for all development when the data is stored in Salesforce unless compelling requirements come along to use VMForce (most likely that have particular DNS, bandwidth, or uptime needs).

So why the partnership between VMWare and Salesforce? When Salesforce announced Apex back in 2007 it was met with broad acceptance, but some common criticisms were:
  • Why another DSL (Domain Specific Language)?
  • Why can't I leverage my existing Java skills to write business apps?
  • Salesforce is written in Java. Can I upload my existing Java apps to the cloud?
These criticisms were coupled with some looming 800 pound Gorillas in the room (Amazon and VMWare) pushing virtualization as the basis for cloud computing while Salesforce promoted the non-virtualized, multi-tenant path to cloud computing.

They can't both be right. Or can they? CIOs are being bombarded with virtualization as a viable cloud computing solution, so I think Salesforce has wisely taken a step back and taken a position that says "We do declarative, hosted databases better than anyone else. Go ahead and pursue the virtualization path for your apps and leverage our strength in data management as the back end".

Over time, the bet is that VMForce customers will also discover the declarative configuration tools for form-based CRUD (Create/Read/Update/Delete) apps can meet the rapid prototyping and development needs of most any line of business apps.

For object-oriented developers, Salesforce provides a persistence layer that meets or exceeds any ORM (Object Relational Mapping) or NoSQL solution. The impedance mismatch between objects and relational databases is widely known, and VMForce solves this problem very elegantly.

I think other ORM Developer communities, such as Ruby on Rails Developers, will appreciate what is being offered with VMForce, prompting some to correctly draw parallels between VMForce and Engine Yard.

In my experience working with Azure, I cannot emphasize enough how difficult it was to work through the database and storage aspects of even the simplest application design. Sure, C# is a dream to work with (compared to both Java and Apex) and ASP.NET works well enough for most applications, but Microsoft leaves so many data modeling and storage decisions to the Developer in the name of flexibility, which ultimately means sacrificing simplicity, reliability, and in some cases scalability.

Some final thoughts, observations and questions on VMForce:
  • Are there any debugging improvements when using VMForce relative to Apex/VF?
  • The connection between VMWare and Salesforce is presumably via webservices and not natively hosted in the same datacenter. Does this imply some performance and latency tradeoffs when using VMForce? (Update: No. Per the comment from David Schach, the app VM is running in the same datacenter as the Force.com DB)
  • Licensing: No idea what the pricing will be. Will there be a single biller or will Developers get separate invoices from VMWare and Salesforce for bandwidth/computing and storage?
  • It strikes me as quite simple to develop Customer/Partner portals or eCommerce solutions in Java that skirt the limitations of some Salesforce license models when supporting large named-user/low authentication audiences. Will Salesforce limit the types and numbers of native objects that can be serialized through VMForce?
  • Will VMForce apps be validated and listed on the AppExchange? If so, will they be considered hybrid or native? What security review processes will be enforced?
  • Why only the teaser? Ending a great demo with "and it should be available sometime later this year" just seemed deflating. I think Business Users and Developers respond to this kind of promotion much differently. It would be far better to leave Developers with some early release tools in hand immediately after the announcement and capitalize on the moment. Business Users, however, can be shown Chatter, and other future release features, to satiate their long term interests.

Update: Jesper Joergensen has an excellent blog post that answers many of these questions. Thanks Jesper!

Thursday, 29 April 2010 14:38:34 (Pacific Daylight Time, UTC-07:00)
# Tuesday, 09 March 2010

If you just want the high level summary, I can spare you the time of reading this lengthy blog article and summarize Chatter in the following image.

Salesforce Chatter is basically Facebook for the enterprise and one of the greatest things to come along since sliced bread (besides Jack Bauer). Chatter is a collaboration platform that supports status publishing and the ability to follow people and objects (Salesforce records).

After seeing a Tweet with instructions to email iwantchatter@salesforce.com to participate in the pilot program, I contacted Salesforce and got on the waiting list. I executed some standard legal agreements (Chatter is still considered pre-launch) and Chatter was enabled in our Salesforce org within a couple days. I would suggest "selling" your org in the body of your pilot program request with facts that might help the already overwhelmed Salesforce staff determine which clients might make the best case studies for using Chatter.

Chatter enables the new UI theme, which I've been requesting for several weeks since the launch of Spring '10. Awesome news, since this was not available with the initial Spring '10 rollout.


Setting expectations with users.
I was the eager admin excited to get my hands on new features, then it dawned on me that other users might have questions about the change. In a company of < 10 users, this is no big deal. But I'm guessing a larger org may want to do a more methodical rollout.

After enabling Chatter I sent out an email to everyone simply stating "This is going to rock. If you've used Facebook, then you'll understand what the new feature is about. There's also a new theme activated."

Some Salesforce admins on Twitter have suggested just enabling the new UI, setting off the fire alarm as a distraction, then running out of the building. Whatever works! :-) My feeling is that there should be no delay enabling the new UI. The majority of users will love it.


Email Alerts
One feature that really stands out is the ability to receive an email alert whenever certain events occur. I think this is a smart move on Salesforce's part. Each user has the new ability to enable/disable email alerts under Personal Setup "My Chatter Settings".

As much as Google Wave, Wikis, and other social business software may promote the benefits of replacing email with collaboration platforms, it's just never really panned out. There are just too many Outlook and Gmail users out there with investments in email filters and routing rules for driving business process. I left these features enabled (the default setting).

Based on past experience, I had a concern that Chatter emails might eventually overwhelm my inbox (which I have a particular GTD obsession for managing), so before proceeding any further I created a GMail label and filtering rule specifically for Chatter.

Now all emails from "Salesforce Chatter" automatically get tagged and sorted into their own folder in GMail. The equivalent can be easily accomplished in Outlook Rules. This might be a good tip for Salesforce Admins to share in their Chatter rollout email.


Chatter Settings and Feed Tracking

Administrators can define which objects are enabled for Chatter collaboration and which fields on those objects will trigger automatic Chatter updates.

This is a very simple and easy to use 2 panel user interface with Objects on the left and fields on the right. You select which fields will trigger a Chatter alert when modified. The left panel has an excellent UI element that tells you how many fields are being tracked on that particular object, so you don't have to drill down to each object one at a time to identify feed tracking hot spots.

If you've worked with object history tables in Salesforce, you'll be familiar with what this interface is providing. Now with Chatter, in addition to logging history changes, you're also posting messages to the Chatter stream. History tables and Chatter feeds are 2 completely separate features, although they are semantically the same.

Some objects had feed tracking enabled by default. Most did not. Of the ones enabled, they had 2-5 fields already pre-selected. I could not discern any particular pattern as to how or why certain defaults were configured. I'd say the defaults look "balanced" and it does appear that someone put some thought into a reasonable amount of feed traffic on frequently used CRM object/field combinations. There is a "Restore Defaults" link in the right panel of each object. Clicking it restores the defaults.


People and Profile Tabs
One final Administrative step is to add the People and Profile tabs to your main applications. Just as you can view/manage your profile and find your friends in Facebook, Chatter provides Profile and People tabs to accomplish similar tasks.

Chatter will work without these tabs, but users will only be able to incrementally discover other people who comment on particular objects. I added these 2 tabs to all our applications to get the full benefit of Chatter and apply some consistency in the UI. The People tab provides a list view of all "Colleagues" within the Salesforce Org. The Profile tab allows users to define how they appear to other people; including photo, status, and description.

The "Update Photo" feature, with its image cropper, is probably one of the first features Chatter users will use on the Profile page.

I found it interesting that I could, as a System Administrator, edit other people's profiles. That initially struck me as "big brother-ish" since I'm so accustomed to passively using social media platforms, and not actually administering them. The Chatter Profile pages also contain a link to the existing User Detail page template, which I know Admins will appreciate.

One thing I really like about Chatter is that Salesforce didn't complicate the configuration by providing a full access control list (ACL) wrapper with specific Read/Write permissions per object. If you can view an object, you can jump right in and chime in on Chatter without wondering whether you have read-only permission to watch what other people are saying but can't contribute yourself.

Granted somebody at some time likely raised the concern "But what if some CEO only wants employees to read his status messages and not comment on them?". I'm glad Salesforce resisted that level of access permissions in Chatter.


Following
The first introduction to "Following" will likely be on the People page, where users are given the opportunity to subscribe to what particular people are posting as their status message. In a small org such as ours, you can follow everyone with just a few clicks. But it made me wonder if a "Follow All" button might be handy for larger orgs.

Chatter uses what is commonly referred to as an "asymmetric follow" architecture. In other words, I can follow you but you don't necessarily have to follow me. This is how Twitter works. Facebook, however, uses a symmetric system where we must both mutually agree to be friends to follow each others posts and activities.

It makes sense Salesforce would not want to use Facebook's symmetric following because it's assumed right out of the box that all users are colleagues in a single organization. You only need to decide which colleagues' activity you want in your stream.
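The asymmetric model is easy to picture as a directed graph of follow edges, where a symmetric (Facebook-style) relationship requires edges in both directions. An illustrative Python sketch, not Salesforce's implementation:

```python
# Directed follow graph: an edge a -> b means "a follows b".
# Hypothetical data; user names are placeholders.
follows = {
    "alice": {"bob"},   # alice follows bob...
    "bob": set(),       # ...but bob does not follow alice back
}

def is_following(a, b):
    """Twitter/Chatter-style asymmetric follow: one directed edge suffices."""
    return b in follows.get(a, set())

def is_mutual(a, b):
    """Facebook-style 'friendship' requires both directed edges."""
    return is_following(a, b) and is_following(b, a)

print(is_following("alice", "bob"))  # True  (asymmetric follow is enough)
print(is_mutual("alice", "bob"))     # False (no symmetric relationship)
```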

Maybe one day when Chatter is enabled in a Salesforce-to-Salesforce configuration it may be beneficial to limit who is following your activity (for example, would Michael Dell want all his suppliers following his Chatter simply because a Salesforce-to-Salesforce bridge was enabled? I'd guess not... but only Michael can answer that question).

I can see dialogues taking place in the workspace along the lines of "Yeah, I track that industry pretty closely. Follow me on Chatter if you want more information".


Using Chatter
I never really used the Salesforce Home page for much more than reviewing my Tasks lists. 99% of my time in Salesforce has always been spent working in records. But that now changes with Chatter, since the Home page is the central hub for aggregating all the people and content you are following. The home page is now "the business stream" and the potential opportunity for exploiting its power is huge.


(Note about Screenshot: Yes, Chatter can be pretty boring when you're the first person using it. Fortunately, I have the StanBot API User to keep me company (future post) until adoption catches on with the others :-) )

The first time you drill down on any record details with Chatter feeds enabled you're prompted with some next step options and the option to view a 2 minute video on Chatter.

As developer, we can all appreciate the detail that goes into not only developing a new feature, but also deflecting support calls and questions with simple, easy to understand tutorials and documentation. I give Salesforce 5 out of 5 stars here.

Chatter is so well designed and so very similar to Facebook and other social apps, that I'd be surprised if 80% of Salesforce users couldn't click on "Close", skip the tutorials, and figure out most of Chatter on their own.


Collaboration and Development In The Cloud
While there are many cool features in Chatter, the fact that this platform is hosted in the cloud and can be extended to bring pretty much any web service into the business stream is what makes it so powerful. There is no software to install, any Admin can set up Chatter in just a few minutes, and collaboration is baked into the platform as a core feature (i.e., there's no additional license fee to use Chatter).

Part 2 of my Chatter review will get into the specifics of Chatter enabling existing Salesforce apps and taking a peek into the Chatter API and new types of apps that can be developed. Stay tuned!

Tuesday, 09 March 2010 13:55:26 (Pacific Standard Time, UTC-08:00)