# Sunday, 27 May 2012



The announcement that Facebook intended to acquire Instagram made headlines. While those headlines focused primarily on the $1B valuation and the 50-million-user audience, the true story went largely untold.

Instagram's story marks an inflection point in computing history: a start-up built on cloud infrastructure is no longer the exception, but the rule, for building a successful company.

Consider the fundamentals of their meteoric rise over 2 years:
  • 2 people start with an idea
  • Create an iPhone application with hosting on Amazon Web Services (AWS)
  • Distribute the app on Apple's app store
  • Grow an audience
  • Evolve and enhance the product
  • Leverage elastic scalability on AWS
The costs for a v1 product were extraordinarily low and gave them the opportunity to pivot at any point without being weighed down by capitalized infrastructure investments (e.g. buying their own servers and doing their own hosting in a colo facility).

At the time of the acquisition, Instagram had added only 11 more employees, most of them hired in the few months leading up to the deal.

Some tips and lessons learned for other start-ups considering embracing the cloud as their development and hosting platform:

Choose a platform and master it: Instagram selected Amazon Web Services, but there are many options available. Don't hedge your bets by designing for platform portability. Dig deep into the capabilities of your chosen platform and exploit them.

Think Ephemeral: Your cloud web servers and storage can disappear at any point. Anticipate and design for this fact. If you have a client-server background, take a step back and grasp the concepts of ephemeral storage and computing before applying old-world concepts to the cloud.

Share Knowledge: This is still a new frontier and we're all constantly learning new tips and tricks about how to best utilize cloud computing. Many people at Facebook (including myself) were fans of the Instagram Engineering blog long before the acquisition. Share what you've learned and others will reciprocate.

Build for the long term: The Instagram team did not build a company to be sold. They built a company that could have continued to grow indefinitely, and perhaps even overtaken competing services. There are many legacy business processes and consumer applications whose growth is restricted by their architecture. Embrace the cloud with the intent to build something enduring.

Sunday, 27 May 2012 10:26:04 (Pacific Daylight Time, UTC-07:00)
# Sunday, 30 October 2011

Integrating CRM with ERP/financial systems can be a challenge, particularly when the two systems come from different vendors, which is often the case when using Salesforce.com CRM.

At Facebook, we've gone through several iterations of integrating Salesforce with Oracle Financials and the team has arrived at a fairly stable and reliable integration process (kudos to Kumar, Suresh, Gopal, Trevor, and Sunil for making this all work).

Here is the basic flow (see diagram below):

1) The point at which Salesforce CRM needs to pass information to Oracle is typically once an Opportunity has been closed/won and an order or contract has been signed.

2) Salesforce is configured to send an outbound message containing the Opportunity ID to an enterprise service bus (ESB) that is configured to listen for specific SOAP messages from Salesforce.

3) The ESB accepts the outbound message (now technically an inbound message on the receiver side) and asserts any required security policies, such as whitelisting the source of the message.

4) This is the interesting part. Because the Salesforce outbound message wizard only allows for the exporting of fields on a single object, the ESB must call back to retrieve additional information about the order, such as the Opportunity line items, Account, and Contacts associated with the Order.

In Enterprise Application Integration (EAI) speak, this is referred to as a Content Enrichment pattern.

5) An Apex web service on Salesforce receives the enrichment request, queries all the additional order details, and returns a canonical XML message back to the ESB (a sketch of such a service follows these steps).

6) The ESB receives the enriched message and begins processing validation and de-duplication rules, then transforms the message into an object that can be consumed by Oracle.

7) The ESB then inserts the Order into Oracle.

8) The Oracle apps API inserts/updates the various physical tables for the order and throws any exceptions.
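Step 5 is where most of the Salesforce-side custom work lives. Below is a minimal Apex sketch of what that enrichment callback could look like; the class name, method name, and XML shape are hypothetical and greatly simplified compared to a real canonical message.

global class OrderEnrichmentService {
    // Hypothetical callback endpoint: the ESB passes back the Opportunity ID from the
    // outbound message and receives an XML payload with the related order details.
    webservice static String getOrderDetails(Id opportunityId) {
        Opportunity opp = [SELECT Id, Name, Account.Name,
                                  (SELECT PricebookEntry.Name, Quantity, UnitPrice
                                   FROM OpportunityLineItems)
                           FROM Opportunity
                           WHERE Id = :opportunityId];

        // Build a simplified canonical XML message for the ESB to validate and transform
        String xml = '<Order>';
        xml += '<OpportunityId>' + opp.Id + '</OpportunityId>';
        xml += '<OpportunityName>' + opp.Name + '</OpportunityName>';
        xml += '<Account>' + opp.Account.Name + '</Account>';
        for (OpportunityLineItem li : opp.OpportunityLineItems) {
            xml += '<Line product="' + li.PricebookEntry.Name + '" quantity="' + li.Quantity + '" price="' + li.UnitPrice + '"/>';
        }
        xml += '</Order>';
        return xml;
    }
}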

Sunday, 30 October 2011 17:15:23 (Pacific Standard Time, UTC-08:00)
# Sunday, 21 August 2011

Dreamforce 11 is just around the corner and fellow Facebook Engineer Mike Fullmore and I have been invited to speak at the following panel:

Enterprise Engineering
Friday, September 2
10:00 a.m. - 11:00 a.m.
Can you really develop at 5x a regular speed when you're at enterprise scale? In this session, a panel of enterprise technical engineers will discuss engineering best practices for the Sales Cloud, Service Cloud, Chatter and Force.com. Topics include security, sandbox, integration, Apex, and release management.

Speakers: Mike Leach, Facebook, Inc.; David Swidan, Seagate Technology LLC; Mike Fullmore, Facebook, Inc.

In case you're not able to attend, here are the high level points from our presentation. :-)

Moving Fast on Force.com

Facebook has been using Salesforce for several months to rapidly prototype, build, and deploy a number of line-of-business applications to meet the needs of a hyper-growth organization. This presentation shares some best practices that have evolved at Facebook to help develop on the Force.com platform.

People

Before sharing details about Facebook's processes, methodologies, and tools, it's important to point out that the people on the enterprise engineering team are what really make things happen. Each Engineer is able to work autonomously and carry a project through from design to deployment. Great people instinctively take great pride in their work and consistently take the initiative to deliver awesomeness. I would be remiss not to point them out here. All these Engineers operate at MVP levels.
The effort that goes into recruiting a great development team should not be underestimated. Recruiting an awesome team involves several people doing hundreds of phone screens and dozens of interviews. Facebook is in a unique situation in its history and we don't take it for granted that we have access to unprecedented resources and talent. It's actually very humbling to work with such a stellar team at such a great company.

("yes" we're still hiring)

Business Processes

Projects and applications generally fall into one of 9 major process buckets. Engineers at Facebook seeking to have a high impact will typically have either breadth or depth of knowledge. Some focus on the long-term intricate details and workflows of a single business process, while others move around and lead several concurrent, short-term development efforts in any business area.

Sandbox->Staging->Deploy

Each project has its own development sandbox. Additionally, each Engineer may also have their own personal sandbox. When code is ready to be deployed, it's packaged using the Ant migration tool format and typically tested in 2 sandboxes: a daily-refreshed staging org to ensure all unit tests run and there are no metadata conflicts, and a full sandbox to give business managers an opportunity to test using real-world data.

Change sets are rarely used, but may be the best option for first-time deployments of large applications that have too many metadata dependencies to be reliably identified by hand.

The online security scanner is used as a resource during deployment to identify any potential security issues. A spreadsheet is used for time-series analysis of scanner results to understand code quality trends.

Once a package has been reviewed, tested, and approved for deployment, a release Engineer deploys the package to production using Ant. This entire process is designed to support daily deployments. There are typically 3-5 incremental change deployments per week.

Obligatory Chaotic Process Diagram

"Agile" and "process" are 2 words that are not very complimentary. Agile teams must find an equilibrium of moving fast yet maintaining high quality code. Facebook trusts every Engineer will make the right decisions when pushing changes. When things go wrong, we conduct a post-mortem or retrospective against an "ideal" process to identify what trade-offs were made, why, and where improvements can be made.

All Engineers go through a 6 week orientation "bootcamp" to understand the various processes.

Typical Scrum Development Process

The development "lingua franca" within Silicon Valley, and for most Salesforce service providers, tends to be Scrum. Consultants and contractors provide statements of work and deliver progress reports around "sprints". Scrum training is readily available by a number of agile shops.

This industry standard has been adopted internally and keeps multiple projects and people in sync. Mike Fullmore developed a Force.com app named "Scrumbook" for cataloguing projects, sprints, and stories.


A basic Force.com project template with key milestones has been created to give Project Managers an idea of when certain activities should take place. Whenever possible we avoid a "waterfall" or "big bang" mentality, preferring to launch with minimal functionality, validate assumptions with end-users, then build on the app in subsequent sprints.


Manage The Meta(data)

The general line of demarcation within IT at Facebook is:
  • Admins own the data
  • Engineers own the metadata
The Salesforce Metadata API is a tremendously powerful resource for scaling an enterprise, yet remaining highly leveraged and lean. We've developed custom metadata tools to help us conduct security audits and compare snapshot changes.


(Credit to our Summer Intern, Austin Wang, who initiated the development of internal tools!)

Change Management

The advantage of using Salesforce is the ability to use declarative configuration and development techniques to produce functional applications, then use the more powerful Apex and Visualforce tools to extend apps around core business competencies. "Clicks over code" is a common mantra in Salesforce shops, and Facebook is no exception.

A change management matrix is a useful tool for determining when "clicks-over-code" is preferred over a more rigorous change management process.

Sunday, 21 August 2011 11:52:04 (Pacific Daylight Time, UTC-07:00)
# Sunday, 22 May 2011
DocuSign's Hackathon on May 15th and 16th, 2011 brought out some serious talent to DocuSign's new office in downtown San Francisco. For 2 days, developers took a shot at the $25K purse in 3 categories: consumer, enterprise, and mobile.

My app, SocialSign, was based on the T-Mobile Fave 5 campaign:
  • Select 5 Facebook friends
  • Sign an unlimited voice/text contract through DocuSign
  • Merchants manage the contract in Salesforce.
I admittedly spent about 75% of the hackathon working with the Facebook app canvas and Salesforce Sites APIs, which probably explains why SocialSign placed as runner-up in the consumer category (other developers got far more creative with the actual Docusign API).

In the end, it was a great opportunity to demonstrate the feasibility of conducting social commerce through Facebook using Salesforce CRM to manage contacts, orders, and contracts. Thank you DocuSign for hosting such a great event!

(Video demo of SocialSign available here and embedded below)

Sunday, 22 May 2011 20:24:38 (Pacific Daylight Time, UTC-07:00)
# Sunday, 27 March 2011
JAWS is "Just A Web Shell" framework developed by the Facebook IT group for running Force.com web applications on iOS (iPhone/iPad) devices.

JAWS is a "hybrid" mobile application framework that provides a native iOS application on the mobile desktop that instantiates a web browser to a Force.com web page.




Prerequisites:
The Force.com-Toolkit-JAWS-for-iOS project on GitHub includes all of the following steps and source code for building a mobile iOS app on Force.com.

1) Start in Xcode by creating a new View-based iPad or iPhone application.



2) Define a UIWebView controller with a default URL of https://login.salesforce.com.
Append the retURL parameter to the URL to define which Visualforce page to load upon authentication.
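For example, assuming a hypothetical Visualforce page named MobileHome, the initial URL loaded by the UIWebView might look like:

https://login.salesforce.com/?retURL=%2Fapex%2FMobileHome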


3) Launch the Interface Designer and create a simple toolbar with UIWebView to emulate a basic browser UI.



4) Build and launch the iOS simulator from Xcode to view the Salesforce login page loaded within the UIWebView.



5) Upon login, the return URL (retURL) Visualforce page is loaded. In this case, the page uses the jQuery Mobile framework to display a simple mobile UI.
 


That's it! The native web shell runs the Visualforce page directly on iOS. With some crafty mobile-friendly UI design in HTML5, the end-user may not even detect the app is actually running entirely in the cloud on Force.com!
Sunday, 27 March 2011 10:38:30 (Pacific Standard Time, UTC-08:00)
# Saturday, 12 March 2011



One thing I really enjoy about Force.com development is the ability to learn something new everyday. I subscribe to the blogs of just about everyone on the inaugural Force.com MVPs list and make it a standard practice to follow what they are doing and learn new tricks. In addition to blogging, some of these Developers have written excellent books on Force.com and Google App Engine Development (I have them all on my shelf) or make significant contributions to the open source community. Congrats to all of them.

Thank you Salesforce for including me in this list. I'm very honored to call these guys my "peers".

Saturday, 12 March 2011 18:16:24 (Pacific Standard Time, UTC-08:00)
# Saturday, 26 February 2011
Hannes raises a great question about Change Sets in my previous blog post on Release Management.

"One thing I really like about the deployment connections are, that you have a deployment history right in your salesforce live-org. With comments and an insight into the packages.

Which pros do you see as for deploying with the ant script?"


We occasionally use change sets at Facebook. The decision to use Ant vs. Change Sets is akin to a Vi vs. Emacs religious war. They both accomplish the same thing, just in different ways.

General best practice guidelines:
  • Initial deployments of new custom applications are much easier using Change Sets
  • Incremental deployments are more easily managed using Ant
The decision to standardize on Ant packages is largely for compliance reasons. When there is a need to internally document who changed what and when, an SVN repository of all changes with author and reviewer notes satisfies most compliance requirements (such as Sarbanes-Oxley).

Some high level thoughts on the 2 approaches:
  • Some metadata simply is not packageable via Ant, therefore Change Sets must be used
  • The dependency checker in the Change Set packager, while not 100% perfect, is much more reliable than manually adding 100+ individual components
  • In the absence of an IT-Engineer who can create Ant packages, business process owners and non-technical users always have the fallback option of using change sets
  • Prior to Spring '11, invalid inbound change sets could not be deleted, resulting in an accumulation of cruft
  • The ability to delete deployed change sets in Spring '11 removes the ability to audit change history (a compliance violation)
  • The ephemeral nature of some sandboxes means constantly re-establishing deployment connections.
Saturday, 26 February 2011 10:07:06 (Pacific Standard Time, UTC-08:00)
# Monday, 21 February 2011
Managing the release of Force.com changes in a fast paced environment requires an agile development methodology with a focus on:
  1. Continuous Integration - Support daily deployments
  2. Minimalism - Incrementally deploy only what is needed, and nothing more
  3. Reliability - Changes should be 100% backwards compatible and not break existing functionality
The Force.com Migration Tool is a key component to a successful release management process.

Release Management Overview
  • Start the refresh of a sandbox named "staging"
  • Package a release using Eclipse
  • Create an ant package and validate the deployment against the staging sandbox
  • Deploy the changes to production
It's assumed that standard change approvals, code review, and user acceptance testing have already taken place prior to these steps.

Setting Up The Environment

(Note: The following examples all use the Mac Bash shell. The Windows shell counterparts for making directories and copying files are very similar.)

Download the Force.com Migration Tool and unzip to a temp directory.

Create a directory named something like "changes" for managing Ant deployment packages, and copy the ant-salesforce.jar file to this directory. (It may be easier to manage changes from within the Eclipse workspace folder.)
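For example, assuming the migration tool was unzipped to a hypothetical ~/temp/salesforce_ant directory:

mkdir ~/Documents/workspace/changes
cp ~/temp/salesforce_ant/ant-salesforce.jar ~/Documents/workspace/changes/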

This directory will contain 2 files used by all releases.
~/Documents/workspace/changes/ant-salesforce.jar
~/Documents/workspace/changes/sforce.properties

Manually create the sforce.properties file from the following template.
# sforce.properties
#

# =================== Sandbox Org Credentials ===============
sandbox.username	= user@domain.com.staging
sandbox.password	= password
sandbox.url			= https://test.salesforce.com

# =================== Package Deployment Credentials ================
deploy.username		= user@domain.com
deploy.password		= password
deploy.url			= https://login.salesforce.com

# Use 'https://login.salesforce.com' for production or developer edition (the default if not specified).
# Use 'https://test.salesforce.com' for sandbox.
Creating the Deployment Package

The Force.com Migration Tool expects a structured directory that includes a package.xml file describing the changes to be deployed. You can manually create this structure but it's far easier to let Eclipse create the package for you.
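For reference, a minimal hand-written package.xml that deploys a single Apex class (the class name here is hypothetical) looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>MyUtilityClass</members>
        <name>ApexClass</name>
    </types>
    <version>20.0</version>
</Package>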

1) Begin by opening Eclipse and starting a new Force.com project.


2) Give the project a change-specific name and provide connection credentials to the source development environment (not production).


3) When given the option to choose initial project contents, check the "Selected metadata components" option and click "Choose..."


4) Expand the component tree folders and select only the components to be deployed (see principle #2 on minimalism: "Deploy only what is needed, and nothing more").


5) You should now have a project in the structure required by the Force.com Migration Tool.


6) Create a new sub-directory under the changes directory, in this case named CHG12345, and copy the 'src' directory to this folder.
mkdir ~/Documents/workspace/changes/CHG12345
cp -r ~/Documents/workspace/CHG12345/src ~/Documents/workspace/changes/CHG12345

7) Copy the following ant build.xml template to the change directory.

<project name="Salesforce Package Deployment Script" default="usage" basedir="." xmlns:sf="antlib:com.salesforce">
	<property file="../sforce.properties"/>

	<target name="usage">
		<echo>Usage: ant [task]
Task options:
ant validate: Simulates a deployment of the package and returns pass/fail debugging information
ant deploy: Deploys the package items defined in package/package.xml
ant undeploy: Rolls back deployed items (must follow ant deploy)
		</echo>
	</target>
	
	<target name="validate">
		<sf:deploy 
			username="${sandbox.username}" 
			password="${sandbox.password}" 
			serverurl="${sandbox.url}" 
			deployRoot="src" 
			logType="Detail" 
			checkOnly="true"
			runAllTests="true"
			pollWaitMillis="10000"
			maxPoll="200" />
	</target>
	
	<target name="deploy">
		<sf:deploy 
			username="${deploy.username}" 
			password="${deploy.password}" 
			serverurl="${deploy.url}" 
			deployRoot="src"
			pollWaitMillis="10000" 
			maxPoll="200">
		</sf:deploy>
	</target>
	
	<target name="undeploy">
		<sf:deploy username="${deploy.username}" password="${deploy.password}" serverurl="${deploy.url}" deployRoot="src"/>
	</target>
</project>


Some notes on the build.xml template
  • Notice that the environment variables are pulled in from sforce.properties in the parent directory.
  • The "validate" target uses a sandbox for testing the deployment
  • Winter '11 introduced the ability to refresh dev and config sandboxes daily. Validating a deployment against a recently refreshed sandbox will identify any potential production deployment issues upfront
  • The "deploy" target uses the production credentials
  • "undeploy" must be run immediately after a "deploy" to work. Undeploy has some quarky requirements and is not always known to work as expected
Deploying the Package

All that is left to do is run ant from the change directory and call the specific validate or deploy targets. If validation or deployment times out, adjust the polling attributes on the targets.

Note: It's common to check-in the package to a source control repository prior to deployment to production.

cd ~/Documents/workspace/changes/CHG12345
ant validate
ant deploy
Monday, 21 February 2011 21:00:00 (Pacific Standard Time, UTC-08:00)
# Sunday, 20 February 2011
I've decided to borrow an old Harley-Davidson saying as my definition for the cloud.

"If I have to explain, you wouldn't understand"

Once you're riding the cloud development wave, you'll never turn back :-)

Sunday, 20 February 2011 19:38:13 (Pacific Standard Time, UTC-08:00)
# Sunday, 23 January 2011
Salesforce Administrators and Developers are routinely required to manipulate large amounts of data in a single task.

Examples of batch processes include:
  • Deleting all Leads that are older than 2 years
  • Replacing all occurrences of an old value with a new value
  • Updating user records with a new company name

The tools and options available are:
  1. Browser-based Admin features / Execute anonymous in System Log
  2. Data loader
  3. Generalized Batch Apex (Online documentation)
  4. Specialized Batch Apex
Option 1 (Browser-based Admin features) represents the category of features and tools available when directly logging into Salesforce via a web browser. The transactions are typically synchronous and subject to governor limits.

Option 2 (Data Loader) provides Admins with an Excel-like approach: download data using the Apex Data Loader, manipulate it on a local PC, then upload the data back to Salesforce. This is slightly more powerful than the browser-based tools, doesn't require programming skills, and is subject to the web service API governor limits (which are more generous). But it also requires slightly more manual effort and introduces the possibility of human error when mass updating records.

Option 3 (Generalized Batch Apex) introduces the option of asynchronous batch processes that can manipulate up to 50 million records in a single batch. It doesn't require programming (if using the 3 general-purpose utility classes provided at the end of this article) and can be executed directly through the web browser, but it is limited to the use cases supported by those utility classes.

Option 4 (Specialized Batch Apex) requires Apex programming and provides the most control of batch processing of records (such as updating several object types within a batch or applying complex data enrichment before updating fields).

Batch Apex Class Structure:

The basic structure of a batch apex class looks like:

global class BatchVerbNoun implements Database.Batchable<sObject>{
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query); //May return up to 50 Million records
    }
  
    global void execute(Database.BatchableContext BC, List<sObject> scope){       
        //Batch gets broken down into several smaller chunks
        //This method gets called for each chunk of work, passing in the scope of records to be processed
    }
   
    global void finish(Database.BatchableContext BC){   
        //This method gets called once when the entire batch is finished
    }
}
An Apex Developer simply fills in the blanks. The start() and finish() methods are both executed once, while the execute() method gets called 1-N times, depending on the number of batches.

Batch Apex Lifecycle

The Database.executeBatch() method is used to start a batch process. This method takes 2 parameters: an instance of the batch class and a scope.

BatchUpdateFoo batch = new BatchUpdateFoo();
Database.executeBatch(batch, 200);
The scope parameter defines the max number of records to be processed in each batch. For example, if the start() method returns 150,000 records and scope is defined as 200, then the overall batch will be broken down into 150,000/200 batches, which is 750. In this scenario, the execute() method would be called 750 times; and each time passed 200 records.

A note on batch sizes: Even though batch processes have significantly more access to system resources, governor limits still apply. A batch that executes a single DML operation may shoot for a batch scope of 500+. Batch executions that initiate a cascade of trigger operations will need to use a smaller scope. 200 is a good general starting point.

The start() method is called to determine the size of the batch then the batch is put into a queue. There is no guarantee that the batch process will start when executeBatch() is called, but 90% of the time the batch will start processing within 1 minute.

You can log in to Settings/Monitor/Apex Jobs to view batch progress.
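Progress can also be checked programmatically. Database.executeBatch() returns the Id of an AsyncApexJob record, which can be queried from Execute Anonymous; a minimal sketch, reusing the hypothetical BatchUpdateFoo class from above:

Id jobId = Database.executeBatch(new BatchUpdateFoo(), 200);
AsyncApexJob job = [SELECT Status, JobItemsProcessed, TotalJobItems, NumberOfErrors
    FROM AsyncApexJob WHERE Id = :jobId];
//Inspect how far along the batch is and whether any chunks have failed
System.debug(job.Status + ': ' + job.JobItemsProcessed + '/' + job.TotalJobItems
    + ' batches processed, ' + job.NumberOfErrors + ' errors');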


Unit Testing Batch Apex:
The asynchronous nature of batch Apex makes it notoriously difficult to unit test and debug. At Facebook, we use a general Logger utility that logs debug info to a custom object (adding to the governor limit footprint). The online documentation for batch Apex provides some unit test examples, but the util methods in this post use a shorthand approach to achieving test coverage.

Batch Apex Best Practices:
  • Use extreme care if you are planning to invoke a batch job from a trigger. You must be able to guarantee that the trigger will not add more batch jobs than the five that are allowed. In particular, consider API bulk updates, import wizards, mass record changes through the user interface, and all cases where more than one record can be updated at a time.
  • When you call Database.executeBatch, Salesforce.com only places the job in the queue at the scheduled time. Actual execution may be delayed based on service availability.
  • When testing your batch Apex, you can test only one execution of the execute method. You can use the scope parameter of the executeBatch method to limit the number of records passed into the execute method to ensure that you aren't running into governor limits.
  • The executeBatch method starts an asynchronous process. This means that when you test batch Apex, you must make certain that the batch job is finished before testing against the results. Use the Test methods startTest and stopTest around the executeBatch method to ensure it finishes before continuing your test.
  • Use Database.Stateful with the class definition if you want to share variables or data across job transactions (see the sketch after this list). Otherwise, all instance variables are reset to their initial state at the start of each transaction.
  • Methods declared as future are not allowed in classes that implement the Database.Batchable interface.
  • Methods declared as future cannot be called from a batch Apex class.
  • You cannot call the Database.executeBatch method from within any batch Apex method.
  • You cannot use the getContent and getContentAsPDF PageReference methods in a batch job.
  • In the event of a catastrophic failure such as a service outage, any operations in progress are marked as Failed. You should run the batch job again to correct any errors.
  • When a batch Apex job is run, email notifications are sent either to the user who submitted the batch job, or, if the code is included in a managed package and the subscribing organization is running the batch job, the email is sent to the recipient listed in the Apex Exception Notification Recipient field.
  • Each method execution uses the standard governor limits for an anonymous block, Visualforce controller, or WSDL method.
  • Each batch Apex invocation creates an AsyncApexJob record. Use the ID of this record to construct a SOQL query to retrieve the job’s status, number of errors, progress, and submitter. For more information about the AsyncApexJob object, see AsyncApexJob in the Web Services API Developer's Guide.
  • All methods in the class must be defined as global.
  • For a sharing recalculation, Salesforce.com recommends that the execute method delete and then re-create all Apex managed sharing for the records in the batch. This ensures the sharing is accurate and complete.
  • If in the course of developing a batch Apex class you discover a bug during a batch execution, Don't Panic. Simply log in to the admin console to monitor Apex Jobs and abort the running batch.
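To illustrate the Database.Stateful bullet above, here is a minimal sketch of a hypothetical batch class that keeps a running total across all execute() invocations (without Database.Stateful, recordCount would reset to 0 for every chunk):

global class BatchCountRecords implements Database.Batchable<sObject>, Database.Stateful{
    global Integer recordCount = 0; //Retained across execute() chunks because of Database.Stateful

    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator('select Id from Lead');
    }

    global void execute(Database.BatchableContext BC, List<sObject> scope){
        recordCount += scope.size(); //Accumulate the running total for this chunk
    }

    global void finish(Database.BatchableContext BC){
        System.debug('Processed ' + recordCount + ' records in total');
    }
}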


Utility Batch Apex Classes:

The following batch Apex classes can be copied and pasted into any Salesforce org and called from the System Log (or Apex) using the "Execute Anonymous" feature. The general structure of these utility classes is:
  • Accept task-specific input parameters
  • Execute the batch
  • Email the admin with batch results once complete
To execute these utility batch Apex classes:
1. Open the System Log

2. Click on the Execute Anonymous input text field.

3. Paste any of the following batch apex classes (along with corresponding input parameters) into the Execute Anonymous textarea, then click "Execute".


BatchUpdateField.cls
/*
Run this batch from Execute Anonymous tab in Eclipse Force IDE or System Log using the following

string query = 'select Id, CompanyName from User';
BatchUpdateField batch = new BatchUpdateField(query, 'CompanyName', 'Bedrock Quarry');
Database.executeBatch(batch, 100); //Make sure to execute in batch sizes of 100 to avoid DML limit error
*/
global class BatchUpdateField implements Database.Batchable<sObject>{
    global final String Query;
    global final String Field;
    global final String Value;
   
    global BatchUpdateField(String q, String f, String v){
        Query = q;
        Field = f;
        Value = v;
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){   
        for(sobject s : scope){
            s.put(Field,Value);
         }
        update scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchUpdateField ' + a.Status);
        mail.setPlainTextBody('The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, CompanyName from User';
        BatchUpdateField batch = new BatchUpdateField(query, 'CompanyName', 'Bedrock Quarry');
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
BatchSearchReplace.cls
/*
Run this batch from Execute Anonymous tab in Eclipse Force IDE or System Log using the following

string query = 'select Id, Company from Lead';
BatchSearchReplace batch = new BatchSearchReplace(query, 'Company', 'Sun', 'Oracle');
Database.executeBatch(batch, 100); //Make sure to execute in batch sizes of 100 to avoid DML limit error
*/
global class BatchSearchReplace implements Database.Batchable<sObject>{
    global final String Query;
    global final String Field;
    global final String SearchValue;
    global final String ReplaceValue;
   
    global BatchSearchReplace(String q, String f, String sValue, String rValue){
        Query = q;
        Field = f;
        SearchValue = sValue;
        ReplaceValue = rValue;
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){   
        for(sobject s : scope){
            string currentValue = String.valueof( s.get(Field) );
            if(currentValue != null && currentValue == SearchValue){
                s.put(Field, ReplaceValue);
            }
         }
        update scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchSearchReplace ' + a.Status);
        mail.setPlainTextBody('The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, Company from Lead';
        BatchSearchReplace batch = new BatchSearchReplace(query, 'Company', 'Foo', 'Bar');
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
BatchRecordDelete.cls:
/*
Run this batch from Execute Anonymous tab in Eclipse Force IDE or System Log using the following

string query = 'select Id from ObjectName where field=criteria';
BatchRecordDelete batch = new BatchRecordDelete(query);
Database.executeBatch(batch, 200); //Make sure to execute in batch sizes of 200 to avoid DML limit error
*/
global class BatchRecordDelete implements Database.Batchable<sObject>{
    global final String Query;
   
    global BatchRecordDelete(String q){
        Query = q;   
    }
   
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator(query);
    }
   
    global void execute(Database.BatchableContext BC, List<sObject> scope){       
        delete scope;
    }
   
    global void finish(Database.BatchableContext BC){   
        AsyncApexJob a = [Select Id, Status, NumberOfErrors, JobItemsProcessed,
            TotalJobItems, CreatedBy.Email
            from AsyncApexJob where Id = :BC.getJobId()];
       
        string message = 'The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.';
       
        // Send an email to the Apex job's submitter notifying of job completion. 
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {a.CreatedBy.Email};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Salesforce BatchRecordDelete ' + a.Status);
        mail.setPlainTextBody('The batch Apex job processed ' + a.TotalJobItems + ' batches with '+ a.NumberOfErrors + ' failures.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });   
    }
   
    public static testMethod void tests(){
        Test.startTest();
        string query = 'select Id, CompanyName from User where CompanyName=\'foo\'';
        BatchRecordDelete batch = new BatchRecordDelete(query);
        Database.executeBatch(batch, 100);
        Test.stopTest();
    }
}
Sunday, 23 January 2011 12:07:02 (Pacific Standard Time, UTC-08:00)