# Tuesday, 14 December 2010
If you're a Developer, then Dreamforce 2010 was very good to you. Perhaps a new killer business-user feature was announced in a Sales breakout session somewhere, but if so, I unfortunately missed it. The conference kicked off with Cloudstock on Monday, and each subsequent day brought one announcement after another targeting cloud developers.

The ultimate in serendipitous geekery had to be the Node.js session with Ryan Dahl. One day I’m hacking away on node.js and the next I’m running into the core developer at CloudStock. I’m really hooked on this new recipe for the cloud of Linux+Node+NoSQL (is there a cool acronym for this stack yet? LinodeSQL?). Thread based web server processing is starting to feel “old school” thanks to Ryan.

Database.com was the major announcement on Tuesday and, in my opinion, was way past due. The .NET open source toolkit co-launched with Salesforce in 2006 was built on the premise of using Salesforce as a language-agnostic platform. Whether you are a Java, C#, Ruby, or PHP Developer should be irrelevant when the database in the cloud is accessible via web services. (Given that ~50% of enterprise IT shops have Microsoft Developers on staff and C# adoption continues to grow, it seemed logical to win over this community with next-generation tools and services that make them more productive in the cloud.)

However, the launch of Apex and the AppExchange brought a few years in which Marketing promoted only native platform features, while the language-agnostic "hybrid" crowd sat patiently, admiring Simon Fell's work on the web services API and the potential for real-time integration between apps.

The “language agnosticism” of database.com was further reinforced with the announced acquisition of Heroku. Whether the Ruby community would have naturally gravitated to database.com on their own or the acquisition was necessary to accelerate and demonstrate the value will be perpetually debated.

But the Heroku acquisition somewhat makes sense to me. Back in April I wrote the following about VMForce:

"I think other ORM Developer communities, such as Ruby on Rails Developers, will appreciate what is being offered with VMForce, prompting some to correctly draw parallels between VMForce and Engine Yard."

Same concept, different Ruby hosting vendor (Engine Yard has the low-level levers necessary for enterprise development, IMO). The ORM mentality of RoR Developers, who are simply tired of futzing around with relational DBs, indexes, and clusters, is a good D-Day beachhead from which Salesforce can launch its new platform message.

Salesforce Marketing will probably need to tread carefully around the message of "Twitter and Groupon use Ruby on Rails" to maintain credibility in this community. While these statements are technically true, Fail Whales galore prompted Twitter to massively rearchitect its platform, which resulted in the development of FlockDB and crap loads of memcached servers.

The fact remains that very few massively scaled cloud services run on a relational database. Twitter, Groupon, Facebook, and most other sites run on eventually consistent, massively scaled NoSQL (Not only SQL) architectures. Only Salesforce has developed the intellectual property, talent, and index optimizing algorithms to carry forward relational ACID transactions into the cloud.

The pricing and scalability of database.com appear to fit well for SMB apps or 1-3 month ephemeral large enterprise apps (campaigns or conference apps like Dreamforce.com).

REST API
The RESTful interface hack I blogged about back in May will become a fully supported feature in Spring '11.
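
For the curious, here is what querying the REST interface can look like from Node.js. This is only a minimal sketch: the path and Authorization header follow the Force.com REST API conventions as I understand them, the instance host, API version, and session token are placeholders, and the https client shown is the current Node.js API rather than the 0.2.x API discussed in the install post further down.

var https = require('https');

// Placeholders: substitute your own instance host, API version, and OAuth session ID.
var options = {
  hostname: 'na1.salesforce.com',
  path: '/services/data/v20.0/query?q=' +
        encodeURIComponent('SELECT Id, Name FROM Account LIMIT 5'),
  headers: { 'Authorization': 'OAuth 00D_session_id_goes_here' }
};

// The query endpoint returns JSON containing totalSize, done, and records.
https.get(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () { console.log(JSON.parse(body)); });
}).on('error', function (err) { console.error(err); });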

SiteForce
SiteForce looked pretty impressive. I'm guessing SiteForce is the work of SiteMasher talent. Web Designers and Developers accustomed to using apps like Dreamweaver or FrontPage will definitely want to check this out.

Governor Limits
Oh yeah, we all hate ‘em but understand they’re a necessary evil to keep rogue code from stealing valuable computing resources away from other tenants on the platform. The big news was that the number of governor limits will be dropping from ~55 down to 16 in the next major release by removing the trigger context limits (this brought serious applause from the crowd).

Platform State of the Union
The Developer platform state of the union was full of surprises. Shortly after I was given a Developer Hero award for the Chatter Bot app I developed earlier this year, Salesforce demonstrated full breakpoint/step-through debugging between Eclipse and a Salesforce org.

This is a skunkworks-type project still in its infancy that will hopefully see the light of day. The demo definitely left me wondering: “How’d he do that? Persistent UDP connections? Is that HTTP or some other protocol? Is a database connection being left open? What are the timeout limitations? How does Eclipse get a focus callback from a web browser?”

Permission Sets
Where were they? I was really hoping Salesforce would go all in and take their cloud database technology to the next level of access control by announcing granular permission management with the release of permission sets.

This is a subtle feature not widely appreciated by most Salesforce users or admins, but any Salesforce org with more than 100 users understands the need for permission sets.

Conclusion
The technology and features were great, but the real highlight of the conference was networking with people.

I really need to hang out with more Salesforce employees now that I live in the bay. Conversations with the Salesforce CIO, Evangelists, Engineers, and Product Managers were energizing.

To have our CIO and IT Team attend Dreamforce and be aligned on Force.com as a strategic platform is invigorating and makes Facebook an exciting place to work.

The family of Salesforce friends on Twitter continues to grow. It’s always great to meet online friends in person and hang out with existing friends. See you all at Dreamforce 2011!

Honored to receive one of three Developer Hero awards. Thank you Salesforce!
Tuesday, 14 December 2010 22:39:19 (Pacific Standard Time, UTC-08:00)
# Sunday, 05 December 2010

Update: About one year after writing this article I switched to hosting node.js apps on Heroku. Check it out.

Node.js is an impressively fast and lightweight platform for building web servers, based on the Google V8 JavaScript engine. I really enjoy working with node.js for the simple elegance of language parity between client and server. Server-side JavaScript also means taking advantage of common JS patterns, such as event-driven programming and closures (there's just something reassuring and pure about functional programming on the server that gives me a greater sense of confidence when there are no state dependencies between expressions).
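
As a tiny illustration of that event-driven, closure-based style (my own sketch, not part of the original walkthrough): the request and response objects below are captured by the callback's closure, so an asynchronous timer can finish the request later without any shared global state, threads, or session machinery.

var http = require('http');

http.createServer(function (req, res) {
  // req and res are captured by this closure, so the async timer
  // callback can complete the request later with no shared state.
  setTimeout(function () {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Handled ' + req.url + ' one second later\n');
  }, 1000);
}).listen(8080);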

In the spirit of the CloudStock event tomorrow, I set out to install Node.js on an Amazon EC2 instance. Amazon is running a promotion on free EC2 micro instances, otherwise micros can be leased for about $0.02 per hour or ~$20 per month.

Step 1

Sign up for Amazon EC2 and click "Launch Instance" to get started.

Step 2

I prefer the default Amazon Linux machine image, but any Linux distribution should work. The default Linux AMI is stripped down and secure out of the box.

Step 3

Select the type of instance. I recommend starting with a Micro for creating a simple Node sandbox.

Step 4

Accept the default advanced instance options by clicking "Continue"

Step 5

Give your instance a name, such as "Node Sandbox".

Step 6

This is the most critical part of the instance provisioning process. If a key pair has not already been defined in EC2, create one by entering a key name then downloading the resulting *.pem file to the local desktop.

Step 7

If this is your first time at Amazon, you'll need to generate a firewall profile for use by the Node Sandbox instance. Allow SSH (port 22) and HTTP (port 80).
We'll initially be hosting Node on port 8080, which unfortunately is not configurable in the instance request wizard. Make a mental note that we'll be coming back to security groups in step 10 to allow port 8080.

Step 8

Review the request and press "Launch" to fire up your Linux virtual machine. Amazon says it could take several minutes to provision the VM. In my experience, the micro instances have only taken seconds.

Step 9

Confirm the new instance is running. Copy the "Public DNS" URL of the instance into a text editor; you'll be using it frequently in the next steps. (Note: Make sure to copy the Public DNS and not the Private DNS, which is only used for internal EC2 connections).



Step 10

Select the instance then click on Security Groups. Modify the group to allow tcp traffic over port 8080. That's it for EC2 configuration.


Step 11

SSH. The remaining steps all require SSH access to the newly provisioned EC2 instance. This article uses the bash terminal on Apple OSX for demonstration.

Remote access for the root user is not enabled. Instead, the "ec2-user" account is available with sudo permissions. Log in via SSH using the following syntax:

ssh -i keyFilePath/keyFile.pem ec2-user@ec2-public-dns

  • The -i switch authenticates using the identity in the key file created in step 6.
  • keyFilePath is the path to the key file generated in step 6.
  • ec2-public-dns is the domain name of the EC2 instance retrieved in step 9.

Note: You may receive the following error when attempting to SSH to EC2.

WARNING: UNPROTECTED PRIVATE KEY FILE!  
Amazon requires the key file to be truly private, i.e., readable by you and no other user on the local machine. To fix the issue, change the file mode with:
chmod 700 keyFileName.pem

A successful login will present the following prompt.

Step 12 Download/Copy Node.js

Download Node.js to your local file system. In this example, I've downloaded the stable 2010.11.16 release, node-v0.2.5.tar.gz.

Open a local shell window and copy the package to the EC2 instance using secure copy.
Example:
scp -p -i ../keyPath/keyFile.pem node-v0.2.5.tar.gz ec2-user@ec2-204-236-155-210.us-west-1.compute.amazonaws.com:node.tar.gz

Step 13 Extract

The scp command copies node to the /home/ec2-user directory. You can extract node and configure/install from this directory. The following steps assume extraction to the /opt directory, so the commands are all executed in the "super user" sudo context to avoid the permission errors you'd otherwise encounter while logged in as ec2-user.

Copy to opt
sudo cp node.tar.gz ../../opt
cd ../../opt
sudo gunzip -d node.tar.gz
sudo tar -xf node.tar

Change to the node install directory. Listing the contents will display the following:

[ec2-user@ip-xxx-yyy-zzz node-v0.2.5]$ ll
total 112
-rw-r--r-- 1 1000 1000  4674 Nov 17 05:46 AUTHORS
drwxr-xr-x 3 1000 1000  4096 Nov 17 05:46 benchmark
drwxr-xr-x 2 1000 1000  4096 Nov 17 05:46 bin
-rw-r--r-- 1 1000 1000 31504 Nov 17 05:46 ChangeLog
-rwxr-xr-x 1 1000 1000   387 Nov 17 05:46 configure
drwxr-xr-x 7 1000 1000  4096 Nov 17 05:46 deps
drwxr-xr-x 2 1000 1000  4096 Nov 17 05:47 doc
drwxr-xr-x 2 1000 1000  4096 Nov 17 05:46 lib
-rw-r--r-- 1 1000 1000  2861 Nov 17 05:46 LICENSE
-rw-r--r-- 1 1000 1000  2218 Nov 17 05:46 Makefile
-rw-r--r-- 1 1000 1000   413 Nov 17 05:46 README
drwxr-xr-x 2 1000 1000  4096 Nov 17 05:46 src
drwxr-xr-x 8 1000 1000  4096 Nov 17 05:46 test
-rw-r--r-- 1 1000 1000  1027 Nov 17 05:46 TODO
drwxr-xr-x 5 1000 1000  4096 Nov 17 05:46 tools
-rw-r--r-- 1 1000 1000 19952 Nov 17 05:46 wscript

Step 14 Configure and Install

The next step is to configure the node environment. Typing the following will result in a dependency error
sudo ./configure

/opt/node-v0.2.5/wscript:138: error: could not configure a cxx compiler!

To fix this error, we only need to install the GCC C++ compiler from the yum repository hosted by Amazon:

sudo yum install gcc-c++

We're not going to be installing any SSL certificates in this sandbox, so run configure without support for OpenSSL:

sudo ./configure --without-ssl

Now you should be able to make the Node.js package

sudo make


At this point I recommend getting up and going for a walk, making a sandwich, or anything else that will kill up to 10 minutes. The small and micro instances on EC2 have limited access to CPU computing resources, making this part of the install process lengthy.

Once the make is complete, the final step is

sudo make install


You can optionally run sudo make test to verify the install.

Step 15 Hello World

Verify the installation is working by creating a file named example.js with the following contents.

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080);
console.log('Server running at http://ec2-204-236-155-210.us-west-1.compute.amazonaws.com:8080');


Then run node from the command line:

node example.js

Open a browser to confirm the domain is accessible from port 8080. That's it!

Because node runs as a shell process, you may want to launch the server using the nohup ("no hangup") utility (or 'forever') to ensure node keeps running after the shell session ends.

nohup node example.js &

There's huge potential for creating scalable cloud services when combining Amazon Web Services and node.js. Enjoy!


Sunday, 05 December 2010 12:05:57 (Pacific Standard Time, UTC-08:00)
# Sunday, 21 November 2010

"How can I add a picklist field?"
"Can I modify this report?"
"Would you please update the Lead trigger to de-dupe by email address?"

Salesforce Administrators are faced with these questions, and many more, on a routine basis. Salesforce.com CRM provides sufficiently granular access permissions to control precisely what an end user can and cannot do. As end users click around Salesforce, the general rule is "If you can do it, you probably have permission." However, the same thing cannot be said for System and Delegated Administrators.

Once a user is given administrative access to application settings, then more training and monitoring must be imposed. In short, a "change management process" must be implemented.

There are a number of business drivers for a change management process:
  1. Compliance - Your industry may require that certain changes be reviewed, approved, documented, and deployed in a methodical way
  2. Productivity - The smallest change in a user interface can result in hours of lost productivity if users aren't given advance warning or training
  3. Reliability - Prevent the deployment of changes that may break existing workflows or processes

Change management processes attempt to answer all or some of the following questions:
  • Who approved and/or made the change?
  • Why was the change needed?
  • When was the change made?
  • How was the change deployed?

A change management matrix can help identify how each type of change should be managed.

Steps for creating a change management matrix:
  • Create a list of common Salesforce changes in a spreadsheet
  • Define a spectrum of change management categories (For example, red/yellow/green lights)
  • Periodically sit down with all Sys Admins, Developers and Approvers and review how the organization should respond to each type of change category
  • There will always be exceptions. Add an "Exceptions" column to the matrix and document them
  • Train admins on the use of built-in auditing tools to ensure compliance with the CM process
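
For illustration only, a simplified matrix might start out something like this (the categories and responses are examples, not prescriptions):
  • New picklist value on an existing field - Green - Admin makes the change and notifies affected users
  • New or modified report in a shared folder - Green - Change on demand; no formal approval required
  • New validation rule on a heavily used object - Yellow - Announce in advance and deploy outside business hours
  • Change to the Lead trigger or other Apex code - Red - Requires sandbox testing, approval, and a scheduled deployment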

Sunday, 21 November 2010 13:07:15 (Pacific Standard Time, UTC-08:00)
# Tuesday, 26 October 2010

Career__c career = [SELECT Id, Name FROM Career__c WHERE Culture__c='Cool' AND Location__c='Palo Alto, CA' AND Description__c LIKE 'Salesforce%' AND PerkId__c IN (select Id from Perk__c) LIMIT 1];
system.assertEquals('Software Application Developer', career.Name);


Join an incredible team that is shaping the future of Facebook with the development of enterprise apps in the cloud.

Apply for this position

Force.com Application Developer
Palo Alto, CA

Description
Facebook is seeking an experienced application developer to join the IT team and participate in the development, integration, and maintenance of force.com applications supporting consigned inventory asset tracking for Facebook data centers. This is a full-time position based in our main office in Palo Alto.

Responsibilities:
•    Technical design, configuration, development and testing of Force.com custom applications, interfaces and reports;
•    Model, analyze and develop or extend persistent database structures which are non-intrusive to the base application code and which effectively and efficiently implement business requirements;
•    Integrate force.com applications with other Facebook external or internal business applications and tools.
•    Develop UI and ACL tailored to Facebook employees and suppliers.
•    Apply sound release management and configuration management principles to ensure the stability of production environments;
•    Participate in all phases of software development/implementation life cycle including functional analysis, development of technical requirements, prototyping, coding, testing, deployment and support;
•    Participate in peer design and code review; analyze and troubleshoot issues in a complex applications environment, which includes Salesforce.com, force.com, Oracle E-Business Application Suite R12, and custom-built LAMP-stack-based tools and systems.
•    Research and understand force.com capabilities to recommend best design/implementation approaches and meet business requirements.
•    Plan and implement data migration and conversion activities;
•    Provide daily and 24x7 on-call support

Requirements:
•    Bachelor's in Computer Science, any engineering field, or closely related field, or foreign equivalent;
•    Passionate about Salesforce and building apps on the force.com platform
•    At least 6 years of design, configuration, customization and development experience with Salesforce in general and Force.com in particular
•    Strong development background with APEX and VisualForce.
•    Strong knowledge of Salesforce/Force.com API and toolkits for integration.
•    Strong understanding of RDBMS concepts and programming using SQL and SOQL;
•    Background in database modeling and performance tuning;
•    Knowledge of best practices around securing and auditing custom applications on Force.com
•    Background in design and development of web based solutions using Java, JavaScript, Ajax, SOAP, XML and web programming in general.
•    Strong experience with business process and workflow and translating them into system requirements.




Tuesday, 26 October 2010 08:15:01 (Pacific Daylight Time, UTC-07:00)
# Sunday, 24 October 2010
Question: If you were given carte blanche to design a cloud platform, which database would you use?
Answer: This is a trick question since "none" is an acceptable answer.

In researching how to manage cached objects for the Cool Mesh project, I found the High Scalability article "Are Cloud-Based Memory Architectures the Next Big Thing?", which articulates a lot of what I'm seeing in progressive cloud design.

Many high availability cloud platforms currently manage all reads and writes through memcached objects before persisting to a database. In some cases, the database write is done asynchronously while the cached object is trusted as authoritative. How can a database be called a system of record if the client is already proceeding on operations with a cached object before it hits the database? Dunno. It's only a matter of time before common sense prevails and we begin to manage in-memory objects as the system of record, using disks as a backup or bootstrap index.
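
As a rough sketch of the pattern being described (the names and the trivial in-memory "cache" here are mine, purely for illustration): reads and writes hit the in-memory object first and treat it as authoritative, while the database write happens asynchronously behind it.

// Illustrative write-behind sketch: the in-memory store answers immediately,
// and persistence is queued and flushed asynchronously.
var cache = {};   // authoritative in-memory objects, keyed by id
var dirty = [];   // ids awaiting an asynchronous database write

function write(id, value) {
  cache[id] = value;   // callers proceed against this object right away
  dirty.push(id);
}

function read(id) {
  return cache[id];    // no database round trip on the read path
}

// Periodically persist dirty objects; persistToDatabase is a stand-in
// for whatever real datastore client you'd actually use.
setInterval(function () {
  while (dirty.length > 0) {
    var id = dirty.shift();
    persistToDatabase(id, cache[id]);
  }
}, 1000);

function persistToDatabase(id, value) {
  console.log('async write of ' + id + ': ' + JSON.stringify(value));
}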

Now, would I let 100 people withdraw funds from a single bank account using this architecture? Probably not. A RAM system of record assumes some level of volatility, immutability, and non-transactional requirements.

I see this cloud architecture playing out in a couple phases:
1) RAM based architectures with system of record in memory, but still using disks for backup
2) Solid State Drive (SSD) based architectures that have memory I/O performance, but revert back to system of record on disk

The timing of each phase is largely driven by the cost and performance of SSDs. Cool Mesh is leaning towards the phase 1 approach, but also makes use of HTML5 local storage to distribute object storage, so there's no real benefit in waiting for phase 2 anyway.

Sunday, 24 October 2010 15:19:23 (Pacific Daylight Time, UTC-07:00)
# Saturday, 23 October 2010

Short Version: The rumors are true. I accepted a position at Facebook with a focus on Salesforce.

(That's about it. Read the long version for more details)

Long Version:
I started working at Facebook in August with a focus on designing and developing apps on Force.com within the IT group. The group is very focused on embracing cloud-based solutions to maintain agility and efficiency during a hyper-growth phase.

My wife and I moved from Portland, OR to San Carlos, CA as part of the transition and simply love it. We've actually been considering a move to Silicon Valley for 2 years for the weather, family, and, well... this is the center of cloud computing. The Facebook position gave us the perfect opportunity to make the jump.

Why Facebook? The 3 primary motivations for working at Facebook are the cloud, a progressive IT group, and company culture.

The Cloud
Facebook's infrastructure is one of the most under appreciated cloud architectures in existence today.

When you consider that Salesforce can support 1.5M customers on 1,000 servers (a 1,500:1 ratio) you understand why we are at an important inflection point in IT infrastructure and must consider cloud-based solution providers at every opportunity. Multi-tenancy is just much more efficient.

Then when I look internally at Facebook supporting 300M users on 30,000 servers (a 10,000:1 ratio) the efficiencies of scalable cloud-based architecture become even more obvious and impressive. (I'd love to share how we accomplish this feat, but you'll just have to apply here first :-). It truly is amazing and gets more efficient with each release)

Facebook as an enterprise class cloud provider will become even more apparent within 10 years as the user base grows and takes their social graph with them into every facet of their life (phones, devices, automobiles). When Mark Zuckerberg coined the term "social utility" back in 2007, the average response was "Huh?... It's just a website where I communicate with my friends".

But as the social graph gets consumed via web service APIs and millions of people communicate through the service, the role of Facebook (and the reasoning behind Mark's comment) becomes much more like an AT&T for the web rather than just a website.

The focus on and dedication to cloud computing is so serious that we will build all our own datacenters from the ground up going forward, starting first in Prineville, OR.

Progressive Information Technology
I work in the IT group at Facebook in a ground floor opportunity at a rapidly growing company. We have an opportunity to learn from past enterprise IT architectures and challenge the norms using cloud computing and open source. The infrastructure put in place today will very likely still exist in 10+ years.

Unlike traditional IT roles, Facebook IT is responsible for the core product; our datacenters, servers, and technical operations. Our group is aligned very closely with the product Software Engineers.

Every IT asset, from CRM, ERP, Financials, HRM, SCM, ESB... and a dozen other TLAs, is in its first or second generation deployment. "Build or buy" decisions are being made constantly. My role oftentimes involves demonstrating how to leverage what we've already "bought" (using Force.com for a variety of specific solutions).

Facebook takes a progressive stance on utilizing Salesforce as a platform and I've additionally been challenged with projects and opportunities that leverage my strengths outside just Salesforce development.

Culture
I'm probably the youngest 40-year-old most people will ever meet. I still enjoy the occasional hackathon, drinking lots of coffee, and embracing whatever progressive idea or concept is emerging in software development at the time.

I can think of no company other than Facebook that is the epitome of this type of culture. There is a kindred spirit amongst all Engineers at Facebook that is hard to explain. I've participated in 2 hackathons so far and will never miss future events.

Most people don't perceive Facebook as a technology company, but the internal esprit de corps is very focused on building cool stuff, simply for the enjoyment of building stuff. Facebook is a technology company.

Side Projects
Maybe I'm being over ambitious to think I'll have any spare time to work on side projects, but here's a status of projects I continue to support and take great interest in on the side.

Passage .NET Portal Framework: This platform is still OEM'd by a couple partners who do the heavy lifting. The XOS framework has really proven that object oriented databases can be scalable and flexible. A couple more partners/resellers are in the process of developing Passage based solutions, so I don't think the end is in sight.

i-Dialogue: Other partners and associates have taken over the day to day hosting and support of i-Dialogue portals and sites. I still respond to any dlog related questions within 48 hours. Another developer has taken over all custom development, but I still enjoy occasionally optimizing the framework and have committed to participating in quarterly reviews/updates/patches.

Cool Sites: Similar to i-Dialogue, this native AppExchange app is primarily driven by partners. Cool Sites was developed to give my creative and web development friends (who rely primarily on HTML/CSS/JS skills) access to the rapidly growing Salesforce customer base.

There is a backlog of Cool Sites plugin requests that involve porting existing i-Dialogue components to Visualforce (Event calendar, knowledge base, shopping carts, partner finder, maps, etc...). Word on the street is that Salesforce acquired a CMS company and is preparing a similar offering for Dreamforce 2010, so I'm in "wait and see" mode re: the future of this project.

Cool Trends: I started this Salesforce native app after working with the Google Maps API on the Azure proof of concept app last Winter. It's a very simple data warehouse (only 2 custom objects) with time-series analysis charts for reporting day-over-day trends.

If time allows, I'll try to submit this to the AppExchange before Dreamforce 2010.

Cool Mesh: After years of designing and developing single-tenant enterprise web applications, the entrepreneur in me began to pursue the "next big thing". Cool Mesh is a massively scalable, multi-tenant, open-source, clustered computing project loosely based on the following ingredients:
  • Amazon EC2
  • Unix
  • Apache
  • Node.js
  • CouchDB / Cassandra
  • Immutable storage
Don't ask me what I'd do with a working prototype. Probably just re-invent business models I've worked on in the past :-)

Chatter Bot: On the heels of receiving an award for Chatter Bot, I started delving into "The Internet of Things" and envisioning what Cloud 3.0 might look like. An idea emerged, based on Cool Mesh, that continues to intrigue me. Still dabbling.

Music in Schools (non-profit):  I still support or volunteer for 3-5 non-profits on Salesforce and really believe in their vision. Immersing myself in a music education non-profit is where I'd like to start devoting my time next. I'm new to the bay area, so there's some research to do. Who knows; if I don't find a perfect NPO, then I'm open to launching one myself.

I'll be wandering the halls of Dreamforce 2010. Don't be a stranger. If you read this blog, then I hope we'll meet at the Tweetup :-)


Saturday, 23 October 2010 13:53:24 (Pacific Daylight Time, UTC-07:00)
# Monday, 26 July 2010
I missed the discussions on "open cloud" and "cloud standards" at OSCON 2010, but indirectly got the gist of it. Standards themselves aren't bad, but it's rarely a good idea to create a standard for the sake of just creating a standard.

I've observed 3 fundamental perspectives and their likely motivators on the issue of cloud standards:

1) Vendor: Invested $XXX million in software stack Y. Wants it proclaimed "standard" (for obvious reasons)
2) Developer: Wants "write once run everywhere" in the name of "portability" (doesn't want to learn 5 different APIs)
3) Consumer: Doesn't care. Wants it to "just work"

If you look across all cloud stacks today, what do you see that is common amongst them and could even remotely be called a "standard"? I see TCP/IP running on Intel x86 CPU architecture.

If someone told you 10 years ago "In order to democratize access to computing resources and enable ubiquitous access to information we all need to standardize on Intel x86 architecture and deliver services over IP", several people would have scoffed at that. It's incredible how far cloud computing has come as a "standard".

There is an art to minimizing standards in order to maximize adoption.

Look at electricity. For years there were debates over whether DC or AC power was better. The US finally standardized on a plug with 120V AC at 60Hz as a "standard" for electricity. 3 very simple data points that led to an explosion in adoption.

But look at the rest of the world. Different combinations of plugs, voltages, and frequencies abound! Fortunately these standards are all "open" and simple enough to allow for converters and transformers to fill the gaps. Fundamentally, the world really only agreed to standardize on alternating current (AC) at a range of voltages between 110-240 Volts. But for consumers, it "just works".

That's the way cloud computing is evolving. Building on the Internet (TCP/IP), running on servers (Intel x86), and bridging the gaps using a few basic tools (JSON, XML, SOAP, REST, CSV).

Check out Nicholas Carr's book "The Big Switch" for the back story on how industrial America moved towards standardized and centralized power utilities and how that parallels what's going on in cloud computing.

The Internet is called "the web" for a reason. It's in reference to its inherently chaotic data structure. Asserting standards to make sense of it won't help. It will only inhibit adoption. The web and cloud computing must be allowed to organically sprawl uninhibited.

Final thoughts:
1) Vendors: Explain to customers which specific primitive cloud services you offer (Windows/Linux, HTTP/SSL/SMTP/FTP) and preempt the discussion on integration by providing APIs to raw computing resources.
2) Developers: Sorry. The days of learning one language or standard are over. "Embrace the cloud!" The next killer app exists at the convergence of several cloud services (think mashups blending location, maps, social graph, consumer reviews, and merchant offers... all in one).
3) Customers: Sorry about the noise. We'll get this sorted out as soon as possible and ensure the cloud "just works". Thank you for your patience :-)


Monday, 26 July 2010 19:58:53 (Pacific Daylight Time, UTC-07:00)
# Saturday, 10 July 2010


Facebook has nearly 500 million users (at the time of this writing) and is poised to transform how customer relationships are acquired, cultivated, and supported.

Consider the evolution of Contact management over the past 50 years:
  • Paper: Write names, addresses, birthdays, and other notes down on paper
  • Rolodex: Everyone trades business cards and keeps a local paper copy
  • PC Software: Rolodex moved to PC (Spreadsheets, Goldmine, Act!)
  • Client / Server Software: Many people in same office share common contact database (Siebel, MS CRM)
  • Software as a Service: Contacts moved to Internet hosted server. Accessible from anywhere (Salesforce.com)
  • Crowd sourced: 3rd parties pooling contact information to improve data quality, keep lists up to date (Plaxo, Jigsaw)
  • Social Networking: Contact maintains singular identity. Always up to date. Control of privacy and disclosure (LinkedIn, Facebook)
The consumer now has more power than ever. Consumers now generate more information about themselves than can be generated by 3rd parties.

Buying a list of leads who may be interested in buying camping gear will be nowhere near as effective as publishing an ad on Facebook targeted at consumers with a self-identified interest in camping (see The Role of Advertising at Facebook).

Consumers will increasingly defer to their friends' recommendations on which restaurants to visit, which shoes to buy, and which cars to drive.

CRM systems no longer contain the master records for Contact information.

Businesses must evolve past collecting and cleansing Contacts and instead collect meta-information that refers to customer-managed online profiles, such as Facebook and LinkedIn, because those resources are now truly authoritative. When there is a conflict between CRM Contact information and an online social network profile, the social profile will be the master record.
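
A tiny sketch of what that precedence rule could look like in code (the objects and field names are hypothetical, just to make the idea concrete): the CRM record keeps a reference to the social profile, and any field the profile provides wins on conflict.

// Hypothetical contact resolution: the customer-managed social profile
// is the master record, so its fields override whatever the CRM has on file.
function resolveContact(crmContact, socialProfile) {
  var resolved = {};
  for (var field in crmContact) {
    resolved[field] = crmContact[field];
  }
  for (var key in socialProfile) {
    if (socialProfile[key] !== undefined && socialProfile[key] !== null) {
      resolved[key] = socialProfile[key];  // social profile wins on conflict
    }
  }
  return resolved;
}

var merged = resolveContact(
  { name: 'Pat Smith', email: 'old@example.com', phone: '555-0100' },
  { name: 'Pat Smith-Jones', email: 'pat@example.com' }
);
console.log(merged);  // { name: 'Pat Smith-Jones', email: 'pat@example.com', phone: '555-0100' }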

Websites must evolve to become applications. Web forms soliciting contact information will become a thing of the past. Consumers will not have the patience to do anything more than a single click to identify interest in a product or service.

The video embedded in this blog post (and available here) demonstrates a Cool Sites application integrated with Facebook Connect and Salesforce CRM.
Saturday, 10 July 2010 16:31:06 (Pacific Daylight Time, UTC-07:00)
# Thursday, 08 July 2010

No, not this Trigger... keep reading...

Trigger development (apologies to Roy Rogers' horse) is not done on a daily basis by a typical Force.com Developer.

In my case, Trigger development is similar to using regular expressions (regex) in that I often rely on documentation and previously developed code examples to refresh my memory, do the coding, then put it aside for several weeks/months.

I decided to create a more fluent Trigger template to address the following challenges and prevent me from repeatedly making the same mistakes:

  • Bulkification best practices not provisioned by the Trigger creation wizard
  • Use of the 7 boolean context variables in code (isInsert, isBefore, etc...) greatly impairs readability and long-term maintainability
  • Trigger.old and Trigger.new collections are not available in certain contexts
  • Asynchronous trigger support not natively built-in

The solution was to create a mega-Trigger that handles all events and delegates them accordingly to an Apex trigger handler class.

You may want to customize this template to your own style. Here are some design considerations and assumptions in this template:

  • Use of traditional event method names on the handler class (OnBeforeInsert, OnAfterInsert)
  • Maps are used where they are most relevant
  • Objects in map collections cannot be modified; however, there is nothing in the compiler to prevent you from trying. Remove the maps whenever they're not needed.
  • Maps are most useful when triggers modify other records by IDs, so they're included in update and delete triggers
  • Encourage use of asynchronous trigger processing by providing pre-built @future methods.
  • @future methods only support collections of primitive types, so passing a Set of IDs is the preferred approach in this style.
  • Avoid use of before/after if not relevant. Example: OnUndelete is simpler than OnAfterUndelete (before undelete is not supported)
  • Provide boolean properties for determining trigger context (Is it a Trigger or VF/WebService call?)
  • There are no return values. Handler methods are assumed to assert validation rules using addError() to prevent commit.

References:
Apex Developers Guide - Triggers
Steve Anderson - Two interesting ways to architect Apex triggers

AccountTrigger.trigger

trigger AccountTrigger on Account (after delete, after insert, after undelete, 
after update, before delete, before insert, before update) {
	AccountTriggerHandler handler = new AccountTriggerHandler(Trigger.isExecuting, Trigger.size);
	
	if(Trigger.isInsert && Trigger.isBefore){
		handler.OnBeforeInsert(Trigger.new);
	}
	else if(Trigger.isInsert && Trigger.isAfter){
		handler.OnAfterInsert(Trigger.new);
		AccountTriggerHandler.OnAfterInsertAsync(Trigger.newMap.keySet());
	}
	
	else if(Trigger.isUpdate && Trigger.isBefore){
		handler.OnBeforeUpdate(Trigger.old, Trigger.new, Trigger.newMap);
	}
	else if(Trigger.isUpdate && Trigger.isAfter){
		handler.OnAfterUpdate(Trigger.old, Trigger.new, Trigger.newMap);
		AccountTriggerHandler.OnAfterUpdateAsync(Trigger.newMap.keySet());
	}
	
	else if(Trigger.isDelete && Trigger.isBefore){
		handler.OnBeforeDelete(Trigger.old, Trigger.oldMap);
	}
	else if(Trigger.isDelete && Trigger.isAfter){
		handler.OnAfterDelete(Trigger.old, Trigger.oldMap);
		AccountTriggerHandler.OnAfterDeleteAsync(Trigger.oldMap.keySet());
	}
	
	else if(Trigger.isUnDelete){
		handler.OnUndelete(Trigger.new);	
	}
}

AccountTriggerHandler.cls

 
public with sharing class AccountTriggerHandler {
	private boolean m_isExecuting = false;
	private integer BatchSize = 0;
	
	public AccountTriggerHandler(boolean isExecuting, integer size){
		m_isExecuting = isExecuting;
		BatchSize = size;
	}
		
	public void OnBeforeInsert(Account[] newAccounts){
		//Example usage
		for(Account newAccount : newAccounts){
			if(newAccount.AnnualRevenue == null){
				newAccount.AnnualRevenue.addError('Missing annual revenue');
			}
		}
	}
	
	public void OnAfterInsert(Account[] newAccounts){
		
	}
	
	@future public static void OnAfterInsertAsync(Set<ID> newAccountIDs){
		//Example usage
		List<Account> newAccounts = [select Id, Name from Account where Id IN :newAccountIDs];
	}
	
	public void OnBeforeUpdate(Account[] oldAccounts, Account[] updatedAccounts, Map<ID, Account> accountMap){
		//Example Map usage
		Map<ID, Contact> contacts = new Map<ID, Contact>( [select Id, FirstName, LastName, Email from Contact where AccountId IN :accountMap.keySet()] );
	}
	
	public void OnAfterUpdate(Account[] oldAccounts, Account[] updatedAccounts, Map<ID, Account> accountMap){
		
	}
	
	@future public static void OnAfterUpdateAsync(Set<ID> updatedAccountIDs){
		List<Account> updatedAccounts = [select Id, Name from Account where Id IN :updatedAccountIDs];
	}
	
	public void OnBeforeDelete(Account[] accountsToDelete, Map<ID, Account> accountMap){
		
	}
	
	public void OnAfterDelete(Account[] deletedAccounts, Map<ID, Account> accountMap){
		
	}
	
	@future public static void OnAfterDeleteAsync(Set<ID> deletedAccountIDs){
		
	}
	
	public void OnUndelete(Account[] restoredAccounts){
		
	}
	
	public boolean IsTriggerContext{
		get{ return m_isExecuting;}
	}
	
	public boolean IsVisualforcePageContext{
		get{ return !IsTriggerContext;}
	}
	
	public boolean IsWebServiceContext{
		get{ return !IsTriggerContext;}
	}
	
	public boolean IsExecuteAnonymousContext{
		get{ return !IsTriggerContext;}
	}
}
Thursday, 08 July 2010 14:16:12 (Pacific Daylight Time, UTC-07:00)
# Tuesday, 29 June 2010
There are really only 2 tech blogs that I read: TechCrunch.com and ReadWriteWeb.com (RWW). They both provide balanced coverage of the consumer and enterprise markets while remaining objective. You don't get the sense that site sponsors and advertisers are driving the stories. I admire their integrity.

I've been particularly impressed with RWW's coverage on the cloud and the Internet of Things, so I was absolutely thrilled when Alex Williams ran with a story yesterday about Chatter Bot titled "How to Connect an Office Building to an Activity Stream". Check it out! (A hint on the title of this blog :-) )

Tuesday, 29 June 2010 11:32:15 (Pacific Daylight Time, UTC-07:00)