# Tuesday, 14 December 2010
If you’re a Developer, then Dreamforce 2010 was a very good year. Perhaps there was a new killer business user feature announced in a Sales breakout session somewhere, but I unfortunately missed it. The conference kicked off with CloudStock on Monday, and each subsequent day brought one announcement after another targeting cloud developers.

The ultimate in serendipitous geekery had to be the Node.js session with Ryan Dahl. One day I’m hacking away on node.js, and the next I’m running into its creator at CloudStock. I’m really hooked on this new cloud recipe of Linux+Node+NoSQL (is there a cool acronym for this stack yet? LinodeSQL?). Thread-based web server processing is starting to feel “old school” thanks to Ryan.

Database.com was the major announcement on Tuesday and, in my opinion, was way past due. The .NET open source toolkit co-launched with Salesforce in 2006 was built on the premise of using Salesforce as a language-agnostic platform. Whether you are a Java, C#, Ruby, or PHP Developer should be irrelevant when using a database in the cloud that is accessible via web services. (Given that ~50% of enterprise IT shops have Microsoft Developers on staff and C# adoption continues to grow, it seemed logical to win over this community with next-generation tools and services that make them more productive in the cloud.)

However, the launch of Apex and the AppExchange brought a few years of obligatory marketing and promotion of only native platform features, while the language-agnostic “hybrid” crowd sat patiently, admiring the work of Simon Fell’s web services API and the potential for real-time integration between apps.

The “language agnosticism” of Database.com was further reinforced with the announced acquisition of Heroku. Whether the Ruby community would have gravitated to Database.com on its own, or whether the acquisition was necessary to accelerate and demonstrate its value, will be perpetually debated.

But the Heroku acquisition makes some sense to me. Back in April I wrote the following about VMForce:

"I think other ORM Developer communities, such as Ruby on Rails Developers, will appreciate what is being offered with VMForce, prompting some to correctly draw parallels between VMForce and Engine Yard."

Same concept, different Ruby hosting vendor (Engine Yard has the low-level levers necessary for enterprise development, IMO). RoR Developers with an ORM mentality, who are simply tired of futzing around with relational DBs, indexes, and clusters, are a good D-Day beachhead from which Salesforce can launch its new platform message.

Salesforce Marketing will probably need to tread carefully around the message of “Twitter and Groupon use Ruby on Rails” to maintain credibility in this community. While such statements are technically true, Fail Whales galore prompted Twitter to massively rearchitect its platform, resulting in the development of FlockDB and craploads of memcached servers.

The fact remains that very few massively scaled cloud services run on a relational database. Twitter, Groupon, Facebook, and most other large sites rely on eventually consistent, massively scaled NoSQL (Not Only SQL) architectures. Only Salesforce has developed the intellectual property, talent, and index-optimizing algorithms to carry relational ACID transactions forward into the cloud.

The pricing and scalability of Database.com appear to be a good fit for SMB apps or ephemeral 1-3 month large enterprise apps (campaigns or conference apps like Dreamforce.com).

REST API
The RESTful interface hack I blogged about back in May will become a fully supported feature in Spring ‘11.
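For the curious, here’s a rough sketch of my own (not from the announcement) of what a query against the REST API looks like from node.js; the instance host, API version, and session ID are all placeholders you’d swap for your own:

// Hypothetical sketch: query the Force.com REST API from node.js.
// The host, API version, and session ID below are placeholders.
var https = require('https');

var soql = encodeURIComponent('SELECT Name FROM Account LIMIT 5');

https.get({
  host: 'na1.salesforce.com',                          // placeholder instance
  path: '/services/data/v20.0/query?q=' + soql,        // REST query resource
  headers: { 'Authorization': 'OAuth <session_id>' }   // placeholder session ID
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () { console.log(body); });
});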

SiteForce
SiteForce looked pretty impressive. I’m guessing SiteForce is the work of the SiteMasher talent. Web Designers and Developers accustomed to using apps like Dreamweaver or FrontPage will definitely want to check this out.

Governor Limits
Oh yeah, we all hate ‘em, but we understand they’re a necessary evil to keep rogue code from stealing valuable computing resources away from other tenants on the platform. The big news was that the number of governor limits will drop from ~55 down to 16 in the next major release by removing the trigger context limits (this brought serious applause from the crowd).

Platform State of the Union
The Developer platform state of the union was full of surprises. Shortly after I was given a Developer Hero award for the Chatter Bot app developed earlier this year, Salesforce demonstrated full breakpoint/step-through debugging between Eclipse and a Salesforce Org.

This is a skunkworks-type project still in its infancy that will hopefully see the light of day. The demo definitely left me wondering: “How’d he do that? Persistent UDP connections? Is that HTTP or some other protocol? Is a database connection being left open? What are the timeout limitations? How does Eclipse get a focus callback from a web browser?”

Permission Sets
Where were they? I was really hoping Salesforce would go all in and take their cloud database technology to the next level of access control by announcing granular permission management with the release of permission sets.

This is a subtle feature not widely appreciated by most Salesforce users or admins, but any Salesforce org with more than 100 users understands the need for permission sets.

Conclusion
The technology and features were great, but the real highlight of the conference was networking with people.

I really need to hang out with more Salesforce employees now that I live in the Bay Area. Conversations with the Salesforce CIO, Evangelists, Engineers, and Product Managers were energizing.

To have our CIO and IT Team attend Dreamforce and be aligned on Force.com as a strategic platform is invigorating and makes Facebook an exciting place to work.

The family of Salesforce friends on Twitter continues to grow. It’s always great to meet online friends in person and hang out with existing friends. See you all at Dreamforce 2011!

Honored to receive one of three Developer Hero awards. Thank you Salesforce!
Tuesday, 14 December 2010 22:39:19 (Pacific Standard Time, UTC-08:00)
# Sunday, 05 December 2010

Update: About one year after writing this article I switched to hosting node.js apps on Heroku. Check it out.

Node.js is an impressively fast and lightweight platform for building web servers, based on the Google V8 JavaScript engine. I really enjoy working with node.js for the simple elegance of language parity between client and server. The use of server-side JavaScript also means taking advantage of common JS patterns, such as event-driven programming and closures (there's just something reassuring and pure about functional programming on the server that gives me a greater sense of confidence when there are no state dependencies between expressions).
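To make that concrete, here's a tiny sketch of my own (not part of the original write-up) of the event-driven, closure-based style I'm talking about: callbacks registered for named events, with local state captured by the closure rather than shared across threads.

// Event-driven programming with closures in node.js.
// The 'seen' array is captured by the callback closure; there are no
// threads or locks, just callbacks fired as events are emitted.
var events = require('events');

var emitter = new events.EventEmitter();
var seen = [];

emitter.on('visit', function (name) {
  seen.push(name);
  console.log(name + ' visited (' + seen.length + ' total so far)');
});

emitter.emit('visit', 'alice');
emitter.emit('visit', 'bob');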

In the spirit of the CloudStock event tomorrow, I set out to install Node.js on an Amazon EC2 instance. Amazon is running a promotion on free EC2 micro instances; otherwise, micros can be leased for about $0.02 per hour (roughly $15 per month of continuous use).

Step 1

Sign up for Amazon EC2 and click on "Launch Instance" to get started.

Step 2

I prefer the default Amazon Linux machine image, but any Linux distribution should work. The default Linux AMI is stripped down and secure out of the box.

Step 3

Select the type of instance. I recommend starting with a Micro for creating a simple Node sandbox.

Step 4

Accept the default advanced instance options by clicking "Continue".

Step 5

Give your instance a name, such as "Node Sandbox".

Step 6

This is the most critical part of the instance provisioning process. If a key pair has not already been defined in EC2, create one by entering a key name and then downloading the resulting *.pem file to your local desktop.

Step 7

If this is your first time using EC2, you'll need to create a security group (firewall profile) for the Node Sandbox instance. Allow SSH (port 22) and HTTP (port 80).
We'll initially be hosting Node on port 8080, which unfortunately is not configurable in the instance request wizard. Make a mental note that we'll be coming back to security groups in step 10 to allow port 8080.

Step 8

Review the request and press "Launch" to fire up your Linux virtual machine. Amazon says it could take several minutes to provision the VM. In my experience, the micro instances have only taken seconds.

Step 9

Confirm the new instance is running. Copy the "Public DNS" URL of the instance into a text editor; you'll be using it frequently in the next steps. (Note: Make sure to copy the Public DNS and not the Private DNS, which is only used for internal EC2 connections).



Step 10

Select the instance then click on Security Groups. Modify the group to allow tcp traffic over port 8080. That's it for EC2 configuration.


Step 11

SSH. The remaining steps all require SSH access to the newly provisioned EC2 instance. This article uses the bash terminal on Apple OS X for demonstration.

Remote access for the root user is not enabled. Instead, the user "ec2-user" is made available with sudo permissions. Log in via SSH using the following syntax:

ssh -i keyFilePath/keyFile.pem ec2-user@ec2-public-dns

The -i switch authenticates using the identity (key) file created in step 6.
keyFilePath is the path to the key file generated in step 6.
ec2-public-dns is the public domain name of the EC2 instance retrieved in step 9.

Note: You may receive the following error when attempting to SSH to EC2.

WARNING: UNPROTECTED PRIVATE KEY FILE!  
Amazon requires the key file to be truly private, i.e., readable only by you and no other user on the local machine. To fix the issue, change the file mode with:
chmod 700 keyFileName.pem

A successful login will present the EC2 instance's shell prompt.

Step 12 Download/Copy Node.js

Download Node.js to your local file system. In this example, I've downloaded the stable 2010.11.16 release, node-v0.2.5.tar.gz.

Open a local shell window and copy the package to the EC2 instance using secure copy.
Example:
scp -p -i ../keyPath/keyFile.pem node-v0.2.5.tar.gz ec2-user@ec2-204-236-155-210.us-west-1.compute.amazonaws.com:node.tar.gz

Step 13 Extract

The scp command above copies node to the /home/ec2-user directory. You could extract, configure, and install from this directory, but the following steps assume extraction to the /opt directory, so the commands are all executed in the "super user" sudo context to avoid the permission errors you'd otherwise encounter while logged in as ec2-user.

Copy to opt
sudo cp node.tar.gz ../../opt
cd ../../opt
sudo gunzip -d node.tar.gz
sudo tar -xf node.tar

Change to the extracted node-v0.2.5 directory. Listing the contents will display the following:

[ec2-user@ip-xxx-yyy-zzz node-v0.2.5]$ ll
total 112
-rw-r--r-- 1 1000 1000  4674 Nov 17 05:46 AUTHORS
drwxr-xr-x 3 1000 1000  4096 Nov 17 05:46 benchmark
drwxr-xr-x 2 1000 1000  4096 Nov 17 05:46 bin
-rw-r--r-- 1 1000 1000 31504 Nov 17 05:46 ChangeLog
-rwxr-xr-x 1 1000 1000   387 Nov 17 05:46 configure
drwxr-xr-x 7 1000 1000  4096 Nov 17 05:46 deps
drwxr-xr-x 2 1000 1000  4096 Nov 17 05:47 doc
drwxr-xr-x 2 1000 1000  4096 Nov 17 05:46 lib
-rw-r--r-- 1 1000 1000  2861 Nov 17 05:46 LICENSE
-rw-r--r-- 1 1000 1000  2218 Nov 17 05:46 Makefile
-rw-r--r-- 1 1000 1000   413 Nov 17 05:46 README
drwxr-xr-x 2 1000 1000  4096 Nov 17 05:46 src
drwxr-xr-x 8 1000 1000  4096 Nov 17 05:46 test
-rw-r--r-- 1 1000 1000  1027 Nov 17 05:46 TODO
drwxr-xr-x 5 1000 1000  4096 Nov 17 05:46 tools
-rw-r--r-- 1 1000 1000 19952 Nov 17 05:46 wscript

Step 14 Configure and Install

The next step is to configure the node environment. Typing the following will result in a dependency error:
sudo ./configure

/opt/node-v0.2.5/wscript:138: error: could not configure a cxx compiler!

To fix this error, we need only install the GCC C++ compiler from the yum repository hosted by Amazon:

sudo yum install gcc-c++

We're not going to be installing any SSL certificates in this sandbox, so run configure without OpenSSL support:

sudo ./configure --without-ssl

Now you should be able to make the Node.js package

sudo make


At this point I recommend getting up and going for a walk, making a sandwich, or anything else that will kill up to 10 minutes. The small and micro instances on EC2 have limited access to CPU computing resources, making this part of the install process lengthy.

Once the make is complete, the final step is

sudo make install


You can optionally run sudo make test to verify the install.

Step 15 Hello World

Verify the installation is working by creating a file named example.js with the following contents.

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080);
console.log('Server running at http://ec2-204-236-155-210.us-west-1.compute.amazonaws.com:8080');


Then run node from the command line:

node example.js

Open a browser to confirm the instance's public DNS is accessible on port 8080. That's it!

Because node is running as a foreground shell process, you may want to launch the server with the "no hangup" utility nohup (or a tool like forever) to ensure node keeps running beyond the shell session.

nohup node example.js &

There's huge potential for creating scalable cloud services when combining Amazon Web Services and node.js. Enjoy!


Sunday, 05 December 2010 12:05:57 (Pacific Standard Time, UTC-08:00)