Dec 22, 2014

Test Coverage Pattern for Multi-Callout Methods

When you're developing Apex code for integrations with external systems, one hurdle you always need to clear is creating test coverage for the methods responsible for making callouts to one or more endpoints.

Existing Resources

Salesforce provides a few different ways for you to achieve this:
  1. HttpCalloutMock Interface
  2. StaticResourceCalloutMock
  3. MultiStaticResourceCalloutMock


Problem

However, all three have a similar shortcoming when additional complexity is needed.  With systems integrations, it's not uncommon to require multiple callouts within the same execution.  The three examples from Salesforce can handle that just fine... as long as you don't need to use the same endpoint more than once AND expect different results.

Before we take a look at a solution, here are some details on the existing testing mechanisms and sample usages from the Salesforce documentation.


HttpCalloutMock Interface

The HttpCalloutMock interface allows you to create a respond() method in a test utility class where a response is constructed.  Within the test coverage class, you tell your test to use the mock utility via the Test.setMock() method.

TestUtility (from documentation)
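Roughly, the documentation's utility takes this shape (a sketch; the class name and response body are illustrative):

```apex
@isTest
global class YourHttpCalloutMockImpl implements HttpCalloutMock {
    global HttpResponse respond(HttpRequest req) {
        // Optionally inspect or assert on the incoming request here
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setBody('{"example":"test"}');
        res.setStatusCode(200);
        return res;
    }
}
```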


In that code snippet, note the "implements HttpCalloutMock" interface declaration.  HttpCalloutMock requires a respond() method, which accepts an HttpRequest parameter.  Within this method, an HttpResponse is constructed.

Test Method Usage (from documentation)
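The usage is along these lines (a sketch, where CalloutClass.getInfoFromExternalService() stands in for the method under test):

```apex
@isTest
private class CalloutClassTest {
    @isTest
    static void testCallout() {
        // Route all callouts in this test to the mock utility
        Test.setMock(HttpCalloutMock.class, new YourHttpCalloutMockImpl());
        HttpResponse res = CalloutClass.getInfoFromExternalService();
        System.assertEquals(200, res.getStatusCode());
    }
}
```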

If you look at their comments within the respond() method, you could intelligently create an HttpResponse based on the request - however, for that endpoint, you'll always receive the same response.  That's not ideal if you're making multiple calls within an execution context and need different results.  Just a few examples might include testing paging ("next_page":2), date/time stamp requirements (if date/time > last received date/time), record count calculations (count # increase after POST), and so on.

StaticResourceCalloutMock and MultiStaticResourceCalloutMock

Using these methods, you can leverage Static Resources to maintain your response bodies, which can help keep your Apex code nice and tidy. Rather than implement an HttpCalloutMock interface, you can declare everything within your test coverage.  Here are usage examples of both the single StaticResourceCalloutMock and the MultiStaticResourceCalloutMock.

Test Method Usage of StaticResourceCalloutMock (from documentation)
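A sketch of the single-resource version ('mockResponse' would be a Static Resource containing the response body):

```apex
@isTest
static void testCalloutWithStaticResource() {
    StaticResourceCalloutMock mock = new StaticResourceCalloutMock();
    mock.setStaticResource('mockResponse');
    mock.setStatusCode(200);
    mock.setHeader('Content-Type', 'application/json');
    Test.setMock(HttpCalloutMock.class, mock);
    // ...make the callout and assert on the response...
}
```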


Test Method Usage of MultiStaticResourceCalloutMock (from documentation)
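And the multi-resource version, where each endpoint is paired with its own Static Resource (a sketch; endpoints and resource names are illustrative):

```apex
@isTest
static void testCalloutWithMultipleStaticResources() {
    MultiStaticResourceCalloutMock multimock = new MultiStaticResourceCalloutMock();
    multimock.setStaticResource('http://example.com/example1', 'mockResponse1');
    multimock.setStaticResource('http://example.com/example2', 'mockResponse2');
    multimock.setStatusCode(200);
    multimock.setHeader('Content-Type', 'application/json');
    Test.setMock(HttpCalloutMock.class, multimock);
    // ...make the callouts and assert on each response...
}
```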


Solution

So how do we go about setting up a mechanism to achieve test coverage in a method that requires multiple callouts, including multiple callouts to the same resource where different results are expected?  We'll leverage and extend the first solution, the HttpCalloutMock interface.  We'll define a constructor that accepts a map of callout methods to callout endpoints to a list of response details.  We'll also accept a boolean to control whether or not the responses should be re-used or thrown away so a different response can be provided next time.

Let's start with a sample class we'll be covering.  Here we have two methods (doCallout1 and doCallout2) that make a GET callout to two different endpoints (/resources/example1 and /resources/example2).  We also have a doCallouts() method that uses those callout methods; it calls Callout1 twice and Callout2 once.  It then returns a concatenated string of each callout's response body.

If we test without any customizations, using the standard mock interface, here's what it would look like:
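A sketch of that callout class (the class name and example.com domain are illustrative):

```apex
public class CalloutService {
    private static String doGet(String endpoint) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint(endpoint);
        req.setMethod('GET');
        return new Http().send(req).getBody();
    }

    public static String doCallout1() { return doGet('http://example.com/resources/example1'); }
    public static String doCallout2() { return doGet('http://example.com/resources/example2'); }

    // Calls Callout1 twice and Callout2 once, concatenating the response bodies
    public static String doCallouts() {
        return doCallout1() + ':' + doCallout1() + ':' + doCallout2();
    }
}
```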

Test Utility


Test Class
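A sketch of the test, assuming the callouts live in a class called CalloutService (an illustrative name), with the standard single-response mock inlined for brevity:

```apex
@isTest
private class CalloutServiceTest {
    // Standard mock: one fixed response per endpoint, no matter how often it's called
    private class SingleResponseMock implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req) {
            HttpResponse res = new HttpResponse();
            res.setStatusCode(200);
            res.setBody(req.getEndpoint().endsWith('example1')
                ? '{"example":"response1"}'
                : '{"example":"response2"}');
            return res;
        }
    }

    @isTest
    static void testDoCallouts() {
        Test.setMock(HttpCalloutMock.class, new SingleResponseMock());
        String result = CalloutService.doCallouts();
        // Both calls to example1 receive the same body - the shortcoming described above
        System.assertEquals(
            '{"example":"response1"}:{"example":"response1"}:{"example":"response2"}',
            result);
    }
}
```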

Our output would be:
{"example":"response1"}:{"example":"response1"}:{"example":"response2"}
Instead of:
{"example":"response1"}:{"example":"response1b"}:{"example":"response2"}
If we modify the Test Utility, we can get the expected results...

Test Utility

While we maintain the use of the HttpCalloutMock interface, we extend its functionality by providing a new object called "Resp" that will hold individual response bodies, statuses, status codes, and a boolean called "Discard."
A nested map, called ResponseMap, will be used to pair callout methods to endpoints and the endpoints to a list of these "Resp" records.
Method --> Endpoint --> LIST<Resp>
Within the respond() method of the interface, we'll get a list of Resps/responses from the map using the provided HttpRequest (from the respond() signature's HttpRequest parameter), and use the Resp at the top of the list to populate a newly instantiated HttpResponse's details (set its body, status, and status code).
To help with our original problem of providing different responses to calls that use the same endpoint, the "discard" boolean will be used to remove a Resp from the list once it has been used, so that on subsequent calls another Resp is used to populate the HttpResponse.
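Pulling those pieces together, the modified utility might look like this (a sketch; the outer class name is illustrative):

```apex
@isTest
public class MultiRequestMock implements HttpCalloutMock {
    // One mock response's details
    public class Resp {
        public String body;
        public String status;
        public Integer statusCode;
        public Boolean discard;   // true = use once, then remove from the list
        public Resp(String body, String status, Integer statusCode, Boolean discard) {
            this.body = body;
            this.status = status;
            this.statusCode = statusCode;
            this.discard = discard;
        }
    }

    // Method --> Endpoint --> List<Resp>
    public Map<String, Map<String, List<Resp>>> responseMap;

    public MultiRequestMock(Map<String, Map<String, List<Resp>>> responseMap) {
        this.responseMap = responseMap;
    }

    public HttpResponse respond(HttpRequest req) {
        List<Resp> resps = responseMap.get(req.getMethod()).get(req.getEndpoint());
        Resp r = resps[0];
        if (r.discard && resps.size() > 1) {
            resps.remove(0);   // the next call to this method/endpoint gets the next Resp
        }
        HttpResponse res = new HttpResponse();
        res.setBody(r.body);
        res.setStatus(r.status);
        res.setStatusCode(r.statusCode);
        return res;
    }
}
```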


Test Method

The test method is only slightly different.  Before calling Test.setMock(), we have to load up the ResponseMap with the responses that are required for the testing in that method.  Now, within our test method, we can set up everything that's needed: multiple methods and endpoints, with varying response bodies and results, as needed.
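A sketch of the test method, assuming the modified utility is a class named MultiRequestMock with an inner Resp class (illustrative names) and the callouts live in a CalloutService class:

```apex
@isTest
static void testDoCallouts() {
    MultiRequestMock.Resp r1  = new MultiRequestMock.Resp('{"example":"response1"}',  'OK', 200, true);
    MultiRequestMock.Resp r1b = new MultiRequestMock.Resp('{"example":"response1b"}', 'OK', 200, true);
    MultiRequestMock.Resp r2  = new MultiRequestMock.Resp('{"example":"response2"}',  'OK', 200, true);

    // Method --> Endpoint --> List<Resp>
    Map<String, Map<String, List<MultiRequestMock.Resp>>> responseMap =
        new Map<String, Map<String, List<MultiRequestMock.Resp>>>{
            'GET' => new Map<String, List<MultiRequestMock.Resp>>{
                'http://example.com/resources/example1' => new List<MultiRequestMock.Resp>{ r1, r1b },
                'http://example.com/resources/example2' => new List<MultiRequestMock.Resp>{ r2 }
            }
        };

    Test.setMock(HttpCalloutMock.class, new MultiRequestMock(responseMap));
    String result = CalloutService.doCallouts();
    System.assertEquals(
        '{"example":"response1"}:{"example":"response1b"}:{"example":"response2"}',
        result);
}
```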



Now you can run the test class and get the expected results:
{"example":"response1"}:{"example":"response1b"}:{"example":"response2"}

Dec 14, 2014

What Color Is It?

While swiping away at my tablet, like a madman with my morning cup-o-joe, I came across this novelty of a site:  http://whatcolourisit.scn9a.org/.


The idea is simple... take the hour, minute, and second of the current time and use those values combined as the page background's hex color (#hhmmss).

Admittedly, there's not a lot of business purpose here, but it can be a good development exercise to use as an introduction to re-rendering page components using actionPollers and a few various Visualforce functions.

Can it be done?  With the exception of the minimum polling interval being 5 seconds, you know it!

Here's the end result:



Here's the Visualforce page:
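A minimal sketch of the controller and page (names are illustrative; the actionPoller's 5-second floor is what drives the caveat above):

```apex
public class WhatColorController {
    // #hhmmss - pad each time component to two digits
    public String getHexColor() {
        Datetime now = Datetime.now();
        return '#' + pad(now.hour()) + pad(now.minute()) + pad(now.second());
    }
    public String getTimeText() {
        return Datetime.now().format('HH:mm:ss');
    }
    private static String pad(Integer value) {
        return String.valueOf(value).leftPad(2, '0');
    }
}
```

```html
<apex:page controller="WhatColorController" showHeader="false" standardStylesheets="false">
    <apex:form >
        <!-- actionPoller's minimum interval is 5 seconds -->
        <apex:actionPoller reRender="colorPanel" interval="5"/>
    </apex:form>
    <apex:outputPanel id="colorPanel" layout="block"
        style="width:100%;height:100vh;background-color:{!hexColor};text-align:center;">
        <apex:outputText value="{!timeText}" style="font-size:48px;color:#ffffff;"/>
    </apex:outputPanel>
</apex:page>
```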














Oct 10, 2014

From Admin to Developer: Learning to Code on Force.com Resources


From Admin to Developer: Learning to Code on Force.com
Are you an Administrator interested in learning more about how and when to use the programmatic tools in Salesforce? Join us for an introduction to Apex and Visualforce that is geared towards Salesforce Administrators. This session will include best practices, real life experiences with using code in Salesforce, and useful resources. You will leave this session with a foundation to start learning how to code in Salesforce and develop on the Force.com platform.

Use the links below to get started on your path to become a Developer!

Apex

  1. Guided Do-It-Yourself
  2. Follow Along
  3. Reading Documentation
  4. Independent Practice
  5. Advanced Learning

Visualforce
Best of luck on your journey!  Let us know how it is going @mgsherms @Scott_VS

Oct 9, 2014

Integrating Salesforce to the Carvoyant API

The Internet of Things. IoT. The connected world. A paradigm shift known by many different names. While how exactly it will look may be unclear, it is quite clear that the technological world as we know it is undergoing a rapid and dynamic change. Pick a vertical, and we see the evolution. For instance, consider transportation, or more specifically cars. We've come a long way from the horseless carriages of yore; every modern car has a computer in it that controls its operations, called the Electronic Control Module (ECM). Not only does this control many of the car's functions, but it also digitizes these actions. While the ECM is specific to a particular manufacturer or vehicle, there is an interface called the 'On Board Diagnostic' (OBD) connector that serves as a universal gateway to the digitized data of a car's inner workings.

Taking advantage of this interface are devices such as Carvoyant's connector, which plugs in to the OBD port and collects your car's data for your use through their API. This allows you to retrieve details such as your car's movements and position, fuel usage, maintenance needs, and much more. For companies whose businesses revolve around many cars or a fleet, the prospect of accessing their data on demand and/or in near real time can be very attractive.

 Setting up Carvoyant

1. Refer to Carvoyant's getting started and documentation here: http://confluence.carvoyant.com/display/PUBDEV/Getting+Started

2. Go to http://developer.carvoyant.com and register for a user account.

3. You will fill out some information for the application you create. The "Register Callback URL" field is very important -- this must be the page where your web app performs the OAuth2 authentication and processes the authorization code to exchange for the access token.

4. Now that you have an account, we need some data. Go to https://sandbox-driver.carvoyant.com/ . Here you must register for a driver account (different from the user account you set up earlier).

5. Create a few cars. Either use your own cars' VIN numbers or search the web for some of your favorite cars and use those. Don't tell them I sent you.

6. Now let's make some trip data. Go to https://sandbox-simulator.carvoyant.com/ and login once again using your driver account. Click on two points on the map and a series of waypoints will be created between them. You can add more color to your data with the properties to the left, such as fuel usage and engine temperature.

7. That's it! We now have some data that we can use in Salesforce once we perform our integration.

 Salesforce Development

The API uses the standard server-side OAuth2 authorization flow. This involves passing an application key and client secret to the API, getting back an authorization code, and exchanging this code for a token from the API. Authentication is then complete, and the token is stored and used for subsequent callouts to the API.
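In Apex, the code-for-token exchange might be sketched like this (the token endpoint path is an assumption to confirm against Carvoyant's documentation; all variables are placeholders):

```apex
// Placeholders - populate from your Connected App details and auth flow
String authCode     = '...';
String clientId     = '...';
String clientSecret = '...';
String callbackUrl  = '...';

HttpRequest req = new HttpRequest();
// Illustrative endpoint - check Carvoyant's docs for the real token URL
req.setEndpoint('https://sandbox-api.carvoyant.com/oauth/token');
req.setMethod('POST');
req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
req.setBody('grant_type=authorization_code'
    + '&code=' + EncodingUtil.urlEncode(authCode, 'UTF-8')
    + '&client_id=' + clientId
    + '&client_secret=' + clientSecret
    + '&redirect_uri=' + EncodingUtil.urlEncode(callbackUrl, 'UTF-8'));

HttpResponse res = new Http().send(req);
// Parse the token out of the JSON response and persist it (we used a custom setting)
Map<String, Object> payload = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
String accessToken = (String) payload.get('access_token');
```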

 Let's do a simple callout to the API to get our vehicles and their positions. We will then store them as records in Salesforce and map their position on a Visualforce page with Google Maps. This is what we will end up with (or something similar for the data you create):

The code below consists of a few components. We performed our authentication and saved our token and credentials in a custom setting. The class "CarvoyantIntegration" builds our HTTP request for us by using the results from authentication, the target API endpoint (https://sandbox-api.carvoyant.com/sandbox/api), and the resource/HTTP method passed in from our loadVehicles method. We deserialize the response from the Carvoyant API using the Vehicles/Vehicle inner classes, and thereby have the properties of the vehicle for our use later in the processVehicles method, where we take their values and create records in Salesforce.
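A sketch of that structure (the custom setting and the JSON field names are assumptions; match them to your org and the actual Carvoyant response):

```apex
public class CarvoyantIntegration {
    private static final String API_BASE = 'https://sandbox-api.carvoyant.com/sandbox/api';

    // Build and send a request to the given resource, using the token saved during
    // authentication (Carvoyant_Settings__c is a hypothetical custom setting)
    public static HttpResponse callApi(String resource, String method) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint(API_BASE + resource);
        req.setMethod(method);
        req.setHeader('Authorization',
            'Bearer ' + Carvoyant_Settings__c.getOrgDefaults().Access_Token__c);
        return new Http().send(req);
    }

    // Deserialization targets for the vehicle response
    // (field names are illustrative - align them with the actual JSON)
    public class Vehicles { public List<Vehicle> vehicle; }
    public class Vehicle {
        public String vehicleId;
        public String name;
        public Double lastWaypointLatitude;
        public Double lastWaypointLongitude;
    }
}
```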

The class below contains a getter that merely fetches the vehicle data created from our callouts for use in our Visualforce page.

We will use some of the properties in our table of vehicle information, and use the last waypoint latitude/longitude to map the last known location of the vehicles. The page has code for displaying vehicle record information as well as creating markers for the vehicles we queried for in the controller above. Check out http://blog.crmscience.com/2014/09/lab-coat-lesson-google-maps-api.html for more detailed information about the Google Maps JavaScript API.

What's next? Maybe you want to go in the direction of getting data for your vehicles when there is a change. You can use Carvoyant's subscription service and the Salesforce Streaming API, as they have described here: http://confluence.carvoyant.com/display/PUBDEV/Force.com. Perhaps you want to send emails when something goes wrong with a car, or better yet perform some preventative actions when a car hits certain thresholds during its usage, such as mileage and engine temperature. Or maybe you want to know when certain cars enter certain areas you demarcate, known as geofences.

While we are only beginning our journey into the connected world, steps like these will prove to be instrumental in guiding that course and our expectations of it. Do we want cars to communicate with each other? Perhaps leading to more efficient routing and a final end to traffic jams? Can we optimize our infrastructure through analysis of this data? Will this sort of thing help us keep an eye out on our autonomous cars? Control them even? There is a plethora of ideas in this realm that are waiting to be implemented. And what's next... you decide.


Oct 5, 2014

Lab Coat Lesson: Salesforce Arduino Gateway




The maker movement is alive, well, and has been very much reinvigorated over the last few years. Makers and tinkerers have been creating amazing projects and supporting each other in a large online community, not all too different from our own Salesforce community. The tail end of 2013 and 2014 brought a wave of "smart" wearable tech - 2015 will be no different (check out some of the ideas on Kickstarter).

The contributions to hobby electronics made by companies like Arduino, Adafruit, and Sparkfun make it possible for anyone (yes, anyone - including you) to create a physical working version of something you thought would be a great idea.  Think web controlled pet bowls, smart fishtanks, web-enabled weather logging stations, sun tracking window blinds...

Adafruit created the CC3000, a Wi-Fi enabled breakout board (also available as a shield) featuring a Texas Instruments chip that allows you to add Wi-Fi capabilities to your projects.  You can send web requests to servers or even treat your device as a basic web server to receive requests.

Then along came the Arduino Yún (Chinese for "cloud", by the way) board - the best of the Arduino you may already know and love with a bridge over to an embedded Linux chip.  Now you've got more flexibility in your data processing and web handling.  You could easily use a Raspberry Pi (a fairly inexpensive embedded Linux board) for the project in this post, but then you lose out on the fun of working with the new Yún.  If it were up to me, I'd be saying in my best Oprah voice, "You get a Yún!... and you get a Yún!... and you get a Yún!..." while living in a house made of Yúns - big fan, Massimo Banzi, big fan.

So what are we going to build?   How about a little switch box to help quickly destroy all the evidence?



Alright - let's not be destructive, so let's create a few test records via the push of a button.

Sure, you could always execute the same script anonymously, but let's be honest, this seems like more fun!

How's this going to work?  We'll wire together an Arduino Yún and connect to it an LCD and a rotary knob.  Users will twist the rotary knob clockwise or counter-clockwise to select an object (Account, Contact, Lead, Opportunity, or Case).  The user will then push the knob (hooray, built-in switch!), which will send a request across the Yún's bridge to the Linux side to kick off a Python script.  This script will call out to your Salesforce org, which will be using an Apex REST class to kick off the destruction (er, creation) of copious amounts of test data and send back a response to the board.

Not so pretty.

What You'll Need:

Hardware

  1. Arduino Yún (1)
  2. Rotary Encoder Switch
  3. LCD w/ Serial Backpack (backpack allows for LCD use with only 5V, ground, and one data lead instead of the normal rat's nest)

Software

  1. A Dev Org (or sandbox, but definitely not production)
  2. Arduino 1.5.5 (beta or later - support for Yún necessary)

Wiring

Note:  I'm using the LCD Backpack (linked above) that uses 3 wires.  The above locations on the LCD are not correct and don't represent the use of the backpack.

Apex Class:  ArduinoGateway


Debug Log Output

Connected App

We'll use a Connected App to provide us with an endpoint to authenticate against so we can securely use our custom Apex Rest classes.
  1. Setup --> Build --> Create --> Apps
  2. Scroll down to the "Connected Apps" section
  3. Click on the "New" button
  4. Populate the "Connected App" details
  5. Check the "Enable OAuth Settings" checkbox and provide the following details:
  6. Click on the "Save" button
  7. Copy the Consumer Key and the Consumer Secret - you'll need these for the Python script below

Arduino Sketch:  ArduinoGateway.ino
Overview:  Our sketch is very simple - in a nutshell, flipping our final switch runs a Python script.  More verbosely, after the Yún board boots, it listens for a signal on the switchPin (13).  The switchPin will be hot after our final safety toggle is flipped (and illuminated!).  At that moment, we'll use the Yún's bridge from the Arduino side to the Linux side to kick off a new Python process to run our script, which calls out to our Salesforce endpoint.
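The heart of the sketch, reduced to the bridge-and-process mechanics (a sketch; the object/count parameters and simplistic debounce are assumptions, and the full version also drives the LCD and rotary encoder):

```cpp
#include <Bridge.h>
#include <Process.h>

const int switchPin = 13;

void setup() {
  Bridge.begin();              // start the Arduino-to-Linux bridge
  pinMode(switchPin, INPUT);
}

void loop() {
  if (digitalRead(switchPin) == HIGH) {
    Process p;                 // run a process on the Linux side of the Yun
    p.begin("python");
    p.addParameter("/home/root/ArduinoGateway/CallSalesforce.py");
    p.addParameter("Lead");    // sObject type selected via the knob
    p.addParameter("10");      // number of records to create
    p.run();                   // blocks until the script completes
    delay(1000);               // crude debounce
  }
}
```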


After you've uploaded the Arduino sketch to the Yún using the Arduino IDE, you'll need to create the Python script.  Here are the general steps to do so:
  1. ssh <username>@<yun_ip_address> (IE:  root@192.168.1.8)
  2. If prompted, enter your password (Default:  arduino)
  3. "Yes" if prompted to add RSA key to list of known hosts
  4. cd /
  5. ls
  6. mkdir -p /home/root/ArduinoGateway/
  7. vi /home/root/ArduinoGateway/CallSalesforce.py

Python Script:  CallSalesforce.py                                               
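The script's general shape is sketched below: authenticate, then hit the custom Apex REST endpoint with the object type and record count passed in from the Arduino sketch.  The /services/apexrest/ArduinoGateway path and all parameter names are assumptions, and note the Yún's Linux side typically ships Python 2.7, so adjust imports accordingly.

```python
import urllib.request

TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

def build_token_params(client_id, client_secret, username, password):
    """OAuth2 username-password flow parameters (Consumer Key/Secret from the Connected App)."""
    return {
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password,
    }

def build_service_url(instance_url, sobject_type, count):
    """URL of the custom Apex REST service (path and parameter names are illustrative)."""
    return "%s/services/apexrest/ArduinoGateway?type=%s&count=%s" % (
        instance_url, sobject_type, count)

def create_records(instance_url, access_token, sobject_type, count):
    """POST to the service; returns the raw response body."""
    req = urllib.request.Request(
        build_service_url(instance_url, sobject_type, count),
        data=b"",  # empty POST body; parameters travel in the query string
        headers={"Authorization": "Bearer " + access_token})
    return urllib.request.urlopen(req).read()
```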



So there you have it - with all those pieces in place, you should be able to twist the knob and cycle through the various sObject types.  Within the Apex Class above, we only handle Lead and Account scenarios - but you can just as freely modify all of the code to kick off any Apex code via the webservice class.  Likewise, within the Python script, you'll see that it's currently configured to take two parameters, one for the object type and one for the number of records to create.  The Arduino sketch can be modified to allow for the selection of an sObject type, then the number of records to create, before sending the request to the web service.



Have fun and please share any projects you have in mind or have done!







Sep 22, 2014

Lab Coat Lesson: Using Bootstrap in Visualforce


During the Summer of Hacks last July, I worked with a pair of coders with no prior Salesforce experience. While I worked on the Apex code, I had them work on the front end because Visualforce could just be straight up HTML if you wanted it to be. So the devs - both of whom had far more experience in modern front-end tools and libraries than I did - decided to use Bootstrap for the front end. But this decision came with a stipulation:

"We had to disable the header and standard style sheets for it to work."

This was okay with me at the time because we were building a Salesforce1 app and the mobile container did not require any headers. But now that I want the page to function in desktop Salesforce just as well as the mobile version, I've put the header and standard stylesheet back on.



Yuck! That's no good! Look at what happened to the Salesforce header and tabs. There is obviously some CSS from the third-party library overriding Salesforce's. Time to figure out how to use Bootstrap and Salesforce together.


Sep 15, 2014

Lab Coat Lesson: Building better Analytic Snapshots



Analytic snapshots should be in every admin's toolbox. The ability to report on historical data and track trends over time is beneficial to nearly all organizations. The Salesforce Help & Training documentation is a good place to start for beginners learning to set up and configure analytic snapshots for their organization.

Here are some useful tips, best practices, and workarounds for the admin looking to make analytic snapshots more efficient:
  • Find out how Schema Builder can save you clicks and time 
  • Map a lookup field 
  • Use record types in analytic snapshots 
  • Save and protect your work in a secure report folder

Use the Schema Builder to create your target object and fields!

Create your target object using the Schema Builder. You most likely will not be creating a tab for this object and will not need to use the wizard. This is the most efficient method for creating your target object to house the fields and data of the source report. You will save many clicks using the Schema Builder instead of adding fields to an object from the setup menu.

Click Setup > Schema Builder. This is a drag-and-drop environment you can use to create your new target object and all the necessary fields to map from the source report.

First, click on the Elements tab and drag Object into the work pane.

NOTE: Make sure you select Allow Reports or you will have trouble later!



It is recommended that you create your source report first. This way, you will know which fields you want to archive. You can quickly create the fields in the target object based on the fields in the source report.

Using the Schema Builder, drag a field type from the side panel and drop it in your new target object. The field types must be compatible. Most are pretty straightforward; you will usually use the same data type from the source field for your target field (for example, the Amount field maps to a currency field). Make sure you pay attention to field length and decimal places for currency, number, and percent fields. Make sure text fields have at least the same character length. There are some trickier scenarios, which we will cover now…

Mapping a Lookup Field:

Following our example of creating an opportunity analytic snapshot, how would we archive the Account Name of the opportunity? This is a lookup field, so create a lookup field related to Accounts in the target object.

To map a value to this lookup field, you must map the Account ID. Make sure Account ID is included in your source report.


Mapping Record Types

Unfortunately, and despite grumblings from the Salesforce community, you cannot map record types in an analytic snapshot. There is, however, an easy workaround using formula fields. We will store the record type name in a custom field, include this field in the source report, and map it to a text field on the target object.

Formula Field for Record Types

Here at CRM Science, we found an even easier way to map record types to analytic snapshots than this Salesforce knowledge article suggests: Can I map the record type in an Analytic Snapshot?

It recommends creating a text formula field and entering the following formula:



You see, following this article's advice, you would have to actively manage your formula. What if you created new record types? Or changed the name of an existing record type? You would have to update the formula.

Follow this simple advice to create the formula and forget it! It will manage itself.

  1. Create a Text Formula Field on the object 
  2. Name it “Record Type” or something similar. I do not use “Opportunity Record Type” in order to distinguish it from the standard field's name.
  3. Enter the following into the formula:  “RecordType.Name”
  4. Save and forget it! 
This formula will simply copy the record type name into the field. No need for all that extra code that you will have to remember to update later when record types change.

Create an “Analytic Snapshot” folder to protect all your hard work

The last thing you need is to set up, configure, and schedule the analytic snapshot only to have an end user make a change to the source report and ruin everything!

I cannot stress this enough: save your source report to a folder that is Read Only (or grant only Viewer access, if the new report sharing model is enabled) for all users except those trusted to make changes to analytic snapshots.

If non-admins have edit access to the source report, they can (and we all know, will) edit the report. This puts all the field mappings at risk of breaking and your analytic snapshot failing. So protect your hard work and create a Read Only report folder for your source report!

This advice should prove useful for seasoned analytic snapshot creators. If you are new to analytic snapshots, remember these tips! To get started, check out this page to learn how to set up and configure analytic snapshots: Report on Historical Data with Analytic Snapshots.

Sep 8, 2014

Lab Coat Lesson: The Google Maps API



This blog post will discuss the various Google Maps web APIs and their features. We will show you a few ways to use them in your next Salesforce project and give you some sample code to get you started right away. You will be able to add maps to your record details, in PDFs, in custom Visualforce pages, and, with a little thought, anywhere else you so desire within your Salesforce instance. Your best help on this adventure will be from Google itself at http://developers.google.com/maps/.

We will be covering the following Google Maps APIs:
  • Embed - allows you to iframe a map
  • Geocoding - get the coordinates for an address
  • Static Maps - create an image of a map
  • JavaScript - fully customizable map

Instant Gratification: Google Maps Embed API
Do you have a record with an address field on it and desire to show its location on a map? The Embed API will accomplish that for you with very little effort on your part. With a couple dozen lines of code you can create an inline Visualforce page on an object of your choice (such as Account or Lead) that shows the record's location! Here's the finished product:



Here is the Visualforce code that produced this:
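A sketch of the page (the controller name, fullAddress property, and YOUR_API_KEY are illustrative):

```html
<apex:page standardController="Account" extensions="MapController">
    <!-- fullAddress is supplied by the controller; URLENCODE handles spaces and commas -->
    <iframe width="100%" height="300" frameborder="0" style="border:0"
        src="https://www.google.com/maps/embed/v1/place?key=YOUR_API_KEY&q={!URLENCODE(fullAddress)}">
    </iframe>
</apex:page>
```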


All we are doing here is passing in the address fields as parameters into the iframe source.

**Don’t forget to add http://maps.googleapis.com to your Remote Site Settings! We keep it simple for now and assume that the record has values for the Street Address, City, State, Zip Code, and Country fields. The controller gathers these values for the page like so:
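A sketch of that controller, assuming the standard Account billing address fields (the post's object and field choices may differ):

```apex
public class MapController {
    private final Account acct;

    public MapController(ApexPages.StandardController ctrl) {
        acct = [SELECT BillingStreet, BillingCity, BillingState,
                       BillingPostalCode, BillingCountry
                FROM Account WHERE Id = :ctrl.getId()];
    }

    // Concatenate the address fields for the Embed API's q parameter
    public String getFullAddress() {
        return String.join(new List<String>{
            acct.BillingStreet, acct.BillingCity, acct.BillingState,
            acct.BillingPostalCode, acct.BillingCountry }, ', ');
    }
}
```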



That’s all there is to it! This can be extended further by passing the “Mode” parameter, for example directions. You could even potentially show directions based on records of a related list by passing all of their addresses as parameters to the iframe src.

Latitude and Longitude? Google Maps Geocoding API
Geocoding standardizes your addresses with high accuracy, a very important need in businesses. Slight variations in addresses can lead to dramatically different interpretations as to their physical locations. Different parts of the world build their addresses in different ways. And do we really want to keep lugging around all those various fields each time we try to map? Using a latitude and a longitude sets the frame of reference to an exact point on the sphere that is our dear planet Earth. This also serves as an address validation tool — failure to return a latitude and longitude pair would imply that the location could not be found in the Google (in this case) database. 

We’ll add this code to the page controller we created earlier so we can perform a geocode through the page while viewing the record. Let’s see how it’s done:


We added a button and an output field to see what we’re doing from the inline visualforce page we built earlier:


The result is this :


We pressed the button, geocoded the address, printed the result on the page, and also updated the record’s address field with the coordinates. Geocoded!

Need an image? Google Maps Static API
Suppose you would like to include your new map in a PDF. Well, we can't go about embedding something dynamic like an <iframe> in a PDF, but what we can do is get an image of the map and embed that instead. The image tag could be stored somewhere like a field on the record, perhaps built in the controller from address fields on the record. Here is the result:


Image__c here is a rich text field that simply contains an image tag with an src leading to the Static Maps API endpoint, passing in lat/lng coordinates from our Address__c geolocation field as parameters. We added the following method in the controller, which is called by a button on the page:
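A sketch of that method, assuming `record` is the current record with the Address__c geolocation and Image__c rich text fields (zoom, size, and YOUR_API_KEY are illustrative):

```apex
public void buildStaticMapImage() {
    // Geolocation fields expose their components as __Latitude__s / __Longitude__s
    String lat = String.valueOf(record.Address__Latitude__s);
    String lng = String.valueOf(record.Address__Longitude__s);
    String src = 'https://maps.googleapis.com/maps/api/staticmap'
        + '?center=' + lat + ',' + lng
        + '&zoom=13&size=600x300'
        + '&markers=' + lat + ',' + lng
        + '&key=YOUR_API_KEY';
    record.Image__c = '<img src="' + src + '"/>';
    update record;
}
```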


THE Google Maps Javascript API
If you really want to unleash the potential of Google Maps look no further than the Google Maps Javascript API.  Using the Javascript API allows you to wield much greater power and control over your maps behavior, adding layers of interaction and features unavailable through just the Embed API.
First let's improve on what we did earlier with the Embed API -- let's now show all World__c records on the map in one go:

Let’s take a look at the code used to generate the above. 

(1.) The heart of the map functionality is generated in the initialize() function, called on window load. 
-MapOptions is used to specify characteristics of the map, in this case where to center the map (location of the current record) and the zoom level
-The div to be used to contain the map is specified — the API now knows where to put the map it generates
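The steps above can be sketched as follows (a fragment for a Visualforce page's script block; the map-canvas div id and merge fields are assumptions):

```javascript
function initialize() {
    // Center the map on the current record's coordinates (merged in by Visualforce)
    var mapOptions = {
        center: new google.maps.LatLng({!latitude}, {!longitude}),
        zoom: 8
    };
    // Hand the API the div it should render the map into
    var map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions);
}
google.maps.event.addDomListener(window, 'load', initialize);
```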





(2.) We want to show all the World__c records on the map, so we build a list of them (AllLocations property) in the new controller we created as follows:



-This simply gathers the relevant record properties (coordinates, name) and bundles them up to be used in the script.
-A marker is instantiated for each record and its coordinates are set



Reasonable enough... But what about actually creating the World__c records? We will create an example where we can drop a pin anywhere in the world, and create a record in Salesforce right then and there — from within the map! This is what it will look like:



We want to capture the event of the user clicking on the map, get those coordinates, and have the option to create a new record from the location the user clicked on. Here is what we added to the initialize() method in the page code used to accomplish all this: 


                       

There may seem like a lot is going on there so let’s take a closer look. 


(3.) There are a multitude of events that the API could potentially respond to, one of which is a click of the mouse on the map. We use an event listener here to capture the coordinates of the mouse click to help us build a record from it later.

(4.) We create a marker on the spot where the mouse was clicked. 
-We add a click listener on the marker so that we can show a popup dialog (called an infobox) 
-We define the infobox to have an input field, a button, and the lat/lng coordinates. We used the counter here to uniquely identify the infobox being used, as the user can add multiple pins to the map (and hence have multiple infoboxes in the DOM). For each marker added in this way, we increment the counter to keep track of the input field. 

(5.) Once the user clicks the save button in the infobox, we pass the latitude, longitude, and counter to a javascript method
- The method gets the value of the inputted name based on the counter passed to it (the id of the element is a concatenation of “name” + counter, e.g. “name4” for the 4th marker created this way)
- Call a method in the controller to create the record with the coordinates and name specified : 

*The method is marked as global to allow it to be called from within the inline page

And voila! We have created a new record from the map! 




While this post only skimmed the surface of the APIs, we hope that it has set you in the right direction in beginning your own explorations of them. Try creating directions between records in your org by using a polyline, or adding record details to an infobox, perhaps an image roll... There are any number of things you can still do that are fun to try and, thankfully, are also very well documented.