Jan 16, 2015

Managing SFDC Credentials Using KeePass

Fun Fact:  The longer you work with the Force.com Platform, the more sets of credentials you'll have to maintain.

I should invest in a tally click counter to keep track of the number of times I log into various (and often the same) Salesforce orgs throughout the day.  At a certain point, to maintain your sanity, you need some sort of assistance to keep track of all your Salesforce org details.
  • Username
  • Password
  • Token
  • URL - login, test, etc
  • Org Type - Group, Professional, etc
  • Notes about the org's purpose - Dev Org for XYZ Project
Now, I've used the Force.com Logins Chrome extension for the last few years to help keep track of a few of those items, as well as to pop open a new session in a new window, tab, or Incognito session.

While it's easy to search, it had grown unmanageable recently; the list kept getting longer and I had a lot of old/stale entries.

There were also a few long standing drawbacks:
  • Credentials are stored via Chrome's local storage as opposed to sync storage, meaning the credential data, an XML structure, stayed on one computer and wasn't replicated across all of my devices the way Google's sync normally works.  By day, I'm often on one computer, which frequently requires updates to these credentials.  By night, I'm on another computer and, let's face it... getting up off the couch after a long day to retrieve your token from another computer is not going to happen.  An entire data-duplication nightmare.
  • Force.com Logins is a Chrome extension, so while I can open Chrome tabs, windows, and Incognito tabs just fine, sometimes you want or need a Firefox or Internet Explorer window.  There's a Firefox version (I don't know if it's associated with Appirio or not), but I had trouble exporting my Chrome extension credentials and importing them into the Firefox version.
So, what to do?  There are all kinds of password managers out there.  LastPass and 1Password are two great examples.  Check out LifeHacker's Top Five Best Password Managers list to get a better feel for what they do.

In a previous life, before the cloud was "the cloud," I managed various systems credentials via an application called KeePass.  KeePass is a free, open-source program that allows you to create, organize, and store your various credentials in a local database, a .kdbx file.  You can keep this encrypted database on your hard drive or flash drive and use it to store endless hierarchies of credentials for all of your commonly visited websites, including Salesforce.  

I can quickly overcome both of the drawbacks bulleted above with KeePass:
  • While the .kdbx file that stores all of your passwords is saved to your computer, there's nothing preventing you from storing the file in a Dropbox folder.  Sync that folder across your devices and never be left without your credentials again.
  • Being open-source, there are various Chrome extensions, Firefox add-ons, and Internet Explorer... I don't even know what you call them... thingamabobs... that connect your browser to the KeePass program running in your system tray.  These can provide a list of credentials for the website you're currently visiting or an auto-population of creds, if you want.
There's some additional perks too:
  • I don't have to rely on those extensions, add-ons, and thingamabobs.  I can customize KeePass to have custom buttons to open a browser of my choice, in the standard mode or Incognito/Private mode, and automatically log into Salesforce for me - replicating the functionality of the Force.com Logins Chrome Extension, but allowing me to use the browser of my choice.
  • There's a global keyboard shortcut (CTRL+ALT+K) that allows me to quickly find my credentials and get logged in, without taking my hands off the keyboard.
  • There's an "AutoType" feature that lets me assign a shortcut key to the selected credential entry; it takes fields from my credential (username, password, URL, custom - oh, did I mention the ability to create custom fields for additional data points... like your token?) and automatically types that string into another window for you.  Admins and Developers - how many times do you have to type in your password, go look up what your latest security token is, and then copy/paste it after your password for new Data Loader sessions or IDE projects?  Imagine highlighting your credential, pressing a set of keys, and letting the program do the work for you.
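For example, AutoType sequences are built from placeholders.  Assuming you named your custom field "Token" (as described later in this post), a sequence like the following - a sketch, not pulled from my actual config - would type username, password, and password+token into a login form:

```
{USERNAME}{TAB}{PASSWORD}{S:Token}{ENTER}
```

In KeePass 2.x, {S:FieldName} references a custom string field defined on the entry's "Advanced" tab.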
Ok great.  It does a lot and solves my problems, but is it going to help you?  Maybe - give it a shot.  Fair warning: this approach of auto-logging you into a Salesforce org is no more secure than the Force.com Logins extension, and it is very possible your credentials could be seen within your browser history or URL bar upon login.  Check out the reviews of the Force.com Logins extension for plenty of feedback about that.

The rest of this blog post will demonstrate how to:

1)  Set up a new KeePass Database
2)  Add a Group
3)  Add an Entry
4)  Include a custom Token field
5)  Set up KeePass Trigger Buttons
6)  Create/Assign KeePass Trigger Actions to open creds in the browser and mode of your choice.

Set Up a New KeePass Database

1)  The first thing you'll want to do is download KeePass, install it, and open it up.
2)  Click on File --> New... to create a new database.  Give it a name and save it in a location of your choice.  This is where I save my file to my synced Dropbox folder.  Once you save it, you always have the ability to rename and move the file.

3)  Next, you'll create a master key.  Pick a strong phrase that you won't forget.  At this point, you can also create an additional key file to store separately (maybe on a USB drive) that you'll need to provide in addition to your master key for some extra security.

4)  Within the Database Settings screen, you can provide a name and description for your credential database.  Check out the other tabs for more security and a few other storage-related settings.

Add a Group

1)  Now that your database is created, you can start structuring your credential folders to stay organized.  You can create a top-level "Salesforce" folder and then create sub-folders for clients, projects, etc.  Do this by going to Edit --> Add Group.

Add an Entry

1)  Now you can start populating your entries.  Click on Edit --> Add Entry... and provide details about your credentials.  Here I provide a quick Title (label) for the cred, usually something along the lines of "Testing:  Feature X" or "Development Org:  Feature X."  To take advantage of the browser abilities we'll be using later, be sure to provide either the login or test.salesforce.com URL in the appropriate field.  There are a few different ways to skin this cat, but using the URL field will also allow you to auto-fill/suggest credentials if you use one of the various KeePass-related browser add-ons.

2)  Click on the "Advanced" tab, followed by the "Add" button to add a new custom string field.

3)  Give it the name "Token" (be consistent about this) and then paste your security token in the "Value" box.

KeePass Triggers


Nope, not Apex triggers - KeePass triggers.  KeePass triggers let you create your own custom features within KeePass, without any development.  We'll be creating two triggers: one a button, and one an action for when the button is clicked.  That functionality will be opening the browser of your choice, in standard or private mode, and logging into Salesforce with the credentials you have highlighted.

If you configure all the browsers, you'll end up with these triggers:

Which will result in these custom buttons within the tool:

Creating the Buttons

1)  Go to Tools --> Triggers...
2)  Provide a name for this trigger.  I prefix mine with "Button:  " just to keep them straight in the list.

3)  Go to the "Events" tab, click on the "Add..." button

4)  Choose the "Application started and ready" option.

5)  Leave the "Conditions" Tab blank.

6)  On the "Actions" Tab, click on the "Add..." button.

7)  Within the "Action:" dropdown, choose "Add custom toolbar button."  Then provide an ID, Name, and Description for this new button.  The ID is important, as it will be referenced by the next trigger you create.

Assigning an Action to a Custom Button

1)  Create another trigger, providing a name for it on the "Properties" tab.  This time, I prefix the name with "Action:  "

2)  On the "Events" tab, click on the "Add..." button

3)  From the "Event:" dropdown, choose "Custom toolbar button clicked" and next to "ID" provide the ID of the button to be clicked (you defined this in step 7 above).

4)  Again, leave the "Conditions" tab empty.

5)  On the "Actions" tab, click on the "Add..." button.

6)  From the "Action:" dropdown, choose the "Execute command line / URL" option.  Provide the following as parameters for the "File/URL" and "Arguments" options.  Here we're using the equivalent of Salesforce merge fields to populate the setting with a value from the credential.

Arguments:  {URL}?un={USERNAME}&pw={PASSWORD}
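To make the merge-field behavior concrete, here's a small Python sketch of the substitution the trigger action performs (this is an illustration, not KeePass's actual implementation, and the example credential values are made up):

```python
# Hypothetical sketch: each {PLACEHOLDER} in the argument string is
# replaced with the matching field from the highlighted credential entry.
def expand_arguments(template, entry):
    for field, value in entry.items():
        template = template.replace("{" + field.upper() + "}", value)
    return template

entry = {
    "url": "https://test.salesforce.com",
    "username": "me@example.com.dev",  # example values, not real creds
    "password": "hunter2",
}

print(expand_arguments("{URL}?un={USERNAME}&pw={PASSWORD}", entry))
# https://test.salesforce.com?un=me@example.com.dev&pw=hunter2
```

The expanded string is what gets handed to the browser executable you chose in the "File/URL" option.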

Now when you open KeePass, you'll have a few additional buttons within the program's toolbar for you to use.
1)  Highlight the credential you want to use
2)  Click on the appropriate custom toolbar button to get logged into Salesforce

Update #1 - 1/22/2015

About a week into using this over the Force.com Logins extension and I've adapted well.  The extension is gone and my habit of clicking there is gone.

The process is easy:
Ctrl + Alt + K opens KeePass
Ctrl + F opens the search prompt
And I'm off!

Here are the URLs and the arguments that I used for my six buttons; two for each browser (Chrome, Firefox, and Internet Explorer).  One for normal browsing and one for that browser's private mode.

Google Chrome - Standard
Arguments:  {URL}?un={USERNAME}&pw={PASSWORD}

Google Chrome - Incognito
Arguments:  -incognito {URL}?un={USERNAME}&pw={PASSWORD}

Firefox - Standard
Arguments:  {URL}?un={USERNAME}&pw={PASSWORD}

Firefox - Private Mode
Arguments:  {URL}?un={USERNAME}&pw={PASSWORD} -private-window 

Internet Explorer - Standard
Arguments:  {URL}?un={USERNAME}&pw={PASSWORD}

Internet Explorer - Private Mode
Arguments:  {URL}?un={USERNAME}&pw={PASSWORD} -private 

Dec 22, 2014

Test Coverage Pattern for Multi-Callout Methods

When you're developing Apex code for integrations with external systems, an issue you always need to overcome is the creation of test coverage to cover your various methods responsible for making callouts to one or more endpoints.

Existing Resources

Salesforce provides a few different ways for you to achieve this:
  1. HttpCalloutMock Interface
  2. StaticResourceCalloutMock
  3. MultiStaticResourceCalloutMock


However, all three have a similar shortcoming when additional complexity is needed.  With systems integrations, it's not uncommon to require multiple callouts within the same execution.  The three examples from Salesforce can handle that just fine... as long as you don't need to use the same endpoint more than once AND expect different results.

Before we take a look at a solution, here are some details on the existing testing mechanisms and sample usages from the Salesforce documentation.

HttpCalloutMock Interface

The HttpCalloutMock interface allows you to create a respond() method in a test utility class where a response is constructed.  Within the test coverage class, you tell your test to use the mock utility with a Test.setMock() call.

TestUtility (from documentation)

In that code snippet, note the "implements HttpCalloutMock" interface declaration.  HttpCalloutMock requires a respond() method, which accepts an HttpRequest parameter.  Within this method, an HttpResponse is constructed.

Test Method Usage (from documentation)

If you look at their comments within the respond() method, you could intelligently create an HttpResponse based on the request - however, for that endpoint, you'll always receive the same response.  That's not ideal if you're making multiple calls within an execution context and need different results.  Just a few examples might include testing paging ("next_page":2), date/time stamp requirements (if date/time > last received date/time), record count calculations (count # increase after POST), and so on.

StaticResourceCalloutMock and MultiStaticResourceCalloutMock

Using these classes, you can leverage Static Resources to maintain your response bodies, which can help keep your Apex code nice and tidy.  Rather than implement an HttpCalloutMock interface, you can declare everything within your test coverage.  Here are usage examples of both the single StaticResourceCalloutMock and the MultiStaticResourceCalloutMock.

Test Method Usage of StaticResourceCalloutMock (from documentation)

Test Method Usage of MultiStaticResourceCalloutMock (from documentation)


So how do we go about setting up a mechanism to achieve test coverage in a method that requires multiple callouts, including multiple callouts to the same resources where different results are expected?  We'll leverage and extend the first solution, the HttpCalloutMock interface.  We'll define a constructor that accepts a map of callout methods to callout endpoints to a list of response details.  We'll also accept a boolean to control whether or not the responses should be re-used or thrown away so a different response can be provided next time.

Let's start with a sample call we'll be covering:

Here we have two methods (doCallout1 and doCallout2) that make a GET callout to two different endpoints (/resources/example1 and /resources/example2).  We also have a doCallouts() method that uses those callout methods; it calls Callout1 twice and Callout2 once.  It then returns a concatenated string of each callout's response body.

If we test without any customizations, using the standard mock interface, here's what it would look like:

Test Utility

Test Class

Our output would be:
Instead of:
If we modify the Test Utility, we can get the expected results...

Test Utility

While we maintain the use of the HttpCalloutMock interface, we extend its functionality by providing a new object called "Resp" that will hold individual response bodies, statuses, status codes, and a boolean called "Discard."
A nested map, called ResponseMap, will be used to pair callout methods to endpoints and the endpoints to a list of these "Resp" records.
Method --> Endpoint --> LIST<Resp>
Within the respond() method of the interface, we'll get a list of Resps/responses from the map using the provided HttpRequest (from the respond() signature's HttpRequest param) and use the Resp at the top of the list to populate a newly instantiated HttpResponse's details (set its body, status, and status code).
To help with our original problem of being able to provide different responses to calls using the same endpoints, the "Discard" boolean will be used to remove a Resp from the list once used, so in subsequent calls, another Resp is used to populate the HttpResponse.
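The Apex utility itself appears above; as a language-neutral illustration of the same pattern, here is a sketch in Python (the names Resp, the nested method --> endpoint --> list map, and Discard mirror the description above; this is not the actual Apex implementation):

```python
# A sketch of the multi-callout mock pattern described above, written in
# Python for illustration -- the real implementation is an Apex class
# implementing HttpCalloutMock.
class Resp:
    """One canned response: body, status, status code, and a Discard
    flag controlling whether it is thrown away after use."""
    def __init__(self, body, status, status_code, discard=True):
        self.body = body
        self.status = status
        self.status_code = status_code
        self.discard = discard


class MultiCalloutMock:
    """Pairs callout methods to endpoints to an ordered list of Resp
    records: Method --> Endpoint --> LIST<Resp>."""
    def __init__(self, response_map):
        self.response_map = response_map

    def respond(self, method, endpoint):
        resps = self.response_map[method][endpoint]
        resp = resps[0]
        # Once used, discard the response so the next call to the same
        # endpoint gets the next Resp (keeping the last one as a fallback
        # is a design choice for this sketch).
        if resp.discard and len(resps) > 1:
            resps.pop(0)
        return resp


# Usage: the same GET endpoint returns different results on each call.
mock = MultiCalloutMock({
    "GET": {
        "/resources/example1": [Resp('{"page":1}', "OK", 200),
                                Resp('{"page":2}', "OK", 200)],
        "/resources/example2": [Resp('{"done":true}', "OK", 200)],
    }
})
print(mock.respond("GET", "/resources/example1").body)  # {"page":1}
print(mock.respond("GET", "/resources/example1").body)  # {"page":2}
print(mock.respond("GET", "/resources/example2").body)  # {"done":true}
```

In the Apex version, respond(HttpRequest) would read the method and endpoint off the request and build an HttpResponse the same way.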

Test Method

The test method is only slightly different.  Before calling Test.setMock(), we have to load up the ResponseMap with the responses that are required for the testing in that method.  Now, within our test method, we can set up everything the test requires: multiple methods and endpoints, with varying response bodies and results.

Now you can run the test class and get the expected results:

Dec 14, 2014

What Color Is It?

While swiping away at my tablet, like a madman with my morning cup-o-joe, I came across this novelty of a site:  http://whatcolourisit.scn9a.org/.

The idea is simple... take the hour, minute, and second of the current time and use those values combined as the page background's hex color (#hhmmss).
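Outside of Visualforce, the mapping is a one-liner. Here's a quick Python sketch of the idea (note that since every decimal digit is also a valid hex digit, #hhmmss is always a legal color):

```python
from datetime import time

def time_to_hex(t):
    """Format hour/minute/second as a #hhmmss hex-style color string."""
    return "#{:02d}{:02d}{:02d}".format(t.hour, t.minute, t.second)

print(time_to_hex(time(14, 3, 59)))  # #140359
```

The Visualforce version does the same thing with an actionPoller re-rendering a styled component on an interval.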

Admittedly, there's not a lot of business purpose here, but it can be a good development exercise to use as an introduction to re-rendering page components using actionPollers and a few various Visualforce functions.

Can it be done?  With the exception of the minimum polling interval being 5 seconds, you know it!

Here's the end result:

Here's the Visualforce page:

Oct 10, 2014

From Admin to Developer: Learning to Code on Force.com Resources

From Admin to Developer: Learning to Code on Force.com
Are you an Administrator interested in learning more about how and when to use the programmatic tools in Salesforce? Join us for an introduction to Apex and Visualforce that is geared towards Salesforce Administrators. This session will include best practices, real life experiences with using code in Salesforce, and useful resources. You will leave this session with a foundation to start learning how to code in Salesforce and develop on the Force.com platform.

Use the links below to get started on your path to become a Developer!


  1. Guided Do-It-Yourself
  2. Follow Along
  3. Reading Documentation
  4. Independent Practice
  5. Advanced Learning

Best of luck on your journey!  Let us know how it is going @mgsherms @Scott_VS

Oct 9, 2014

Integrating Salesforce to the Carvoyant API

The Internet of Things. IoT. The connected world. A paradigm shift known by many different names. While how exactly it will look may be unclear, it is quite clear that the technological world as we know it is undergoing a rapid and dynamic change. Pick a vertical, and we see the evolution. For instance, consider transportation, or more specifically cars. We've come a long way from the horseless carriages of yore; every modern car has a computer in it that controls its operations, called the Electronic Control Module (ECM). Not only does this control many of the car's functions, but it also digitizes these actions. While the ECM is specific to a particular manufacturer or vehicle, there is an interface called the 'On Board Diagnostic' (OBD) connector that serves as a universal gateway to the digitized data of a car's inner workings.

Taking advantage of this interface are devices such as Carvoyant's connector, which plugs in to the OBD port and collects your car's data for your use through their API. This allows you to retrieve details such as your car's movements and position, fuel usage, maintenance needs, and much more. For companies whose businesses revolve around many cars or a fleet of them, the prospect of accessing their data on-demand and/or in near real-time can be very attractive.

Setting up Carvoyant

1. Refer to Carvoyant's getting started and documentation here: http://confluence.carvoyant.com/display/PUBDEV/Getting+Started

2. Go to http://developer.carvoyant.com and register for a user account.

3. You will fill out some information for the application you create. The "Register Callback URL" field is very important -- this must be the page where your web app performs the OAuth2 authentication and processes the authorization code to exchange for the access token.

4. Now that you have an account, we need some data. Go to https://sandbox-driver.carvoyant.com/ . Here you must register for a driver account (different from the user account you set up earlier).

5. Create a few cars. Either use your own cars' VINs or search the web for some of your favorite cars and use those. Don't tell them I sent you.

6. Now let's make some trip data. Go to https://sandbox-simulator.carvoyant.com/ and login once again using your driver account. Click on two points on the map and a series of waypoints will be created between them. You can add more color to your data with the properties to the left, such as fuel usage and engine temperature.

7. That's it! We now have some data that we can use in Salesforce once we perform our integration.

Salesforce Development

The API uses the standard server-side OAuth2 authorization flow. This involves passing an application key and client secret to the API, getting back an authorization code, and exchanging this code for a token from the API. Authentication is then complete, and this token is stored and used for subsequent callouts to the API.
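The shape of that flow can be sketched in a few lines of Python. The endpoint URLs and parameter names below are generic OAuth2 conventions, not verified Carvoyant values -- check their documentation for the real ones:

```python
from urllib.parse import urlencode

# Placeholder endpoints -- substitute the provider's real OAuth2 URLs.
AUTH_URL = "https://auth.example.com/OAuth/authorize"
TOKEN_URL = "https://auth.example.com/OAuth/token"

def authorization_url(client_id, redirect_uri):
    """Step 1: send the user here; the API redirects back to your
    registered callback URL with ?code=<authorization code>."""
    return AUTH_URL + "?" + urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
    })

def token_request_payload(client_id, client_secret, code, redirect_uri):
    """Step 2: POST this to TOKEN_URL to exchange the authorization
    code for an access token to use on subsequent callouts."""
    return {
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "redirect_uri": redirect_uri,
    }
```

In the Salesforce implementation, the same two steps happen in Apex, with the resulting token saved in a custom setting.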

Let's do a simple callout to the API to get our vehicles and their positions. We will then store them as records in Salesforce and map their position on a Visualforce page with Google Maps. This is what we will end up with (or something similar for the data you create):

The code below consists of a few components. We performed our authentication and saved our token and credentials in a custom setting. The class "CarvoyantIntegration" builds our HTTP request for us by using the results from authentication, the target API endpoint (https://sandbox-api.carvoyant.com/sandbox/api), and the resource/HTTP method passed in from our loadVehicles method. We deserialize the response from the Carvoyant API using the Vehicles/Vehicle inner classes, and thereby have the properties of the vehicle for our use later in the processVehicles method, where we take their values and create records in Salesforce.

The class below contains a getter that merely fetches the vehicle data created from our callouts for use in our Visualforce page.

We will use some of the properties in our table of vehicle information, and use the last waypoint latitude/longitude to map the last known location of the vehicles. The page has code for displaying vehicle record information as well as creating markers for the vehicles we queried for in the controller above. Check out http://blog.crmscience.com/2014/09/lab-coat-lesson-google-maps-api.html for more detailed information about the Google Maps Javascript API.

What's next? Maybe you want to go in the direction of getting data for your vehicles when there is a change. You can use Carvoyant's subscription service and the Salesforce Streaming API to do that, as they have described here: http://confluence.carvoyant.com/display/PUBDEV/Force.com  Perhaps you want to send emails when something goes wrong with a car, or better yet perform some preventative actions when a car hits certain threshold data points during its usage, such as mileage and engine temperatures. Or maybe you want to know when certain cars enter certain areas you demarcate, known as Geofences.

While we are only beginning our journey into the connected world, steps like these will prove to be instrumental in guiding that course and our expectations of it. Do we want cars to communicate with each other? Perhaps leading to more efficient routing and a final end to traffic jams? Can we optimize our infrastructure through analysis of this data? Will this sort of thing help us keep an eye out on our autonomous cars? Control them even? There is a plethora of ideas in this realm that are waiting to be implemented. And what's next... you decide.

Oct 5, 2014

Lab Coat Lesson: Salesforce Arduino Gateway

The maker movement is alive, well, and has been very much reinvigorated over the last few years. Makers and tinkerers have been creating amazing projects and supporting each other in a large online community, not all too different from our own Salesforce community. The tail end of 2013 and 2014 brought a wave of "smart" wearable tech - 2015 will be no different (check out some of the ideas on Kickstarter).

The contributions to hobby electronics made by companies like Arduino, Adafruit, and Sparkfun make it possible for anyone (yes, anyone - including you) to create a physical working version of something you thought would be a great idea.  Think web controlled pet bowls, smart fishtanks, web-enabled weather logging stations, sun tracking window blinds...

Adafruit created the CC3000, a Wi-Fi enabled breakout board (also available as a shield) featuring a Texas Instruments chip that allows you to add Wi-Fi capabilities to your projects.  You could now send web requests to servers or even treat your device as a basic web server to receive requests.

Then along came the Arduino Yún (Chinese for "cloud," by the way) board - the best of the Arduino you may already know and love, with a bridge over to an embedded Linux chip.  Now you've got more flexibility in your data processing and web handling.  You could easily use a Raspberry Pi (a fairly inexpensive embedded Linux board) for the project in this post, but then you lose out on the fun of working with the new Yún.  If it were up to me, I'd be saying in my best Oprah voice, "You get a Yún!... and you get a Yún!... and you get a Yún!..." while living in a house made of Yúns - big fan, Massimo Banzi, big fan.

So what are we going to build?   How about a little switch box to help quickly destroy all the evidence?

Alright - let's not be destructive, so let's create a few test records via the push of a button.

Sure, you could always execute the same script anonymously, but let's be honest, this seems like more fun!

How's this going to work?  We'll wire together an Arduino Yún and connect to it an LCD and a rotary knob.  Users will twist the rotary knob clockwise or counter-clockwise to select an object (Account, Contact, Lead, Opportunity, or Case).  The user will then push the knob (hooray, built-in switch!), which will send a request across the Yún's bridge to the Linux side to kick off a Python script.  This script will call out to your Salesforce org, which will be using an Apex REST class to kick off the destruction creation of copious amounts of test data and send back a response to the board.

Not so pretty.

What You'll Need:


  1. Arduino Yún (1)
  2. Rotary Encoder Switch
  3. LCD w/ Serial Backpack (the backpack allows for LCD use with only 5V, ground, and one data lead instead of the normal rat's nest)


  1. A Dev Org (or sandbox, but definitely not production)
  2. Arduino 1.5.5 (beta or later - support for Yún necessary)


Note:  I'm using the LCD Backpack (linked above) that uses 3 wires.  The above locations on the LCD are not correct and don't represent the use of the backpack.

Apex Class:  ArduinoGateway

Debug Log Output

Connected App

We'll use a Connected App to provide us with an endpoint to authenticate against so we can securely use our custom Apex Rest classes.
  1. Setup --> Build --> Create --> Apps
  2. Scroll down to the "Connected Apps" section
  3. Click on the "New" button
  4. Populate the "Connected App" details
  5. Check the "Enable OAuth Settings" checkbox and provide the following details:
  6. Click on the "Save" button
  7. Copy the Consumer Key and the Consumer Secret - you'll need these for the Python script below

Arduino Sketch:  ArduinoGateway.ino
Overview:  Our sketch is very simple - in a nutshell, flipping our final switch runs a Python script.  More verbosely, after the Yún board boots, it will listen for a signal on the switchPin (13).  The switchPin will be hot after our final safety toggle is flipped (and illuminated!).  At that moment, we'll use the Yún's bridge from the Arduino side to the Linux side to kick off a new Python process to run our script, which calls out to our Salesforce endpoint.

After you've uploaded the Arduino sketch to the Yún using the Arduino IDE, you'll need to create the Python script.  Here are the general steps to do so:
  1. ssh <username>@<yun_ip_address> (IE:  root@
  2. If prompted, enter your password (Default:  arduino)
  3. "Yes" if prompted to add RSA key to list of known hosts
  4. cd /
  5. ls
  6. mkdir -p /home/root/ArduinoGateway/
  7. vi /home/root/ArduinoGateway/CallSalesforce.py

Python Script:  CallSalesforce.py                                               
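Since the script itself appears above as an image, here is a hypothetical reconstruction of its general shape - the login URL flow is the standard OAuth2 username-password flow for a Connected App, but the Apex REST service path ("ArduinoGateway") and the field names are assumptions, not the verbatim original:

```python
#!/usr/bin/env python
# Hypothetical sketch of CallSalesforce.py. It takes two parameters
# (sObject type and number of records), authenticates against the
# Connected App, and POSTs to the custom Apex REST service.
import json
import sys
from urllib.request import Request

LOGIN_URL = "https://test.salesforce.com/services/oauth2/token"

def login_payload(consumer_key, consumer_secret, username, password, token):
    """OAuth2 username-password flow against the Connected App."""
    return {
        "grant_type": "password",
        "client_id": consumer_key,         # from the Connected App
        "client_secret": consumer_secret,  # from the Connected App
        "username": username,
        "password": password + token,      # password + security token
    }

def rest_request(instance_url, access_token, sobject_type, count):
    """Build the POST to the custom Apex REST service (assumed path)."""
    return Request(
        instance_url + "/services/apexrest/ArduinoGateway",
        data=json.dumps({"objectType": sobject_type,
                         "numRecords": count}).encode(),
        headers={"Authorization": "Bearer " + access_token,
                 "Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__" and len(sys.argv) >= 3:
    # e.g. python CallSalesforce.py Lead 5
    sobject_type, count = sys.argv[1], int(sys.argv[2])
    # ...urlopen(LOGIN_URL, ...) to get the token, then send rest_request()
```

The Arduino side only has to launch this process over the bridge with the selected object type and a record count.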

So there you have it - with all those pieces in place, you should be able to twist the knob and cycle through the various sObject types.  Within the Apex Class above, we only handle Lead and Account scenarios - but you can just as freely modify all of the code to kick off any Apex code via the webservice class.  Likewise, within the Python script, you'll see that it's currently configured to take two parameters, one for the object type and one for the number of records to create.  The Arduino sketch can be modified to allow for the selection of an sObject type, then the number of records to create, before sending the request to the web service.

Have fun and please share any projects you have in mind or have done!

Sep 22, 2014

Lab Coat Lesson: Using Bootstrap in Visualforce

During the Summer of Hacks last July, I worked with a pair of coders with no prior Salesforce experience. While I worked on the Apex code, I had them work on the front end because Visualforce could just be straight up HTML if you wanted it to be. So the devs - both of whom had far more experience in modern front-end tools and libraries than I did - decided to use Bootstrap for the front end. But this decision came with a stipulation:

"We had to disable the header and standard style sheets for it to work."

This was okay with me at the time because we were building a Salesforce1 app and the mobile container did not require any headers. But now that I want the page to function normally in desktop Salesforce just as well as in the mobile version, I've put the header and standard stylesheet back on.

Yuck! That's no good! Look at what happened to the Salesforce header and tabs. Clearly some of the third-party library's CSS is overriding Salesforce's. Time to figure out how to use Bootstrap and Salesforce together.

Sep 15, 2014

Lab Coat Lesson: Building better Analytic Snapshots

Analytic snapshots should be in every admin's toolbox. The ability to report on historical data and track trends over time is beneficial to nearly all organizations. Salesforce Help & Training documentation is a good place to start for beginners learning to set up and configure analytic snapshots for their organization.

Here are some useful tips, best practices, and workarounds for the admin looking to make analytic snapshots more efficient:
  • Find out how Schema Builder can save you clicks and time 
  • Map a lookup field 
  • Use record types in analytic snapshots 
  • Save and protect your work in a secure report folder

Use the Schema Builder to create your target object and fields!

Create your target object using the Schema Builder. You most likely will not be creating a tab for this object, so you will not need to use the wizard. This is the most efficient method for creating your target object to house the fields and data of the source report. You will save many clicks using the Schema Builder instead of adding fields to an object from the setup menu.

Click Setup > Schema Builder. This is a drag-and-drop environment you can use to create your new target object and all the necessary fields to map from the source report.

First, click on the Elements tab and drag Object into the work pane.

NOTE: Make sure you select Allow Reports or you will have trouble later!

It is recommended that you create your source report first. This way, you will know which fields you want to archive. You can quickly create the fields in the target object based on the fields in the source report.

Using the Schema Builder, drag a field type from the side panel and drop it in your new target object. The field types must be compatible. Most are pretty straightforward: you will usually use the same data type from the source field for your target field (for example, the Amount field maps to a currency field). Make sure you pay attention to field length and decimal places for currency, number, and percent fields. Make sure text fields have at least the same character length. There are some trickier scenarios, which we will cover now…

Mapping a Lookup Field:

Following our example of creating an opportunity analytic snapshot, how would we archive the Account Name of the opportunity? This is a lookup field, so create a lookup field related to the Accounts in the target object.

To map a value to this lookup field, you must map the Account ID. Make sure Account ID is included in your source report.

Mapping Record Types

Unfortunately, and despite grumblings from the Salesforce community, you cannot map record types in an analytic snapshot. There is, however, an easy workaround using formula fields. We will store the record type name in a custom field, include this field in the source report, and map it to a text field on the target object.

Formula Field for Record Types

Here at CRM Science, we found an even easier way to map record types to analytic snapshots than this Salesforce knowledge article suggests: Can I map the record type in an Analytic Snapshot?

It recommends creating a text formula field and entering the following formula:

You see, following this article's advice, you would have to actively manage your formula. What if you created new record types? Or changed the name of an existing record type? You would have to update the formula.

Follow this simple advice to create the formula and forget it! It will manage itself.

  1. Create a Text Formula Field on the object 
  2. Name it “Record Type” or something similar. I do not use “Opportunity Record Type,” in order to distinguish it from the standard field's name.
  3. Enter the following into the formula:  “RecordType.Name”
  4. Save and forget it! 
This formula will simply copy the record type name into the field. No need for all that extra code that you will have to remember to update later when record types change.

Create an “Analytic Snapshot” folder to protect all your hard work

The last thing you need is to set up, configure, and schedule the analytic snapshot, only to have an end user make a change to the source report and ruin everything!

I cannot stress this enough, save your source report to a folder that is Read Only, or grant only Viewer access if the new report sharing model is enabled, to all users except those trusted to make changes to analytic snapshots.

If non-admins have edit access to the source report, they can (and we all know, will) edit the report. This puts all the field mappings at risk of breaking and your analytic snapshot failing. So protect your hard work and create a Read Only report folder for your source report!

This advice should prove useful for seasoned analytic snapshot creators. If you are new to analytic snapshots, remember these tips! To get started, check out this page to learn how to setup and configure analytic snapshots: Report on Historical Data with Analytic Snapshots.