Thursday, May 2, 2013

Training sessions: User Experience

As part of our philosophy of always improving and growing, both personally and professionally, we hold training sessions here at Software Allies to keep up to date with the newest technologies, improve our processes, and offer better service.

Today we want to share the presentation we had in one of our training sessions: The User Experience.

As you may know, the experience a user has with a brand or company involves much more than just the interaction with the product or service, and it has a very meaningful impact on whether that customer becomes a one-time buyer or a loyal client.

With this in mind, Kristian Venegas, a Front-End Designer here at Software Allies, gave us this session about User Experience.

Hope you enjoy it!


This session was presented by Kristian Venegas, a Front-End Designer at Software Allies.

Tuesday, April 9, 2013

Tips to Achieving Salesforce.com Certification Success


Salesforce.com has quickly risen to become one of the most reputable CRM systems used among today's businesses. The number of solution providers offering professional services for CRM systems is also increasing, so achieving Salesforce.com certifications is a crucial part of being able to effectively administer and advance your Salesforce CRM with an internal administrator. There are a few main resources and principles I am going to go over that helped me achieve the four Salesforce.com credentials I currently hold. This process and these resources helped me be as prepared as possible going into the test on exam day.
The first two resources I want to mention are the Salesforce Premier Training library and the Salesforce Partner Portal. As a registered consulting or ISV partner of Salesforce, you get access to the Salesforce.com Partner Portal as well as the entire Premier Training library for a whole year per user account.

The great news is that this same Premier Training that Salesforce Partners can access is also available for purchase via subscription by any Salesforce CRM subscriber. This can be a vital part of your preparation and offers many great benefits.

Premier Training gives you access to the full video training library for all of the Salesforce products and even many of the certification tiers. For example, the library contains the full Administrator (ADM-201) certification program and the Developer (DEV-401) certification program. Also included are many videos relevant to the Advanced Administrator (ADM-301), the Force.com Advanced Developer (DEV-501), and the Sales & Marketing Cloud Consultant certifications. I used these resources in combination with the standard certification study guides and a live Salesforce Enterprise or Developer environment to prepare for these exams.

The standard Salesforce certification study guides are your lifeline to properly study for your certification(s). They list all of the areas covered on the test, and you can use that information to your advantage. What I did as part of my preparation was start at the top of the test content section and work line by line until I had reviewed every area covered by the exam. Here are some more general tips to help you succeed.



a) If the area mentioned is a Salesforce CRM feature, such as Salesforce to Salesforce or Email-to-Case, see if a specific place exists in the system to configure that feature and study the configuration settings. Pay close attention to the options available and any help text in and around its location.

b) After you're familiar with the feature's location and the options available, search the Premier Training library for training on that feature and study that content.

c) Finally, if it is a feature that can be configured and set up as a test, configure it and use the training you followed to build a working version of the feature. This will give you the hands-on experience needed, and in many cases hands-on Salesforce knowledge is more beneficial than textbook information.

As 90% of Salesforce tests are scenario based, it's crucial that you work hands-on with an instance. If you get access to a partner portal or sign up for Premier Training, it is also important to have a live instance to test and study in. Salesforce offers 30-day training instances geared towards preparing for different exams.

You can sign up for one here.

I hope these tips and study recommendations help you to achieve your certification goals as they did mine.

Until next time, take care!


This article was written by Richard Kresse

Tuesday, March 5, 2013

Specifying Agile User Stories


If you are currently using or have used Agile processes in the past, you are probably familiar with the concept of framing project requirements as Stories and using those stories to complete the project. For those not familiar, a Story is a way to define an element of a project’s requirements from the perspective of the end user in a way that encourages discussion between the relevant members of the team. Here is an example of a user Story; you can see a goal clearly defined:
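A Story typically follows a simple template: “As a <role>, I want <goal>, so that <benefit>.” For instance (an illustrative example): “As a returning customer, I want to reset my password, so that I can regain access to my account.”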



This method is very effective at capturing and putting emphasis on the value that a specific feature will have for the end user and why it is important to include in the project. Stories are intended to focus on the “why” and not the “what”, letting those involved focus on the motives behind the feature instead of how it should be implemented. Because they are meant to be “reminders to have a conversation” and not specifications or models, when it comes time to actually develop software to meet the Story’s guidelines, the Story itself offers little guidance on how to implement a solution.

A Story is much too vague to use as a complete reference for development. Since it is meant to be a short summary of a particular need, it lacks many critical details that may be important parts of the overall requirements. When a developer reads a Story and implements a solution, they may implement many features that were discussed in the conversation about the Story but forget a key piece. Similarly, a tester may not have heard about a specific piece of functionality or nuance, so incorrect behavior may slip by. The ambiguity of Stories also leaves room for assumptions; even after thorough discussion there may still be very different ideas about the details. This is especially prevalent in larger projects, where Stories can encompass much larger features, and it is a real obstacle to producing a product that meets everyone’s expectations. To capture all of the finer details, something more than just Stories is needed.

For a productive development process, the developer needs the requirements outlined clearly, correctly, and concisely. This isn’t to say that we need very detailed requirements from the Client up front (that is actually something we want to avoid with Agile methodology), but after having discussions about the Stories then individual units of functionality should be captured. One way to do this is to capture details in the form of Acceptance Tests. These can help tremendously in making sure that the end result of the development process is exactly what the Client ultimately needs. Testers will have individual units of functionality that they can test against and these acceptance tests can even be converted into integration tests for automated and comprehensive testing. Developers can write code specifically geared towards satisfying those test cases and changes in the code that would normally pass syntax validation (but break desired functionality) are caught early.  Continuing our example, here’s our original Story with some Acceptance Tests.  You can see some important details that were captured:
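Acceptance Tests are commonly written in a Given/When/Then form. Continuing the illustrative password-reset Story from earlier, a few might look like:

Given I am on the login page, when I click “Forgot password” and enter my registered email address, then a reset link is emailed to me.
Given I follow a reset link that is more than 24 hours old, when I submit a new password, then I am told the link has expired.
Given I enter a new password that does not meet the password policy, when I submit the form, then I see a validation message and my password is unchanged.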



However, even acceptance tests may not be enough information for developers to work from (especially for front-end work), and it might be necessary to create additional documents, or artifacts, that explain the specifics of required functionality. These vary based on the actual requirements you’re trying to capture; for example, a complicated process may be better defined with a flowchart than a verbose document, a mockup of a web page could explain some functionality more clearly than a flowchart, or a technical document may define something better than anything else. It really depends on the functionality you’re trying to capture, but the most important thing is to convey it accurately and as succinctly as possible. Agile is lean in requirements, so the idea for any agile documentation is to model as little as possible while still containing enough detail to fully convey what is needed. The goal is the most effective use of time, and this approach is called “Just Barely Good Enough”. Our example from before can easily be modeled as a mockup, giving developers a much clearer picture of the end goal:

    

With acceptance tests and artifacts (when necessary), the entire team will have a much more unified idea of what a Story truly means and will be on the same page from start to finish. This eliminates many of the problems that can occur on the non-programming side of software development. By getting exactly what is needed in the first iteration, you will save time, money, and headaches by avoiding the back and forth between clients, testers, and developers.



This article was written by Josh Glass, a Business Analyst at Software Allies.

Thursday, February 21, 2013

Building Agile Teams


What is an agile team, how should it be composed, and why? Understanding how to build agile teams is one of the most critical factors in an agile initiative. However, you should really understand what agile is all about before you start to build teams to support its values and principles.

What Is An Agile Team:

Agile is designed to be lean on requirements, but the team still needs just enough requirements and just-in-time specifications to successfully produce working code. In traditional software development methodologies like waterfall, software specifications are written upfront and then handed off to a development team to implement. This means the development team could be composed mostly of developers, and it was expected that they would have little or no interaction with the people who actually wrote the specification. This is not the idea with Agile, and it doesn’t work with an agile team because the team starts with very limited specifications by design, often just user stories.

The thing about user stories is that they are really just placeholders for a promise that a more detailed conversation will happen during the iteration to further define the specification. What does this conversation part really mean? Is it just some vague and brief talk between developers? No, it is a detailed conversation between product owners, developers, testers, and whoever else might be required to really define what the story is all about. These conversations should be happening every day on many different stories with all of the people who need to be involved. This is very different from writing the specification up front and handing it off to a developer, and as such it requires that an agile team have a different structure than a typical development team using upfront specifications.

While some people think that agile is done without requirements, this is far from the truth. Doing agile correctly means the team is constantly defining, revising, and working on specifications. While the format of the specifications might not include detailed use cases or a lot of UML, the requirements are still captured by some means, such as acceptance tests and use cases. This is done as part of the process each day, and as such it requires a special team composition to get it done.




Who Should Be On An Agile Team:


In short, an agile team should be a cross-sectional group that can follow this define/build/test cycle. It is not just a collection of developers, but a group of people who will work together to define the domain knowledge, implement the technical details, and test to confirm that it was done correctly.

There are typically 5-7 team members on a single agile team, covering a few different roles. The core agile teams, or scrums, should be kept small. If you start to put too many people on a team, then activities like the daily meetings, sprint planning, and sprint reviews become too cumbersome, and good conversation cannot happen within the allotted time to cover the stories sufficiently.

Core Agile Team Roles:

Product Owner:

The product owner is responsible for defining and setting the priorities for requirements. They help to establish what the theme and objectives for an iteration will be.

Scrum Masters:

Scrum Masters are the glue of the agile team.  They keep everyone following the process and help to remove any roadblocks or impediments. They are also a kind of agile team leader, always helping the team to achieve their goals.

Developers:

Developers write code to implement stories. Stories should have both unit and acceptance tests. The developer is responsible for ensuring the unit tests are solid and passing, and for helping to make sure the stories pass the automated acceptance tests. This means they should be working directly with other team members to ensure the requirements are met.

Testers:

While some agile teams might be without dedicated testers, testers have a place and should be on the agile team if at all possible. They help to define and write acceptance tests and test the stories with additional exploratory testing. They should be working directly with the developers on the stories for an iteration, with the goal of bringing them to a completed state.



While these are the core roles, there might be some shared resources on the team, like UI developers, business analysts, build engineers, architects, and other supporting roles that the team requires.

Conclusion:

Agile works well when a team can dynamically define, build, and test stories within a given iteration. Because of the lean nature and limited up-front specification, teams should be composed of a cross section of product owners, developers, testers, and shared resources that can get stories completed within very short iterations. Composing an agile team of just developers cannot accomplish this, and careful consideration of who is really needed on an agile team should be a critical part of any agile initiative.

 

This article was written by Jason Stafford, Director of Development at Software Allies.




Friday, February 15, 2013

Improvements on Continuous Integration

Continuous integration allows code from each commit to be directly integrated into the code base and pushed to a production or test environment. This is a great idea in theory and works well on many projects, but is it always the right solution? What alternatives exist with modern build systems, and what are the benefits of distributed version control systems (DVCS)?

Common Issues that are introduced by Continuous Integration:

  1. Developers commit changes that break automated tests. Regardless of the reason, this should not happen, but when it does the whole team suffers.
  2. Automated testing can start to take a long time as the project grows. Constant effort is needed to keep build time down. The whole team suffers as the build time increases and builds start to happen concurrently but are deployed serially.
  3. When the build is broken, only the developer responsible can commit, but not everyone can help fix the build.
  4. Broken builds that cannot be fixed within a few minutes happen too often, and the rollback process affects other commits.

This results in developers waiting long periods of time to find a "commit window" and, when there is a broken build, waiting until everything is fixed in sequential order before they can commit again.

What can be done to overcome these types of issues:

The idea that each commit is directly pushed into the main line of development lies at the heart of many of these issues. If a developer makes a mistake, it becomes everyone else’s problem on their next commit, and the team must work together to resolve the issue quickly. On some projects with many developers this can be a constantly recurring theme that is very time consuming, especially if the team is new and doesn’t have enough discipline to adhere to the demanding continuous integration principles. While it could be argued that making software the right way isn’t easy, we think that in some cases there is a better way to handle some of these issues. The root principles of Continuous Integration can still be achieved while protecting the development process.

Our Solution:

  1. Temporarily separate developers’ code from the development trunk until the code passes all automated testing.
  2. If the code passes all tests, it is then merged into the development pipeline, where all other developers can access it.
  3. All builds can run in parallel.

A dirty word in the CI community is feature branching. However, when used correctly and with modern build systems, you can still have a code base that is integrated and tested often, which is the real essence of what CI is all about.

Recommended Branch Layout:
  • Master
  • Release
  • QA
  • Development

Each of these branches is deployed directly into an environment in Heroku (or whatever setup you might have) on a successful merge or push. The development branch is the first real point of integration of various branches. We use branches for major features, or epics, of any application that we build in this type of setup. In this environment, testers and developers can review their integrated code and see if it is really ready to be merged into a User Acceptance Testing (UAT) environment. When a tester or developer feels something is ready, we merge the development branch into the QA branch. This essentially creates a snapshot and sends a revision to an environment that the customer uses to verify whether a feature is ready for staging and final review.

We use the next branch, Release, to feed an environment for any final testing before the code is actually deployed to the production environment, which is fed by the master branch. Since these branches are automatically pushed into environments on updates and shared by developers, we don’t want any developer to make changes directly to these branches. We get around this by implementing branches for each epic that a developer is working on. This means that when a developer starts working on an epic they will create a new branch specifically for that epic/feature (if it has not already been created by another developer). All of their work will be done solely in this branch. To handle the actual code merging into the development branch, we use Atlassian’s Bamboo build system.

As of version 4.0, Bamboo can automatically detect new branches in your repository using regular expressions and generate build plans for them. Bamboo can be configured to detect any branches that start with a string like “epic-” and create build plans that run all of our automated testing. While Bamboo has automatic branch merging through its Gatekeeper and Branch Updater tools, the problem with these tools is that if multiple builds are merging into the same branch in parallel, you end up with Git HEAD errors. As of this writing, there is no way to configure either of these tools to merge in such a situation. This is why we have created an additional step in the Apache Ant build file to do the merge (see the sketch after the list below).
The plan runs the following steps:

  1. Prepare environment
    1. Install any Linux libraries required by the gems.
    2. Run bundle install.
    3. Configure database.yml to use a unique database name for this specific plan.
  2. Prepare database
    1. Create the database, run migrations, and seed it.
    2. Using the parallel_tests gem, prepare the test databases.
  3. Run RSPEC tests in parallel over 8 cores
  4. Run Cucumber tests in parallel over 8 cores
  5. Merge code
    1. Clone a clean copy of the repository and checkout the development branch
    2. Run a merge of the epic branch into the development branch (the name of the epic branch is passed along to the Ant file as a Bamboo environment variable)
    3. Push code back to BitBucket
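
As a rough sketch of that final merge step, here is what the Ant target boils down to, expressed as a small Ruby script (the branch variable, repository URL, and run! helper are illustrative assumptions; the actual Ant file is not reproduced here):

 # merge_epic.rb -- illustrative sketch of the merge step in the build plan
 epic_branch = ENV.fetch("EPIC_BRANCH") # passed in by Bamboo as an environment variable
 repo_url    = "git@bitbucket.org:example/app.git" # hypothetical repository

 def run!(cmd)
   puts cmd
   system(cmd) or abort("command failed: #{cmd}")
 end

 # Clone a clean copy of the repository and check out the development branch
 run! "git clone #{repo_url} merge_workspace"
 Dir.chdir("merge_workspace") do
   run! "git checkout development"
   # Merge the epic branch that just passed all automated tests
   run! "git merge origin/#{epic_branch}"
   # Push the integrated code back to BitBucket
   run! "git push origin development"
 end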

Each of the main branches has a deployment build plan that pushes the code in the branch to its respective Heroku environment. All of the main branches have a merge build plan, but only development’s is automated; the rest are manual. These builds are used by testers to create the “snapshots” mentioned before to update branches. Unlike the development merge plan, they only prepare the environment and merge code.

Here is a diagram showing the development flow:




For developers, Git is a much more powerful tool than traditional central version control systems, and strict policies are often necessary to minimize issues. Developers should be committing as often as possible, even if these commits are not immediately pushed. With central version control systems, a commit is directly associated with a repository change. With Git, developers can associate commits with checkpoints or the end of a thought process. Along with these frequent commits, developers should be syncing the changes merged into the development branch into their local branch’s code. This allows developers to test what the end result of merging into development will be and helps prevent merge conflicts from becoming overwhelming.

Conclusion:

Continuous Integration is great; however, you might want to consider alternative solutions when presented with some of the issues we have outlined. While feature branching used to be difficult and did not work well with CI, new features of modern build systems like Bamboo, together with distributed version control and easy, automatic merging, have opened the door to new possibilities while still keeping the overall values that continuous integration brings to projects.



This article was written by Grant Hudgens, a Build Engineer at Software Allies.

Wednesday, February 6, 2013

Rails Application Security

Introduction

On the 30th of January this year, the RubyGems site[1] suffered a security attack. The intrusion originated from a set of vulnerabilities reported a couple of weeks earlier by Aaron Patterson in the Google development group Ruby on Rails – Security [2,3].



Thanks to the timely intervention of the RubyGems team and help from the forensic analysis folks at Red Hat, the “Ruby community's gem hosting service” was down for just a few hours. The site's admins used the downtime to verify the gems, looking for files implicated in the attack. In this way, the RubyGems team was able to mitigate the threat without incurring further problems.

This incident reminds us that any software application, including Rails applications, must be maintained to ensure security. Rails applications need to be patched for vulnerabilities when security issues are discovered. The RoR community publishes security newsletters to keep application owners and developers informed.

List of Recent Rails Vulnerabilities:

SQL Injection vulnerability fixed http://bit.ly/TCDT1s
YAML vulnerability fixed http://bit.ly/TCEaBH 
JSON vulnerability fixed http://bit.ly/VAAcGo 
Parameter Parsing in Action Pack fixed http://bit.ly/SivPUo


Background:
When a Rails developer adds a new model to an application, the new model class inherits from ActiveRecord::Base. This class is responsible for adding any needed behavior (methods, constants, etc.) to the model (let’s say User). For each attribute declared in the model, a find_by_attribute_name method is automatically made available (e.g. a = User.find_by_id(154)). The outcome is equivalent to an SQL query like “SELECT * FROM model_table WHERE attribute = 'parameter'”. The result is then mapped into a hash-like structure according to the model (e.g. 'users' => {'1' => {:id => 1, :name => 'me', :last_name => 'and me again'}})[4].
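
For example, assuming a User model with an id attribute, the dynamic finder behaves roughly like this:

 user = User.find_by_id(154)
 # Roughly equivalent to:  SELECT * FROM users WHERE id = 154 LIMIT 1
 # Returns a User instance, or nil when no matching row exists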

Technical description of the vulnerability:
A vulnerability can occur when a request is processed through the find_by_* methods of any model. When the client makes a request (for example, an Ajax call), it is processed by the ActionDispatch module, which is used for routing requests. Inside of this, parameter parsing is handled by an instance of the ParamsParser class.

This ParamsParser instance accepts input in XML and YAML formats, deserializing the information into primitive Ruby data types (Integer, Float, Symbol, etc.). After this, the information is merged with the rest of the params into a new object of type HashWithIndifferentAccess, finally letting the find_by_* method do its work.
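
To see why this is dangerous, here is a minimal sketch of the parsing behavior, assuming an unpatched Rails 3.x (CVE-2013-0156):

 require 'active_support/core_ext/hash/conversions'

 # Before the patch, the XML parameter parser typecast any value whose
 # type attribute was "yaml" by handing it to YAML.load:
 xml = '<user><id type="yaml">--- :admin</id></user>'
 params = Hash.from_xml(xml)
 # On vulnerable versions this yields { "user" => { "id" => :admin } } --
 # an arbitrary Ruby object instead of a string, which is enough to bypass
 # nil checks or feed crafted values into find_by_* queries.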

If the client sends a specially crafted request containing YAML, the attacker can then perform any of several types of attacks:

·Nil condition bypass (e.g. getting past checks like if @user.nil?).
·Denial of Service.
·SQL Injection.
·Remote Code Execution.

Keep in mind that this vulnerability has been fixed and was labeled as highly critical by Secunia.com[5].

Software Allies wants to keep its readers safe from security threats. Therefore, we recommend you update your Rails version if you are using any version other than 3.2.11, 3.1.10, 3.0.19, or 2.3.15.

If you cannot upgrade, please be sure to apply the appropriate patch below for your Rails series to ensure your application is secure.

·2-3-xml_parsing.patch - Patch for 2.3 series [2,3]
·3-0-xml_parsing.patch - Patch for 3.0 series [2,3]
·3-1-xml_parsing.patch - Patch for 3.1 series [2,3]
·3-2-xml_parsing.patch - Patch for 3.2 series [2,3]

References:

[1] http://rubygems.org/ – the Ruby community's gem hosting service.
[2] https://groups.google.com/forum/?fromgroups=#!topic/rubyonrails-security/61bkgvnSGTQ – Multiple vulnerabilities in parameter parsing in Action Pack (CVE-2013-0156).
[3] https://groups.google.com/forum/?fromgroups=#!topic/rubyonrails-security/t1WFuuQyavI – Unsafe Query Generation Risk in Ruby on Rails (CVE-2013-0155).
[4] http://api.rubyonrails.org/classes/ActiveRecord/Base.html – Dynamic attribute-based finders.
[5] http://secunia.com/advisories/51753/ – Ruby on Rails XML Parameter Parsing Vulnerability.



This article was written by Christian Yerena, a Rails Developer at Software Allies.



Monday, January 14, 2013

Learning How to Use Rails and Ajax Together

Sometimes we need to make POST requests to controller actions to perform logic whose results we need to return back to the views. This can easily be handled with the help of jQuery and a little understanding of how Rails handles requests.
This may not be the best scenario to apply this to, but it makes for a simple example.
Let's say for example we have this code on the index page listing all the blogs:
<a href="#" id="save"> Save Changes </a>

<section id="main">
  <% @blogs.each do |blog| %>
    <section class="post">
      <%= check_box_tag :featured, "featured", blog.featured  %>
      <%= label_tag :featured %>

      <h2> <%= blog.title %> </h2>
      <p> <%= blog.body %> </p>
    </section>
  <% end %>
</section>
By clicking the "save" link we want to update all the checked blogs to be featured in the backend and re-render the list ordered by their featured status.
Let's start off by setting up our javascript and get started writing this functionality.
<script>
  $(function() {
    $("#save").on('click', function(e) {
      e.preventDefault();
      // will do the fancy stuff later
    });
  });
</script>
We start off when the DOM is loaded and ready to be acted upon. We then add an event listener to the link with an ID of "save" and prevent the default behavior from happening when that link is clicked. We will handle this ourselves with a POST request to our backend.
Next we need to compose the data we will be sending to the server. We will be doing this by creating an array of objects that contain the ID of the object, and whether it is featured or not. Once we have this array of objects we'll send this information to the server to apply the changes.
<a href="#" id="save"> Save Changes </a>

<section id="main">
  <% @blogs.each do |blog| %>
    <section class="post" data-blog-id="<%= blog.id %>">
      <%= check_box_tag :featured, "featured", blog.featured  %>
      <%= label_tag :featured %>

      <h2> <%= blog.title %> </h2>
      <p> <%= blog.body %> </p>
    </section>
  <% end %>
</section>

<script>
  $(function() {
    $("#save").on('click', function(e) {
      e.preventDefault();
      var blog_information = [];

      $('section.post').each(function(i, post){
        var post_id   = $(this).data('blog-id'),
            featured  = $(this).find("input[name='featured']").is(':checked');

        blog_information.push({ post_id: post_id, featured: featured });
      });

      // will send information to the server

    });
  });
</script>
Here we add a data attribute of blog-id to each post in order to access the object's ID within our jQuery code. Then, in our JavaScript, we loop over each section with a class of post and construct an object with the blog ID and its respective featured boolean flag. We push each object into an array, and once the loop finishes we send the array to the server to be processed.
<script>
  $(function() {
    $("#save").on('click', function(e) {
      e.preventDefault();
      var blog_information = [];

      $('section.post').each(function(i, post){
        var post_id   = $(this).data('blog-id'),
            featured  = $(this).find("input[name='featured']").is(':checked');

        blog_information.push({ post_id: post_id, featured: featured });
      });

      // Send the collected array once, after the loop finishes
      $.post("/filter_featured", { blog_information: blog_information }, function(data) {
        // stuff will happen once we handle the data in the server
      });

    });
  });
</script>

routes

PostRenderTutorial::Application.routes.draw do
  match "/filter_featured", to: "blogs#filter_featured"
end

controller

class BlogsController < ApplicationController
  def filter_featured
  end 
end
As you can see here, we are making a POST request to a route that we define to point to the filter_featured action in the blogs controller. In there we are going to update the records, re-render the layout, and return the updated HTML. So let's get to it.
class BlogsController < ApplicationController

  ...  

  def filter_featured
    @blogs = Blog.by_featured

    params[:blog_information].each do |blog_info|
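      # jQuery serializes the array as a hash like { "0" => {...}, "1" => {...} },
      # so each |blog_info| here is an [index, attributes] pair; grab the attributes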
      blog_info = blog_info[1]
      blog = Blog.find blog_info["post_id"].to_i
      blog.update_attribute(:featured, blog_info["featured"]) unless blog.featured == blog_info["featured"]
    end

    render "featured_blogs", layout: false 
  end 

  ... 

end
This method is pretty basic. We first loop over each of the parameters that were sent from our JavaScript POST request, and then we update the blog's featured column if it has changed. We then render "featured_blogs", which produces the new HTML.

featured_blogs.html.erb

<% @blogs.each do |blog| %>
  <section class="post" data-blog-id="<%= blog.id %>">
    <%= check_box_tag :featured, "featured", blog.featured  %>
    <%= label_tag :featured %>

    <h2> <%= blog.title %> </h2>
    <p> <%= blog.body %> </p>
  </section>
<% end %>

Blog.rb

class Blog < ActiveRecord::Base
  attr_accessible :featured
  scope :by_featured, order("featured DESC") # featured posts first
end
Here we are essentially recreating the structure we had previously. This is the HTML that will be returned from the POST request in our initial code. Let's get back there and finish this off.
<script>
  $(function() {
    $("#save").on('click', function(e) {
      e.preventDefault();
      var blog_information = [];

      $('section.post').each(function(i, post){
        var post_id   = $(this).data('blog-id'),
            featured  = $(this).find("input[name='featured']").is(':checked');

        blog_information.push({ post_id: post_id, featured: featured });
      });

      // POST once with the full array, then swap in the returned HTML
      $.post("/filter_featured", { blog_information: blog_information }, function(data) {
        $("#main").html(data);
      });

    });
  });
</script>
That's all it takes. One more line and we have replaced the HTML with the newly updated statuses of the blog posts. One thing to keep in mind: with CSRF protection enabled (the Rails default), your layout needs csrf_meta_tags and the jquery_ujs driver included so the authenticity token is sent along with Ajax POSTs like this one. Again, this could be handled in a much cleaner way, but I am simply trying to demonstrate how jQuery can be used and hopefully give people a better understanding of how to construct their own custom actions.

This article was written by Garrett Heinlen, a Senior Rails Developer at Software Allies.

Friday, October 12, 2012

InteGREAT - Integrations with Ruby on Rails

Last week, Jorge Valdivia presented on the topic of Integrations using Ruby on Rails at the monthly Dallas Ruby Brigade meetup. You can see a summary of the presentation and watch the video below. We have also included a link to the slideshow here: http://www.softwareallies.com/presentations/integreat.key

Summary:
Just about any application integrates with some sort of external service. Whether it's uploading files to AWS, liking all of your friends' statuses, or downloading tweets for the latest trending topic, a modern app is almost required to have an integration. While implementing a simple one may require nothing more than skimming through the README on the GitHub repo, more complicated integrations call for a deeper understanding of the process as a whole. Developers should be aware of best practices when dealing with complex integrations, such as syncing, volatile and persistent caching, and handling API limitations. This talk serves as a quick introduction to implementing integrations in a Rails environment, beginning with a simple one-way integration and moving on to explore how to handle large amounts of data, the need for "real time" information, and working with API-imposed limits.


InteGREAT - Integrations With Rails from Software Allies on Vimeo.

Tuesday, July 10, 2012

salesforce_bulk v1.0

Several months ago I decided to write a Ruby gem, salesforce_bulk, that integrates with the Salesforce Bulk API. There are several gems out there that can do the same for the non-bulk REST API, such as databasedotcom, but none of them use the Bulk API, leading to frustration as you watch your daily API calls go through the roof. The salesforce_bulk gem keeps your governor limits in check by allowing you to do multiple operations in a single call, versus a single operation per call, which is how the other gems operate.


When I first wrote this gem, I did so rather quickly, thinking that no one would really use it, so I didn't dedicate much time to improving it. It wasn't until recently that I noticed it had several downloads, a couple of pull requests, and several mentions on Stack Overflow, some blogs, and even an official Salesforce guide.


I must admit that I was rather embarrassed to see my hastily written code being talked about by so many people. Several developers pointed out that it lacked sandbox support and had nearly non-existent error reporting. This embarrassment motivated me to improve the gem and upgrade it from version 0.0.5 to version 1.0. In version 1.0 you'll find sandbox and asynchronous support (thanks to tonyjiang), better error catching, and better result reporting. I'll briefly go over these improvements here:


Sandbox Support

 salesforce = SalesforceBulk::Api.new("YOUR_SALESFORCE_SANDBOX_USERNAME", "YOUR_SALESFORCE_PASSWORD+YOUR_SALESFORCE_SANDBOX_TOKEN", true) # the third parameter enables sandbox mode

Asynchronous Job Processing Support

 upserted_account = Hash["name" => "Test Account -- Upserted", "External_Field_Name" => "123456"] # Fields to be updated. External field must be included  
 records_to_upsert = Array.new  
 records_to_upsert.push(upserted_account)  
 salesforce.upsert("Account", records_to_upsert, "External_Field_Name", true) # last parameter indicates whether to wait until the batch finishes

Better Error Catching

Initializing the salesforce client with incorrect credentials will now result in an error similar to this:
 INVALID_LOGIN: Invalid username, password, security token; or user locked out. (RuntimeError)  
versus the dreaded "NoMethodError: undefined method `[]' for nil:NilClass" error.

Better Result Reporting

The results of a job are now returned in a format that is easy to read and manipulate. Take this job with 2 records, the first record being invalid (check the id) and the second being valid.
 updated_account = Hash["name" => "Test Account -- Updated #{Time.now}", "id" => "CLEARLY_AN_INVALID_ID"]  
 updated_account2 = Hash["name" => "Test Account -- Updated #{Time.now}", "id" => "001A000000sibbu"]  
 records_to_update = Array.new  
 records_to_update.push(updated_account)  
 records_to_update.push(updated_account2)  
 job = salesforce.update("Account", records_to_update, true)  
 puts "result is: #{job.result.inspect}"  
The result of the puts statement will look like this:
 result is: {"errors"=>[{"0"=>"MALFORMED_ID:Account ID: id value of incorrect type: CLEARLY_AN_INVALID_ID:Id --"}], "success"=>false, "records"=>[#<CSV::Row "Id":"" "Success":false "Created":false "Error":"MALFORMED_ID:Account ID: id value of incorrect type: CLEARLY_AN_INVALID_ID:Id --">, #<CSV::Row "Id":"001A000000sibbuIAA" "Success":true "Created":false "Error":"">], "raw"=>"\"\",\"false\",\"false\",\"MALFORMED_ID:Account ID: id value of incorrect type: CLEARLY_AN_INVALID_ID:Id --\"\n\"001A000000sibbuIAA\",\"true\",\"false\",\"\"\n", "message"=>"The job has been closed."}  

The status and errors of each processed record are available through result.records, and the errors are available through a hash indexed in the same order the records were added to the array. It is now easy to check the status of a job via result.success, which returns true or false. Note that result reporting is only available if the job is set to wait for processing. If waiting is turned off, you will simply receive a generic message saying that the job has been queued.
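
For example, a quick way to check a finished job (a minimal sketch based on the result format shown above):

 job = salesforce.update("Account", records_to_update, true) # wait for the batch
 if job.result.success
   puts "all records processed"
 else
   puts job.result.errors.inspect # error messages keyed by record index
 end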

I plan on maintaining this gem more than I have in the past. Hopefully many people have gotten use from it in its current state, and hopefully many more will get use from version 1.0.

Wednesday, December 7, 2011

AUTO REFRESH DASHBOARDS
Auto Refresh Dashboards every time you click the Home Tab.

Tired of seeing the wrong numbers displayed on the Dashboard? Oftentimes, users forget to hit the refresh button on the Home Page Dashboard. This can result in Salesforce users seeing inaccurate data, which can be confusing and frustrating. This TIP will help keep your Dashboards refreshed with the most up-to-date information.

Go to the setup menu:
Click Customize, then Home, then Home Page Components.
Edit the Messages & Alerts component.
Paste this code anywhere in the text entry box:

<script type="text/javascript"> var run; function sa_refresh() { var dashboardButton = document.getElementById("db_ref_btn"); dashboardButton.click() } setTimeout(sa_refresh, 5000); //Runs the Refresh function every 5 minutes using Milliseconds (1minute = 60000) run = setInterval(sa_refresh,300000); </script>

SAVE.

The code will refresh the dashboard shortly after you click the Home tab, and then every 5 minutes after that.

Cheers!!