Data Management Best Practices to Avoid Salesforce Data Recovery Cost

For those who have already experienced Salesforce CRM, the Force.com platform is admirable for its ability to scale up or down with your business needs. However, every technological system has its constraints, and Salesforce is no exception. Even if you do not think you will ever need data recovery, managing your data with a few fundamental best practices will make the whole platform far more efficient and functional.

What does Salesforce data actually need?

Simply adding data to Salesforce instances without considering the actual need for it only creates data clutter. Such clutter can adversely impact your business, much as physical clutter affects you at the workplace. Data clutter can leave you unable to use the data at hand for value-added purposes such as analytics and decision making; even simple information retrieval becomes harder because it is difficult to find what you are looking for.

Salesforce is best suited to operational data sets with targeted transactions in enterprise data management scenarios. While it may work fine with huge data volumes, those volumes consume storage and can slow down the processing you want to get done. You will also see the impact on enterprise reports and list views, and when you reassign record owners or update the Salesforce role hierarchy.

What can you do?

Ideally, define the operational data set and store only that data in Salesforce. Check whether you actually need all the records you currently have; plan to remove those that are not needed, archiving them first in case you need the historical data later. In many cases, you can archive simply by summarizing the data. Before performing any data transformation that cannot be reversed, check with the data owners and the relevant functional teams, such as finance, legal, sales, and marketing.
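Archiving by summarizing can be as simple as rolling old detail records up into per-month totals before the details are removed. A minimal sketch in Python, assuming records are plain dicts exported from a report; the `CloseDate` and `Amount` field names are illustrative:

```python
from collections import defaultdict
from datetime import date

def summarize_for_archive(records, cutoff):
    """Roll records older than `cutoff` into per-month summary rows.

    `records` is a list of dicts with illustrative keys 'CloseDate'
    (datetime.date) and 'Amount' (float). Returns (summaries, keep):
    `summaries` replaces the old detail rows, `keep` holds recent ones.
    """
    keep = []
    buckets = defaultdict(lambda: {"count": 0, "total": 0.0})
    for rec in records:
        if rec["CloseDate"] >= cutoff:
            keep.append(rec)  # recent record: stays in the operational set
        else:
            key = (rec["CloseDate"].year, rec["CloseDate"].month)
            buckets[key]["count"] += 1
            buckets[key]["total"] += rec["Amount"]
    summaries = [
        {"Year": y, "Month": m, "RecordCount": b["count"], "TotalAmount": b["total"]}
        for (y, m), b in sorted(buckets.items())
    ]
    return summaries, keep
```

The summary rows can then be loaded into a small custom summary object while the aged detail records are archived off-platform or deleted.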

Salesforce admins can also archive data in place by creating custom objects with all the fields you need and moving the data from the original object to the new one. This approach requires some caution: records related to the moved data do not automatically come over, and you need to carry the original record IDs into the archive object to keep those relationships traceable.
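One way to keep that traceability is to copy each source record's ID into an external-ID field on the archive object. A minimal sketch, assuming records are dicts and using an illustrative `Original_Record_Id__c` field name; in a real org the resulting payloads would feed a Bulk API load:

```python
def to_archive_records(source_records, field_map):
    """Map live records to archive-object payloads.

    `field_map` maps source field names to archive field names. The
    source record's Id is copied into 'Original_Record_Id__c' (an
    illustrative external-ID field) so related data can still be traced
    back after the move.
    """
    archived = []
    for rec in source_records:
        payload = {dest: rec[src] for src, dest in field_map.items()}
        payload["Original_Record_Id__c"] = rec["Id"]  # preserve lineage
        archived.append(payload)
    return archived
```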

Are all the fields necessary?

Any data admin can end up in a situation where different data points keep being added to a given object. Even if those fields never appear in reports or list views, they still clutter your Salesforce org. Conduct a review from time to time and delete the fields that are no longer needed. This approach keeps the org lean and keeps your Salesforce data recovery cost to a minimum.
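A quick way to shortlist candidates for such a review is to scan a sample export and flag fields that are blank on every record. A small sketch, assuming rows are dicts from a report export; blank detection here is the simple "None or empty string" case:

```python
def empty_field_candidates(records, ignore=("Id",)):
    """Return field names that are blank on every record in the sample.

    `records` is a list of dicts (e.g. rows from a report export).
    A field that is None or "" on every row is a candidate for review
    and possible deletion. Fields in `ignore` are skipped.
    """
    if not records:
        return []
    fields = {f for rec in records for f in rec} - set(ignore)
    return sorted(
        f for f in fields
        if all(rec.get(f) in (None, "") for rec in records)
    )
```

An always-empty field in a large sample is only a candidate: confirm with the data owners before deleting it.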

In many cases, custom objects are created to track something through automation, but the CRM users never adopt the process they support. Records keep being created, and no one looks at them. When an object contains only such records, it may be better to delete the entire object rather than individual fields and records. Once you have taken all the above measures to optimize your Salesforce org, here is something more you can do.

Optimizing data visibility

Salesforce maintains a full set of tables to track which users have visibility into which records, and there are various ways of giving someone access. Salesforce keeps track of all of them and consults these tables whenever someone views a record or report or requests access to one. In addition, users can be given access through groups, queues, the role hierarchy, manager access, and more. Let us explore some tips for optimizing data visibility.

  • Review and re-plan default org-wide settings – Review the org-wide defaults and customize them. They do not always need to be Private; granting broader default access reduces the sharing overhead Salesforce has to maintain, and you can still hide sensitive individual fields through field-level security.
  • Review sharing rules – Remove sharing rules that are no longer needed, especially if you have already relaxed the default org-wide settings to give users more visibility into the data.
  • Review public groups – Check for duplicate groups and for groups nested within other groups, and simplify the structure as far as possible.
  • Review the role hierarchy – The role hierarchy is powerful at granting built-in access, but it can be costly to maintain on the backend. In a typical enterprise, not everyone needs access to everything; for example, customer support may need to see all accounts, while marketing may need all opportunities and their owners for analysis. If you cannot make such objects public, grant the broadest required access at the appropriate level of the hierarchy rather than building it up rule by rule.
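When reviewing public groups, nested membership makes duplicates hard to spot by eye; resolving each group to its effective set of users makes the comparison trivial. A minimal sketch, assuming group membership has been exported to a dict of name-to-members lists (the group names below are illustrative):

```python
def flatten_group(group_name, groups):
    """Resolve a public group's effective members, expanding nested groups.

    `groups` maps group name -> list of members, where a member is
    either a user ID string or the name of another group. A `seen` set
    guards against cycles. Comparing flattened member sets is a quick
    way to spot duplicate groups.
    """
    members, seen = set(), set()

    def walk(name):
        if name in seen:
            return
        seen.add(name)
        for m in groups.get(name, []):
            if m in groups:      # nested group: expand it
                walk(m)
            else:                # plain user ID
                members.add(m)

    walk(group_name)
    return members
```

Two groups whose flattened member sets are equal are candidates for merging into one.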

Once you have cleansed the operational data set and simplified the sharing setup, you can consider the next level of optimization: data skew. Data skew arises when a disproportionate number of records sits under a single parent record or owner. For example, assigning all inactive customers to the Sales VP, or parking all old or inactive contacts beneath one parent account, skews how the query optimizer reads the underlying data and can trigger record locking and expensive sharing recalculations. Distributing such records across several owners or parent accounts helps Salesforce stay optimally efficient.
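Skewed parents are easy to find by counting children per parent in an export. A small sketch, assuming child records are dicts with a lookup field to the parent (`AccountId` is an illustrative default); Salesforce's large-data-volume guidance commonly cites roughly 10,000 children under one parent as the point where skew becomes a problem:

```python
from collections import Counter

SKEW_THRESHOLD = 10_000  # rough guidance for children under one parent

def skewed_parents(child_records, parent_field="AccountId",
                   threshold=SKEW_THRESHOLD):
    """Count child records per parent and flag parents over the threshold.

    `child_records` is a list of dicts; `parent_field` names the lookup
    to the parent (illustrative default). Returns {parent_id: count}
    for parents that concentrate enough children to cause data skew.
    """
    counts = Counter(rec[parent_field] for rec in child_records)
    return {pid: n for pid, n in counts.items() if n > threshold}
```

Any parent this flags is a candidate for splitting its children across several parent records or owners.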
