TransWikia.com

Process builder not working properly when updating multiple records at the same time

Salesforce. Asked by AjFmO on August 11, 2020

I have a Process Builder that executes just fine when a single record is updated, but it does not update all records when multiple records are updated at the same time.

The PB calls a flow, and the flow makes the updates.
It works perfectly on a single record, and mostly works on multiple records, but it always leaves one record unprocessed: if I make the updates from the object's List View, the whole process (PB -> Flow) runs for all records, yet one record seems to be skipped.

Any ideas?

Use Case:

We are using a managed package called Aprika Mission Control. What we are trying to do is: whenever a Time Log is approved and has not yet been posted (a flag prevents counting the same Time Log twice), roll the time consumed up to the related Opportunity Product.

This is the ERD.


Time Log <-m:1- Action <-m:1- Milestone <-m:1- Project <-m:n- Oppty <-m:n-> OLI

Basically, it works for one record, but when doing bulk updates it is not rolling the value up to the OLI for all records, just for the newest record.


Edit #1

Ran some tests: all records go through the PB, but only one (the newest record) goes through the flow.



Edit #2

Changed the flow type to a record-triggered flow and got rid of the PB, so now everything happens in the Flow.

Records actually go through the flow, but it is not updating each OLI.


Screen shots:

PB: Object and process; Criteria; Action - Flow; Action - Update Field.

Flow: Get Record that fired the PB.

Update from List View.

One Answer

The way Process Builder "bulkifies" records is that it actually runs one interview per loaded record. So if you load 100 records, PB runs 100 interviews with 1 record each, NOT 1 interview with 100 records.

  1. This is why you see 1 record in the log. It has always been an issue that the Debug Logs are pretty much useless in this scenario: either they jumble everything up so there is no way to see a linear flow, or they drop all the records and show you just one.

  2. Since you mentioned rolling up values, it sounds like you need the 100 records to go through together in a Collection variable. You can do this one of two ways.

A. Instead of having the PB fire the flow for each loaded record, have it flag an "Update Me" field on each record. Then have a scheduled flow execute that finds all the "Update Me" records, puts them into an SObject Collection, loops through them, and does what you want it to do. You'd want your DML updates at the very end so you update all records at once. You CANNOT do this in your current process: there is no way to collect the records from the PB and pass them to the flow in an SObject Collection this way.
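If you'd rather do the scheduled piece of option A in code instead of a scheduled flow, a Schedulable Apex job can do the same thing: one query for all flagged records, process together, one DML at the end. This is a hypothetical sketch assuming a custom checkbox field Update_Me__c on Case (swap in your own object, field, and roll-up logic):

public with sharing class UpdateMeScheduler implements Schedulable {
    public void execute(SchedulableContext sc) {
        // One SObject collection with ALL flagged records,
        // instead of one PB/Flow interview per record
        List<Case> flagged = [
            SELECT Id, Update_Me__c
            FROM Case
            WHERE Update_Me__c = true
            LIMIT 10000
        ];

        for (Case c : flagged) {
            // ... do the roll-up work here ...
            c.Update_Me__c = false; // clear the flag so the next run skips it
        }

        // Single bulk DML at the very end, as recommended above
        update flagged;
    }
}

You would schedule it once, e.g. from Anonymous Apex: System.schedule('UpdateMe hourly', '0 0 * * * ?', new UpdateMeScheduler());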

B. Use an Invocable Apex class at the start of the Flow. This is one of the places where the flow "stops" and bulkifies the records again. So in the case where the 100 interviews are running, you can actually stop them all with the Invocable Apex: pass the records to Apex, put them into your own List variable, and pass that variable back to the Flow as an SObject Collection of 100 records, then proceed as in A.

Note: I have used option B for two years now to resolve this issue. I've put an example of the Invocable for the Case object below. You can change the object, or, now that Flow accepts dynamic Apex to some degree, modify it to accept any object.

public with sharing class CaseBulkifyClass {

    @InvocableMethod(label='Collect Bulkified Records')
    public static List<List<Case>> gatherRequests(Request[] requests) {

        // Gather the bulkified record IDs passed in from the Flow interviews
        Map<Id, Case> parents = new Map<Id, Case>();
        for (Request request : requests) {
            parents.put(request.recordId, null);
        }

        // Re-query the records with the fields the Flow needs
        parents = new Map<Id, Case>([
            SELECT Id, hdone__Identifier__c, Status, ClosedDate, hdone__hidden_CaseEdit__c, hdone__Flow_Status__c
            FROM Case
            WHERE Id IN :parents.keySet()
        ]);

        // Flows require List<List> for SObject Collections; this is
        // returned to the Flow as an SObject Collection
        List<List<Case>> responseColl = new List<List<Case>>();

        // The first list carries ALL of the records
        List<Case> responseList = new List<Case>(parents.values());
        responseColl.add(responseList);
        System.debug('Size of responseList: ' + responseList.size());

        // Pad with empty lists so we return exactly one output per
        // interview (requests.size() total), otherwise the Flow errors out
        List<Case> emptyList = new List<Case>();
        for (Integer i = 0; i < requests.size() - 1; i++) {
            responseColl.add(emptyList);
        }
        System.debug('Return responseColl size: ' + responseColl.size());

        return responseColl;
    }

    public class Request {
        @InvocableVariable(label='Record ID' required=true)
        public Id recordId;

        @InvocableVariable(label='Object API Name' required=false)
        public String objectName;

        @InvocableVariable(label='Parent ID' required=false)
        public Id parentId;
    }
}

One more caveat: if you leave this as is, you will get an error like "# of Flow Interviews does not match....". This is because the Flow started 100 interviews, and Apex collected them and passed back one variable with 100 records. So where are the other 99 interviews? Apex actually passes back one list with 100 records and 99 lists with 0 records. You therefore need a Decision element after the Apex action that says: if the collection variable is null or empty, exit the flow; else continue. Then you'll be set.
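You can see this "one full list plus empty padding" shape by exercising the invocable from Anonymous Apex. A hypothetical check, assuming the org has at least 3 Case records (the debug comments describe the expected shape given the class logic above):

// Build 3 requests, one per Case, the same way 3 Flow interviews would
List<CaseBulkifyClass.Request> reqs = new List<CaseBulkifyClass.Request>();
for (Case c : [SELECT Id FROM Case LIMIT 3]) {
    CaseBulkifyClass.Request r = new CaseBulkifyClass.Request();
    r.recordId = c.Id;
    reqs.add(r);
}

List<List<Case>> result = CaseBulkifyClass.gatherRequests(reqs);
System.debug(result.size());       // 3: one output per interview
System.debug(result[0].size());    // 3: all records land in the first list
System.debug(result[1].isEmpty()); // true: the padding lists are empty

This is exactly why the Decision element is needed: 99 of the 100 interviews receive an empty collection and should simply exit.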

Answered by ddeve on August 11, 2020
