UC is planning a massive Salesforce implementation with large volumes of data. As part of the org's implementation, several roles, territories, groups, and sharing rules have been configured. The data architect has been tasked with loading all of the required data, including user data, in a timely manner.
What should a data architect do to minimize data load times due to system calculations?
- Enable Defer Sharing Calculations, and suspend sharing rule calculations.
- Load the data through Data Loader, and turn on parallel processing.
- Leverage the Bulk API and concurrent processing with multiple batches.
- Enable granular locking to avoid "UNABLE_TO_LOCK_ROW" errors.
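For context on the Bulk API option above, here is a minimal sketch of a parallel, multi-batch bulk insert. It assumes the simple_salesforce Python library; the credentials, the Contact object, the field values, and the batch_size / use_serial parameters are illustrative placeholders, not part of the question.

```python
# Minimal sketch: bulk insert with multiple batches processed in parallel,
# using the simple_salesforce library (assumed; credentials are placeholders).
from simple_salesforce import Salesforce

sf = Salesforce(
    username="user@example.com",     # placeholder credentials
    password="password",
    security_token="token",
)

# Illustrative records only; real loads would come from a prepared data file.
records = [
    {"LastName": f"Test {i}", "Email": f"test{i}@example.com"}
    for i in range(25_000)
]

# The bulk helper splits the payload into batches of `batch_size`;
# use_serial=False requests parallel (concurrent) batch processing.
results = sf.bulk.Contact.insert(records, batch_size=10_000, use_serial=False)

errors = [r for r in results if not r.get("success")]
print(f"Inserted {len(results) - len(errors)} records, {len(errors)} failures")
```

Note that parallel batches can still contend for locks on parent records or group membership, which is why the scenario's system-calculation overhead is usually addressed through sharing-calculation settings rather than load concurrency alone.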