You have successfully deployed to production a large, complex TensorFlow model trained on tabular data. You want to predict the lifetime value (LTV) field for each subscription stored in the BigQuery table subscription.subscriptionPurchase in the project named my-fortune500-company-project.
You have organized all your training code, from preprocessing the data in the BigQuery table to deploying the validated model to a Vertex AI endpoint, into a TensorFlow Extended (TFX) pipeline. You want to prevent prediction drift, i.e., a situation in which the distribution of feature data in production changes significantly over time. What should you do?
- Implement continuous retraining of the model daily using Vertex AI Pipelines.
- Add a model monitoring job where 10% of incoming predictions are sampled every 24 hours.
- Add a model monitoring job where 90% of incoming predictions are sampled every 24 hours.
- Add a model monitoring job where 10% of incoming predictions are sampled every hour.
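
For context on what the monitoring options above correspond to in practice, here is a minimal sketch of creating a Vertex AI model monitoring job with a sampling rate and monitoring interval, assuming the google-cloud-aiplatform Python SDK. The endpoint ID, region, feature name, drift threshold, and alert email are placeholders, not values given in the question.

```python
# Minimal sketch of a Vertex AI model monitoring job for prediction drift.
# Assumes the google-cloud-aiplatform SDK; region, endpoint ID, feature name,
# threshold, and email below are hypothetical placeholders.
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-fortune500-company-project", location="us-central1")

# Endpoint already serving the TFX-validated model (placeholder endpoint ID).
endpoint = aiplatform.Endpoint("ENDPOINT_ID")

# Fraction of incoming prediction requests to log and analyze (e.g., 10%).
sampling = model_monitoring.RandomSampleConfig(sample_rate=0.1)

# How often the job analyzes the sampled requests, in hours (e.g., hourly).
schedule = model_monitoring.ScheduleConfig(monitor_interval=1)

# Drift detection: compare recent serving data against earlier serving data,
# feature by feature, and alert when the distribution distance exceeds the threshold.
drift = model_monitoring.DriftDetectionConfig(
    drift_thresholds={"customer_tenure_days": 0.05}  # hypothetical feature name
)
objective = model_monitoring.ObjectiveConfig(drift_detection_config=drift)

alerting = model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"])

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="ltv-prediction-drift-monitoring",
    endpoint=endpoint,
    logging_sampling_strategy=sampling,
    schedule_config=schedule,
    alert_config=alerting,
    objective_configs=objective,
)
```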