Machine Unlearning

Yunbo Long · 571 words · 3 minutes · Privacy, Data Governance, Machine Learning

Machine unlearning is an emerging area of research that addresses the need to selectively remove the influence of specific training data from machine learning models after deployment. Unlike retraining from scratch, which is computationally prohibitive for large-scale systems, machine unlearning seeks efficient methods to "forget" targeted data points while preserving the model's performance on the remaining data. This challenge has gained urgency with data protection regulations such as the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), both of which grant individuals a right to have their data deleted, a right that is increasingly interpreted as extending to data embedded within trained models.

In the context of supply chain management, machine unlearning carries particular significance. Modern supply chains rely on AI models trained on sensitive data from multiple stakeholders—including suppliers, manufacturers, logistics providers, and customers. When a commercial relationship ends, a data-sharing agreement expires, or a regulatory request mandates deletion, organisations must be able to verifiably remove the influence of that partner's data from shared predictive models. This is especially critical in collaborative forecasting, demand sensing, and risk assessment systems where proprietary data from multiple parties may be intertwined within a single model. Without effective unlearning mechanisms, supply chain organisations face a difficult choice between regulatory non-compliance and costly full retraining.

Research in this space spans several directions: exact unlearning methods that provide formal guarantees of data removal, approximate unlearning techniques that balance efficiency with removal quality, and verification frameworks that allow auditors to confirm that unlearning has been faithfully performed. As supply chain AI systems become more autonomous and data-intensive, the ability to manage the lifecycle of learned information—including its targeted removal—will be a foundational requirement for trustworthy, privacy-compliant operations.
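To make the exact-unlearning direction concrete, the sketch below illustrates the sharding idea behind SISA training (Bourtoule et al., 2021, listed below): the training set is split into disjoint shards, one constituent model is trained per shard, and predictions are aggregated by majority vote. Deleting a point then requires retraining only the shard that contained it, which yields an exact removal guarantee at a fraction of full-retraining cost. This is a minimal illustration, not the paper's implementation; the class name, the round-robin sharding, and the nearest-centroid "models" are simplifying assumptions chosen to keep the example self-contained.

```python
from collections import Counter

class ShardedEnsemble:
    """SISA-style sketch: nearest-centroid classifiers on disjoint shards,
    aggregated by majority vote. Illustrative only."""

    def __init__(self, n_shards):
        self.shards = [[] for _ in range(n_shards)]  # (x, y) pairs per shard
        self.models = [None] * n_shards              # per-shard class centroids

    def fit(self, data):
        # Deterministic round-robin sharding of (x, y) training points.
        for i, point in enumerate(data):
            self.shards[i % len(self.shards)].append(point)
        for s in range(len(self.shards)):
            self._train_shard(s)

    def _train_shard(self, s):
        # "Training" here is just computing per-class centroids on this shard.
        sums, counts = {}, {}
        for x, y in self.shards[s]:
            sums[y] = sums.get(y, 0.0) + x
            counts[y] = counts.get(y, 0) + 1
        self.models[s] = {y: sums[y] / counts[y] for y in sums}

    def unlearn(self, point):
        # Exact removal: delete the point, then retrain ONLY its shard.
        # The other shards never saw the point, so they are untouched.
        for s, shard in enumerate(self.shards):
            if point in shard:
                shard.remove(point)
                self._train_shard(s)
                return s  # index of the single shard that was retrained
        raise ValueError("point not found in any shard")

    def predict(self, x):
        # Each shard votes for the class whose centroid is nearest to x.
        votes = [min(m, key=lambda y: abs(x - m[y])) for m in self.models if m]
        return Counter(votes).most_common(1)[0][0]
```

A typical usage: fit the ensemble on pooled data, then call `unlearn` when a deletion request arrives; only one of the shard models is recomputed, and the resulting ensemble is identical to one trained from scratch without the deleted point. The papers below develop this idea with realistic learners, slicing within shards, and formal guarantees.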

We invite you to explore our curated collection of key publications below, offering a gateway into this important and rapidly evolving field.

List of Publications

  1. Cao, Y. and Yang, J., 2015. Towards making systems forget with machine unlearning. 2015 IEEE Symposium on Security and Privacy, pp.463-480. [PDF]
  2. Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C.A., Jia, H., Travers, A., Zhang, B., Lie, D. and Papernot, N., 2021. Machine unlearning. 2021 IEEE Symposium on Security and Privacy (SP), pp.141-159. [PDF]
  3. Sekhari, A., Acharya, J., Kamath, G. and Suresh, A.T., 2021. Remember what you want to forget: Algorithms for machine unlearning. Advances in Neural Information Processing Systems, 34, pp.18075-18086. [PDF]
  4. Nguyen, T.T., Huynh, T.T., Nguyen, P.L., Liew, A.W.C., Yin, H. and Nguyen, Q.V.H., 2022. A survey of machine unlearning. arXiv preprint arXiv:2209.02299. [PDF]
  5. Xu, H., Zhu, T., Zhang, L., Zhou, W. and Yu, P.S., 2024. Machine unlearning: A survey. ACM Computing Surveys, 56(1), pp.1-36. [PDF]
  6. Ginart, A., Guan, M., Valiant, G. and Zou, J., 2019. Making AI forget you: Data deletion in machine learning. Advances in Neural Information Processing Systems, 32. [PDF]
  7. Guo, C., Goldstein, T., Hannun, A. and Van Der Maaten, L., 2020. Certified data removal from machine learning models. Proceedings of the 37th International Conference on Machine Learning (ICML), pp.3832-3842. [PDF]