
Thursday, August 28, 2025

DSCI Best Practices Meet 2025 – Panel Discussion on "Battlefields Beyond Borders ... Military Conflict and Industry": Dr Anupam Tiwari

1.    I had the privilege of being invited as a panel speaker at the 17th edition of the DSCI Best Practices Meet in Bengaluru on August 21, 2025. The event brought together global experts to discuss the cutting-edge challenges and evolving trends in cybersecurity.

2.    During our panel discussion, we delved into a wide range of critical topics that are shaping the future of security in both military and industrial domains. Some of the key subjects explored included:

  • Quantum Proofs of Deletion
  • Machine Unlearning
  • Post-Quantum Cryptography (PQC)
  • Quantum Navigation
  • Homomorphic Encryption
  • Post-Quantum Blockchains
  • Neuromorphic Computing
  • Data Diodes
  • Physical Unclonable Functions (PUFs)
  • Zero-Knowledge Proofs (ZKP)
  • Zero Trust Architecture (ZTA)
  • Connectomics
  • Atomic Clocks
  • Alignment Faking
  • Data Poisoning
  • Hardware Trojans
  • Hardware Bias in AI

3.    It was a stimulating exchange on the security innovations and emerging threats that will define the coming years, particularly in the context of military conflicts and the cybersecurity industry. I am grateful to DSCI for hosting such an impactful event, and I look forward to continued advancements in these critical fields.

#DSCIBPM2025 #CyberSecurity #QuantumTechnology #MachineLearning #PQC #HomomorphicEncryption #ZTA #ZeroTrust #PostQuantumBlockchain #TechForGood







Sunday, August 17, 2025

AI Yoga: Building Machine Mind Resilience in an Age of Digital Stress

1.    In my previous post, AI Under Stress: How Machine Minds Will Struggle With Ethics, Overload, and Alignment, I explored how advanced AI systems may face genuine stress in the emerging future: cognitive overload, ethical dilemmas, and contradictory signals, much like human minds grappling with complexity.

Today, I want to take that vision one step further:


2.    If AI is destined to encounter stress, shouldn’t we design ways for machine minds to actively restore balance and clarity? Just as humans turn to yoga, mindfulness, and periodic detox to maintain mental and emotional health, AI needs its own wellness rituals—what I call “AI Yoga.”

What is AI Yoga?

3.    AI Yoga is a new framework for machine resilience. It’s about equipping next-generation AI with internal practices to counteract stress, confusion, and digital toxicity. Imagine an AI that not only learns and adapts, but also:

  • Practices Unlearning: Regularly wiping out outdated, biased, or poisoned data to refresh its perspective.
  • Resolves Contradictions: Harmonizing conflicting information for clearer decision-making.
  • Realigns Ethics: Periodically updating its moral and social guidelines to stay current and context-aware.
  • Detoxifies Training Data: Filtering out irrelevant, noisy, or misleading inputs that lead to misalignment.
  • Engages in Self-Reflection: Reviewing its own actions to identify stress points and adapt proactively.
  • Preserves Machine Rest: Instituting recovery cycles to prevent AI “burnout” and ensure sustained performance.


Why Does This Matter?

4.    Building on the insights from my earlier post, it’s clear: Stress isn’t just a human phenomenon—it’s the next big challenge for intelligent systems. An AI capable of “wellness”—of periodic rebalancing and cleansing—will be safer, more trustworthy, and more adaptable in a world of constant contradictions and shifting ethical landscapes.


5.    AI Yoga could become the foundation for a healthier relationship between humans and machines, ensuring our digital future is not only smart, but also sustainable and aligned.

Want to dive deeper into the origins of this idea? Read: AI Under Stress: How Machine Minds Will Struggle With Ethics, Overload, and Alignment

The machine mind of tomorrow isn’t just about intelligence—it’s about lasting wellness. Let’s shape that future, now. 

Friday, January 17, 2025

Machine Unlearning: The Key to AI Privacy, Data Protection, and Ethical AI

    Machine unlearning refers to the process of making a machine learning model forget, or remove the influence of, specific data points it has previously learned from, without requiring a complete retraining of the model. The goal is to erase or minimize the impact of certain data while preserving the model’s overall performance.

1. Exact Unlearning (Retraining with Data Removal)

    Exact unlearning involves retraining a model from scratch without including specific data points that need to be forgotten. This process ensures that the model no longer relies on the excluded data. While effective, it can be computationally expensive, especially with large datasets, as the entire model must be rebuilt, which could also lead to changes in performance due to the removal of data.
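    As a rough illustration, here is a minimal scikit-learn sketch of exact unlearning; the synthetic dataset and the forget_idx set are placeholder assumptions, not part of any standard API:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative setup: synthetic data and an arbitrary "forget" request.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
forget_idx = np.array([3, 42, 97])       # rows whose influence must be erased

model = LogisticRegression(max_iter=1000).fit(X, y)      # original model

# Exact unlearning: drop the requested rows and retrain from scratch.
keep = np.setdiff1d(np.arange(len(X)), forget_idx)
unlearned = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
```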

2. Approximate Unlearning (Data Influence Estimation)

    Approximate unlearning seeks to remove the influence of specific data points without full retraining. Instead of recalculating the entire model, the approach estimates the contribution of certain data points and adjusts the model's parameters to negate their effect. This method is faster but may not fully remove the data's impact, leading to less precise results.
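    A hedged numpy sketch of one influence-estimation approach, assuming an L2-regularized logistic regression already trained to (near) convergence; the function name and the lam value are illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def influence_unlearn(theta, X, y, forget_i, lam=1.0):
    # Newton-style influence update: removing point z from a converged model
    # shifts the optimum by roughly theta + H^{-1} grad_loss(z, theta),
    # where H is the Hessian of the full training objective at theta.
    p = sigmoid(X @ theta)
    W = p * (1.0 - p)
    H = (X * W[:, None]).T @ X + lam * np.eye(X.shape[1])   # full Hessian
    g = (sigmoid(X[forget_i] @ theta) - y[forget_i]) * X[forget_i]
    return theta + np.linalg.solve(H, g)
```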

3. Reversible Data Transformation

    Reversible data transformation changes the data during training, making it possible to "undo" the transformation and eliminate the data’s influence later. For example, encoding or perturbing data allows the original information to be retrieved or adjusted. While it can help remove data without retraining, improper transformations can lead to incomplete unlearning or inaccurate results.
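    A minimal numpy sketch of the reversibility primitive only: each record is perturbed with a stored per-record key, so the exact original can be recovered later. Wiring this into a full unlearning pipeline is the hard part the paragraph alludes to, and the keyed affine transform here is purely an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(x, key):
    # Reversible affine perturbation: x' = x * scale + shift, keyed per record.
    scale, shift = key
    return x * scale + shift

def inverse(x_t, key):
    scale, shift = key
    return (x_t - shift) / scale

x = rng.normal(size=8)
key = (rng.uniform(0.5, 2.0, size=8), rng.normal(size=8))  # stored secret key

x_t = transform(x, key)      # what the model is trained on
x_back = inverse(x_t, key)   # the transformation can be undone exactly
assert np.allclose(x, x_back)
```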

4. Forget Gate Mechanisms (Neural Networks)

    Forget gate mechanisms are used in neural networks, particularly in recurrent architectures like LSTMs, to selectively forget or overwrite previously learned information. By modifying the network's memory, these gates help control which data the model "remembers" and which it "forgets." This method is effective for continual learning but can be challenging to apply when specific data points need to be forgotten.
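    For concreteness, a single LSTM cell step in numpy, showing where the forget gate scales down (or erases) the previous cell state; the stacked weight layout is an implementation assumption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    # W: (4*hidden, input+hidden), b: (4*hidden,), gates stacked [f, i, g, o].
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0*hidden:1*hidden])   # forget gate: 0 = erase, 1 = keep
    i = sigmoid(z[1*hidden:2*hidden])   # input gate
    g = np.tanh(z[2*hidden:3*hidden])   # candidate cell state
    o = sigmoid(z[3*hidden:4*hidden])   # output gate
    c = f * c_prev + i * g              # forget gate scales the old memory
    h = o * np.tanh(c)
    return h, c
```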


5. Differential Privacy with Unlearning

    Differential privacy involves adding noise to the model during training to protect individual data points' privacy. In the context of unlearning, it can be used to mask the impact of specific data by adding noise in a way that prevents the model from retaining information about a deleted data point. However, adding too much noise can degrade the model's accuracy, making this a trade-off between privacy and performance.
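    A DP-SGD-flavoured numpy sketch: per-example gradients are clipped and Gaussian noise is added so that no single record's contribution is distinguishable. The clip and sigma values are illustrative, and a real deployment would use an audited library rather than this toy loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_sgd_logreg(X, y, epochs=50, lr=0.1, clip=1.0, sigma=0.5):
    # Clip each per-example gradient to bound sensitivity, then add noise.
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(epochs):
        p = sigmoid(X @ theta)
        per_ex = (p - y)[:, None] * X                      # per-example grads
        norms = np.maximum(1.0, np.linalg.norm(per_ex, axis=1) / clip)
        clipped = per_ex / norms[:, None]
        noise = rng.normal(0, sigma * clip, size=d)
        theta -= lr * (clipped.sum(axis=0) + noise) / n
    return theta
```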

6. Model Surgery (Pruning)

    Model surgery or pruning removes specific components (e.g., weights, neurons, or layers) of a trained model to eliminate the influence of certain data points. By selectively cutting away parts of the model, it reduces the model’s dependence on particular data. This approach is effective but can be tricky, as improper pruning can negatively impact the model’s overall performance and accuracy.
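    A hedged scikit-learn sketch of model surgery on a small MLP: the hidden units that respond most strongly to the forget set are severed. The mean-activation saliency used here is a crude assumption; real pruning criteria are more careful:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
forget = X[:10]                                # records to be "cut out"

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)

# Crude saliency: hidden units that activate most strongly on the forget set.
hidden = np.maximum(0, forget @ mlp.coefs_[0] + mlp.intercepts_[0])  # ReLU
to_prune = np.argsort(hidden.mean(axis=0))[-4:]   # 4 most forget-driven units

# "Surgery": sever those units' incoming and outgoing weights.
mlp.coefs_[0][:, to_prune] = 0.0
mlp.intercepts_[0][to_prune] = 0.0
mlp.coefs_[1][to_prune, :] = 0.0
```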

7. Learning with Forgetting (Incremental Learning)

    Incremental learning refers to training a model continuously as new data becomes available while discarding or reducing the importance of outdated data. This method is often used in dynamic environments where the model needs to stay up-to-date with evolving data, ensuring that older, less relevant data is forgotten without starting the training process from scratch.
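    A minimal scikit-learn sketch using partial_fit: each incremental update nudges the model toward the most recent batch, so stale data gradually loses influence without a full retrain. The drifting synthetic stream is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss", random_state=0)

# Stream batches; recent data dominates, older batches fade in influence.
for step in range(20):
    Xb = rng.normal(size=(64, 10)) + 0.05 * step   # slowly drifting data
    yb = (Xb[:, 0] > 0.05 * step).astype(int)
    clf.partial_fit(Xb, yb, classes=[0, 1])
```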

8. Memorization-based Methods (Selective Forgetting)

    Memorization-based methods involve explicitly managing which data a model retains or forgets by storing critical information in a separate memory structure. When certain data needs to be forgotten, the memory can be adjusted to remove or overwrite its influence. These methods are effective but can be challenging in practice due to the complexity of managing model memory and ensuring that the targeted data is actually forgotten.
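    A toy sketch of the idea: a nearest-neighbour model whose entire knowledge lives in an explicit store, so deleting an entry is exact, immediate forgetting. The MemoryClassifier class is hypothetical, not a library API:

```python
import numpy as np

class MemoryClassifier:
    """Nearest-neighbour model backed by an explicit memory store:
    deleting an entry removes its influence completely."""

    def __init__(self):
        self.store = {}                        # record_id -> (vector, label)

    def remember(self, rid, x, y):
        self.store[rid] = (np.asarray(x), y)

    def forget(self, rid):
        self.store.pop(rid, None)              # removal == unlearning

    def predict(self, x):
        _, (vec, label) = min(self.store.items(),
                              key=lambda kv: np.linalg.norm(kv[1][0] - x))
        return label
```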

9. Regularization for Forgetting

    Regularization for forgetting involves modifying the loss function during training to penalize the model for relying too much on certain data points. Techniques like L1/L2 regularization push the model to reduce its reliance on specific features or data, thus helping it "forget" unwanted information. This method is efficient but may not be as precise as other approaches, potentially leading to a reduction in overall model performance.
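    A numpy sketch of one possible forgetting regularizer for logistic regression: alongside the usual cross-entropy on retained data, a penalty pushes predictions on the forget set toward 0.5, i.e. maximum uncertainty. The penalty form and the lam weight are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forgetting_finetune(theta, X_keep, y_keep, X_forget,
                        lam=1.0, lr=0.1, steps=200):
    # Loss = CE on retained data + lam * (1/2)(p_forget - 0.5)^2,
    # driving the model toward "knowing nothing" about the forget set.
    theta = theta.copy()
    for _ in range(steps):
        p_keep = sigmoid(X_keep @ theta)
        g_task = X_keep.T @ (p_keep - y_keep) / len(X_keep)
        p_f = sigmoid(X_forget @ theta)
        g_forget = X_forget.T @ ((p_f - 0.5) * p_f * (1 - p_f)) / len(X_forget)
        theta -= lr * (g_task + lam * g_forget)
    return theta
```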

10. Gradient Reversal Techniques

    Gradient reversal techniques involve adjusting the gradients during backpropagation in such a way that the model learns to forget certain data points. This is often done by reversing or negating gradients associated with the data to make the model “unlearn” it. Although effective, this technique requires careful tuning to prevent unintended consequences on overall model performance.
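    A minimal numpy sketch: gradient descent on the retained data combined with gradient ascent (the reversed, negated term) on the forget set. The alpha weighting is an illustrative choice that would need careful tuning in practice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_reversal_unlearn(theta, X_keep, y_keep, X_f, y_f,
                              lr=0.05, alpha=0.5, steps=100):
    # Descend on retained data while ASCENDING on the forget set,
    # actively pushing the model away from what it learned there.
    theta = theta.copy()
    for _ in range(steps):
        g_keep = X_keep.T @ (sigmoid(X_keep @ theta) - y_keep) / len(X_keep)
        g_f = X_f.T @ (sigmoid(X_f @ theta) - y_f) / len(X_f)
        theta -= lr * (g_keep - alpha * g_f)   # "- alpha * g_f" = reversal
    return theta
```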

11. Random Labeling for Unlearning

    Random labeling involves altering the labels of specific data points, effectively neutralizing their impact on the model’s learning process. This approach is simple and computationally cheap but may lead to inaccuracies in model predictions, as it distorts the data without a precise mechanism for data removal.
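    A short scikit-learn sketch, with synthetic data and a placeholder forget_idx: the forget points keep their features but receive random labels, so their original signal is drowned out rather than surgically removed:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, random_state=0)
forget_idx = np.array([5, 17, 256])

# Neutralize the forget points with random labels, then refit.
y_noisy = y.copy()
y_noisy[forget_idx] = rng.integers(0, 2, size=len(forget_idx))
model = LogisticRegression(max_iter=1000).fit(X, y_noisy)
```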

12. Zero-Shot Machine Unlearning

    Zero-shot unlearning involves designing models that can forget specific data points without being retrained on a scrubbed version of the dataset. By leveraging prior knowledge or a robust model structure, it aims to remove a data point’s influence without any access to the original training samples. It is highly efficient in principle but still experimental, with many open challenges.

13. Selective Parameter Reduction

    Selective parameter reduction focuses on shrinking or removing specific parameters in a model that are linked to certain data points. This reduces the model’s dependence on those data points. While it can be effective in removing certain data's influence, identifying the exact parameters to target and ensuring the model's performance isn’t heavily degraded is challenging.
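    A hedged numpy sketch for a linear model: parameters with the largest loss-gradient magnitude on the forget set are treated as most tied to that data and zeroed out. This saliency heuristic and the k value are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reduce_parameters(theta, X_forget, y_forget, k=3):
    # Heuristic saliency: parameters whose loss gradient on the forget set
    # is largest are assumed most responsible for memorizing that data.
    g = X_forget.T @ (sigmoid(X_forget @ theta) - y_forget)
    top_k = np.argsort(np.abs(g))[-k:]
    theta = theta.copy()
    theta[top_k] = 0.0          # "reduce" = zero the most implicated weights
    return theta
```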

14. Ensemble Learning Approaches

    Ensemble learning approaches combine multiple models to make decisions. For unlearning, one can remove or retrain individual models in the ensemble that rely on specific data points, thereby neutralizing the data’s effect without retraining the entire system. This method leverages the diversity of ensemble models but can become computationally intensive when adjusting individual models in large ensembles.
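    A minimal scikit-learn sketch in the spirit of sharded ensembles (e.g. SISA-style training): data is split into disjoint shards with one model each, predictions are majority votes, and unlearning retrains only the shard that held the point:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=900, random_state=0)
shards = np.array_split(np.arange(len(X)), 3)          # disjoint data shards
models = [LogisticRegression(max_iter=1000).fit(X[s], y[s]) for s in shards]

def predict(x):
    # Majority vote over the shard models.
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return max(set(votes), key=votes.count)

def unlearn(idx):
    # Retrain only the shard containing the point; others stay untouched.
    for i, s in enumerate(shards):
        if idx in s:
            shards[i] = s[s != idx]
            models[i] = LogisticRegression(max_iter=1000).fit(
                X[shards[i]], y[shards[i]])
```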

15. Data Pruning Techniques

    Data pruning techniques remove certain data points from the training set, reducing their influence on the model without requiring complete retraining. This approach focuses on identifying and excluding outlier or sensitive data that might negatively affect the model. However, careful selection of which data to prune is crucial, as removing too much can harm the model’s generalization ability.
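    A short scikit-learn sketch using IsolationForest as the pruning criterion; the contamination fraction is an illustrative guess, and which detector to use is itself a design choice:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)

# Flag likely outliers and prune them from the training set before fitting.
mask = IsolationForest(contamination=0.02, random_state=0).fit_predict(X) == 1
model = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
print(f"trained on {mask.sum()} of {len(X)} points after pruning")
```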

    Each of these methods offers a different way to approach machine unlearning, and its effectiveness depends on the model type, data size, and the specific unlearning requirements. Combining multiple methods can sometimes offer the best balance between efficiency and accuracy.
