
Deploying ANNs on IoT devices is constrained by limited computational power, memory capacity, and energy budget. Techniques such as quantization and pruning are used to meet these constraints [4]. In addition, edge AI models suffer accuracy degradation when the data distribution shifts between the training and operational environments [4].
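As an illustration of the two compression techniques mentioned above, the following is a minimal sketch of symmetric per-tensor int8 quantization and magnitude-based pruning on a plain list of weights; the function names and the 50% sparsity value are illustrative choices, not taken from the source.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against all-zero tensors
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights; error per weight is at most scale/2."""
    return [v * scale for v in q]

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.52, -1.27, 0.003, 0.89]
q, s = quantize_int8(w)           # int8 codes plus one float scale: 4x smaller storage
w_hat = dequantize(q, s)
w_sparse = prune_by_magnitude(w, 0.5)  # half the weights become zero
```

Real deployments would apply these per-layer (often per-channel) and fine-tune afterwards to recover accuracy, but the storage/accuracy trade-off is already visible at this scale.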

Rehearsal-based CL methods mitigate forgetting by storing a pool of examples from previous tasks in a fixed-size buffer [5]. The model uses this stored data to compute an additional loss term that counteracts catastrophic forgetting. By revisiting past exemplars, the model reinforces previously learned representations, reducing the likelihood of forgetting [1].
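A common way to fill such a fixed-size buffer from a data stream is reservoir sampling, which keeps a uniform sample of everything seen so far. The sketch below is one plausible implementation, not the specific method of the cited works; the class and method names are illustrative.

```python
import random

class ReplayBuffer:
    """Fixed-size rehearsal buffer filled by reservoir sampling over the stream."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0          # total examples observed so far
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Keep the incoming example with probability capacity / seen,
            # replacing a uniformly chosen stored one.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, n):
        """Draw a rehearsal mini-batch for the auxiliary loss term."""
        return self.rng.sample(self.buffer, min(n, len(self.buffer)))
```

During training, each step would combine the loss on the current batch with a loss on `buffer.sample(n)`, e.g. `loss = loss_current + lam * loss_replay`, which is the extra term the text refers to.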

Rehearsal-free CL approaches trade accuracy for storage: they rely on specific adjustments to the network architecture or learning strategy to make models resilient to forgetting without keeping samples on-device [5]. They generally underperform rehearsal-based approaches, which consume more storage but achieve better accuracy by training the learner on fresh data alongside examples from previously acquired tasks [5].
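One well-known learning-strategy adjustment of this kind is an Elastic-Weight-Consolidation-style quadratic penalty, which anchors important parameters near their post-task values instead of storing data. The source does not name EWC specifically; this is a generic sketch of the idea, with illustrative names, operating on flat parameter lists.

```python
def ewc_penalty(params, anchor_params, importance, lam=1.0):
    """EWC-style regularizer: penalize deviation of each parameter from its
    value after the previous task, weighted by a per-parameter importance
    estimate (in EWC proper, the diagonal of the Fisher information)."""
    return lam * sum(f * (p - a) ** 2
                     for p, a, f in zip(params, anchor_params, importance))

# After finishing task A, snapshot the parameters and their importances;
# while training task B, add the penalty to the task-B loss:
#   total_loss = loss_B + ewc_penalty(current, anchor, importance, lam)
```

The penalty costs only one stored copy of the parameters and importances (no exemplar buffer), which is exactly the storage/accuracy trade-off the paragraph describes.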