Impact scenario: partial change
In many other cases, the base assumptions remain the same: recommendation engines, for example, still work much as before, but some of the dependencies extracted from the data change. In principle this is no different from a new bestseller entering the charts, but the speed and magnitude of change can be far greater: demand for health-related supplies, for instance, jumped much faster than any bestseller rises in the charts. If the data science process has been designed with sufficient flexibility, its built-in change detection mechanism should identify the change and quickly trigger a retraining of the underlying rules - provided such a mechanism was actually built in and the retrained system reaches sufficient quality.
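To make the idea of a change detection mechanism concrete, here is a minimal sketch of a monitor that flags a retrain when a logged performance metric drops noticeably below its warm-up baseline. All names (`ChangeMonitor`, the window size, the tolerance) are illustrative assumptions, not a reference implementation.

```python
from collections import deque

class ChangeMonitor:
    """Flags a retrain when the rolling average of a scored metric
    drops more than `tolerance` below the frozen warm-up baseline."""

    def __init__(self, window=50, tolerance=0.05):
        self.recent = deque(maxlen=window)  # sliding window of metrics
        self.baseline = None                # frozen after warm-up
        self.tolerance = tolerance

    def observe(self, metric):
        """Record one metric value; return True if a retrain is due."""
        self.recent.append(metric)
        avg = sum(self.recent) / len(self.recent)
        if self.baseline is None:
            if len(self.recent) == self.recent.maxlen:
                self.baseline = avg  # freeze baseline once window is full
            return False
        return avg < self.baseline - self.tolerance
```

In practice the trigger would feed into a retraining pipeline and the retrained model would only be promoted after passing the same quality checks as the original.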
Impact scenario: no change
This brief list is not complete without stressing that many concepts remain the same. Predictive maintenance is a good example: as long as the usage patterns stay the same, engines will continue to fail the same way as before. But the important question here for your data science team is: Are you sure? Is your performance monitoring setup thorough enough that you can be certain you are not losing quality? This is a predominant theme these days anyway: Do you even notice when the performance of your data science system changes?
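One complication in cases like predictive maintenance is that true labels (actual failures) arrive late, so accuracy alone is a lagging indicator. A common workaround is to watch the input distribution instead. The sketch below, with invented thresholds, flags a feature whose live mean has drifted several standard errors away from the reference mean:

```python
import statistics

def input_shift(reference, live, z_threshold=3.0):
    """Return True if the live mean drifted more than `z_threshold`
    standard errors away from the reference mean (a simple z-test)."""
    ref_mean = statistics.mean(reference)
    ref_sd = statistics.stdev(reference)   # sample standard deviation
    live_mean = statistics.mean(live)
    stderr = ref_sd / (len(live) ** 0.5)   # standard error of the live mean
    z = abs(live_mean - ref_mean) / stderr
    return z > z_threshold
```

This catches shifts in usage patterns before the failure labels come in; a real setup would track many features and more robust statistics, but the principle is the same.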
A little side note on model jump vs. model drift, two terms that are also often used in this context but refer to a different aspect: not what changes, but how fast. In the first two scenarios above (complete or partial change), the change can happen abruptly (when borders are closed from one day to the next, for example) or gradually over time (some of the bigger economic impacts will show in customer behaviour only over time; in a SaaS business, for example, customers will not cancel their subscriptions overnight but over the coming months).
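The two shapes can be sketched as simple functions of time, with invented numbers purely for illustration: a jump drops from one level to another at a single point, while drift ramps down over a longer span, which is why a short-window detector catches the first quickly and the second only much later.

```python
def abrupt(t, change_at=30):
    """Model jump: the signal falls from 1.0 to 0.3 at one point in time."""
    return 1.0 if t < change_at else 0.3

def gradual(t, start=30, span=60):
    """Model drift: the signal ramps linearly from 1.0 to 0.3 over `span` steps."""
    if t < start:
        return 1.0
    if t >= start + span:
        return 0.3
    return 1.0 - 0.7 * (t - start) / span
```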