
Is AI about to normalize human behaviour?
Learning from the historical data produced by sensors or logged actions yields remarkable results when that learning is accelerated through AI software or deep neural networks trained on large databases. In many industrial applications, such as predictive maintenance or process scheduling, the benefits are obvious. The same goes for autonomous driving, provided the rules you establish achieve consensus among regulators, the public and car makers. After all, hard analytics make sense when it comes to operating machines whose efficiency and safety goals have been clearly defined.
In the medical field, AI-based data mining promises "predictive medicine" scenarios, or health monitoring with a view to anticipating potential health conditions based on the patient's known medical history and behaviour (probably combined soon with genetic profiling). That is, of course, if the patient, or even the healthy consumer, is willing to be monitored constantly and reminded of his or her unhealthy behaviours according to yet-to-be-established standards or economic trade-offs (so far, wearable fitness and medical devices are not being adopted as broadly as insurance companies are lobbying for).
But when it comes to personal assistants and the so-called smart personalized services provided by AI (often used to recommend products or services similar to those consumed in the past), I wonder whether, for all its analytical precision, AI may become overzealous in conditioning consumers' behaviours into a self-serving feedback loop.
While machines can be rigorously parametrized, humans exhibit rather erratic behaviours, even allowing for the many routines that life imposes on us. Mining every piece of data that smartphone users generate, willingly log into wearable devices, or unwittingly give away to smart appliances and infrastructures, wouldn't AI be inclined to focus on reinforcing those routines? And wouldn't an AI assistant be prone to conclude that the best way to offer its help is to nurture our routines while discarding singular choices or behaviours?
The recent release of Panasonic India's "Arbo" software for Android phones illustrates my point. Panasonic explains that while smartphone functions are constantly evolving with new features, users typically rely on only a specific group of functions day to day. Using AI, the Arbo software analyses users' behaviour based on time and location, and automatically displays timely, context-based suggestions on screen (frequently contacted phone numbers, other social-networking contacts, applications to open, Wi-Fi, volume, and other settings such as connections to IoT devices).
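This kind of routine reinforcement is easy to sketch. The toy class below (hypothetical code, not Panasonic's actual algorithm; all names are my own invention) simply counts which app a user opens in each (weekday, time-of-day) context and then suggests the most frequent ones for the current context. By construction, it can only ever recommend what the user already does most often:

```python
from collections import Counter, defaultdict

class RoutineSuggester:
    """Toy context-based suggester: counts which app a user opens
    in each (weekday, time-band) context and recommends the most
    frequent ones for that context."""

    def __init__(self):
        # context -> Counter of app launch counts
        self.counts = defaultdict(Counter)

    @staticmethod
    def _context(weekday, hour):
        # Coarse six-hour bands: night / morning / afternoon / evening
        band = ("night", "morning", "afternoon", "evening")[hour // 6]
        return (weekday, band)

    def log(self, weekday, hour, app):
        self.counts[self._context(weekday, hour)][app] += 1

    def suggest(self, weekday, hour, k=2):
        ctx = self.counts[self._context(weekday, hour)]
        return [app for app, _ in ctx.most_common(k)]

# Simulated work week: a news app every weekday morning (twice),
# mail once each morning, a chat app in the evenings.
s = RoutineSuggester()
for day in range(5):
    s.log(day, 8, "news")
    s.log(day, 8, "news")
    s.log(day, 8, "mail")
    s.log(day, 20, "chat")

print(s.suggest(2, 9))   # Wednesday morning -> ['news', 'mail']
print(s.suggest(2, 21))  # Wednesday evening -> ['chat']
```

Note that nothing in this loop ever surfaces an app the user has not already opened in that context; a behaviour logged once is quickly drowned out by the dominant routine, which is precisely the feedback loop described above.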
Sure, intelligent user interfaces can take shortcuts based on users' gestures and predicted intents, but will AI really serve our human nature by focusing its cognition on the routine aspects of our lives?
Panasonic doesn't say whether Arbo would also learn from the millions of data points generated collectively by all the smartphone users equipped with the software.
But many companies rely on this sort of "shared data" to devise new services or make new suggestions that "ought to" make sense for individual users in the particular context they find themselves in. In the long term, if malleable consumers are nudged often enough, this could well normalize consumer behaviour around new machine-learned standards, well beyond the wildest authoritarian dreams.
Related articles:
Lost in Big Data: digital zombies
The future of video surveillance: HD, hyperspectral and stereoscopic
Pill dispenser stalks careless patients
