Research project shows why AI should not mimic humans

Technology News
By Wisse Hettinga

The initiative, dubbed Project Blackfin, aims to leverage collective intelligence techniques, such as swarm intelligence, to create adaptive, autonomous AI agents that collaborate with each other to achieve common goals.

According to F-Secure Vice President of Artificial Intelligence Matti Aksela, there’s a common misconception that “advanced” AI should mimic human intelligence – an assumption Project Blackfin aims to challenge.

“People’s expectations that ‘advanced’ machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do. Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do,” says Aksela, head of F-Secure’s Artificial Intelligence Center of Excellence.

Aksela conceptualized the Project Blackfin research initiative with a cross-disciplinary team of artificial intelligence and cyber security researchers, mathematicians, data scientists, machine learning experts, and engineers.

Inspired by patterns of collective behavior found in nature, the project’s overarching theme is to use collective intelligence techniques, such as swarm intelligence similar to ant colonies or schools of fish, to power fleets of distributed, autonomous, adaptive machine learning agents. Project Blackfin aims to develop these intelligent agents to run on individual hosts. Instead of receiving instructions from a single, centralized AI model, these agents would be intelligent and powerful enough to communicate and work together to achieve common goals.

Using such an approach, the agents learn to protect systems based on what they observe from their local hosts and networks, and are augmented further by observations and emergent behaviors learned across different organizations and industries. Local agents then get the benefit of the visibility and insights of a vast information network without requiring them to share full data sets.

“Essentially, you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone,” Aksela explains.
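The idea of local agents learning from their own hosts while pooling only aggregate insights can be sketched in a few lines. The example below is purely illustrative and not F-Secure's actual design: each hypothetical agent builds a simple baseline from its own observations and shares only summary statistics, never raw data, so the "colony" can derive a shared view.

```python
# Illustrative sketch: local agents share summaries, not raw observations.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class LocalAgent:
    name: str
    observations: list = field(default_factory=list)

    def observe(self, value: float) -> None:
        # Each agent learns only from its own host's data.
        self.observations.append(value)

    def summary(self) -> dict:
        # Only an aggregate leaves the host, not the underlying data set.
        return {"agent": self.name,
                "mean": mean(self.observations),
                "count": len(self.observations)}

def collective_baseline(summaries: list) -> float:
    # Weighted average of local means: the colony's shared baseline.
    total = sum(s["count"] for s in summaries)
    return sum(s["mean"] * s["count"] for s in summaries) / total

a = LocalAgent("host-a")
b = LocalAgent("host-b")
for v in [1.0, 2.0, 3.0]:
    a.observe(v)
for v in [10.0, 20.0]:
    b.observe(v)
baseline = collective_baseline([a.summary(), b.summary()])
# host-a mean=2.0 (n=3), host-b mean=15.0 (n=2) -> (6 + 30) / 5 = 7.2
```

Each agent's raw observations stay on its host; only the means and counts travel, which mirrors the article's point about gaining network-wide insight without sharing full data sets.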

Not only does this help increase the performance of an organization’s IT estate by saving resources, but it also helps organizations avoid sharing confidential, potentially sensitive information via the cloud or product telemetry.

While the project is expected to require several years before realizing the full extent of its potential, it has experienced some early success. On-device intelligence (ODI) mechanisms developed by Project Blackfin are already being incorporated into F-Secure’s breach detection systems.

But the potential applications for Project Blackfin’s research go beyond corporate security solutions, and even the cyber security industry. F-Secure Chief Research Officer Mikko Hypponen sees the project’s line of research as a way to challenge people to rethink the role AI can play in our lives.

“Looking beyond detecting breaches and attacks, we can envision these fleets of AI agents monitoring the overall health, efficiency, and usefulness of computer networks, or even systems like power grids or self-driving cars,” says Hypponen. “But most of all, I think this research can help us see AI as something more than just a threat to our jobs and livelihoods.”

The project aims to publish research, findings, and updates as they occur. More information on Project Blackfin is available at
