A low appetite for data: Fujitsu’s “Wide Learning” AI

Technology News |
By eeNews Europe



In many fields, the lack of data for training AI on specific targets leaves machine learning unable to produce results with sufficient accuracy for practical use. AI deployment is also hindered by the lack of understanding of the logic behind the final results. Often, despite AI’s sufficiently accurate recognition or classification performance, experts and developers cannot explain why the AI produced a certain answer, which makes it difficult to explain the results to potential adopters in the industry.

Fujitsu’s so-called “Wide Learning” technology enables judgements to be reached more accurately than was previously possible, the researchers claim, and learning proceeds uniformly no matter which hypothesis is examined, even when the data is imbalanced.

This is achieved by first forming a large set of hypotheses from all combinations of data items, then extracting the hypotheses with a high degree of importance, and finally controlling the degree of impact of each hypothesis based on how the hypotheses overlap with one another.

What’s more, because the hypotheses are recorded as logical expressions, humans can also understand the reasoning behind a judgement, eliminating the “black box” uncertainty of how the AI judgements are performed. Fujitsu’s Wide Learning technology will enable the use of AI even in areas such as healthcare and marketing, where the data needed to make judgements is scarce, supporting operations and promoting the automation of work processes using AI.
 


For Fujitsu Laboratories’ researchers, the first step to solve today’s AI issues was to create combinations of data items to extract large volumes of hypotheses. Wide Learning treats all combination patterns of data items as hypotheses, and then determines the degree of importance of each hypothesis based on the hit rate for a given label category.

Hypothesis listing and knowledge chunk extraction.

For example, when analyzing trends in who purchases certain products, the system combines all sorts of patterns from the data items of those who did or did not make purchases (the category label), such as single women aged 20-34 who hold a driver’s license, and then counts how many hits each combination pattern, taken as a hypothesis, scores against the data of those who actually made purchases. Hypotheses that achieve a hit rate above a certain level are defined as important hypotheses, called “knowledge chunks”.
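The enumeration described above can be sketched in a few lines of Python. This is a minimal illustration of the idea only: the toy records, the attribute names, and the 60% hit-rate threshold are all assumptions for the example, not Fujitsu’s actual data or parameters.

```python
from itertools import combinations

# Toy dataset: each record is a dict of attributes plus a purchase label.
records = [
    ({"gender": "F", "age": "20-34", "license": "yes"}, True),
    ({"gender": "F", "age": "20-34", "license": "yes"}, True),
    ({"gender": "M", "age": "35-49", "license": "yes"}, False),
    ({"gender": "F", "age": "20-34", "license": "no"},  False),
    ({"gender": "M", "age": "20-34", "license": "yes"}, True),
]

def knowledge_chunks(records, threshold=0.6):
    """Treat every combination of attribute=value items as a hypothesis
    and keep those whose hit rate for the positive label meets the threshold."""
    items = sorted({(k, v) for attrs, _ in records for k, v in attrs.items()})
    chunks = []
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            # Skip combinations that use the same attribute twice.
            if len({k for k, _ in combo}) < len(combo):
                continue
            matches = [label for attrs, label in records
                       if all(attrs.get(k) == v for k, v in combo)]
            if matches:
                hit_rate = sum(matches) / len(matches)
                if hit_rate >= threshold:
                    chunks.append((combo, hit_rate))
    return chunks

for combo, rate in knowledge_chunks(records):
    print(dict(combo), round(rate, 2))
```

On this toy data, the single item license=yes survives with a hit rate of 0.75, while gender=M alone falls below the threshold and is discarded; exhaustive enumeration like this is only feasible because each hypothesis is a small conjunction of discrete items.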

This means that even when target data is insufficient, the system can extract all hypotheses worth looking into, which may also contribute to the discovery of previously unconsidered explanations.

Building an accurate classification model
The system builds a classification model from the extracted knowledge chunks and the target label. In this process, if the items making up one knowledge chunk frequently overlap with the items making up other knowledge chunks, the system reduces the weight of that chunk’s influence on the classification model. In this way, the system can train a model capable of accurate classifications even when the target label, the data marked as correct, is imbalanced. For example, suppose men who did not make a purchase make up the vast majority of a purchase dataset. If the AI is trained without controlling the degree of impact, then the knowledge chunk covering whether or not a person has a license, independent of gender, will have little influence on the classification.

With the newly developed method, the degree of impact of knowledge chunks that include “male” as a factor is limited, because this item overlaps across many chunks, while the smaller number of chunks that involve license possession gain a relatively larger weight during training. The result is a model that classifies correctly with respect to both gender and license possession.
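The down-weighting step can be sketched as follows. The specific rule used here, dividing by the average frequency of a chunk’s items across all chunks, is a simplified stand-in for illustration; Fujitsu has not published the exact weighting formula, and the chunk list is hypothetical.

```python
from collections import Counter

# Hypothetical knowledge chunks, each a tuple of (attribute, value) items.
# "gender=M" appears in three chunks, so chunks containing it overlap heavily.
chunks = [
    (("gender", "M"),),
    (("gender", "M"), ("age", "35-49")),
    (("gender", "M"), ("license", "no")),
    (("license", "yes"),),
]

# How often each item occurs across all chunks.
item_freq = Counter(item for chunk in chunks for item in chunk)

def impact_weight(chunk):
    """Reduce a chunk's weight when its items overlap heavily with other chunks."""
    avg_freq = sum(item_freq[item] for item in chunk) / len(chunk)
    return 1.0 / avg_freq

for chunk in chunks:
    print(chunk, round(impact_weight(chunk), 2))
```

Under this toy rule the chunk built solely on gender=M is weighted at one third, while the license=yes chunk keeps full weight, mirroring the article’s point that the rarer license-based evidence gains relative influence in training.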
 

Adjusting the impact of knowledge chunks when building a classification model.

To prove its Wide Learning concept, Fujitsu Laboratories applied it to data in areas such as digital marketing and healthcare. In a test using benchmark data in the marketing and healthcare areas from the UC Irvine Machine Learning Repository, this technology improved accuracy by about 10-20% compared to deep learning.

It successfully reduced the probability that the system would overlook customers likely to subscribe to a service or patients with a condition by about 20-50%. In the marketing data, of the approximately 5,000 customer data entries used in the test, only about 230 were for purchasing customers, making for an imbalanced set. This technology reduced the number of potential customers excluded from sales promotions from 120, the result of deep learning analysis, to 74. Moreover, as the knowledge chunks that form the basis for this technology have a logical expression format, the ability to explain the reasoning behind a judgement is also useful in implementing this technology in society. Even when it is determined that corrections to a model are necessary, based on results from new data, it is possible to make more appropriate revisions, because users can understand the reasons for results.

Roadmap

Fujitsu Laboratories says it will continue to apply this technology to tasks that demand the reasoning behind AI judgements, such as in financial transactions and medical diagnoses, and to tasks that handle low frequency phenomena, such as fraud and equipment breakdowns. Wide Learning will likely be commercialized in 2019 as a new machine learning technology supporting Fujitsu Limited’s Fujitsu Human Centric AI Zinrai.

Fujitsu Laboratories – www.fujitsu.com

Related articles:

Synthetic data to the rescue of AI

Complete machine learning framework targets neuromorphic SoC

Wave Computing to donate training scheme to TensorFlow

AI-powered software boosts facial classification for video law enforcement

