Saturday 23 November 2024

Self-supervised learning could help build the metaverse and perhaps even human-level AI. It is widely believed that the next AI revolution will come when AI systems no longer need supervised learning: they will no longer rely on carefully labeled data to make sense of the world and complete tasks, and will instead learn with minimal human involvement. Supervised learning works well in relatively narrow domains for which large amounts of labeled data are available, but the raw data a system encounters in deployment often differs from the data it was trained on, and collecting large amounts of labeled data that are not biased to some degree is quite tricky.

This is not so much a question of social bias as of relationships in the data that the system should not rely on. A classic example is a system trained to recognize cows, where every training example shows a cow standing in a grassy field. The system learns to use the grass as a contextual cue for the presence of a cow, so when it is shown a cow on a beach, it has trouble recognizing the animal.
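The cow-on-grass failure is easy to reproduce in a toy setting. The sketch below is a hypothetical illustration (assuming NumPy and scikit-learn, which the article does not mention): a linear classifier is trained on data where a spurious "grass" feature is perfectly correlated with the label, and its accuracy collapses as soon as that correlation disappears at test time.

```python
# Toy sketch of shortcut learning: the classifier latches onto the
# "grass" background feature instead of the weak "animal" signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Training set: cows (label 1) always appear on grass, non-cows never do.
# Feature 0 = noisy "animal shape" signal, feature 1 = "grass in background".
y_train = rng.integers(0, 2, n)
animal_signal = y_train + rng.normal(0, 2.0, n)   # weakly informative
grass = y_train.astype(float)                      # spurious but perfectly correlated
X_train = np.column_stack([animal_signal, grass])

clf = LogisticRegression().fit(X_train, y_train)

# Test set: same animals, but cows now stand on the seashore (grass = 0 everywhere).
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 2.0, n), np.zeros(n)])

print("train accuracy:", clf.score(X_train, y_train))  # high: the shortcut works here
print("test accuracy:", clf.score(X_test, y_test))     # drops sharply once grass is gone
```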

Self-supervised learning (SSL) makes it possible to train a system to build accurate, task-independent representations of raw data. Because SSL trains on unlabeled data, we can use enormous training sets and obtain richer, more reliable representations from the system.
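One common form of SSL is to hide part of the input and train the model to predict it, so the data itself provides the supervision. The following is a minimal sketch of that idea (assuming PyTorch and a masked-reconstruction objective, neither of which is specified in the article), not a description of any particular production system.

```python
# Minimal self-supervised pretraining by masked reconstruction:
# the "label" is the input itself, so no annotation is needed.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim_in=128, dim_repr=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_repr))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, dim_repr=32, dim_out=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_repr, 64), nn.ReLU(), nn.Linear(64, dim_out))
    def forward(self, z):
        return self.net(z)

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

unlabeled = torch.randn(10_000, 128)  # stand-in for a large unlabeled dataset

for step in range(1000):
    x = unlabeled[torch.randint(0, len(unlabeled), (256,))]
    mask = (torch.rand_like(x) > 0.3).float()       # hide roughly 30% of each input
    x_hat = decoder(encoder(x * mask))              # predict the full input from the visible part
    loss = ((x_hat - x) ** 2 * (1 - mask)).mean()   # score only the hidden entries
    opt.zero_grad()
    loss.backward()
    opt.step()

# After pretraining, `encoder` produces task-independent representations.
```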

Only a small amount of labeled data is then needed to achieve good results on a given downstream, supervised task. In some cases, this also reduces the system's vulnerability to biases in the labeled data.
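Continuing the hypothetical sketch above, adapting the pretrained encoder to a downstream task can be as simple as fitting a lightweight classifier head on a small labeled set while keeping the encoder frozen (again an illustrative assumption, not Meta's actual pipeline).

```python
# Fine-tuning sketch: reuse the pretrained `encoder` from the previous
# snippet and train only a small task-specific head on a few labels.
import torch
import torch.nn as nn

labeled_x = torch.randn(200, 128)        # small labeled dataset (stand-in)
labeled_y = torch.randint(0, 2, (200,))  # binary labels

for p in encoder.parameters():           # keep the pretrained encoder frozen
    p.requires_grad_(False)

head = nn.Linear(32, 2)                  # lightweight task-specific classifier
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    with torch.no_grad():
        z = encoder(labeled_x)           # frozen, task-independent features
    logits = head(z)
    loss = loss_fn(logits, labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```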

In practical AI systems, we are moving toward ever larger models pre-trained with SSL on huge amounts of unlabeled data, which can then be adapted to a wide variety of tasks. For example, Meta AI now has translation systems that handle hundreds of languages with a single neural network. The company also has multilingual speech recognition systems that work even for languages with very little data, let alone annotated data.
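As a concrete illustration of one model covering many language pairs, the snippet below assumes the Hugging Face transformers library and Meta's publicly released NLLB-200 checkpoint; the article itself names no specific model, so treat this as an example rather than a reference to the systems described above.

```python
# One multilingual model, many language pairs (NLLB uses FLORES-200 language codes).
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="slk_Latn",  # English -> Slovak; swap codes for other pairs
)

result = translator("Self-supervised learning needs far less labeled data.")
print(result[0]["translation_text"])
```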

But how can self-supervised learning help create AI systems with common sense? And how far can it take us toward common sense at the level of human intelligence?

A real breakthrough in AI will come when we figure out how to teach machines to understand how the world works the way humans and animals do: mostly by observing it, and partly by acting in it. We understand how the world works because each of us carries an internal model of it, which lets us fill in missing information, predict what will happen next, and anticipate the consequences of our actions. It is this world model that allows us to sense, perceive, interpret, reason, plan, and act.
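In machine learning terms, such an internal model can be thought of as a network that predicts the next observation given the current observation and an action, and that can then be queried to anticipate the outcome of a candidate action before taking it. The sketch below is a deliberately tiny illustration of that idea (the toy dynamics function and dimensions are invented for the example); real world models are far larger and are learned from raw sensory data such as video.

```python
# A minimal learned "world model": predict the next observation from the
# current observation and an action, then use it to anticipate outcomes.
import torch
import torch.nn as nn

obs_dim, act_dim = 16, 4

world_model = nn.Sequential(
    nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim)
)
opt = torch.optim.Adam(world_model.parameters(), lr=1e-3)

def true_dynamics(obs, act):
    # Stand-in for the real environment the agent observes.
    return 0.9 * obs + 0.1 * act.sum(dim=-1, keepdim=True)

for step in range(500):
    obs = torch.randn(128, obs_dim)
    act = torch.randn(128, act_dim)
    next_obs = true_dynamics(obs, act)
    pred = world_model(torch.cat([obs, act], dim=-1))
    loss = ((pred - next_obs) ** 2).mean()  # learn by predicting what happens next
    opt.zero_grad()
    loss.backward()
    opt.step()

# Planning: anticipate the result of a candidate action before acting.
obs = torch.randn(1, obs_dim)
candidate_action = torch.zeros(1, act_dim)
predicted_next_obs = world_model(torch.cat([obs, candidate_action], dim=-1))
```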

How will self-supervised learning shape the metaverse?

There are many concrete applications of deep learning in the metaverse, such as motion tracking for VR and AR glasses, gaze tracking, and the re-synthesis of body movements and facial expressions.

This opens up many possibilities for new AI-based creative tools that let anyone create new things, both in the metaverse and in the real world.

But there is also an "AI-complete" application for the metaverse: virtual AI assistants that can answer our questions and help us cope with the flood of information bombarding us every day. To do this, AI systems need to understand how the world works (both physical and virtual), be able to reason and plan, and have a degree of common sense. In short, we have to find a way to build autonomous AI systems that can learn the way humans do. That will take time, but Meta is playing the long game.
