

        Multiple Knowledge Representation of Artificial Intelligence

        2020-09-14 03:38:50 Yunhe Pan
        Engineering 2020, Issue 3

        Yunhe Pan

        a Institute of Artificial Intelligence, Zhejiang University, Hangzhou 310027, China

        b Zhejiang Lab, Hangzhou 311121, China

        In the 1970s, cognitive psychology recognized that the information in long-term memory is scene-based and semantic [1] and may be encoded in parallel as verbal and mental imagery [2]. In 1991, I pointed out that not all verbal propositions can be derived from the verbal system, and that many can only be transformed from the imagery system [3]. I have proposed the concept of visual knowledge, which consists of visual concepts, visual propositions, and visual narratives [4]. Visual knowledge can simulate the various spatiotemporal operations that a person can perform on mental imagery in his or her brain, such as the design process [5].

        Moreover, existing computing technologies already provide relevant technical support for expressing and deducing visual knowledge. To this end, artificial intelligence (AI) researchers need to expand their horizons from traditional AI fields (including deep learning) to closely related technologies such as computer graphics and computer vision. Hence, researchers in AI, computer vision, and computer graphics in particular must jointly study visual knowledge. Original verbal propositions that cannot be inferred from the verbal system alone might be transformed with the help of visual knowledge. Therefore, by drawing on both verbal knowledge and visual knowledge, we can describe the surrounding world more comprehensively and solve more complex problems. Hence, the expression and deduction of visual knowledge is an important technology for AI 2.0 [6].

        With the addition of visual knowledge, a total of three kinds of methods represent knowledge in AI 2.0, as follows:

        (1) Verbal representation of knowledge. Verbal knowledge is represented by symbolic data; its structure is explicit, its semantics are understandable, and its knowledge can be reasoned over. Typical examples include the semantic network and the knowledge graph.

        (2) Knowledge representation by deep neural network. This kind of knowledge is suitable for classification and recognition tasks on unstructured data such as images, video, and audio. However, it is difficult to interpret. Typical examples include deep neural networks (DNNs) and convolutional neural networks (CNNs).

        (3) Visual representation of knowledge. This kind of knowledge can feasibly be dealt with using graphics, animation, and three-dimensional (3D) objects. Its structure (i.e., spatiotemporal structure) is explicit, its semantics are interpretable, and its knowledge can be deduced. A typical example is visual knowledge.

        The relationship among the above three knowledge representations is fundamentally different from the relationships among earlier knowledge representations in traditional AI, such as rules, frames, and semantic networks. The three kinds of knowledge representation correspond to three different aspects of human memory, as follows:

        (1) The knowledge graph corresponds to semantic memory content. It is suitable for the retrieval and reasoning of symbolic data.

        (2) Visual knowledge corresponds to the scene memory content. It is suitable for the deduction and visualization of spatiotemporal data.

        (3) The DNN corresponds to the perception memory content. It is suitable for layer-wise abstraction of input data for classification.

        Parts 1 and 2 correspond to the encoded information of verbal and mental imagery in human long-term memory. Part 3 corresponds to the perceptual information in human short-term memory. Thus, these three kinds of knowledge representation are complementary and must be utilized together in practice.

        Another important property of these three kinds of knowledge representation is that they are interconnected and mutually supportive. Visual knowledge can transform 3D graphics or animation into an image or video via projection. Image or video information can also be converted to 3D graphics or animation through 3D reconstruction techniques.
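The projection direction of this round trip can be sketched with a minimal pinhole-camera model. This is an illustrative sketch only; the function name and camera setup (camera at the origin, looking down the +z axis, hypothetical focal length) are assumptions, not details from the paper:

```python
import numpy as np

def project_points(points_3d, focal_length=1.0):
    """Project 3D points onto a 2D image plane with a pinhole camera
    at the origin looking down the +z axis (hypothetical setup)."""
    points_3d = np.asarray(points_3d, dtype=float)
    z = points_3d[:, 2]
    # Perspective division: lateral offsets shrink with depth.
    x = focal_length * points_3d[:, 0] / z
    y = focal_length * points_3d[:, 1] / z
    return np.stack([x, y], axis=1)

# A point at depth 2 projects to half its lateral offset.
print(project_points([[1.0, 1.0, 2.0]]))  # [[0.5 0.5]]
```

The reverse direction, 3D reconstruction, inverts this mapping from multiple views and is far harder; the sketch only shows why a single projection loses the depth coordinate.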

        Since the semantics of visual knowledge are clearly expressed, we can align visual knowledge and the knowledge graph. Therefore, visual knowledge and the knowledge graph can be transformed into each other via symbolic retrieval and matching. That is, the connection between scene information and semantic information in visual knowledge and the knowledge graph can be realized by a structural model. Moreover, visual knowledge and the image and video sample data used in a DNN can be connected via reconstruction and transformation.
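The alignment via symbolic retrieval and matching can be pictured as a shared-label lookup across the two stores. All names and fields below (`knowledge_graph`, `visual_knowledge`, the `link` function, the facet keys) are hypothetical illustrations, not structures defined in the paper:

```python
# Hypothetical stores: knowledge-graph nodes and visual-knowledge entries
# share symbolic labels, so alignment reduces to matching those labels.
knowledge_graph = {"cat": {"is_a": "felidae"}, "tiger": {"is_a": "felidae"}}
visual_knowledge = {"cat": {"shape": "quadruped", "gait": "walk-trot"}}

def link(label):
    """Return the semantic and visual facets of a concept, if both exist."""
    semantic = knowledge_graph.get(label)
    visual = visual_knowledge.get(label)
    if semantic and visual:
        return {"semantic": semantic, "visual": visual}
    return None  # no alignment possible for this label

print(link("cat"))    # both facets found, so they are linked
print(link("tiger"))  # no visual entry yet, so no link
```

In this toy form, "tiger" has semantic but no visual content, which is exactly the gap that transfer from a related concept (discussed below for the cat family) is meant to fill.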

        Taking the knowledge of the cat as an example, Fig. 1 illustrates how to connect the three kinds of knowledge representation.

        In Fig. 1, the knowledge graph expresses the cat’s species relationship; the visual knowledge expresses the spatiotemporal characteristics of the cat, including its form, structure, and movement; and the DNN expresses an abstraction of cat images, including both positive and negative sample images.

        Fig. 1. Three kinds of knowledge representation with respect to the cat.

        In fact, visual knowledge of the cat can be reconstructed from images of the cat taken from different perspectives. By means of transformation (i.e., geometric projection and motion transformation), visual knowledge can generate many images of the cat, which are helpful for DNN learning. Through the connection between visual knowledge and the knowledge graph, one can infer that cats, tigers, and leopards share similar shapes, structures, and movements, since they belong to the same family, as shown in Fig. 1. Therefore, visual knowledge of tigers and leopards can be obtained through appropriate modifications of the cat’s visual knowledge. In this way, we can realize transfer learning and find a way to learn a model when only small samples are available (as in zero-shot or few-shot learning).
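The family-based transfer step can be sketched as looking up a same-family sibling in the knowledge graph and returning a modified copy of its visual knowledge. This is a minimal sketch under assumed data structures; the `transfer` function, the family labels, and the `scale` parameter are all hypothetical:

```python
# Hypothetical stores: family membership from the knowledge graph and a
# visual-knowledge model available only for the cat.
knowledge_graph = {"cat": "felidae", "tiger": "felidae", "dog": "canidae"}
visual_knowledge = {"cat": {"shape": "quadruped", "scale": 1.0}}

def transfer(target, modifications):
    """Derive visual knowledge for `target` by modifying the model of a
    concept in the same family, if one exists."""
    family = knowledge_graph[target]
    for concept, vk in visual_knowledge.items():
        if concept != target and knowledge_graph.get(concept) == family:
            # Copy the sibling's model and apply the modifications.
            return {**vk, **modifications}
    return None  # no same-family model to transfer from

print(transfer("tiger", {"scale": 2.5}))  # the cat model, rescaled
```

The point of the sketch is the few-shot flavor: the tiger needs no image samples of its own, only a family relation in the graph plus a small set of modifications to an existing model.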

        In this paper, I propose the multiple knowledge representation of AI, which consists of the knowledge graph, visual knowledge, and the DNN. A knowledge graph and visual knowledge can deal with the textual descriptions and the visual content, respectively, of a given concept, while a DNN is well suited to disentangling the hierarchical abstraction of visual information. This combination is therefore similar to the information-processing mechanism in the long-term and short-term memory of the human brain. Multiple knowledge representation via a combination of knowledge graphs, visual knowledge, and DNNs will be beneficial for interpretable, evolvable, and transferable models of knowledge representation and inference.

        Acknowledgements

        I am grateful for helpful suggestions from Yueting Zhuang, Fei Wu, Weidong Geng, and Siliang Tang at Zhejiang University.
