

        Review: Multi-aperture optical imaging systems and their mathematical light field acquisition models*

        2022-06-30 05:51:22

        Qiming QI, Ruigang FU, Zhengzheng SHAO, Ping WANG, Hongqi FAN†

        National Key Laboratory of Science and Technology on ATR, College of Electronic Science and Technology,National University of Defense Technology, Changsha 410073, China

        †E-mail: qiqiming19@163.com; fanhongqi@nudt.edu.cn

        Received Jan. 31, 2021; Revision accepted May 18, 2021; Crosschecked Mar. 24, 2022

        Abstract: Inspired by the compound eyes of insects, many multi-aperture optical imaging systems have been proposed to improve imaging quality, e.g., to yield a high-resolution image or an image with a large field-of-view. Previous research has reviewed existing multi-aperture optical imaging systems, but few papers emphasize the light field acquisition model, which is essential to bridge the gap between configuration design and application. In this paper, we review typical multi-aperture optical imaging systems (i.e., artificial compound eye, light field camera, and camera array), and then summarize general mathematical light field acquisition models for different configurations. These mathematical models provide methods for calculating the key indexes of a specific multi-aperture optical imaging system, such as the field-of-view and sub-image overlap ratio. The mathematical tools simplify the quantitative design and evaluation of imaging systems for researchers.

        Key words: Multi-aperture optical imaging system; Artificial compound eye; Light field camera; Camera array; Light field acquisition model

        1 Introduction

        Over the past few decades, the imaging performance of single-aperture optical imaging devices, such as the single-lens reflex (SLR) camera, has significantly improved. However, because a space point can be recorded by only one pixel in an imaging sensor, the ability to obtain more information about the scene (e.g., the depth of imaged objects) is limited. Generally, digital imaging sensors, e.g., complementary metal oxide semiconductor (CMOS) and charge coupled device (CCD) sensors, are flat, which makes it difficult to adapt such devices to wide field-of-view (FOV) imaging (i.e., they exhibit obvious chromatic aberration and distortion). In addition, one imaging sensor cannot meet both wide-FOV and high-resolution requirements.

        To reach goals such as high resolution, wide FOV, depth information acquisition, and multi-target detection, many multi-aperture optical imaging systems have been proposed since the beginning of the 20th century. Multi-aperture optical imaging systems are based on the compound eye of insects (Land, 1989; Wen et al., 2019), which has the advantages of small volume, wide FOV, and high sensitivity to moving targets. Researchers refer to multi-aperture optical imaging systems as the artificial compound eye (ACE) (Gong et al., 2013; Hao and Li, 2015; Wu SD et al., 2017; Cheng et al., 2019), the light field camera, and the camera array (Wu G et al., 2017; Zhu H et al., 2017).

        In ACE research (Gong et al., 2013; Hao and Li, 2015; Wu SD et al., 2017; Cheng et al., 2019), existing ACEs were classified and the preparation and application prospects of ACEs were introduced, but the light field acquisition model was not included. Also, the use of the light field camera and planar camera array (Wu G et al., 2017; Zhu H et al., 2017) to obtain discrete light field information was introduced. Then, concerning the light field technique (Levoy, 2006), which is a key area of computational photography (Suo et al., 2012), researchers have analyzed the light field representation and calculation methods.

        An ACE consists of multiple imaging apertures, which are called sub-eyes. These sub-eyes can be individual lens modules or a combination of a microlens array and common imaging sensors. Because a single camera has all the elements of a lens module, a camera array is actually an ACE with a planar structure. Different from ACEs, a light field camera inserts a microlens array behind the main lens of an ordinary camera. However, after a decoding operation, as in planar ACEs, sub-images from different viewpoints can be obtained. Thus, a light field camera is a special planar ACE. Summarizing current research, the ACE, light field camera, and camera array have common characteristics: multiple imaging apertures are integrated, and the relative position of each imaging aperture conforms to a symmetrical arrangement rule. Therefore, one can refer to these systems collectively as multi-aperture optical imaging systems.

        Multi-aperture optical imaging systems integrate preparation technology, optical design, and machine vision algorithms, and they have great value in applications such as reconnaissance, image navigation, computational photography, and medical endoscopy. The mathematical light field acquisition models play an essential role in closing the gap between configuration design and application during research. However, little research summarizes light field acquisition for multi-aperture optical imaging systems.

        In this paper, some typical multi-aperture optical imaging systems are enumerated and categorized. In contrast to other research, general mathematical light field acquisition models are summarized for different kinds of multi-aperture optical imaging systems. Based on these models, it is easier to quantitatively analyze the configuration design and carry out information processing research. In addition, the basic applications of multi-aperture optical imaging systems are analyzed with reference to the light field acquisition models.

        The rest of this paper focuses on three aspects: (1) typical multi-aperture optical imaging systems, (2) general light field acquisition models, and (3) the application analysis of multi-aperture optical imaging systems.

        2 Typical multi-aperture optical imaging systems

        In previous research (Gong et al., 2013; Hao and Li, 2015; Wu G et al., 2017; Wu SD et al., 2017; Zhu H et al., 2017; Cheng et al., 2019), some typical multi-aperture optical imaging systems have been enumerated and their performance has been compared from different perspectives. The development trend of multi-aperture optical imaging systems is shown in Fig. 1.

        Fig. 1 The development trend of multi-aperture optical imaging systems

        The aim of multi-aperture optical imaging systems is to obtain sub-images from different viewpoints. Each sub-image corresponds to an imaging aperture. For vision applications, multi-aperture optical imaging systems can be divided into two main structures based on the position of each sub-image's optical center, i.e., the planar structure (Tanida et al., 2000; Yang JC et al., 2002; Duparré et al., 2005; Ng and Hanrahan, 2005; Wilburn et al., 2005; Levoy et al., 2006; Lumsdaine and Georgiev, 2009; Li and Yi, 2012; Venkataraman et al., 2013; Cao et al., 2018) and the convex structure (Duparré et al., 2007; Zhang YK et al., 2010; Brady et al., 2012; Guo et al., 2012; Afshari et al., 2013; Song et al., 2013; Leitel et al., 2014; Cao et al., 2015; Luo et al., 2015; Deng et al., 2016; Pang et al., 2017; Shi et al., 2017; Yu et al., 2019; Zhang JM et al., 2020; Zhou et al., 2020). In addition, three possible arrangements exist in the convex structure: spherical multi-loop (Brady et al., 2012; Guo et al., 2012; Afshari et al., 2013; Cao et al., 2015; Luo et al., 2015; Pang et al., 2017; Shi et al., 2017; Yu et al., 2019; Zhang JM et al., 2020; Zhou et al., 2020), spherical multi-row (Zhang YK et al., 2010; Song et al., 2013; Deng et al., 2016), and cylinder (Leitel et al., 2014).

        There are basically two types of ACEs in terms of size: the microlens array (Tanida et al., 2000; Duparré et al., 2005; Duparré et al., 2007; Zhang YK et al., 2010; Li and Yi, 2012; Song et al., 2013; Leitel et al., 2014; Luo et al., 2015; Deng et al., 2016; Pang et al., 2017; Cao et al., 2018; Zhang JM et al., 2020; Zhou et al., 2020) and the lens module array (Brady et al., 2012; Guo et al., 2012; Afshari et al., 2013; Cao et al., 2015; Shi et al., 2017; Yu et al., 2019). As for light field acquisition devices, when a microlens array is placed behind the main lens of an ordinary camera, it becomes a light field camera (Ng and Hanrahan, 2005; Levoy et al., 2006; Lumsdaine and Georgiev, 2009). Except for non-traditional light field acquisition approaches (e.g., time-sequential capture, or a conventional camera with a coded aperture), both the camera array (Yang JC et al., 2002; Wilburn et al., 2005) and the lens module array (Venkataraman et al., 2013) can be called planar ACEs.

        By setting the layout of optical lenses and imaging sensors, the optical center and optical axis of each sub-image can be determined. In general, the distribution of the optical lenses determines the distribution of the sub-images' optical centers. The ideal case is that each lens has its own imaging sensor and the imaging sensors are distributed in an array according to the positions of the optical lenses. However, because they are limited by size, some multi-aperture optical imaging systems (Tanida et al., 2000; Ng and Hanrahan, 2005; Levoy et al., 2006; Duparré et al., 2007; Lumsdaine and Georgiev, 2009; Guo et al., 2012; Luo et al., 2015; Pang et al., 2017; Shi et al., 2017; Yu et al., 2019; Zhang JM et al., 2020) use only one imaging sensor to obtain all sub-images. To improve the utilization of the imaging sensor, sub-images are focused on a common imaging sensor by an optical transferring system in some convex microlens arrays (Duparré et al., 2007; Zhang JM et al., 2020) and convex lens module arrays (Guo et al., 2012; Shi et al., 2017; Yu et al., 2019).

        For optical lenses, there are some special cases, such as free-form surface optics (Li and Yi, 2012; Pang et al., 2017) and multi-focal optics (Cao et al., 2018). Li and Yi (2012) used free-form surface design to make the optical axes of the sub-images point in different directions. Pang et al. (2017) reduced aberrations using free-form surface design. Cao et al. (2018) designed a microlens array with two focal lengths to demonstrate excellent two-order focusing abilities.

        The classification is shown in Fig. 2. Table 1 shows the features of the multi-aperture optical imaging systems above. Briefly, multi-aperture optical imaging systems can be classified in three ways. First, for vision applications, there are planar structures and convex structures, based on the position of each sub-image's optical center. Second, there are two styles in terms of size: microlens array and lens module array. Third, some multi-aperture optical imaging systems use an imaging sensor array to obtain sub-images, while others use only one common imaging sensor.

        Fig. 2 The classification of multi-aperture optical imaging systems

        Table 1 The features of typical multi-aperture optical imaging systems

        Multi-aperture optical imaging systems face two main problems: one is the preparation and optimization of optical components, and the other is the combination of optical components and an imaging sensor. There are two considerations in the preparation and optimization of optical components: the microlens array manufacturing process (Yuan et al., 2018; Zhu L et al., 2019) and image quality improvement. To improve the image quality of a traditional lens, the trend is to apply free-form surface optics in multi-aperture optical imaging systems. Concerning the combination of optical components and an imaging sensor, the first problem is how to design a curved imaging sensor to fit a curved microlens array, and the second is how to optimize the optical transferring system for a lens module array using a common imaging sensor to achieve both high pixel efficiency and high-quality imaging.

        3 Basic theory: single-aperture optical imaging model

        To analyze the light field acquisition of multi-aperture optical imaging systems, the first step is to use the general single-aperture optical imaging model. In this section, the optical path model of single-aperture imaging, which is on the physical level, is first briefly introduced. Then the equivalent mathematical model is introduced as the basis of this paper.

        3.1 Physical level: optical path model

        A multi-aperture optical imaging system consists of multiple imaging apertures. The design of a single imaging aperture usually follows geometric optics principles (O'Shea and Zajac, 1986; Born et al., 1999; Lindlein and Leuchs, 2012). As an imaging optical system, the basic principle of a single aperture is that a lens forms a real inverted image of a scene on the surface of an imaging sensor. Also, each imaging aperture has a diaphragm near the lens (Lindlein and Leuchs, 2012). When the object depth is much greater than the focal length of the lens, a single aperture can be simplified as a thin lens model (Barsky et al., 2003; Liang et al., 2011), as shown in Fig. 3.

        Fig. 3 Single-aperture optical path model. An imaging aperture can be simplified as a thin lens model: a thin lens, a diaphragm, and an imaging sensor

        The center of lens OL is called the optical center, and the axis that passes through OL is the optical axis. The plane that is normal to the optical axis at OL is called the principal plane, and the image plane is located at the surface of the imaging sensor. The focus point on the optical axis has the property that any ray emanating from it or proceeding toward it travels parallel to the optical axis after refraction by the lens. The focal length F is the distance between the principal plane and the focus point. The distance between the principal plane and the image plane is the principal distance f. The relationship among f, F, and the focused object depth Zf follows 1/Zf + 1/f = 1/F. In practice, the principal distance f is approximately equal to the focal length F. The size d of the diaphragm controls the light flux of the lens.
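        The thin-lens relation above is easy to exercise numerically. The sketch below (Python, with illustrative numbers that are not from the paper) computes the principal distance f for a given focal length F and focused object depth Zf:

```python
def principal_distance(F, Zf):
    """Principal distance f from the thin-lens relation 1/Zf + 1/f = 1/F.

    F  : focal length (m)
    Zf : focused object depth (m), must exceed F
    """
    if Zf <= F:
        raise ValueError("object depth must exceed the focal length")
    return 1.0 / (1.0 / F - 1.0 / Zf)

# Illustrative numbers: a 50 mm lens focused at 5 m.
f = principal_distance(0.050, 5.0)
print(f"principal distance f = {f * 1e3:.3f} mm")
```

As Zf grows, f approaches F, which is why the text treats f as approximately equal to F for distant scenes.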

        Because optical design cannot be perfect, any single imaging aperture has depth of field (DOF), chromatic aberration, distortion, and other errors. Although these effects may be desirable in photography, for vision applications it is necessary to optimize the optical design to eliminate the errors, due to the requirement for high measurement accuracy. For multi-aperture optical imaging systems, attention should be paid to the main imaging indexes, such as the principal distance f (or focal length F) and the FOV.

        3.2 Mathematical level: pinhole model

        The single-aperture mathematical model plays a key role in extracting information from the obtained sub-images. In practice, two models are used to link configuration design and application. The first is the pinhole model (Hartley and Zisserman, 2004), and the other is the unified omnidirectional camera model (Geyer and Daniilidis, 2000). The former is a linear image projection that limits the FOV below 180°. The latter is suitable for a wide FOV, but the image projection is nonlinear. With the benefit of enough imaging apertures, the FOV of each aperture is generally small. Linear image projection is essential for multi-aperture optical imaging systems, and to this end, the pinhole model is commonly used as the general mathematical model.

        According to projective geometry (Hartley and Zisserman, 2004), after calibration, the position of a pixel in a sub-image can represent the direction of a ray passing through the optical center. As shown in Fig. 4, for each aperture, the sub-image is composed of Nu × Nv pixels and associated with two coordinate systems: the pixel coordinate system Op-uv and the local coordinate system OL-XLYLZL. For the multi-aperture optical imaging system, the global coordinate system OG-XGYGZG is established at its geometric center.

        Fig. 4 Pinhole camera model. There are two coordinate systems associated with a single aperture: the pixel coordinate system Op-uv and the local coordinate system OL-XLYLZL. The global coordinate system OG-XGYGZG is established at the geometric center of the multi-aperture optical imaging system

        The origin of the local coordinate system, OL, is located at the optical center, and the ZL axis follows the direction of the optical axis. The optical axis intersects the image plane at (cu, cv). The distance between OL and the image plane is the principal distance f. According to Eq. (1), if the global coordinates of an object PG are offered, the local coordinates PL are obtained. In this paper, a tilde (~) is used to denote the homogeneous vector formed by adding 1 as the last element. The rotation matrix R ∈ R3×3 and translation vector t ∈ R3 are called extrinsic parameters. We have

        P̃L = [R t; 0T 1] P̃G, i.e., PL = RPG + t. (1)

        In this paper, every sub-image is assumed to have no distortion (Weng et al., 1992; Heikkila and Silven, 1997). p̃ = (u, v, 1)T, which is the position of a pixel in the image plane, represents the homogeneous pixel coordinates. δu and δv, measured in microns, are the width and height of a pixel on the imaging sensor, respectively. The pixel focal lengths fu and fv are obtained using fu = f/δu and fv = f/δv. Using Eq. (2), the pixel coordinates are obtained from the corresponding local coordinates:

        ZL p̃ = K PL, where K = [fu 0 cu; 0 fv cv; 0 0 1]. (2)

        The matrix K ∈ R3×3, called the calibration matrix, consists of the intrinsic parameters {fu, fv, cu, cv}.
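        As a minimal sketch of this projection chain, the following Python function applies the extrinsic transform PL = RPG + t and then the calibration matrix K with a perspective division; the numeric values of K, R, and t below are illustrative assumptions, not parameters from the paper.

```python
def project(PG, K, R, t):
    """Pinhole projection: global point -> pixel coordinates (u, v)."""
    # Extrinsic transform: PL = R @ PG + t
    PL = [sum(R[i][j] * PG[j] for j in range(3)) + t[i] for i in range(3)]
    # Intrinsic projection: ZL * (u, v, 1)^T = K @ PL
    p = [sum(K[i][j] * PL[j] for j in range(3)) for i in range(3)]
    return p[0] / p[2], p[1] / p[2]  # perspective division by depth ZL

# Illustrative intrinsics: fu = fv = 800 pixels, principal point (320, 240).
K = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
t = [0.0, 0.0, 0.0]                                      # zero translation

u, v = project([0.5, 0.25, 5.0], K, R, t)
print(u, v)  # 400.0 280.0
```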

        As shown in Fig. 4, the horizontal FOV and vertical FOV of an imaging aperture are 2φ0 and 2θ0, respectively, where the half-horizontal FOV φ0 and half-vertical FOV θ0 are obtained using Eq. (4):

        φ0 = arctan((Nu − 1)/(2fu)), θ0 = arctan((Nv − 1)/(2fv)). (4)

        When the object depth is Z, the pixel resolution (the object-plane footprint of one pixel) is Δx = Z/fu and Δy = Z/fv.
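        A short numerical sketch of these FOV and pixel-resolution relations (Python; the 640-pixel sub-image width and fu = 800 are illustrative assumptions, and the half-FOV form arctan((Nu − 1)/(2fu)) is one consistent reading of the geometry rather than a formula quoted from the paper):

```python
import math

def half_fov(N, f_pix):
    """Half FOV (rad) of one aperture with N pixels across: arctan((N-1)/(2*f_pix))."""
    return math.atan((N - 1) / (2.0 * f_pix))

def pixel_footprint(Z, f_pix):
    """Object-plane size covered by one pixel at depth Z: Z / f_pix."""
    return Z / f_pix

phi0 = half_fov(640, 800.0)
dx = pixel_footprint(5.0, 800.0)
print(f"horizontal FOV = {math.degrees(2 * phi0):.1f} deg, "
      f"pixel footprint at 5 m = {dx * 1e3:.2f} mm")
```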

        4 Light field acquisition models for the planar structure

        In this section, light field acquisition models are derived for the planar structure. Note that the inner optical path of a light field camera is different from that of an ordinary planar ACE, so their light field acquisition models are introduced separately.

        As a general inductive derivation, this section and the following are based on these assumptions:

        1. All single apertures are identical.

        2. The structure of the multi-aperture optical imaging system is symmetrical.

        3. Sub-images are planar and rectangular.

        4. Distortion can be ignored in each sub-image.

        4.1 Planar ACE

        For a planar ACE, the optical centers of all sub-images are coplanar. Except for a few planar ACEs, such as the artificial apposition compound eye objective (APCO) (Duparré et al., 2005) and the compact large-FOV compound-eye camera (CEC) (Li and Yi, 2012), all the optical axes are parallel. As shown in Fig. 5, all imaging apertures are in a grid pattern on a plane. Supposing that the number of apertures is NX × NY, the intervals between adjacent apertures are denoted as Δs and Δt respectively, and (nX, nY) represents the label of each aperture. In this way, the global coordinates of aperture (nX, nY) are PG = (s, t, 0)T, in which s = nXΔs and t = nYΔt.

        According to the mathematical model of single-aperture optical imaging in Section 3.2, for aperture (nX, nY), the rotation matrix R is obviously the identity matrix, R = E, and the translation vector is t = (−s, −t, 0)T. Then, if the global coordinates of an object PG = (x, y, Z)T are given, the pixel coordinates p can be computed using Eq. (3). As shown in Fig. 6, assuming that the object plane is Z away from the planar ACE, every aperture has a corresponding sub-image region on the object plane. In a common overlapping region, an object point is recorded by all apertures.

        Fig. 5 Schematic of a general planar ACE, where all apertures are in a grid pattern on a plane

        Fig. 6 Light field acquisition of planar ACE. Every aperture will have a sub-image region on the object plane. In addition, an object point in a common overlapping region is recorded by all apertures

        The whole FOV of a planar ACE is the same as that of a single aperture; see Appendix A for details. For adjacent apertures, when the object depth Z satisfies Z ≥ fuΔs/Nu, the sub-images overlap.

        The sub-image overlap ratio, which is the ratio of the overlapping image region size to the sub-image region size, can be obtained using Eq. (5):

        ηX = 1 − fuΔs/((Nu − 1)Z). (5)

        According to similar triangles, the whole object region that a planar ACE can record is given by Eq. (6). Thus, the width of the whole object region is (Nu − 1)Z/fu + (NX − 1)Δs, and the height is (Nv − 1)Z/fv + (NY − 1)Δt. ⌊·⌋ and ⌈·⌉ are used to denote rounding down and up, respectively, and the equivalent resolution Nx × Ny of the whole object region is given by Eq. (7):

        Nx = (Nu − 1) + ⌈fu(NX − 1)Δs/Z⌉, Ny = (Nv − 1) + ⌈fv(NY − 1)Δt/Z⌉. (7)

        Note that when the object depth Z satisfies the condition shown in inequality (8), i.e., Z > fu(NX − 1)Δs/(Nu − 1), all apertures have a common overlapping region, whose extent is given by Eq. (9).

        In this case, a point on the object plane is recorded by all sub-images. The size of the common overlapping region is (Nu − 1)Z/fu − (NX − 1)Δs. Adjacent apertures record such a point with a pixel shift, which is called parallax. The parallax, denoted as Δp, is computed as Δp = ⌊fuΔs/Z⌋. The larger Δs is, the more obvious Δp will be. In vision applications, Δs is termed the baseline.

        Ignoring the subpixel shift, the equivalent resolution Nx′ × Ny′ of the common overlapping region is shown in Eq. (10):

        Nx′ = (Nu − 1) − (NX − 1)⌈fuΔs/Z⌉, Ny′ = (Nv − 1) − (NY − 1)⌈fvΔt/Z⌉. (10)

        If Δs = 0.1 m, when Z increases from 1 m to 10 m, the sizes of the whole object region and the common overlapping region vary linearly from 1.199 m to 8.387 m and from 0.3987 m to 7.587 m, respectively. However, for ηX, when Z is 0.5, 3, 5.5, and 8 m, the corresponding ηX is 0.75, 0.9583, 0.9773, and 0.9844, respectively. Therefore, as the object depth Z becomes larger, the sub-image overlap ratio approaches 1.

        Let Z = 5 m. As Δs increases from 0.03 m to 0.2 m, the whole object region increases linearly from 4.114 m to 4.794 m, but ηX and the common overlapping region size decrease linearly from 0.9925 to 0.95 and from 3.874 m to 3.194 m, respectively. In addition, when Δs = 0.1 m and Z = 5 m, the common overlapping region is 3.594 m, whose equivalent resolution is Nx′ = 575 according to Eq. (10).
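        The quantities in this subsection can be checked with a few lines of Python. The numerical example above does not restate its fixed parameters; Nu = 640, fu = 800, and NX = 5 below are assumptions chosen so that the quoted numbers (e.g., a 3.594 m common overlapping region at Z = 5 m and Δs = 0.1 m) are reproduced.

```python
import math

# Assumed planar-ACE parameters (not restated in the example above).
Nu, fu, NX = 640, 800.0, 5  # sub-image width (px), pixel focal length, apertures per row

def whole_region(Z, ds):
    """Width of the whole object region: (Nu-1)*Z/fu + (NX-1)*ds."""
    return (Nu - 1) * Z / fu + (NX - 1) * ds

def common_overlap(Z, ds):
    """Width of the common overlapping region: (Nu-1)*Z/fu - (NX-1)*ds."""
    return (Nu - 1) * Z / fu - (NX - 1) * ds

def overlap_ratio(Z, ds):
    """Adjacent-aperture overlap ratio: eta_X = 1 - fu*ds / ((Nu-1)*Z)."""
    return 1.0 - fu * ds / ((Nu - 1) * Z)

def parallax(Z, ds):
    """Pixel shift between adjacent apertures: floor(fu*ds/Z)."""
    return math.floor(fu * ds / Z)

print(round(common_overlap(5.0, 0.1), 3))  # ~3.594 m
print(round(overlap_ratio(3.0, 0.1), 4))   # ~0.9583
print(parallax(5.0, 0.1))                  # 16 px
```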

        4.2 Light field camera

        As Fig. 6 shows, an object point PG in the common overlapping region is recorded by all imaging apertures (i.e., p−2 to p2). This makes it possible to capture the angular light field information so that the three-dimensional (3D) scene can be recovered from two-dimensional (2D) sub-images. Based on this idea, light field imaging (Wu G et al., 2017; Zhu H et al., 2017) emerged.

        Fig. 7 Quantitative analysis: (a) variation trend of ηX; (b) variation trend of the whole object region size; (c) variation trend of the common overlapping region size

        Moon and Spencer (1953) defined a light field as a complete collection of rays in space. Later, Adelson and Bergen (1991) proposed a seven-dimensional (7D) function L(P, ω, λ, t), called the plenoptic function, to represent a light field. The plenoptic function models a ray with eight parameters: position P = (x, y, z)T, direction ω = (θ, φ)T, wavelength λ, time t, and brightness |L|. The 7D function is seldom used due to its complicated calculation. Fortunately, because the dynamic light field can be captured in continuous frames, the time t of the plenoptic function can be ignored. In addition, most imaging sensors are CCD or CMOS, having red, green, and blue channels, so the wavelength λ can be ignored as well. Without wavelength λ and time t, the 5D function L(P, ω), as shown in Fig. 8a, is used to represent the light field (McMillan and Bishop, 1995). However, the dimension of the 5D function can be further reduced. Levoy and Hanrahan (1996) and Gortler et al. (1996) proposed a two-parallel-plane (2PP) model to represent the light field (Fig. 8b). In their model, a ray can be represented by two intersections (e.g., (s, t)T on plane st and (x, y)T on plane xy) when the interval between the two parallel planes is Z.

        Fig. 8 Light field representation: (a) a ray is recorded by position P and direction ω in the 5D function; (b) a ray is recorded by two intersections (s, t)T and (x, y)T in the two-parallel-plane (2PP) model

        Compared with other light field representations, the 4D function LZ(s, t, x, y) is convenient to calculate because it has the fewest dimensions. More importantly, the 4D function allows the light field information to be used flexibly in the space domain (Bolles et al., 1987; Isaksen et al., 2000; Lin ZC and Shum, 2004) and the frequency domain (Chai et al., 2000; Zhang C and Chen, 2003; Durand et al., 2005; Ng, 2005; Soler et al., 2009), like a digital signal. Hence, the 2PP model is widely used in light field imaging. Taking a planar ACE as an example, if the object depth is Z, the recorded space resolution of the light field LZ(s, t, x, y) is determined by the equivalent resolution of the common overlapping region Nx′ × Ny′, and the angular resolution is determined by the number of apertures NX × NY.
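        Discretely sampled, the 4D light field is just a four-index array: two angular indices (one per aperture) and two spatial indices (the common-overlap samples). A minimal Python sketch with illustrative sizes, not values from the paper:

```python
# Angular resolution: number of apertures (illustrative values).
NX, NY = 5, 5
# Spatial resolution: equivalent resolution of the common overlapping region.
Nxp, Nyp = 64, 48

# L[sx][sy] is the Nxp x Nyp sub-image seen from viewpoint (s, t);
# L[sx][sy][x][y] is one ray sample of L_Z(s, t, x, y).
L = [[[[0.0] * Nyp for _ in range(Nxp)] for _ in range(NY)] for _ in range(NX)]

ray = L[2][3][10][20]  # fix viewpoint (2, 3) and spatial sample (10, 20)
print(len(L), len(L[0]), len(L[0][0]), len(L[0][0][0]))  # 5 5 64 48
```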

        Light field cameras based on microlens arrays can obtain light field information in a single photographic exposure (Ng and Hanrahan, 2005). Compared to a planar ACE, light field cameras have the advantage of being small and easy to carry, and thus are popular among researchers and photographers. There are mainly two types of light field cameras based on a microlens array: Plenoptic 1.0(Ng and Hanrahan, 2005; Levoy et al., 2006) and Plenoptic 2.0 (Lumsdaine and Georgiev, 2009). In the rest of this subsection, the light field acquisition models of these two kinds of light field cameras are summarized.

        4.2.1 Plenoptic 1.0

        Without loss of generality, as shown in Fig. 9, it is assumed that there are Nx × Ny microlenses in a light field camera, and each microlens label is denoted as (nx, ny). On the imaging sensor, every microlens has a corresponding pixel region with Nu × Nv pixels, and the coordinates of a pixel in the corresponding pixel region are denoted by p(nx, ny) = (u, v)T. The diaphragm diameter of the main lens is dmain and that of the microlens is dmic. In terms of the main lens, the principal distance fmain is a little larger than the focal length Fmain.

        Since light field cameras are designed to obtain light field information, the 2PP model is used to represent the light field acquisition for Plenoptic 1.0 (Fig. 10). The viewpoint plane lies at distance Fmain in front of the principal plane of the main lens (Hahne et al., 2014a, 2014b). The microlens array is located at the image plane of the main lens, so that rays focused by the main lens are separated by the microlens array.


        Fig. 9 Internal structure of light field cameras

        Fig. 10 Light field acquisition of Plenoptic 1.0. The viewpoint plane lies at distance Fmain in front of the principal plane of the main lens. The microlens array is located at the image plane of the main lens

        An important characteristic of Plenoptic 1.0 is that the main lens and all microlenses have the same f-number, which is defined as the ratio of the principal distance to the size of the diaphragm (Ng and Hanrahan, 2005). In this case, the principal distance of the microlens is fmic = dmicfmain/dmain. Notice that adjacent pixel regions will not overlap if the f-number is reasonable. Specifically, when dmain is the same as the size of the main lens, adjacent pixel regions are tangent to each other, and thus pixels are used effectively.

        If the coordinates of a pixel p(nx, ny) = (u, v)T are offered, a ray L(s, t, x, y) in the 2PP model is obtained by referring to Eqs. (11) and (12); this process is called decoding. Through decoding, we obtain sub-images as planar ACEs do. The resolution of every sub-image is Nx × Ny, and the number of viewpoints is Nu × Nv.

        For every sub-image, the pixel resolution on the object plane is Δx = dmic(Z − Fmain)/Fmain, while the viewpoint interval is Δs = dmain/Nu. In addition, when nx and u are taken to be the maximum and minimum respectively, and the object depth is Z, the range of the object region is shown in Eq. (13).

        Therefore, the size of the whole object region is dmain + Nxdmic(Z − Fmain)/Fmain. The sub-image overlap ratio ηX can be obtained using Eq. (14).

        Because Fmain is almost negligible compared to Z, and Nxdmic is approximately dmain, ηX can be considered as 1 in practice. As a result, the number of microlenses Nx × Ny determines the space resolution of the light field, and the angular resolution of the light field is determined by the number of pixels in a pixel region, which is Nu × Nv.

        It seems that Plenoptic 1.0 is equivalent to an Nu × Nv planar ACE, in which the resolution of each imaging sensor is Nx × Ny and the pixel focal length is fu = fv = ZFmain/(dmic(Z − Fmain)). The baselines are Δs = dmain/Nu and Δt = dmain/Nv.

        According to Ng and Hanrahan (2005), the proposed Plenoptic 1.0 has 296×296 microlenses, each with a 14×14 pixel region. The focal length of a microlens is 500 μm, and the diaphragm diameters of a microlens and the main lens are dmic = 125 μm and dmain = 35 mm, respectively. After calculation, the parameters of the equivalent planar ACE are as follows: the number of apertures is 14×14, the resolution of each imaging sensor is 296×296, the pixel focal length is fu = fv = 1152 when Z = 5 m, and the baselines are Δs = Δt = 2.5 mm.
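        The equivalent-ACE numbers quoted above can be reproduced directly. In the sketch below, the main-lens focal length Fmain = 140 mm is an assumption (it is not quoted in this paragraph), chosen to be consistent with the stated fu = 1152 at Z = 5 m:

```python
Nu = Nv = 14                   # pixels per microlens region -> number of viewpoints
Nx = Ny = 296                  # microlenses -> sub-image resolution
d_mic, d_main = 125e-6, 35e-3  # diaphragm diameters (m)
F_main = 0.140                 # main-lens focal length (m), assumed
Z = 5.0                        # object depth (m)

fu = Z * F_main / (d_mic * (Z - F_main))  # equivalent pixel focal length
ds = d_main / Nu                          # baseline (m)
print(round(fu), round(ds * 1e3, 2))      # 1152 2.5  (baseline in mm)
```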

        4.2.2 Plenoptic 2.0

        Unlike Plenoptic 1.0, Plenoptic 2.0 can provide a variable tradeoff between space and angular resolution (Lumsdaine and Georgiev, 2009). Fig. 11 shows the 2PP model of light field acquisition for Plenoptic 2.0. The focused image plane of the main lens is in front of the microlens array at distance a. The interval between the microlens array and the imaging sensor is b. In this way, the viewpoint plane coincides with the principal plane of the main lens.

        Fig. 11 Light field acquisition of Plenoptic 2.0. The focused image plane of the main lens is in front of the microlens array at distance a

        In this case, according to Eqs. (15) and (16), the light field information is decoded from all pixels on the imaging sensor.

        On the object plane, the pixel resolution of a sub-image is Δx = admicZ/(bNufmain). In addition, the viewpoint interval is Δs = dmicZ/fmain. Taking nx and u to be the maximum and minimum respectively, the range of the object region is obtained as shown in Eq. (17), and the size of the whole object region is (Nx + a/b)dmicZ/fmain.

        The sub-image overlap ratio ηX can be computed using Eq. (18).

        In practical applications, ηX can be considered as 1, similar to the setting in Plenoptic 1.0. However, in contrast to Plenoptic 1.0, in Plenoptic 2.0 the resolution of every sub-image is (NxNub/a) × (NyNvb/a) and the number of viewpoints is (a/b) × (a/b). The former represents the space resolution of the light field, while the latter is the angular resolution. Obviously, the ratio of a to b determines the tradeoff between the space and angular resolution of light field acquisition.

        Plenoptic 2.0 can also be considered equivalent to a planar ACE. This planar ACE has (a/b) × (a/b) apertures; the resolution of each imaging sensor is (NxNub/a) × (NyNvb/a), and the pixel focal length is fu = bNufmain/(admic). In addition, the baselines are Δs = Δt = dmicZ/fmain.

        Lumsdaine and Georgiev (2009) proposed Plenoptic 2.0, which has 130×122 microlenses, each with a 32×32 pixel region. The focal lengths of the microlenses and the main lens are 750 μm and fmain = 140 mm, respectively. The diaphragm diameter of a microlens is dmic = 250 μm. Assuming that a/b = 8, after calculation, the equivalent planar ACE is as follows: the number of apertures is 8×8, the resolution of each imaging sensor is 520×488, and the pixel focal length is fu = fv = 2240. If Z = 5 m, the baselines are Δs = Δt = 8.93 mm.
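        Similarly, the Plenoptic 2.0 equivalence can be verified numerically (Python; values taken from the paragraph above, with a/b = 8 as stated):

```python
Nx, Ny = 130, 122  # microlenses
Nu = Nv = 32       # pixels per microlens region
d_mic = 250e-6     # microlens diaphragm diameter (m)
f_main = 0.140     # main-lens principal distance (m)
a_over_b = 8.0     # space/angular tradeoff ratio a/b
Z = 5.0            # object depth (m)

res_x = Nx * Nu / a_over_b             # sub-image width:  520
res_y = Ny * Nv / a_over_b             # sub-image height: 488
fu = Nu * f_main / (a_over_b * d_mic)  # = b*Nu*fmain/(a*dmic)
ds = d_mic * Z / f_main                # baseline (m)
print(int(res_x), int(res_y), round(fu), round(ds * 1e3, 2))  # 520 488 2240 8.93
```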

        5 Light field acquisition models for the convex structure

        Compared with the planar structure, the optical centers of all sub-images in the convex structure are not coplanar. In this section, the light field acquisition models of three representative arrangements for the convex structure, i.e., spherical multi-loop, spherical multi-row, and cylinder, are summarized.

        5.1 Spherical multi-loop arrangement

        Most convex ACEs (Brady et al., 2012; Guo et al., 2012; Afshari et al., 2013; Cao et al., 2015; Luo et al., 2015; Pang et al., 2017; Shi et al., 2017; Yu et al., 2019; Zhang JM et al., 2020; Zhou et al., 2020) are in a spherical multi-loop arrangement. In this arrangement, imaging apertures are divided into Nθ latitude loops, and all optical centers located in the same loop have the same latitude angle, as shown in Fig. 12.

        Fig. 12 Spherical multi-loop arrangement

        Suppose that the radius of the sphere is R and that each loop is evenly spaced with the latitude interval Δθ. At loop nθ, when the number of apertures uniformly arranged is Nφ(nθ), the longitude interval between two adjacent sub-eyes is Δφ(nθ) = 360°/Nφ(nθ). The label of each aperture is denoted as (nθ, nφ(nθ)), in which nφ(nθ) is the counterclockwise number for loop nθ in the top view. Thus, the global coordinates of aperture (nθ, nφ(nθ)) can be computed using spherical coordinates (θG, φG). The zenith angle and azimuth angle are θG = nθ Δθ and φG = nφ(nθ) Δφ(nθ) + φ0(nθ), respectively, in which φ0(nθ) stands for the azimuth deviation.
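This parameterization can be sketched as a function mapping the aperture label to a Cartesian optical-center position on the sphere. The axis convention below (the ZG axis passing through the loop-0 aperture) is an assumption for illustration, not taken from the text.

```python
import math

def aperture_center(n_theta, n_phi, N_phi_loop, d_theta_deg, R, phi0_dev_deg=0.0):
    """Global Cartesian optical-center position of aperture (n_theta, n_phi).

    Zenith angle  thetaG = n_theta * d_theta, measured from the ZG axis;
    azimuth angle phiG   = n_phi * (360 / N_phi_loop) + phi0_dev.
    Placing the ZG axis through the loop-0 aperture is an assumed convention.
    """
    theta = math.radians(n_theta * d_theta_deg)
    phi = math.radians(n_phi * 360.0 / N_phi_loop + phi0_dev_deg)
    return (R * math.sin(theta) * math.cos(phi),
            R * math.sin(theta) * math.sin(phi),
            R * math.cos(theta))

# The loop-0 aperture sits on the ZG axis at distance R:
print(aperture_center(0, 0, 1, 15.0, R=0.1))
```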

        The local coordinate system, OL-XLYLZL, is established for each aperture. The ZL axis coincides with the optical axis, whose extension line passes through OG. In this case, the rotation matrix R of aperture (nθ, nφ(nθ)) is shown in Eq. (19), and the translation vector t is (0, 0, -R)T:

        Fig. 13 shows the side view of this arrangement. In this arrangement, the maximum half-FOV of light field acquisition is (Nθ - 1)Δθ + arctan((Z - R)tanφ0/Z); see Appendix B for details. If the adjacent apertures overlap, Δθ < 2φ0 should be satisfied. The red shaded region is the overlapping sub-image region of adjacent apertures. The minimum depth of the spherical object surface Zmin is computed using Eq. (20), and the sub-image overlap ratio is computed using Eq. (21):

        Fig. 13 Vertical view of a convex ACE

        One of the key issues in convex ACEs is how to obtain a panoramic image. Two determinants need to be considered: the distribution density of apertures and the full-view coverage distance (FCD) (Afshari et al., 2013). A well-established geometric concept, the Voronoi diagram (Aurenhammer, 1991; de Berg et al., 2001), applied on a spherical surface, is used to determine the parameters of this arrangement so that each direction of the curved object surface is observable by at least one aperture. Afshari et al. (2013) and Wang YW et al. (2017) analyzed how to obtain a panoramic image. In their view, as shown in Fig. 14, when every loop is evenly spaced with the latitude interval Δθ, the edges of the non-overlapping parts of the object planes of apertures in the same loop should be seamlessly connected to obtain a panoramic image.

        Fig. 14 The object planes of a convex ACE in a spherical multi-loop arrangement. The edge of each aperture’s object plane in the non-overlapping part is seamlessly connected. Reprinted from Wang YW et al. (2017), Copyright 2017, with permission from John Wiley and Sons

        In this case, the number of apertures located at loop nθ (0 < nθ < Nθ - 1) usually satisfies inequality (22). Also, the full-view coverage distance ZFCD is given as follows:

        Specifically, for the last loop, nθ = Nθ - 1, Nφ(nθ) ought to satisfy inequality (23), in which φ is given as follows:

        Assuming that for every sub-image, the resolution is Nu = 640 without nonlinear distortion and the pixel focal length is fu = 800, according to Eqs. (20) and (21) and inequality (22), the corresponding values are set for quantitative analysis. When the radius of the sphere is R = 0.1 m, if Δθ increases from 10° to 30° and Z increases from 0.5 m to 10 m, the FCD, sub-image overlap ratio, and minimum number of apertures located at loop nθ = 1 will vary (Fig. 15).

        Fig. 15 Quantitative analysis: (a) variation trend of ZFCD when Δθ varies; (b) variation trend of ηθ when Δθ and Z vary; (c) variation trend of Nφ(1) when Δθ and Z vary

        When Δθ is 10°, 15°, 20°, 25°, and 30°, the corresponding ZFCD is 0.128, 0.1491, 0.1788, 0.2243, and 0.3029 m, respectively. As Δθ increases, the FCD increases faster and faster. If Z = 5 m, when Δθ increases from 10° to 30°, ηθ will decrease linearly from 0.7661 to 0.2983. In addition, if Δθ = 30°, when Z is 0.5, 3, 5.5, 8, and 10 m, the corresponding ηθ is 0.1449, 0.2892, 0.2995, 0.3035, and 0.3051, respectively. With an increase in the spherical object surface's depth, the sub-image overlap ratio approaches 1 - Δθ/(2φ0) = 0.312. Let Δθ = 15° and Z = 5 m; there should be at least three apertures located at loop nθ = 1 to ensure full-view coverage.
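The limiting overlap ratio quoted above is easy to check numerically; the sketch below evaluates the large-depth limit 1 - Δθ/(2φ0) for the half-FOV φ0 = 21.8014° used in this example.

```python
phi0 = 21.8014  # half-FOV of each sub-image [deg], from Nu = 640, fu = 800

def eta_limit(d_theta_deg):
    """Large-depth limit of the sub-image overlap ratio: 1 - d_theta/(2*phi0)."""
    return 1.0 - d_theta_deg / (2.0 * phi0)

print(round(eta_limit(30.0), 3))  # 0.312, matching the value in the text
print(round(eta_limit(20.0), 4))  # 0.5413, used again in Section 6.2
```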

        5.2 Spherical multi-row arrangement

        Some convex ACEs (Zhang YK et al., 2010; Song et al., 2013; Deng et al., 2016) adopt a spherical multi-row arrangement. As in the spherical multi-loop arrangement, all apertures are located on the sphere with radius R and all optical axis extension lines intersect at the spherical center OG (Fig. 16).

        Fig. 16 Spherical multi-row arrangement

        Suppose that the number of apertures is Nφ × Nθ and that the label of each aperture is denoted as (nφ, nθ). Then the global coordinates of aperture (nφ, nθ) can be computed using polar coordinates (φG, θG), in which φG and θG are the azimuth angle and elevation angle, respectively. If the elevation angle between adjacent apertures is Δθ and the azimuth angle is Δφ, the azimuth and elevation angles of each aperture will be φG = nφ Δφ and θG = nθ Δθ, respectively.

        Taking the sub-image's optical center of each aperture as the origin to establish a local coordinate system, the ZL axis is along the optical axis, while the XL axis is parallel to the plane ZG OG XG. For aperture (nφ, nθ), the extrinsic rotation matrix R is shown in Eq. (24), together with the translation vector t:

        If the adjacent sub-images overlap, Δφ < 2φ0 and Δθ < 2θ0 should be satisfied. When the spherical object surface has a depth of Z, the vertical sub-image overlap ratio of adjacent apertures can be computed using Eq. (25):

        When θG = 0, the minimum horizontal sub-image overlap ratio is computed using Eq. (26):

        To obtain a panoramic image, the depth of the spherical object surface should satisfy inequality (27):

        In this case, the vertical FOV of light field acquisition is (Nθ - 1)Δθ + 2arctan((Z - R)tanθ0/Z). In addition, the horizontal FOV is given as follows:

        Many qualitative conclusions are consistent with those for the convex ACE in a spherical multi-loop arrangement. For example, with an increase of Δφ or Δθ, the horizontal or vertical FOV will increase linearly, but the sub-image overlap ratio will decrease linearly.

        5.3 Cylinder arrangement

        Leitel et al. (2014) proposed a typical convex ACE in a cylinder arrangement. To summarize the general light field acquisition model of the cylinder arrangement, it is assumed that all vertical apertures are parallel. As shown in Fig. 17, similar to the spherical multi-row arrangement, Nφ × NY apertures are arranged on the surface of a cylinder whose radius is R. The vertical interval between adjacent apertures is ΔY, and Δφ is used to represent the azimuth angle between adjacent horizontal apertures. Likewise, if (nφ, nY) is denoted as the label of each aperture, then the global coordinates of aperture (nφ, nY) can be computed using columnar coordinates (φG, YG), in which the azimuth angle is φG = nφ Δφ and the height is YG = nY ΔY.
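As a sketch, the columnar parameterization can be written as a function from the aperture label (nφ, nY) to a global optical-center position; placing φG = 0 on the ZG axis is an assumed convention, and the full extrinsics of Eqs. (28) and (29) are not reproduced.

```python
import math

def cyl_aperture_center(n_phi, n_Y, d_phi_deg, dY, R):
    """Global optical-center position of aperture (n_phi, n_Y) on the cylinder.

    phiG = n_phi * d_phi and YG = n_Y * dY; the choice of phi = 0 along
    the ZG axis is an assumption for illustration.
    """
    phi = math.radians(n_phi * d_phi_deg)
    return (R * math.sin(phi), n_Y * dY, R * math.cos(phi))

# Aperture (0, 2) with dY = 0.05 m on a cylinder of radius 0.1 m:
print(cyl_aperture_center(0, 2, 30.0, 0.05, 0.1))
```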

        Fig. 17 Cylinder arrangement

        Each aperture takes the corresponding sub-image's optical center as the origin to establish a local coordinate system OL-XLYLZL, in which the ZL axis is along the optical axis and the YL axis is parallel to the YG axis. The rotation matrix R and translation vector t for aperture (nφ, nY) are shown in Eqs. (28) and (29), respectively:

        When the depth of the cylindrical object surface is Z, the sub-image overlap ratios of adjacent apertures in the horizontal and vertical directions can be computed using Eq. (30):

        To synthesize a panoramic image, the following conditions should be satisfied: Δφ < 2φ0 and Z ≥ ZFCD, in which ZFCD is computed using Eq. (31):

        In this case, the vertical FOV is 2θ0, and the horizontal FOV of light field acquisition is (Nφ - 1)Δφ + 2arctan((Z - R)tanφ0/Z). Similar to a planar ACE, the vertical whole object region is (Nv - 1)(Z - R)/fv + (NY - 1)ΔY, and the size of the vertical common overlapping region is (Nv - 1)(Z - R)/fv - (NY - 1)ΔY.

        Consistent with the qualitative conclusions for planar ACEs and for convex ACEs in a spherical multi-loop arrangement, if Δφ or ΔY increases, the horizontal FOV or the vertical whole object region will become larger, but the sub-image overlap ratio will decrease. When the depth of the cylindrical object surface increases, both the size of the vertical whole object region and the sub-image overlap ratio will increase.

        6 Application analysis of multi-aperture optical imaging systems

        Researchers have discussed the applications of multi-aperture optical imaging systems based on ACEs (Gong et al., 2013; Wu SD et al., 2017; Cheng et al., 2019), light field cameras, and camera arrays (Wu G et al., 2017; Zhu H et al., 2017). For the planar structure, the sub-image overlap ratio is greater than that for the convex structure, and all sub-images share a large common overlapping region. Compared with the planar structure, the whole FOV of the convex structure is much larger.

        It is therefore easy to infer that these two kinds of multi-aperture optical imaging systems have different applications. Planar ACEs and light field cameras based on a microlens array are suitable for light field imaging. Convex ACEs focus on wide FOV imaging, associated with high-definition surveillance and multi-target detection. In this section, based on the summarized light field acquisition models, the applications of the different structures are analyzed.

        In the rest of this section we assume that for every aperture, the imaging sensor resolution is Nu = Nv = 640 without nonlinear distortion, the pixel focal length is fu = fv = 800, and the pixel coordinates of the optical center are cu = cv = 320. In this case, the half-FOV of a sub-image is φ0 = θ0 = 21.8014°.
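For a distortion-free pinhole model with the principal point at the sensor center, the half-FOV follows directly from these intrinsics as φ0 = arctan(cu/fu); a quick check:

```python
import math

n_u, f_u, c_u = 640, 800.0, 320.0  # sensor resolution, pixel focal length, principal point

# Half-FOV of a distortion-free sub-image: phi0 = arctan(c_u / f_u)
phi0 = math.degrees(math.atan(c_u / f_u))
print(round(phi0, 4))  # 21.8014 deg, matching the text
```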

        6.1 Application analysis of the planar structure

        Planar multi-aperture optical imaging systems are used mainly in light field imaging; the basic algorithms include superresolution reconstruction (Bishop et al., 2009; Lim et al., 2009; Georgiev et al., 2011; Wanner and Goldluecke, 2012b; Carles et al., 2014), depth estimation (Wanner and Goldluecke, 2012a, 2014; Kim et al., 2013; Lin HT et al., 2015; Johannsen et al., 2016; Williem and Park, 2016; Wang TC et al., 2018; Wu SD et al., 2018a, 2018b), and refocusing (Vaish et al., 2004; Yang T et al., 2014; Wang YQ et al., 2019). In this subsection, these three types of applications are analyzed.

        Assuming that the number of apertures is NX = 5 and that the interval of adjacent apertures is Δs = 0.05 m, if the object depth Z increases from 0.3 m to 50 m, the sub-image overlap ratio ηX and the common overlap ratio η, defined as the ratio of the common overlapping region size to the whole object region size, vary as shown in Fig. 18a. The parallax Δp varies as shown in Fig. 18b.

        According to Fig. 18a, when Z > 10 m, ηX is greater than 0.9511, which means that the pixel utilization of each aperture is high enough to obtain the common overlapping region. Because the common overlapping region can be recorded by all apertures, there are NX sub-pixels in a pixel region, and a shift occurs within each sub-pixel. Therefore, superresolution reconstruction has been studied for planar ACEs (Tanida et al., 2000; Duparré et al., 2005) and light field cameras (Ng and Hanrahan, 2005; Lumsdaine and Georgiev, 2009) to obtain high-definition images.
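The quoted value 0.9511 is consistent with taking the common overlapping region as the sub-image object region (Nu - 1)Z/fu shrunk by the total baseline spread (NX - 1)Δs, and the whole object region as the same quantity enlarged by that spread. This is a reconstruction (Eq. (18) and Fig. 18 are not reproduced here), sketched as:

```python
def common_overlap_ratio(Z, NX=5, ds=0.05, Nu=640, fu=800.0):
    """Ratio of the common overlapping region to the whole object region.

    Whole object region: (Nu-1)*Z/fu + (NX-1)*ds
    Common overlap:      (Nu-1)*Z/fu - (NX-1)*ds
    (A reconstruction assumed to be consistent with Appendix A and Fig. 18a.)
    """
    sub_region = (Nu - 1) * Z / fu   # size of one sub-image's object region
    spread = (NX - 1) * ds           # total baseline spread of the array
    return (sub_region - spread) / (sub_region + spread)

print(round(common_overlap_ratio(10.0), 4))  # 0.9511 at Z = 10 m
```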

        According to Fig. 18b, when Z < 40 m, the parallax between adjacent sub-images is greater than 1 pixel. In this case, the parallax can be used to determine the depth of an object point. Depth estimation, which is the basis of 3D reconstruction, can thus be applied to planar multi-aperture optical imaging systems. The depth intervals are shown in Fig. 19. When Z > 10 m, only three depth intervals can be identified.

        The accuracy of depth estimation can be represented by the depth change ΔZ corresponding to a one-pixel parallax change (i.e., fu Δs/(Z - ΔZ) - fu Δs/Z = 1), which is computed using Eq. (32) and shown in Fig. 20. As Z increases, the depth estimation accuracy deteriorates rapidly. However, if Δs is set as 0.1, 0.2, 0.3, or 0.4 m when Z = 25 m, the corresponding ΔZ is 5.952, 3.378, 2.358, or 1.812 m, respectively.
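Solving fu Δs/(Z - ΔZ) - fu Δs/Z = 1 for ΔZ gives ΔZ = Z^2/(fu Δs + Z), which reproduces the values quoted above; a minimal sketch:

```python
def depth_accuracy(Z, ds, fu=800.0):
    """Depth change dZ corresponding to a one-pixel parallax change.

    Solving fu*ds/(Z - dZ) - fu*ds/Z = 1 for dZ yields dZ = Z**2 / (fu*ds + Z).
    """
    return Z**2 / (fu * ds + Z)

for ds in (0.1, 0.2, 0.3, 0.4):
    print(round(depth_accuracy(25.0, ds), 3))  # 5.952, 3.378, 2.358, 1.812 m
```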

        Fig. 18 The variation trend of the sub-image overlap ratio and common overlap ratio (a) and Δp (b)

        Hence, depth estimation is used mainly in small scenes with limited distances, such as indoors. The smaller the object depth, the higher the accuracy, but the smaller the common overlapping region. At the same time, as the baseline Δs increases, the depth estimation accuracy will improve. Some planar multi-aperture optical imaging systems (Tanida et al., 2000; Ng and Hanrahan, 2005; Lumsdaine and Georgiev, 2009; Venkataraman et al., 2013) have been applied to depth estimation.

        Based on depth information acquisition, another basic algorithm is light field refocusing, by which the DOF can be controlled. In this way, objects at a certain depth are imaged sharply, while objects at other depths are blurred; this is called synthetic aperture imaging in some research (Vaish et al., 2006; Joshi et al., 2007; Yang T et al., 2014). The minimum DOF is consistent with the depth estimation accuracy.

        Taking Fig. 6 as an example, if the refocused depth is Z, the sub-image of aperture nX will be shifted by nX Δp pixels, in which Δp is the parallax. Then, by adding and averaging these shifted sub-images, the refocused image, whose viewpoint is the same as that of aperture nX = 0, is obtained. When Δs = 0.05 m and Z = 10 m, the DOF of the refocused image is 2 m according to Eq. (32).
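The shift-and-add procedure just described can be sketched in a few lines. This is a toy illustration rather than any author's implementation: it uses integer-pixel circular shifts on synthetic sub-images of a pattern lying at the refocused depth.

```python
def refocus(sub_images, Z, ds=0.05, fu=800.0):
    """Shift-and-add refocusing along the X baseline (a sketch of the scheme
    described in the text). sub_images[nX] is a 2D list (rows of pixels)
    from aperture nX; each is shifted by nX * dp pixels, dp = fu*ds/Z,
    then all are averaged. Integer-pixel circular shifts keep the sketch
    simple; real systems crop instead of wrapping around."""
    dp = fu * ds / Z                       # parallax in pixels per aperture step
    h, w = len(sub_images[0]), len(sub_images[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for n_x, img in enumerate(sub_images):
        s = int(round(n_x * dp)) % w       # shift for this aperture
        for r in range(h):
            for c in range(w):
                acc[r][c] += img[r][(c + s) % w]
    n = len(sub_images)
    return [[v / n for v in row] for row in acc]

# At ds = 0.05 m and Z = 10 m, dp = 4 pixels. Build three synthetic
# sub-images of a diagonal pattern lying at that depth and refocus:
eye = [[1.0 if c == r else 0.0 for c in range(8)] for r in range(8)]
subs = [[[eye[r][(c - 4 * n) % 8] for c in range(8)] for r in range(8)]
        for n in range(3)]
print(refocus(subs, Z=10.0) == eye)  # True: the pattern realigns
```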

        Fig. 19 The depth interval for Fig. 18b

        Fig. 20 The depth estimation accuracy

        In short, as the parallax Δp increases, the refocusing effect becomes more obvious. Yang JC et al. (2002), Ng and Hanrahan (2005), Wilburn et al. (2005), Levoy et al. (2006), and Lumsdaine and Georgiev (2009) studied light field refocusing based on their multi-aperture optical imaging systems.

        In summary, the planar structure is suitable for light field imaging tasks such as superresolution reconstruction, depth estimation, and refocusing. However, depth estimation and refocusing are feasible only when the object depth is not too large.

        6.2 Application analysis of the convex structure

        It is apparent that convex multi-aperture optical imaging systems have the advantage of a larger whole FOV than the planar structure. Most convex multi-aperture optical imaging systems focus on wide FOV imaging, whose basis is image stitching. In general, an image stitching algorithm is composed of two steps: image registration, which is the core of image stitching, and image fusion (Zhang ZZ et al., 2018). However, traditional methods are time-consuming and thus cannot guarantee real-time wide FOV imaging. Fortunately, for convex multi-aperture optical imaging systems, the relative position of each sub-image is known, which offers a solution to this problem. In this subsection, the advantage of the convex structure in image stitching is analyzed.

        Fig. 13 is taken as an example again. Suppose that five apertures are fixed on a spherical frame whose radius is R and that the latitude interval between adjacent apertures is Δθ. When R = 0.1 m and Δθ = 20°, if the depth of the curved object surface Z increases from 0.3 m to 50 m, the sub-image overlap ratio ηθ approaches 1 - Δθ/(2φ0) = 0.5413 according to Eq. (21). To find the critical depth Zopt at which ηθ is approximately constant, the derivative of Eq. (21) is taken, as given in Eq. (33). The variation trends of ηθ and dηθ/dZ are shown in Fig. 21.

        Fig. 21 The variation trend of ηθ (a) and dηθ/dZ (b)

        Judging from Fig. 21b, dηθ/dZ decreases monotonically with Z, and when Z ≥ 9 m (dηθ/dZ ≤ 10^-5), ηθ is approximately constant at 1 - Δθ/(2φ0). It follows that, to obtain a panoramic image when the depth of the curved object surface Z is large enough, the sub-image overlap ratio can be regarded as 1 - Δθ/(2φ0), and the non-overlapping parts of adjacent sub-images can then be spliced directly.

        To find the critical depth Zopt beyond which direct stitching can be carried out (i.e., when Z ≥ Zopt), the tolerable error Δe, expressed in pixels, is given by Eq. (34). In this way, once the tolerable error is specified, Zopt is computed using Eq. (35). Following the previous analysis, the variation trend of Zopt with Δe is shown in Fig. 22.

        Fig. 22 The variation trend of Zopt with Δe

        The deeper the curved object surface is, the more accurate the direct stitching will be. If the tolerable error Δe is less than 1 pixel, the critical depth is computed as Zopt = 28.7502 m. Therefore, Eq. (35) gives the design solution for a convex multi-aperture optical imaging system that fits a particular curved object surface depth for direct stitching. Based on the principle of this method, Golish et al. (2012), Cao et al. (2014), and Popovic et al. (2014) performed rapid wide FOV imaging on their proposed ACEs.

        Based on the wide FOV, Brady et al. (2012), Afshari et al. (2013), and Cao et al. (2015) achieved panoramic imaging. Leitel et al. (2014), Pang et al. (2017), and Wu SD et al. (2019) used their proposed ACEs to detect motion based on optical flow. Some convex multi-aperture optical imaging systems (Guo et al., 2012; Shi et al., 2017; Yu et al., 2019) were applied to fast target location.

        In short, the convex structure has the advantage of real-time image stitching, which makes it suitable for wide FOV imaging. Based on the wide FOV, convex multi-aperture optical imaging systems have great application value in surveillance and reconnaissance, image navigation, multi-target detection, and tracking.

        7 Conclusions and future work

        In this review, typical multi-aperture optical imaging systems were enumerated and categorized. Then, the light field acquisition models were summarized according to their different structures. Based on these mathematical models, the key indexes of different multi-aperture optical imaging systems (e.g., FOV, sub-image overlap ratio, and common overlapping region) can be computed easily.

        In the future, multi-aperture optical imaging systems will change from rigid to flexible, which means zoomable apertures and flexible substrates. For microlens arrays, researchers are constantly designing zoomable microlenses and optimizing imaging sensors from flat to curved. For lens module arrays, flexible substrates are worth studying. In addition, for the structure with one common imaging sensor, optical transfer systems need to be upgraded so that the sub-image error can be as small as possible. In short, multi-aperture optical imaging systems will become smaller and more flexible, with higher resolution.

        Concurrently, application research continues to advance. Multi-aperture optical imaging systems will be more widely used in the fields of computational photography, surveillance and reconnaissance, image navigation, 3D reconstruction, and so on.

        Contributors

        Qiming QI and Hongqi FAN structured the outline of the paper. Qiming QI drafted the paper. Zhengzheng SHAO and Ping WANG helped check the first two sections. Ruigang FU helped organize the paper. Qiming QI and Hongqi FAN revised and finalized the paper.

        Compliance with ethics guidelines

        Qiming QI, Ruigang FU, Zhengzheng SHAO, Ping WANG, and Hongqi FAN declare that they have no conflict of interest.

        Appendix A: Performance calculation for planar ACE

        Suppose that the object plane is at distance Z from the optical center plane. For the single aperture whose label is nX, every pixel coordinate u corresponds to a point on the object plane. x is used to denote the position of the object point:

        Therefore, the size of the sub-image object region is (Nu - 1)Z/fu. For two adjacent apertures, their sub-images overlap if the size of a single aperture's object region is larger than the aperture interval Δs. In this case, adjacent sub-images overlap only when the object depth Z is larger than the threshold given by the following equation:

        Further, the sub-image overlap ratio can be computed as follows:
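A sketch of the overlap threshold and the sub-image overlap ratio, assuming the straightforward reading of the derivation above (the original equations are not reproduced in this excerpt):

```python
def overlap_threshold(ds, Nu=640, fu=800.0):
    """Minimum object depth for adjacent sub-images to overlap.

    The sub-image object region (Nu-1)*Z/fu must exceed the baseline ds,
    i.e. Z > fu*ds/(Nu-1). (A reconstruction of the threshold equation.)
    """
    return fu * ds / (Nu - 1)

def eta_x(Z, ds, Nu=640, fu=800.0):
    """Adjacent sub-image overlap ratio: overlapping width over region width,
    eta = 1 - ds / ((Nu-1)*Z/fu). (Same reconstruction assumption.)"""
    return 1.0 - fu * ds / ((Nu - 1) * Z)

print(round(overlap_threshold(0.05), 4))  # ~0.0626 m for the Section 6 parameters
```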

        When nX and u are taken to be their maximum and minimum values, the edge coordinates of the whole object region are

        Note that the pixel resolution is Δx = Z/fu, while the subpixel shift is not considered. With ⌊·⌋ denoting rounding down, the number of pixels representing the whole object region is

        Appendix B: Performance calculation for spherical ACE

        As shown in Fig. 13, five apertures are arranged on the spherical surface whose radius is R, and the angle between adjacent apertures is denoted as Δθ.

        If the adjacent sub-images overlap, Δθ < 2φ0 should be satisfied. In this way, according to the law of sines, the minimum scene depth Zmin is computed as shown below:

        The angle α of the overlapping region, subtended at OG, can be obtained:

        Thus, the sub-image overlap ratio can be computed as follows:

        Then, for loop 0 < nθ < Nθ - 1, the projection radius of an object plane in the non-overlapping part is

        Therefore, the number of apertures located at loop nθ should satisfy

        For the last loop, nθ = Nθ - 1, Nφ(nθ) should satisfy
