Huichun Zhang, Lu Wang, Xiuliang Jin, Liming Bian, Yufeng Ge
a College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, Jiangsu, China
b Jiangsu Co-Innovation Center of Efficient Processing and Utilization of Forest Resources, Nanjing Forestry University, Nanjing 210037, Jiangsu, China
c Institute of Crop Sciences, Chinese Academy of Agricultural Sciences / Key Laboratory of Crop Physiology and Ecology, Ministry of Agriculture, Beijing 100081, China
d College of Forestry, Nanjing Forestry University, Nanjing 210037, Jiangsu, China
e Co-Innovation Center for Sustainable Forestry in Southern China and Key Laboratory of Forest Genetics & Biotechnology of the Ministry of Education, Nanjing Forestry University, Nanjing 210037, Jiangsu, China
f Department of Biological Systems Engineering, University of Nebraska-Lincoln, NE 68583, USA
Keywords: Leaf traits; Optical sensing; Image processing; Machine learning; Artificial intelligence
ABSTRACT Acquisition of plant phenotypic information facilitates plant breeding, sheds light on gene action, and can be applied to optimize the quality of agricultural and forestry products. Because leaves often show the fastest responses to external environmental stimuli, leaf phenotypic traits are indicators of plant growth, health, and stress levels. The combination of new imaging sensors, image processing, and data analytics permits measurement over the full life span of plants at high temporal resolution and at several organizational levels, from organs to individual plants to field populations of plants. We review the optical sensors and associated data analytics used for measuring morphological, physiological, and biochemical traits of plant leaves on multiple scales. We summarize the characteristics, advantages, and limitations of optical sensing and data-processing methods applied in various plant phenotyping scenarios. Finally, we discuss the future prospects of plant leaf phenotyping research. This review aims to help researchers choose appropriate optical sensors and data-processing methods to acquire plant leaf phenotypes rapidly, accurately, and cost-effectively.
With the increase in world population and the improvement of living standards, there is a growing need to ensure the quantity and quality of plant products for food, feed, fiber, and fuel [1,2]. Studies aimed at associating genotypic with phenotypic information are essential for improving plant yield and quality. With the development of next-generation sequencing technologies and the rapid decline of genotyping costs, obtaining plant phenotypic information with high throughput, high resolution, and low cost has become a bottleneck. Because plant phenotype is controlled by complex interactions between genotype and environment, comprehensive and accurate phenotypic information can lay the foundation for in-depth analysis of plant gene functions and regulatory networks and accelerate plant breeding pipelines [3]. Thanks to technological advances in imaging sensors, image analysis, and machine learning, plant phenotypic traits can be estimated at multiple scales ranging from cell, tissue, organ, and whole plant to field populations of plants; such traits include morphological, physiological, and biochemical traits [4].
Leaves convert absorbed solar energy into organic matter in green plants by photosynthesis, exchange water vapor and carbon dioxide via stomata, obtain oxygen via respiration, and generate the energy required for plant growth and metabolism. The leaf is a dynamic organ that is affected by both internal (genotype) and external (environmental) conditions. The phenotypic traits of leaves reflect the response, adaptability, and self-regulation ability of plants in constantly changing environments. For plant species whose leaves are consumed by humans (lettuce, cabbage, spinach, tea, etc.), leaf traits reflect the yield and nutritional value of the products. Leaf morphological traits include area, inclination angle, number, thickness, and phenology. Physiological and biochemical traits include stomatal conductance (gs), maximum carboxylation rate (Vcmax), maximum electron transfer rate (Jmax), and contents of water, chlorophyll, nitrogen (N), phosphorus (P), and potassium (K) [5].
Conventionally, plant phenotypic information is obtained by manual measurement, with the shortcomings of labor intensiveness, low efficiency, subjectivity, low accuracy, and destructiveness. In recent years, imaging technology, including visible-light, spectral (both multispectral and hyperspectral), thermal, and fluorescence cameras, has become an effective tool for studying plant phenotypes. This technology can acquire plant phenotypes with high throughput and high accuracy and by nondestructive methods, enabling a wide range of measurements and broad application [6,7]. The required spatial, spectral, and temporal resolution must be considered when imaging sensors are used to acquire plant color, spectral, and texture information. Different phenotyping studies have diverse requirements for the resolution of imaging sensors. At the molecular, cell, and tissue levels, microscopic imaging technologies with higher resolution are needed, such as fluorescence and Raman imaging. Remote-sensing imaging sensors offer lower spatial resolution than proximal imaging sensors. Because a pixel in a satellite remote sensing image usually represents multiple plants, studying plant phenotypes at multiple scales requires selecting imaging sensors with corresponding resolution [8,9]. Fig. 1 shows various imaging sensor technologies for acquiring plant leaf phenotypes.
Fig. 1. Schematic diagram of various imaging sensors operating in various regions of the electromagnetic spectrum and the plant leaf phenotypic traits at various organizational levels that they measure.
The data collected by imaging sensors are rich and diverse, and extracting meaningful phenotypic information from them requires carefully designed data-processing methods. Selecting appropriate optical sensors and data-processing methods can maximize the efficiency and accuracy of the experiment and reduce its cost. Given that leaves are vital organs that may represent the direct economic yield of plants, we describe their morphological, physiological, and biochemical traits, summarize research progress in optical sensors and data-processing methods for plant leaf phenotyping at multiple scales (cell, tissue, whole leaf, and whole plant), and compare and discuss innovations and limitations of these tools for plant leaf phenotyping. We describe the temporal and spatial patterns of leaf growth as affected by internal and external conditions. Finally, we describe future prospects in plant leaf phenotyping research.
Measurement of morphological features is conventionally done manually, by either direct or indirect methods. Direct leaf-area measurement methods include drilling-weighing, tracing-weighing, and grid methods [10]. Indirect measurement methods involve developing empirical regression models [11]. Leaf angles are measured mainly with a protractor, and leaf number is obtained by counting. Leaf thickness is estimated as the ratio of leaf weight to leaf area, and leaf color is determined with a color comparison card. These methods are limited by low efficiency, low accuracy, destructiveness, and subjectivity. Modern imaging sensors can reduce these shortcomings.
Leaves convert absorbed solar energy into organic matter in green plants by photosynthesis, providing nutrients for plant physiological activities. Leaves thus determine the growth rate and health condition of plants. Reduction of leaf area leads to a reduction of chlorophyll content, affecting plant growth and ultimately reducing productivity, in particular for plants whose leaves are their economic product. Measuring leaf area is thus desirable for improving crop performance and increasing crop yield. Leaf area can be studied for a single leaf, a whole plant, or a group of plants. The leaf area of a group of plants in the field is commonly quantified by leaf area index (LAI) [12]. LAI refers to the total leaf area per unit of land area, calculated as the ratio of the total leaf area of a plant population to the land area it covers, as shown in Fig. 2E. LAI is closely related to total leaf size, canopy structure, and light energy utilization, and is used for continuous monitoring of plant growth and estimating yield [13,14]. The application of modern imaging technologies in leaf area measurement at different scales is shown in Fig. 2, and Table S2 describes the corresponding imaging technologies and data-processing methods.
Fig. 2. Application of modern imaging technology for leaf-area measurement at several scales. (A) Digital camera acquiring the image of a single leaf and a calibration object [15]. (B) Grape leaf extraction, point-cloud segmentation, and surface reconstruction [19]. (C) 3D point-cloud and surface reconstruction of a single leaf (reproduced from Yau et al. [20] with permission from Elsevier). (D) LiDAR measurement of the crown leaf area of a single tree (reproduced from Berk et al. [21] with permission from Elsevier). (E) LAI schematic diagram. (F) RGB and NDVI images of plots [24]. (G) LAI 3D images with multiple voxel sizes (reproduced from Yin et al. [34] with permission from Elsevier).
It is simpler to obtain the leaf area of a single leaf than that of a whole plant or population. Two-dimensional (2D) image processing permits estimating the area of a single leaf. Using a smartphone to image and process a single leaf together with a reference object of known size yields a scaling factor used to estimate the actual leaf area [15] (Fig. 2A). A study [16] of the relationship of the easily obtainable parameters leaf length and leaf width with leaf area in 2D images showed that a leaf-area prediction model based on a combination of the two parameters yielded the highest accuracy. When a smartphone was used to take a vertical top-view image of a plant and there was leaf occlusion, leaf area was underestimated [17]. The accuracy of leaf area estimation for the whole plant can be improved by use of multi-angle images and multiple features. Jiang [18] collected images of single rice plants with a visible-light imaging system, established a power-function model predicting leaf area from the average side-view projection area, and investigated the influence of the number of side-view projections, top-view projection area, and texture and morphological features on the accuracy of the model. The error caused by leaf curl, camera viewing angle, and other factors in 2D images can be reduced by building a three-dimensional (3D) model to estimate the area of a single leaf. Visible-light cameras [19], RGB-D (Red, Green, Blue-Depth) cameras [20], and LiDAR [21,22] are used to acquire plant images from which 3D point clouds are generated (Fig. 2B, C, D). The point clouds are clustered and segmented, and a leaf mesh model is reconstructed. The leaf area is obtained by measuring the area of the surface mesh.
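The reference-object approach described for Fig. 2A can be illustrated with a minimal sketch: segment the leaf and a calibration object of known area in the same image, derive a cm²-per-pixel scale from the reference, and apply it to the leaf pixel count. The file name, HSV thresholds, and the reference area below are illustrative assumptions, not values from the cited studies.

```python
import cv2

img = cv2.imread("leaf_with_reference.jpg")            # leaf and reference square in one frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Segment the green leaf with an HSV threshold (tune for the lighting conditions).
leaf_mask = cv2.inRange(hsv, (25, 40, 40), (95, 255, 255))

# Segment the reference object; here assumed to be a red calibration square.
ref_mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))

ref_area_cm2 = 25.0                                     # known physical area of the reference
cm2_per_pixel = ref_area_cm2 / max(cv2.countNonZero(ref_mask), 1)

leaf_area_cm2 = cv2.countNonZero(leaf_mask) * cm2_per_pixel
print(f"Estimated leaf area: {leaf_area_cm2:.1f} cm^2")
```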
The total leaf area of a group of plants may be measured by LAI. Because it is measured at the canopy level, remote sensing imaging technology is widely used in LAI measurement, including imaging sensors carried by unmanned aerial vehicles (UAVs), manned aircraft, and satellites. Compared with satellite remote sensing, UAVs have the advantages of flexible operation and low cost, but satellite remote sensing can acquire canopy data over a much larger spatial extent. Cloud cover impairs the use of remote sensing imagery by introducing occlusion and noise. In contrast to manned aircraft and satellites, UAVs can fly below cloud cover. As an alternative, images may be collected under sunny and cloudless weather conditions, or cloud-cover interference may be reduced by atmospheric correction, spectral pretreatment, and other techniques [23,28].
Many researchers have used UAVs equipped with one or more imaging instruments such as RGB [23,25], multispectral [24,25], and thermal cameras [25,26] to image plant canopies. Estimation models of LAI are established from the acquired color, spectral, and thermal information using multiple linear regression, artificial neural networks (ANN) [23], random forests (RF), extreme gradient boosting (XGBoost) [24,27], partial least squares regression (PLSR), support vector regression (SVR), deep learning [25], and other methods (Fig. 2F). The normalized difference vegetation index (NDVI) is one of the vegetation indices (VIs) commonly used to estimate LAI. For multispectral images, it is necessary to use a white reference plate for radiometric correction in order to estimate the true reflectance of plants. At different growth stages, the canopy structure, leaf inclination angle, and leaf surface characteristics of plants differ, strongly affecting canopy reflectance. Thus, phenology affects LAI estimation based on canopy spectral data, especially for plants with varying phenological characteristics (such as during flowering, heading, and fruiting), including maize, rice, sorghum, and wheat. In one study [25], the estimation accuracy of LAI was increased by the use of multi-source data but was reduced by the presence of maize ears. Using VIs to estimate LAI has led to phenology-specific models. A prediction model combining VIs and vegetation canopy height for rice increased the accuracy of LAI estimation at heading [28]. LAI can also be obtained by establishing 3D point clouds of plant populations. Lin et al. [29] acquired images of Masson pine forests using UAVs carrying RGB and multispectral cameras at various tilt angles. From the voxel model obtained from the generated 3D point clouds, they calculated the LAI of the forest canopy. The combined 0° and 30° tilt-angle scheme gave the best result, with R2 up to 0.9119.
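As a hedged sketch of the VI-plus-regression workflow mentioned above, the snippet below derives NDVI from red and NIR reflectance and fits a random-forest LAI model, one of several regression choices cited. The arrays `red`, `nir`, and `lai_obs` are stand-ins for radiometrically corrected per-plot reflectance and ground-truth LAI, not real data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

red = np.random.rand(120) * 0.3          # placeholder red reflectance per plot
nir = 0.4 + np.random.rand(120) * 0.4    # placeholder NIR reflectance per plot
lai_obs = np.random.rand(120) * 6.0      # placeholder ground-truth LAI

ndvi = (nir - red) / (nir + red)
X = np.column_stack([ndvi, red, nir])    # NDVI plus raw bands as predictors

model = RandomForestRegressor(n_estimators=300, random_state=0)
r2 = cross_val_score(model, X, lai_obs, cv=5, scoring="r2").mean()
print(f"Cross-validated R^2: {r2:.2f}")
```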
A hyperspectral imaging sensor can collect high-resolution images in a wide range of wavebands to obtain the reflection spectrum of the canopy, from which a model for prediction of LAI can be established [30,31]. Compared with the information from a multispectral camera, the band information obtained by a hyperspectral imaging sensor is rich and complex, and more VIs can be extracted. However, there are always spectral bands with poor correlation with specific leaf phenotypes, and these bands reduce not only the accuracy of the LAI model but also the efficiency of model training. These problems can be addressed by feature band selection [32]. Many studies have shown that feature band screening followed by multivariate modeling is effective for increasing the accuracy of model estimation. Based on the extracted key information, researchers [30,33] have used various algorithms to establish LAI prediction models and then compared them and selected the models with the highest prediction accuracy.
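The two-step strategy of feature-band screening followed by multivariate modeling can be sketched as below: rank bands by their correlation with the trait, keep the strongest ones, and fit a PLSR model on the reduced matrix. The `spectra` and `lai` arrays and the number of retained bands are illustrative placeholders for a calibrated hyperspectral dataset.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
spectra = rng.random((100, 250))         # placeholder reflectance, 100 samples x 250 bands
lai = rng.random(100) * 6.0              # placeholder LAI reference values

# Rank bands by absolute Pearson correlation with LAI and keep the top k.
corr = np.array([np.corrcoef(spectra[:, b], lai)[0, 1] for b in range(spectra.shape[1])])
selected = np.argsort(np.abs(corr))[-20:]
X = spectra[:, selected]

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, lai, cv=5).ravel()
print(f"Kept {len(selected)} bands, cross-validated R^2 = {r2_score(lai, pred):.2f}")
```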
Besides the use of UAVs to carry imaging sensors, satellite data and laser scanning technology [34] are also applied (Fig. 2G). LAI has been predicted from spectral data from the Sentinel-2 [35] and Landsat 5 TM and 7 ETM+ [36] satellites and VIs generated from them.
The inclination of plant leaves refers to the angle between the normal line of the leaf surface and the zenith (z) axis. For flat leaves, it can also be defined as the angle between the leaf surface and the horizontal ground surface. Leaf inclination angle determines the proportion of light intercepted by the leaves, thus affecting the growth and biomass of plants [37].
Because the shape, size, softness, and curvature of leaves differ among plants, the complexity of measuring leaf inclination angle varies. For plants with short and smooth leaves, leaf angle can be directly determined from images captured by side-view cameras. For plants with long, narrow, and curved leaves, the relative area of leaf segments is taken as a weight to estimate the effect of gravity on leaf inclination angle [38] (Fig. 3C).
Fig. 3. Application of modern imaging technologies in leaf inclination angle measurement. (A) Calculation of the leaf inclination angle from leaf area and leaf projected area [41]. (B) The leaf inclination angle is estimated from the 3D point cloud obtained by LiDAR (reproduced from Hosoi et al. [43] with permission from the Society of Agricultural Meteorology of Japan). (C) Leaf inclination angle measurement of flat and curved leaves [38]. (D) Calculation of the angle between the leaf normal and the z axis based on the three-dimensional bounding box [39].
Because quantifying the angles of plant leaves involves measurement in 3D space, most existing studies are based on 3D point clouds of leaves, which can be reconstructed from information acquired by digital cameras [39,40], RGB-D cameras [41], LiDAR [42-44], and other sensors. Methods for constructing 3D point clouds include structure from motion (SFM) [40], multi-view stereo (MVS) [39], and iterative closest point (ICP) [43]. The SFM algorithm is used mainly for sparse point-cloud reconstruction and MVS technology mainly for dense point-cloud reconstruction. Most researchers first apply preprocessing such as filtering and de-noising (depth thresholding or statistical filtering) [41] to the reconstructed 3D point cloud to optimize it (Fig. 3A). A single leaf can then be clustered and segmented from the cloud. Common point-cloud clustering and segmentation algorithms include K-means clustering [41], region growing [39], density-based spatial clustering of applications with noise (DBSCAN) [42], and the watershed algorithm [40]. For the point cloud of a segmented single leaf, Delaunay triangulation [41], least squares [43], and other methods are used to build the leaf surface (Fig. 3B). Based on the built leaf surface, calculating the ratio of the projected area of a single leaf to the leaf area [41], finding the angle between the leaf normal and the z axis [39,42,43] (Fig. 3D), or using a voxel-based 3D image processing method [40] can yield the leaf inclination angle. Fig. 3 shows typical applications in leaf inclination angle measurement, and details of the imaging technologies and data-processing methods are summarized in Table 1.
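A minimal sketch of the "angle between the leaf normal and the z axis" idea is given below: fit a plane to a segmented single-leaf point cloud by least squares (via SVD) and take the angle between the plane normal and the vertical. The synthetic tilted-plane example is illustrative; `leaf_points` stands in for one leaf segmented out of a reconstructed cloud.

```python
import numpy as np

def leaf_inclination_deg(leaf_points: np.ndarray) -> float:
    """leaf_points: (N, 3) array of x, y, z coordinates of one segmented leaf."""
    centered = leaf_points - leaf_points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    cos_angle = abs(normal @ np.array([0.0, 0.0, 1.0]))
    return float(np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0))))

# Example: a synthetic planar "leaf" tilted about 30 degrees from horizontal.
xy = np.random.rand(500, 2)
z = xy[:, 0] * np.tan(np.radians(30))
print(leaf_inclination_deg(np.column_stack([xy, z])))   # close to 30
```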
Table 1. Typical modern imaging technologies and data-processing methods in leaf inclination angle measurement.
Leaf number can reflect plant health status and affects plant growth rate. The conventional method to obtain the number of leaves is manual counting, which is labor-intensive. Modern imaging sensor technology and machine learning provide a high-precision and high-throughput method for leaf counting.
Image-based leaf counting faces the problems of leaf occlusion and uneven in situ leaf growth. Accurate leaf counting can be performed by two main approaches: deep learning and image processing. Existing leaf counting methods are aimed mostly at corn [48-52], sorghum [51], and rosette plants (such as Arabidopsis thaliana) [52-55,58]. The deep learning models commonly used in plant leaf counting are the convolutional neural network (CNN) [49], Faster R-CNN [50,51], Mask R-CNN [50,52], and deep convolutional neural network (DCNN) [58] models.
Owing to the leaf shape and growth characteristics of maize and sorghum, leaf counting can be achieved by detecting leaf sheath points (the connection point between leaf and main stem) [50] or tip points [51] (Fig. S1C). Instance segmentation based on pixel-level classification can distinguish individual leaves and is also commonly used for leaf counting (Fig. S1A). To address occlusion by stems and leaves, Mask R-CNN can be used for instance segmentation of plant stems and leaves and the scale-invariant feature transform (SIFT) algorithm for feature matching, after which dynamic tracking (leaf tracking across images acquired at different times) and counting of leaf targets can be realized [52] (Fig. S1D). The use of deep learning for leaf counting is based mostly on RGB images of plant leaves. When RGB, fluorescence, or near-infrared (NIR) images of top views of Arabidopsis and their combinations were input separately into a deep learning network, the accuracy of leaf counting based on multi-source data was higher, the network was applicable to a variety of plant leaves, and leaves could be counted at night based on NIR images [53]. Because rich datasets of real plants are difficult to obtain, synthetic plants can be used to augment datasets for model training to improve leaf counting performance; such a model achieved a satisfactory mean absolute count error [54].
When image processing is used for leaf counting, appropriate methods should be used to extract the leaves from the background, and single leaves should be segmented from the leaf group. Many researchers, using plant leaf images acquired by RGB [55], infrared [56], and multispectral [57] cameras, removed the image background and extracted the region of interest (ROI) with various segmentation algorithms. The circular Hough transform (CHT) was used to achieve leaf counting [55] (Fig. S1B), or a watershed algorithm was used to mark and segment single leaves [56,57], laying the foundation for correct leaf counting. Fig. S1 shows typical applications of various modern processing methods in leaf counting.
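As a hedged sketch of the image-processing route (not any specific cited pipeline), the snippet below segments a rosette plant from the background with an excess-green threshold, splits touching leaves with a distance transform and the watershed algorithm, and counts the resulting labels. The input file and threshold choices are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, segmentation

rgb = io.imread("arabidopsis_top_view.png")[..., :3]

# Excess-green style segmentation of the plant from the background.
r, g, b = (rgb[..., i].astype(float) for i in range(3))
exg = 2 * g - r - b
plant_mask = exg > filters.threshold_otsu(exg)

# Distance-transform peaks act as per-leaf markers for the watershed split.
distance = ndi.distance_transform_edt(plant_mask)
markers, _ = ndi.label(distance > 0.5 * distance.max())
labels = segmentation.watershed(-distance, markers, mask=plant_mask)

leaf_count = len(np.unique(labels)) - 1   # subtract the background label 0
print(f"Estimated leaf count: {leaf_count}")
```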
Leaves vary in thickness among plants, regions, and altitudes, reflecting the influence of environment on this leaf trait. Estimating leaf thickness can permit early monitoring of leaf stress, improving the timeliness of management decisions [59]. However, leaf thickness is difficult to measure. The conventional method is based on the ratio of leaf weight to leaf area and requires direct measurements of these two parameters. The multiplication of data errors reduces measurement accuracy, and the operation is complex, time-consuming, and labor-intensive. Because leaf thickness values are small, high measurement accuracy is required. New imaging technologies have begun to see frequent use in the measurement of leaf thickness.
Pfeifer et al. [60] used computed tomography (CT) to measure the leaf area and leaf thickness of soybean leaves, finding that daily relative growth rates differed, and estimated changes in leaf volume from the measured leaf area and thickness. Optical coherence tomography (OCT) is a recently developed imaging technology in which scanning yields 2D or 3D images of biological tissues. OCT can perform high-resolution tomographic measurement of biological tissues non-invasively. Water infiltration by leaf injection can improve the quality of leaf images acquired by OCT. The axial optical path length is measured by OCT and converted into leaf thickness using the refractive index of the infiltrated leaf tissue, thereby permitting estimation of leaf thickness from 3D OCT images [61]. The infrared spectra of plants reflect a variety of leaf traits. PLSR can be used to fit models relating the corresponding spectral bands to leaf traits, and leaf thickness has been shown to be highly correlated with short-wave infrared (SWIR) bands [62]. Some researchers have used a digital thickness gauge [63] and a magnetic field sensor [64] to estimate leaf thickness. Afzal et al. [64] studied a segmented linear regression model relating relative thickness (RT) to relative water content (RWC), incorporating the effects of drought resistance, salt resistance, monocotyledonous or dicotyledonous plant type, and leaf position on the RWC-RT model. However, the water content of plant leaves is affected by species and various environmental factors. To improve the accuracy, universality, and practicality of using the model to estimate RWC from RT, the effects of species and environment on the model should be further investigated.
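The OCT conversion step described above reduces to dividing the measured axial optical path length by the refractive index of the (water-infiltrated) tissue to recover geometric thickness. The numbers below, including the refractive index close to that of water, are illustrative assumptions rather than values from [61].

```python
# OCT measures optical path length; geometric thickness = optical path / refractive index.
optical_path_um = 420.0      # assumed axial optical path length from an OCT A-scan (micrometers)
n_tissue = 1.35              # assumed refractive index of water-infiltrated leaf tissue
thickness_um = optical_path_um / n_tissue
print(f"Estimated leaf thickness: {thickness_um:.0f} um")
```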
Plant phenology refers to the seasonal cycle stages (leaf coloring, leaf expansion, flowering, leaf fall, etc.) that occur repeatedly throughout the plant life cycle. Plant phenology is closely related to the carbon, nutrient, and water cycles of the ecosystem and reflects plant responses to environmental and climate change. Conventional phenological research includes manual observation and the eddy covariance technique, which is based on the turbulent exchange between the ecosystem and the atmosphere [65]. With global climate change, acquiring accurate, multi-level, and long-term continuous plant phenological information may allow monitoring plant growth status and optimizing ecosystem structure. Conventional phenological research methods cannot meet this requirement [66].
Based on information obtained from modern imaging sensors, corresponding data-processing methods can accurately and efficiently analyze plant phenology. A digital camera can be used to acquire plant images and to seek the relationship between various factors and phenology [67-69,71]. Prediction models of plant phenology have been established (Fig. 4A), and mixed linear models have been used [67,69] to estimate the effects of various climatic and environmental factors on plant phenology (Fig. 4C). Using RGB images of the plant canopy, a DCNN trained with various strategies can also identify phenological periods, and images from a wide range of perspectives and multiple angles have been shown to improve recognition performance, with an accuracy as high as 0.913 [70] (Fig. 4B). Because plants show distinct spectral characteristics in different phenological periods, spectral imaging sensors are also used in phenological research. In a comparison of RGB and infrared cameras for phenological monitoring, it was found that, owing to the light environment and the separation of bands, the monitoring accuracy of infrared-modified cameras was not as good as that of RGB cameras [72]. Hyperspectral imaging sensors can gain more comprehensive and abundant spectral data than RGB cameras. Based on the relationship between hyperspectral data and phenological parameters, the phenological state and changes of plants can be quantified. Selecting sensitive spectral bands and VIs makes the prediction of phenological parameters more accurate and efficient [73] (Fig. 4D).
Fig. 4. Some applications of modern imaging technology in phenological measurement. (A) Digital cameras and their records of canopy phenological changes throughout the growing season [68]. (B) Using a DCNN to monitor rice leaf phenology based on images from multiple perspectives [70]. (C) Phenological characteristics expressed by the excess green (ExG) index [69]. (D) Phenological changes of hyperspectral images based on spectral mixture analysis (SMA) [73]. (E) Monitoring start of season (SOS), peak of season (POS), end of season (EOS), and length of season (LOS) at five locations by remote sensing [77].
In addition to various proximally deployed imaging sensors (sensors deployed within 10 m above plants) [74] used to monitor phenology, the moderate-resolution imaging spectroradiometer (MODIS) has been used to obtain mangrove canopy reflectance, and estimation of mangrove phenological parameters based on a multi-year time series of VIs has achieved accurate results [75]. Many researchers use SOS, EOS [76,77], and other indicators to describe phenology. Species differ in phenological timing: the SOS of mangrove falls in April-June and the EOS in January-February of the next year. Cumulative rainfall has been found [77] to be the climatic factor with the largest effect on mangrove phenology (Fig. 4E). The current status of phenology research based on modern imaging sensors and data processing is shown in Fig. 4.
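A hedged sketch of camera-based phenology extraction of the kind shown in Fig. 4A and C is given below: build a daily excess-green (ExG) series from canopy images and flag SOS as the first day the smoothed series crosses half of its seasonal amplitude. The image names, smoothing window, and the half-amplitude rule are illustrative assumptions, not a reproduction of the cited methods.

```python
import numpy as np
from scipy.signal import savgol_filter
from skimage import io

def daily_exg(path: str) -> float:
    """Mean excess-green index of one canopy image."""
    rgb = io.imread(path)[..., :3].astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.mean(2 * g - r - b))

image_paths = [f"canopy_day_{d:03d}.jpg" for d in range(1, 181)]   # placeholder file names
exg = np.array([daily_exg(p) for p in image_paths])

exg_smooth = savgol_filter(exg, window_length=15, polyorder=2)      # suppress day-to-day noise
threshold = exg_smooth.min() + 0.5 * (exg_smooth.max() - exg_smooth.min())
sos_day = int(np.argmax(exg_smooth > threshold)) + 1
print(f"Estimated start of season: day {sos_day}")
```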
Physiological and biochemical traits of plant leaves include stomatal conductance, maximum carboxylation rate, maximum electron transfer rate, and contents of water, chlorophyll, N, P, and K. These traits reflect plant growth and health. Values outside their normal ranges are caused by biotic stress (diseases, insect pests, etc.) and abiotic stress (water, drought, salt stress, etc.). Dynamic monitoring of the physiological and biochemical phenotypic information of plant leaves can lay a foundation for early detection of biotic and abiotic stresses, reveal the health status of plant leaves, and support management strategies. In recent years, the development of various imaging technologies has provided technical support for the measurement of these traits and improved the accuracy and efficiency of information acquisition.
Plant leaves exchange gases with the environment during growth. Plant stomata are channels for gas exchange that regulate the balance between water loss and carbon assimilation. Stomatal conductance (gs) measures the degree of stomatal opening. Air temperature, humidity, and soil moisture all influence stomatal conductance and thereby photosynthesis, respiration, and transpiration. When plants are subjected to drought stress, gs decreases. Conventional estimation of gs is performed by measuring the amount and rate of gas absorbed and dissipated by leaves. Recently, various imaging technologies have been applied to the measurement of gs, improving the accuracy and efficiency of measurement and enabling high-throughput and non-destructive measurement [78]. Current research on leaf stomatal conductance using modern imaging technologies and data-processing methods is described in Table S3.
Conventional measurement of leaf stomatal conductance is performed mainly with hand-held stomatometers [79] and portable gas-exchange measurement systems [80]. These methods yield accurate measurements but suffer from high cost and low efficiency. Hyperspectral imaging systems have shown great potential for acquiring spectral information associated with leaf gs accurately and effectively. Using PLSR, RF, SVM, and other methods to establish prediction models based on extracted effective spectral bands and VIs allows estimation of leaf gs [80,81]. Among the prediction models of leaf gs established by various methods, RF showed the best prediction, with R2 = 0.92 [80], and a yellow band was found to improve the accuracy of gs prediction [81].
Leaf gs reflects plant water stress. Under short-term water stress, a decrease in gs increases leaf temperature. When plants are under long-term water stress, irreversible water-stress symptoms occur. The combination of hyperspectral imaging and thermal imaging technologies [82] is widely used in the study of leaf gs [83]. Using a hyperspectral camera and a thermal imager, Sobejano-Paz et al. [84] established a PLSR prediction model for multiple photosynthetic parameters based on hyperspectral data, radiometric temperature (TL,Rad), canopy height (hc), and other comprehensive data, overcoming the saturation problem encountered when VIs are used alone.
In comparison with conventional methods of measuring gs, modern imaging technology has the advantages of flexibility and a wide range of use scenarios, and gs can be measured at multiple scales by both proximal and remote sensing. Espinoza et al. [85] used a multispectral camera and a thermal infrared camera carried by a UAV to image the grape canopy, finding that NDVI, the green normalized difference vegetation index (GNDVI), and canopy temperature were correlated with leaf gs and that gs differed with irrigation level. In addition to the water supply of plants, some plant hormones produced endogenously or applied externally can also affect gs; for example, abscisic acid promotes stomatal closure. Vis/NIR spectra, multispectral images, and thermal images can be used to evaluate the effects of exogenous abscisic acid on leaf gs, and the crop water stress index (CWSI) shows an increasing trend with decreasing gs [79].
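The CWSI mentioned above is commonly computed from thermal-image canopy temperature together with wet and dry reference temperatures as CWSI = (Tc - Twet) / (Tdry - Twet). The reference values in this minimal sketch are illustrative, not measurements from the cited work.

```python
import numpy as np

t_canopy = np.array([28.4, 30.1, 32.6])   # canopy temperatures from a thermal image (deg C)
t_wet, t_dry = 25.0, 36.0                 # assumed wet and dry reference temperatures (deg C)

cwsi = (t_canopy - t_wet) / (t_dry - t_wet)
print(np.round(np.clip(cwsi, 0.0, 1.0), 2))   # 0 = unstressed, 1 = fully stressed
```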
Vcmax and Jmax are plant photosynthetic parameters. Vcmax refers to the maximum number of moles of CO2 assimilated by plant leaves per unit area in unit time. When plants are exposed to light, pigment molecules absorb light energy and undergo photochemical reactions and charge separation, thus forming electron flow [86]. The light reactions that convert light energy into chemical energy occur on the thylakoid membranes of the chloroplasts in leaves. After light-harvesting pigments capture light energy, special pigment molecules are excited and the released electrons enter the photosynthetic electron transfer chain. Jmax refers to the maximum transfer rate of electrons in the photosynthetic electron transfer chain [87]. Vcmax and Jmax are affected by temperature, light, and other factors and are closely related to leaf nitrogen content, chlorophyll, and other physiological and biochemical parameters; these relationships vary with season and species [88].
Photosynthesis of plants is very important to the whole ecosystem. Photosynthesis underlies the accumulation of plant biomass, making it the energy source and material basis of life activities in the entire ecosystem, and it mitigates global warming by absorbing carbon dioxide. The effect of photosynthesis is jointly determined by environmental factors and the photosynthetic capacity of plants. Photosynthetic capacity refers to the carbon fixation capacity of plants under optimal water and light conditions, and Vcmax and Jmax are the parameters most commonly used to evaluate it. The conventional method for measuring Vcmax and Jmax is via gas exchange. The CO2 response curve of leaves is measured with a gas exchange system (e.g., LI-6400, LI-COR, USA), and Vcmax and Jmax are then estimated from the Farquhar photosynthesis model [97]. This method has the disadvantages of high cost, low efficiency, and complex operation. The application of modern imaging technology to estimating photosynthetic parameters can effectively solve this problem. Hyperspectral imaging technology is most widely used to estimate Vcmax and Jmax. PLSR is used to establish estimation models of Vcmax and Jmax by combining plant spectral information obtained by hyperspectral instruments with reference values measured by gas exchange [89,90]. In comparison with using only the original hyperspectral data, the accuracy, efficiency, and stability of the model can be improved by changing the form of the spectral data (reflectance, spectral derivative, etc.) and extracting effective wavebands [91]. Plants show rich phenotypic variation owing to complex growth environments and species differences. To improve the applicability and reliability of Vcmax and Jmax prediction models, Buchaillot et al. [92] collected hyperspectral data of soybean and peanut leaves under multiple growth environments and used four methods to estimate leaf photosynthetic parameters, of which PLSR was the best, with prediction accuracies for Vcmax and Jmax of 70% and 50%, respectively. In previous studies, leaf Vcmax and Jmax were closely associated with nitrogen, chlorophyll, and phosphorus contents. Based on this observation, some studies estimated leaf nitrogen content by PLSR from plant hyperspectral [93] and LiDAR [94] data and then estimated Vcmax and Jmax indirectly via a linear model based on leaf nitrogen content. Combining the comprehensive characteristics of multiple spectral indices and physiological parameters (leaf nitrogen content, chlorophyll, etc.) to predict leaf Vcmax has also given satisfactory results [88]. The gas exchange method can measure Vcmax and Jmax only at the leaf level, and most studies using modern imaging technology also estimate Vcmax and Jmax at the leaf level. There are few studies estimating Vcmax and Jmax at the canopy and ecosystem levels, and these tasks remain challenging. Meacham-Hensold et al. [95] used a visible near-infrared (VNIR) camera, a NIR/SWIR camera, and a hyperspectral instrument to acquire reflectance spectra of tobacco leaves at the canopy and leaf levels. Based on the reflectance spectra, PLSR was used to establish an estimation model for eight leaf traits (including Vcmax and Jmax). The prediction model using a single VNIR hyperspectral camera at the canopy level achieved excellent prediction accuracy. Based on information obtained by an airborne hyperspectral instrument [96] and sun-induced chlorophyll fluorescence (SIF) remote sensing [97], Vcmax and Jmax can be retrieved at the ecosystem level. Detailed techniques and methods for estimating Vcmax and Jmax are listed in Table S4.
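The conventional gas-exchange route can be illustrated with a hedged sketch: fit Vcmax (together with dark respiration Rd) to the Rubisco-limited portion of an A-Ci curve using the Farquhar model. The kinetic constants (Kc, Ko, Gamma*) are commonly tabulated values near 25 °C, and the A-Ci data points below are illustrative, not measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed Farquhar kinetic constants near 25 deg C:
KC = 404.9          # Michaelis constant for CO2 (umol mol-1)
KO = 278.4          # Michaelis constant for O2 (mmol mol-1)
O2 = 210.0          # ambient O2 (mmol mol-1)
GAMMA_STAR = 42.75  # CO2 compensation point without day respiration (umol mol-1)

def rubisco_limited(ci, vcmax, rd):
    """Net assimilation (umol m-2 s-1) in the Rubisco-limited region of the A-Ci curve."""
    return vcmax * (ci - GAMMA_STAR) / (ci + KC * (1 + O2 / KO)) - rd

ci = np.array([50.0, 100.0, 150.0, 200.0, 250.0])       # intercellular CO2 (umol mol-1)
a_net = np.array([-0.7, 4.2, 8.5, 12.3, 15.8])          # illustrative assimilation values

(vcmax, rd), _ = curve_fit(rubisco_limited, ci, a_net, p0=[60.0, 1.0])
print(f"Vcmax ~ {vcmax:.1f} umol m-2 s-1, Rd ~ {rd:.1f} umol m-2 s-1")
```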
Leaf water content (LWC) reflects water stress (drought and waterlogging stress) in plants, and the plant growth environment and soil water potential may be adjusted according to leaf water content. Conventionally, LWC is measured by weighing: the fresh and dry weights of the leaves are recorded and the water content is calculated by difference. Other indicators of leaf water status are relative water content (RWC) [112], equivalent water thickness (EWT) [100], fuel moisture content (FMC) [108] (Fig. S2D), and CWSI [111].
In comparison with the conventional method of obtaining LWC indirectly by measuring soil water content, canopy temperature, transpiration rate, and other parameters, measurement of LWC based on modern imaging technology has the advantages of real-time application, efficiency, and non-destructiveness. Hyperspectral imaging technology is widely used for measuring LWC. Some studies show that the spectral absorption of water is stronger in the infrared bands than in other bands, especially at 970, 1200, 1440, and 1950 nm. VIs synthesize the information at the wavebands related to LWC and have the advantages of generality, simplicity, and convenience; they are widely used for estimating physiological and biochemical parameters such as LWC. Rodriguez-Perez et al. [98] achieved the best prediction model of LWC within the spectral range centered at 1465 nm. Because the corresponding VIs are based on combinations of individual specific wavebands, part of the relevant spectral information cannot be included, and plant species differ in spectral absorption characteristics, impairing the ability of the corresponding VIs to predict LWC [99,100]. Spectral data acquired by hyperspectral imaging are rich and comprehensive, but they contain spectral noise and bands with little correlation with LWC, so modeling with the full spectrum suffers from long training times and interference from redundant information, reducing the accuracy and efficiency of the estimation model. To solve this problem, preprocessing techniques taking into account the spatial and spectral structure of hyperspectral images have been used to increase resolution. Correlation analysis between the original spectrum and phenotypic parameters can be performed to remove noisy spectral bands with little or no correlation. VIs obtained by combining spectral bands can also strengthen useful spectral information. Spectral preprocessing and feature band extraction methods have been developed [101], including normalization, derivatives, multiplicative scatter correction (MSC), and convolution smoothing. Bruning et al. [102] used a hyperspectral imaging system to collect hyperspectral information of wheat, combined it with a variety of spectral preprocessing techniques to predict the water and nitrogen content of wheat by regression, estimated the visual distribution of water and nitrogen in wheat plants based on the PLSR model (Fig. S2C), and found that SWIR bands increased model precision.
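Two of the preprocessing steps named above can be sketched briefly: a per-spectrum standard normal variate transform (a normalization in the same family as MSC) and a Savitzky-Golay first derivative, which combines convolution smoothing with differentiation. The `spectra` array is a placeholder of shape (samples, bands).

```python
import numpy as np
from scipy.signal import savgol_filter

spectra = np.random.rand(50, 200)   # placeholder reflectance spectra, 50 samples x 200 bands

# Standard normal variate: per-spectrum mean-centering and scaling.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Smoothed first derivative along the wavelength axis.
first_deriv = savgol_filter(snv, window_length=11, polyorder=2, deriv=1, axis=1)
print(first_deriv.shape)   # (50, 200), ready for band selection or PLSR
```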
By combining image and spectral information, one can better study the parameters related to plant leaf water [103]. For example, canopy structure affects the reflectivity of the plant canopy, further affecting the accuracy of estimating LWC from the canopy spectrum. Canopy cover information can be obtained by airborne LiDAR, and accounting for canopy structure improves the estimation of LWC from hyperspectral data [104] (Fig. S2A). At the leaf level, Murphy et al. [105] used hyperspectral imaging sensors to conduct a detailed study of longleaf lettuce. They established water content estimation models for the midrib, the green part, and the whole leaf based on VIs. Each spectral index fitted better with LWC per unit leaf area than with LWC per unit weight of wet plant material. The accuracy of estimating the water content of the green part of leaves was the highest, and the prediction model for the water content of one leaf component could not be applied universally to the others. For single plants or groups of plants, most studies are performed only at the canopy level. One study found that some biochemical parameters of plants are unevenly distributed along the plant height; the authors used a hyperspectrometer to record the reflectance of wheat plants with or without spikes, used linear regression to estimate the vertical distribution of wheat LWC based on VIs, and found that the middle layer of the plant had the highest LWC, with a linear relationship to the upper- and lower-layer LWC [106]. It was also found that wheat spikes reduced the accuracy of LWC prediction based on spectral data, affecting mainly the NIR-SWIR region. Most studies of LWC are performed at the whole-leaf or canopy level. Common ground-based hyperspectral imaging covers only a small area in a single shot, limiting its application at the regional level. To overcome this problem, a combination of ground-based hyperspectrometers and satellite spectral response functions can be considered [107].
Although satellite remote sensing has been shown to be effective for monitoring plant water content and water stress status in real time, its relatively coarse spatial resolution limits the scale at which plant water indicators can be obtained, and data acquisition and processing involve complex steps. In contrast to satellite remote sensing, small imaging sensors carried by UAVs can acquire plant data in real time with high resolution and can hover at close range for fine sampling of the area of interest. Because the bands most strongly correlated with LWC lie in the infrared region, the best vegetation index for predicting LWC can be derived from the red-edge and near-infrared channels of a multispectral camera [109]. Multispectral imaging sensors carried by UAVs are thus widely used to monitor vegetation moisture status in real time. When plants are subjected to water stress, stomata close to reduce transpiration, thereby increasing leaf temperature. Because thermal imaging technology reveals plant canopy temperature and other parameters, it is considered an effective tool for monitoring plant water status [110]. Mwinuka et al. [111] imaged an eggplant canopy with a thermal imaging sensor and a multispectral camera carried by a UAV and found that CWSI, NDVI, and the optimized soil-adjusted vegetation index (OSAVI) showed good correlation with leaf moisture content.
Given that the spectral bands sensitive to leaf water are in the infrared region, infrared spectrometers are also widely used in research on LWC [62,112,113]. Some new imaging technologies are also gradually being applied to the measurement of LWC, such as THz-QCL and NC-RUS [114] (Fig. S2B). When THz-QCL is used to estimate LWC, THz-QCL can be used to estimate τ, and an RGB image of the leaf can be used to estimate LA. Linear regression reliably predicts LWC based on τ·LA [115] (Fig. S2E). The difference in THz-QCL radiation absorption of plant leaves at different frequencies can also be used to determine the thickness of the leaf water layer and thereby the RWC of leaves [116] (Fig. S2F). Cecilia et al. [117] used a new type of leaf water meter to measure the dehydration level of leaves and found a negative linear correlation between dehydration level and RWC. Details of imaging technologies and data-processing methods are listed in Table 2, and Fig. S2 shows representative examples of current research on LWC estimation.
Table 2. Typical modern imaging technologies and data-processing methods in LWC estimation.
Plants contain a variety of leaf pigments. Chlorophyll is the main pigment used by green plants for photosynthesis. Photosynthesis is divided into three parts: (1) light energy absorption, transmission, and conversion; (2) electron transfer and photophosphorylation; and (3) carbon assimilation. Because chlorophyll mediates light absorption, leaf chlorophyll content (LCC) affects the photosynthetic reaction rate, the synthesis of organic compounds, and ultimately the biomass of plants [118]. LCC usually shows a downward trend under stresses such as drought and salinity, so monitoring it can indicate plant health status [119]. Various imaging sensors, data analysis, and modeling techniques for LCC are detailed in Table S5.
Conventional methods for measuring LCC include spectrophotometry and chlorophyll meters. Spectrophotometry requires grinding and filtering the leaves and adding organic solvent, followed by measurement of the absorbance of chlorophyll in a specific wavelength range. The method is destructive, complex, and inefficient. The SPAD-502 (Konica Minolta, Japan) is a commonly used chlorophyll meter. It can accurately obtain the SPAD value representing relative LCC by measuring leaf absorbance at 650 and 940 nm. However, the SPAD meter can measure only a single point on a leaf; to obtain a representative SPAD value, it is necessary to measure at several points on the leaf and calculate the average. The conventional methods are inefficient and their application environments are limited, whereas modern imaging instruments can accurately, quickly, and nondestructively estimate plant chlorophyll content at multiple scales [120]. Using RGB images, estimation of plant chlorophyll content by SVM, PLSR, ridge regression, and other methods based on various color features extracted from the images has been shown [121] to be simple and reliable. Among these studies, in view of the diversity of plant phenotypic traits viewed from different angles, Zhang et al. [122] used a phenotyping platform equipped with a charge-coupled device (CCD) camera to perform multi-angle imaging of single Duspan willow seedlings, increasing the estimation accuracy of the best ridge regression model and realizing visualization of the distribution of chlorophyll content. Considering the influence of the light environment on imaging, optical devices can be set up to ensure stable light conditions and thereby improve the accuracy of LCC estimation [123].
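The RGB-based route can be illustrated with a hedged sketch: extract simple color features from the segmented leaf pixels of each image and fit a ridge regression model against SPAD readings. The feature choices and the placeholder data below are assumptions for illustration, not the features used in the cited studies.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def color_features(rgb_pixels: np.ndarray) -> np.ndarray:
    """rgb_pixels: (N, 3) leaf pixels; returns mean R, G, B and two simple green indices."""
    r, g, b = rgb_pixels.mean(axis=0)
    return np.array([r, g, b, g / (r + g + b), (g - r) / (g + r)])

# Placeholder dataset: one feature vector per imaged leaf plus its SPAD reading.
rng = np.random.default_rng(1)
X = np.vstack([color_features(rng.integers(30, 220, size=(400, 3)).astype(float))
               for _ in range(80)])
spad = rng.uniform(20, 55, size=80)

r2 = cross_val_score(Ridge(alpha=1.0), X, spad, cv=5, scoring="r2").mean()
print(f"Cross-validated R^2: {r2:.2f}")
```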
In addition to the estimation of LCC using image information obtained by visible-light cameras, one-dimensional spectral information is also widely used in the determination of chlorophyll content. Building prediction models based on spectral information is one of the most common methods. The spectral information obtained by hyperspectral imaging technology is comprehensive and rich, but suffers from problems such as noise and spectral baseline shift. Spectral preprocessing can solve these problems. Feature band extraction algorithms can remove spectral bands with low correlation with LCC and highlight spectral bands that are highly correlated with LCC. These two methods improve the prediction performance of the model [124]. The second-derivative partial least squares regression (2-Der-PLSR) model [125] based on optimal wavelengths has achieved outstanding results in estimating the contents of chlorophyll a and b and carotenoids in tea. A chlorophyll content prediction model based only on the spectral bands related to chlorophyll content is vulnerable to influence by external factors such as soil, light, and leaf structure, but VIs obtained by combining two or more corresponding spectral bands by differencing, normalization, and other operations can effectively eliminate these effects. Chlorophyll prediction based on VIs performs better than prediction based on a single relevant band [126,127].
The reflectance spectrum of plant leaves reflects not only the physiological and biochemical parameters inside the leaves but is also affected by leaf morphology (such as leaf inclination angle and leaf surface characteristics). Some researchers consider the specular reflectance of leaves at various tilt angles to be a factor reducing the performance of leaf LCC estimation models. To address this problem, researchers have used the reflectance difference ratio (MDATT) [128], the improved MDATT index (IMDATT) [129], and other indices. LCC linear regression models based on MDATT and IMDATT are not affected by observation angle or leaf surface traits.
At the whole-plant level, LCC varies with vertical leaf position, usually first increasing and then decreasing along the vertical profile. This is because the leaves in the top layer are not fully developed and the photosynthesis of the leaves at the bottom is reduced by the shading effect of the upper layers. Plant yellowing diseases usually appear first in lower leaves and gradually extend to upper leaves, leading to low chlorophyll in the upper leaves. For this reason, it is desirable to study the vertical distribution of plant chlorophyll at the whole-plant level. Wu et al. [130] used a monitoring device equipped with a hyperspectral radiometer to collect spectral data of corn at multiple tilt angles, estimated the correlation between each vegetation index and the SPAD value of each layer of leaves in the vertical direction, and identified the optimal monitoring angle for the LCC of each leaf layer at different growth stages.
Commonly used imaging technologies for LCC measurement, such as RGB cameras and spectral imaging, establish LCC estimation models from 2D image information and one-dimensional spectral information. Because a single plant has complex and diverse 3D morphology, more accurate and reliable phenotypic information can be obtained by use of 3D point clouds. However, conventional 3D point clouds of plants include only color and coordinate information, so LCC cannot be estimated effectively from the superior VIs. Using RGB, depth, and multispectral images of plants at multiple angles, Sun et al. [131] constructed a multispectral 3D point cloud of tomato plants using the Fourier transform and ICP registration, obtained VIs from the point cloud, established a prediction model for leaf SPAD value, and quantified the spatial distribution of SPAD values in the tomato plant canopy. Remote sensing technology can monitor plant chlorophyll content and its changes over wide ranges of time and space. It has higher throughput and flexibility than proximal imaging technology. Commonly used remote sensing technologies for estimating plant chlorophyll content include satellite and UAV remote sensing. Multispectral data from Sentinel-2 combined with a PLSR model have been used to evaluate various chlorophyll contents in a forest canopy [132]. Sentinel-2 multispectral data have high spatial and temporal resolution and can provide long-term and continuous spectral information about a plant canopy. UAV platforms can carry a variety of imaging sensors (hyperspectral cameras [133,134], RGB cameras [136], and multispectral cameras [135,136]), affording high flexibility, low cost, and versatility. With a UAV platform, linear models based on NDVI [135,136] and the renormalized difference vegetation index (RDVI) [136] achieve satisfactory prediction of chlorophyll content.
Chlorophyll content information can also be captured from chlorophyll fluorescence signals. Modern imaging sensors for measuring chlorophyll fluorescence are based on pulse amplitude modulation (PAM). Commonly used instruments are the PAM-2000/PAM-2100, IMAGING-PAM, and DUAL-PAM-100. They can acquire chlorophyll fluorescence information in a light environment and measure photochemical and non-photochemical quenching. For this reason, fluorescence imaging is also widely used in chlorophyll content estimation [137]. In comparison with data from a single sensor, a chlorophyll prediction model based on combined data from multiple imaging sensors performed better [138], and inclusion of days after sowing and specific leaf weight in image-based models further improved chlorophyll prediction accuracy.
N, P, and K influence plant growth and physiological metabolism. N is a major component of chlorophyll and other compounds. When plants are short of nitrogen, photosynthesis is weakened, leading to yellowing of leaves and reduction of photosynthetic products, reducing yield and quality. Deficiency of P leads to slowed or stopped plant growth, dull and lusterless green leaves, and poor root development. K activates various enzymes and protein synthesis, maintains cell osmotic potential, and acts on gs, supporting plant drought resistance; its deficiency leads to yellowing, scorching, necrosis, and abscission of plant leaves. However, excessive fertilizer application causes seedling burning and reduced survival. Overuse of fertilizer causes environmental pollution and leaves residues in agricultural and forest products as well as in soils and waterways [139]. To avoid these problems, it is desirable to optimize the contents of N, P, and K in plant leaves (LNC, LPC, LKC) in real time. Table S6 summarizes the imaging sensors and data analysis methods used for various applications in LNC, LPC, and LKC prediction.
As the main nutrient elements in plant growth, N, P, and K are measured mainly by visual observation and chemical analysis. In visual observation, trained technicians judge N, P, and K stress from abnormal leaf color. This method requires professional knowledge, but leaf colors and symptom types are very complex. Moreover, humans are often subjective in judging the type and degree of stress, which increases the difficulty of diagnosing LNC, LPC, and LKC stress. Chemical analysis requires destructive sampling of leaves, and the chemical reagents used produce toxic gases and pollute the environment; the method also suffers from low efficiency, complex operation, destructiveness, and other shortcomings. Many studies have shown that some nutrient elements in leaves are highly correlated with plant photosynthesis and chlorophyll content. For this reason, it is often preferable to study plant LNC indirectly with a chlorophyll meter (SPAD-502, etc.), but the SPAD meter can sample only single points on plant leaves, and both chemical analysis and chlorophyll meters can measure nutrient element content only at the leaf level, limiting their scope of application.
Before N, P, and K stress is visible to the naked eye, the interaction between light and leaves already varies with changes in LNC, LPC, and LKC, reflected mainly in changes in leaf reflectivity and transmittance. Exploiting this feature, modern spectral imaging technology has been widely applied to the measurement of nutrient element content at all plant scales [140,141]. As mentioned above, hyperspectral imaging sensors can acquire a wealth of information, but spectral noise, data redundancy, and collinearity remain problems. For this reason, preprocessing of original spectral data and extraction of characteristic bands have been used in many studies [142-147] to improve the performance and accuracy of LNC, LPC, and LKC prediction. Among these studies, the prediction models are constructed mainly with PLSR, SVR, and MLR [148]. On this basis, researchers have achieved visualization of LNC distribution at the leaf level [142,143] and crown level [143] (Fig. 5B, D). For leaves with large petioles (such as lettuce), the K content is higher in petioles than in green leaves [146]. As with the vegetation indices used to predict leaf chlorophyll content, VIs are formed from combinations of multiple bands associated with LNC, LPC, and LKC. Nutrient element content can also be estimated with reasonable accuracy by linear regression based on VIs containing more spectral information [149-152]. Li et al. [152] also investigated the effects of LAI, chlorophyll, N, and K content, and other parameters on the P estimation model. Because wheat leaves are too narrow to fully cover the measuring points of a hyperspectral leaf clip, the measured spectral data are noisy. To solve this problem, Yang et al. [153] used a normalization method to correct the spectral reflectance of the narrow leaves and estimated the K content in wheat leaves per unit weight and per unit area using PLSR and RF. PLSR gave the highest accuracy, the normalized spectrum improved the accuracy of the model, and the R2 of the per-unit-area model was lower than that of the per-unit-weight model. The sensitive bands of LKC in rice are in the SWIR region, dual-band VIs based on this region effectively predict rice LKC, and a three-band index with the 700 nm and 704 nm (red edge) bands added increased the accuracy and reliability of rice LKC estimation [150]. The above studies focused on the LNC, LPC, and LKC of plants at the leaf or near-canopy level. Remote sensing can extend this research to the plot level. In particular, data with fine spatial and temporal resolution can be acquired from UAV platforms, advancing precision agricultural management [154,155].
Phenology affects the reflectivity of plant leaves. The absorption and reflection of plant leaves at specific wavelengths differ among growth stages, as do plant cell states, canopy structures, and biochemical functions. The accuracy of prediction can be improved by analyzing the optimal spectral variables specific to each growth stage and establishing a prediction model for nutrient element content in the corresponding stage [156-158]. Wang et al. [159] used a hyperspectral camera carried by a UAV to obtain rice canopy reflectance, estimated the LNC, plant nitrogen content (PNC), and other parameters of rice using five methods, including a VI-based linear regression model and the PLSR model, and evaluated the impact of growth stage on estimation accuracy; PLSR and machine-learning methods performed better across the full range of growth stages. According to the above research, prediction models of N, P, and K content based on various spectral characteristics have become mature and widely used in modern agricultural management. However, VIs can be susceptible to the influence of background, canopy structure, and other factors, and lack wide applicability across growth stages. When canopy density is high, prediction models of physiological and biochemical parameters based on spectral characteristics tend to saturate, reducing the reliability of the model. Plant leaves have varying proportions of N, P, and K, which affect the color, roughness, and other characteristics of the leaves. Texture features reflect the spatial distribution of image color or brightness. Thus, texture features can increase the sensitivity of image data to N, P, and K content, improving the accuracy and stability of the prediction model [160,161] (Fig. 5E). In addition to texture features, some researchers have also studied the impact of ecological factors [162] and of sampling season and sampling location [163] on the prediction models of nutrient contents in plant leaves. These factors further optimized model performance.
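One common way to obtain the texture features mentioned above is through gray-level co-occurrence matrix (GLCM) statistics, which can be appended to spectral predictors before model fitting. The sketch below uses a placeholder 8-bit image band; the distance and angle settings are illustrative choices, not those of the cited studies.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

band = (np.random.rand(128, 128) * 255).astype(np.uint8)   # placeholder 8-bit canopy band

glcm = graycomatrix(band, distances=[1], angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
texture = {prop: graycoprops(glcm, prop).mean()
           for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(texture)   # texture features to concatenate with VIs before regression
```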
The quality of images and spectral data is strongly affected by light conditions. Owing to leaf occlusion and light incidence angle, canopy leaves receive uneven solar radiation, producing sunlit and shaded leaves. Most studies ignore shaded leaves and focus on sunlit leaves. Elements such as N are closely associated with photosynthesis: N is often mobilized to upper leaves, which have higher photosynthetic rates and accordingly higher N content than lower leaves. For the same reason, symptoms of N deficiency appear first in lower leaves. Jiang et al. [164] studied winter wheat LNC with a near-ground hyperspectral imaging system, evaluating spectral indices (SIs), textural indices (TIs), and combined spectral and textural indices (STIs) for whole images, all leaves, sunlit leaves, and shaded leaves. The linear regression model based on the STIs of all leaves performed best, showing that texture features can improve LNC prediction models. The vertical distribution of LNC in winter rape, studied with a hyperspectral radiometer, decreased from top to bottom; characteristic bands were extracted from the original spectrum and the first-derivative reflectance (FDR), LNC regression models were established for each layer, and the FDR-PLS and SVM-FDR models were superior [165]. Most studies acquire spectral data only from a vertical angle. To enrich spectral data collection and study the correlation between multi-angle spectral data and LNC, Li et al. [166] measured the multi-angle spectral reflectance of winter wheat leaves with a Vis/NIR spectral imager and found that LNC correlated most strongly with 0° spectral reflectance. An LNC estimation model based on a multi-angle composite vegetation index (MACVI), which combines conventional VIs with multi-angle spectral data, improved accuracy (Fig. 5A).
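As an illustration of the FDR preprocessing mentioned above, the following sketch computes a Savitzky-Golay first derivative of a stand-in leaf spectrum; the window length and polynomial order are our illustrative choices, not values from [165].

```python
# Minimal sketch: first-derivative reflectance (FDR) via a Savitzky-Golay filter,
# a preprocessing step applied before characteristic-band extraction.
# Window length and polynomial order are illustrative choices.
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.arange(400.0, 1001.0, 1.0)               # nm, assumed Vis/NIR range
reflectance = 0.3 + 0.2 * np.sin(wavelengths / 80.0)      # stand-in leaf spectrum

fdr = savgol_filter(reflectance, window_length=11, polyorder=2,
                    deriv=1, delta=wavelengths[1] - wavelengths[0])

# Bands whose FDR correlates most strongly with measured LNC across samples
# would then be retained as characteristic bands for the regression model.
print("FDR near the red edge (~720 nm):", fdr[np.searchsorted(wavelengths, 720.0)])
```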
As noted above, LNC is highly correlated with chlorophyll content, and leaf chlorophyll content can be estimated from hyperspectral data, thus indirectly indicating nutrient element content. A diagnostic model of N/Mg/K deficiency was established from the leaf chlorophyll distribution to support timely fertilization management [167] (Fig. 5C). Estimating LCC from color features in RGB images is also reliable [168]. However, in contrast to spectral imaging, visible-light cameras acquire only the R, G, and B channels; because the wavelength range is narrow, they are rarely used to estimate nutrient content. Different plants, and different growth stages of the same plant, show different spectral sensitivity to particular nutrients, reducing model universality. Chlorophyll fluorescence technology can overcome this shortcoming, as different nutrient stresses produce different fluorescence intensities. Researchers have used laser-induced fluorescence (LIF) systems [169,170] and chlorophyll fluorometers [171] to measure leaf fluorescence parameters and establish estimation models for LNC and LPC. Combining multiple sensors yields richer multivariate data, providing color features, spectral features, canopy structure, and other information from image, spectral, and depth data; combining these data can further improve model performance [172,173]. Fig. 5 shows representative applications in LNC, LPC, and LKC estimation.
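As an illustration of the RGB color-feature approach mentioned above, the sketch below derives simple color descriptors from an RGB leaf image; the chromatic coordinates and excess-green index are common choices but are not necessarily those used in the cited study [168].

```python
# Minimal sketch: color features from an RGB leaf image for LCC-related estimation.
# The chromatic coordinates and excess-green index are common, assumed choices.
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((100, 100, 3))                  # stand-in RGB leaf image, values in [0, 1]
r, g, b = img[..., 0], img[..., 1], img[..., 2]

total = r + g + b + 1e-9
r_n, g_n, b_n = r / total, g / total, b / total  # chromatic coordinates
exg = 2 * g_n - r_n - b_n                        # excess-green index

color_features = np.array([r_n.mean(), g_n.mean(), b_n.mean(), exg.mean()])
print("Per-leaf color feature vector:", color_features)
```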
In recent years, imaging sensor technologies for plant phenotyping have emerged and advanced rapidly but still face problems and limitations. Below we outline prospects for addressing them.
Existing phenotypic information collection systems can measure only one or a few types of phenotypic data, while external factors such as global climate change make plant phenotypes more complex and diverse. Developing crop varieties resistant to drought, waterlogging, disease, and insects, as well as to salt, alkalinity, and other stresses, requires combining multiple kinds of phenotypic information. Multi-sensor fusion allows a phenotypic information collection system to measure multiple traits in parallel. At present, most fusion in plant phenotyping research occurs at the data level; there is less research on feature-level and decision-level fusion, and much of the collected data has yet to be fully mined. The sensor combinations fused by most researchers are also similar and limited (such as fusion of visible images with depth information). Future multi-sensor fusion technology should move towards feature-level and decision-level fusion and explore the fusion of more diverse sensor types (such as spectral data with LiDAR, or chlorophyll fluorescence with depth information), so as to provide technical and information support for more accurate plant breeding strategies.
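The sketch below contrasts feature-level and decision-level fusion with hypothetical color and depth descriptors; the data, feature names, and trait are assumptions used only to show where the two fusion strategies differ.

```python
# Minimal sketch contrasting feature-level and decision-level fusion with
# hypothetical color and depth descriptors; data and feature names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 200
color_feats = rng.random((n, 4))     # e.g., mean chromatic coordinates per plant
depth_feats = rng.random((n, 3))     # e.g., height, projected area, volume proxies
trait = 2 * color_feats[:, 0] + depth_feats[:, 1] + 0.1 * rng.normal(size=n)

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Feature-level fusion: concatenate modality features, train one model.
X_fused = np.hstack([color_feats, depth_feats])
fused = RandomForestRegressor(random_state=0).fit(X_fused[idx_tr], trait[idx_tr])
print("Feature-level fusion R2:", r2_score(trait[idx_te], fused.predict(X_fused[idx_te])))

# Decision-level fusion: train one model per modality, then average predictions.
m_color = RandomForestRegressor(random_state=0).fit(color_feats[idx_tr], trait[idx_tr])
m_depth = RandomForestRegressor(random_state=0).fit(depth_feats[idx_tr], trait[idx_tr])
avg_pred = 0.5 * (m_color.predict(color_feats[idx_te]) + m_depth.predict(depth_feats[idx_te]))
print("Decision-level fusion R2:", r2_score(trait[idx_te], avg_pred))
```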
As the data collected by plant phenotype monitoring systems continue to grow, the demands of phenotypic data processing, including image processing, feature extraction, and model building, will increase drastically. Optical sensors are readily affected by the light environment, and an unstable light environment reduces the accuracy of acquired images and spectral information. It is desirable to improve the accuracy and efficiency of phenotypic data processing, as well as its robustness to changes in light and background. Optimizing phenotypic data processing requires multidisciplinary knowledge; establishing multidisciplinary expert teams and decision-support systems can enable real-time, high-throughput monitoring of phenotypic information and high-precision, efficient phenotypic data processing.
One limitation of some existing portable leaf-monitoring devices is their limited ability to gather sufficient data per leaf, given the variation in water and nutrient distribution among leaves. In addition, the development and application standards of many plant phenotype monitoring systems apply only to the systems themselves; the absence of unified standards reduces their universality. Detailed technical standards for the development and application of plant phenotype monitoring systems are therefore desirable.
The measurement accuracy of most conventional imaging sensors is readily affected by the imaging environment (light, temperature, humidity), and unfavorable weather (rain, snow) limits their outdoor use. Some imaging sensors are also bulky and not easy to carry. New flexible sensors should be applied to leaf phenotype monitoring to achieve flexible, multi-functional, and continuous acquisition of leaf phenotype information. For example, wearable sensors, being lightweight and highly elastic, can meet the need for long-term continuous monitoring of plant leaf traits and have received much attention. As a new flexible material, hydrogel has excellent mechanical properties, high flexibility, good conformity to the leaf surface, and high biocompatibility, and is expected to become a platform for a new generation of wearable sensors.
Throughout the plant growth cycle, leaves are important organs with long residence times and sensitive, strong responses, and leaf phenotypes can be used to evaluate plant growth status. Multi-scale spatial and temporal phenotyping has revealed that leaf growth changes at a given scale cannot be directly inferred from a lower scale or easily scaled up to the whole-plant level. However, roots and stems are crucial for nutrient transport, transpiration, and other processes, and flowers, fruits, seeds, and other organs play decisive roles at specific growth stages (for example, plants reproduce through flowers). Combining plant genomics and cytology with in-depth study of plant phenomics can illuminate the internal causes of plant phenotypes. Studying plant phenotypes across multiple organs and multiple levels would permit more comprehensive and accurate analysis of plant growth status.
Existing plant leaf phenotypic analysis technology is applied mostly to crops. Compared with crops, trees are tall, carry large numbers of leaves, and have long growth cycles, making it more challenging to obtain complete and accurate leaf phenotypic information. The shape and characteristics of coniferous species' leaves further increase the difficulty of collecting phenotypic information, and measuring the leaf angular distribution of trees with large, curved leaves can be very time-consuming, if not impossible [174]. However, forests have high ecological and economic value. It is desirable to increase research on phenotyping of forest trees, break through the research bottleneck of coniferous species, and develop phenotype monitoring systems suited to forests, with the aim of selecting and cultivating superior tree varieties.
Plants face various biotic stresses (diseases, insect pests) and abiotic stresses (drought, salt stress, high-temperature stress). One major problem with using images to detect pathogen infection is that, in the field, several disorders can co-occur, including other diseases and nutrition or water problems. Because these problems usually produce similar effects in multispectral images, a single method generally cannot pinpoint the cause and can only indicate that something is wrong. These limitations can be addressed in future research by collecting comprehensive information such as plant images, spectra, temperature, and the intensity and rate of gas molecule release. Machine learning, the science of programming computers so that they can learn from data, has been applied to identifying specific stress types from observed phenotypes.
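A minimal sketch of such a machine-learning step, assuming hypothetical multivariate features and stress classes: a random forest classifier trained to separate stress types that would be hard to distinguish from any single modality.

```python
# Minimal sketch: a classifier over hypothetical multivariate features (spectral,
# thermal, texture, gas-release descriptors) separating stress classes that look
# similar in any single modality. Feature names and classes are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n = 300
X = rng.random((n, 6))   # e.g., two VIs, canopy temperature, two texture stats, gas release rate
# Hypothetical labels: 0 = healthy, 1 = disease, 2 = N deficiency, 3 = drought
y = np.digitize(X[:, 0] + 0.5 * X[:, 2] + 0.3 * X[:, 5], bins=[0.5, 0.9, 1.3])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("Stress-type classification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```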
The development of imaging sensor technologies has advanced plant leaf trait phenotyping. The key advantages of image-based methods include high throughput, high efficiency, high accuracy, and non-destructiveness, overcoming the shortcomings of conventional measurement methods.
Plant leaf morphological traits are generally acquired with visible-light cameras, whereas physiological and biochemical traits are usually acquired with spectral imaging. Thermal imaging is frequently used for stomatal conductance and water-content parameters associated with drought stress. Spectral imaging offers abundant data and high efficiency, but it is expensive, the data are redundant and complex to process, and, especially when acquired by UAV, complex offline processing is required. Visible-light imaging has the advantages of low cost and simple data processing, but its low spectral resolution limits its use for analyzing leaf physiological and biochemical parameters. Other modern imaging technologies, such as CT, OCT, and terahertz imaging, are superior in efficiency and accuracy, but their high cost prevents widespread use. Multi-sensor fusion can combine the advantages of these imaging technologies for leaf trait estimation, and as the cost of imaging sensors continues to decline, sensor combination is anticipated to play a more important role in leaf phenotyping.
Data processing involves image processing, data analysis, and model building. Image processing can be divided into 2D and 3D: 2D processing includes ROI extraction, noise removal, and image segmentation, while 3D processing includes point cloud reconstruction, clustering, and segmentation. Data analysis includes correlation analysis, spectral data preprocessing, and feature band extraction. The most commonly used statistical analyses are ANOVA and Pearson correlation analysis. Spectral data preprocessing methods include normalization, differentiation, SNV, and Savitzky-Golay (SG) smoothing. Because feature band extraction is based on the contribution of bands to the prediction of phenotypic parameters, it is closely related to correlation analysis. PLSR, SVR, RF, and MLR are commonly used for model building.
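A minimal sketch of two of the preprocessing steps named above, SNV followed by Savitzky-Golay smoothing, applied to hypothetical spectra; the window length and polynomial order are illustrative, not recommended values.

```python
# Minimal sketch of two spectral preprocessing steps: standard normal variate
# (SNV) per spectrum followed by Savitzky-Golay (SG) smoothing.
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row) separately."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / (std + 1e-12)

rng = np.random.default_rng(5)
raw = rng.random((50, 200))                       # stand-in: 50 spectra x 200 bands
preprocessed = savgol_filter(snv(raw), window_length=9, polyorder=2, axis=1)
print("Preprocessed spectra shape:", preprocessed.shape)
```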
Huichun Zhang: Writing - review & editing. Lu Wang: Writing - original draft. Xiuliang Jin: Writing - review & editing. Liming Bian: Conceptualization. Yufeng Ge: Writing - review & editing.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
This work was supported by the National Natural Science Foundation of China (32171790 and 32171818), the Jiangsu Province Modern Agricultural Machinery Equipment and Technology Demonstration Promotion Project (NJ2020-18), the Key Research and Development Program of Jiangsu Province (BE2021307), the Qinglan Project Foundation of Jiangsu Province, and the 333 Project of Jiangsu Province.
Supplementary data for this article can be found online at https://doi.org/10.1016/j.cj.2023.04.014.