International Plant Phenotyping Network (IPPN) Webinar Series, No. 10
Published: 2020-10-21
Source: 植物表型圈
Author: 慧诺瑞德
Title: Special Session: Low-Cost Sensors and Vectors for Plant Phenotyping
Speakers: Antoine Fournier, Olivier Pieters & Salma Samiei
Time: Friday, October 30, 20:00-21:00
Live platform: 百博智慧直播间, Plant Phenotyping channel
Speakers and talk titles:
Antoine Fournier:
Towards Low-Cost Hyperspectral Single-Pixel Imaging for Plant Phenotyping
Olivier Pieters:
Gloxinia—An Open-Source Sensing Platform to Monitor the Dynamic Responses of Plants
Salma Samiei:
Toward Joint Acquisition-Annotation of Images with Egocentric Devices for a Lower-Cost Machine Learning Application to Apple Detection
Webinar content:
Abstracts:
1)
Hyperspectral imaging techniques have expanded considerably in recent years. The cost of current solutions is decreasing, but these high-end technologies are not yet available for moderate- to low-cost outdoor and indoor applications. We have used some of the latest compressive sensing methods with a single-pixel imaging setup. Projected patterns were generated on a Fourier basis, which is well known for its properties and for reducing acquisition and computation times. A low-cost, moderate-flow prototype was developed and studied in the laboratory, making it possible to obtain metrologically validated reflectance measurements with a minimal computational workload. From these measurements, it was possible to discriminate plant species from the rest of a scene and to identify biologically contrasted areas within a leaf. This prototype provides easy-to-use phenotyping and teaching tools at very low cost.
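The core idea of Fourier-basis single-pixel imaging can be illustrated in a few lines: each "measurement" is a single scalar, the inner product of the scene with one projected sinusoidal pattern, and keeping only a low-frequency fraction of the patterns gives the compressive speed-up the abstract alludes to. This is a minimal NumPy sketch of that principle, not the authors' implementation:

```python
import numpy as np

def fourier_single_pixel(scene, keep_frac=1.0):
    """Simulate single-pixel imaging with Fourier basis patterns.

    Each measurement is one detector reading: the sum of the scene
    multiplied by a projected sinusoidal pattern. With keep_frac < 1,
    only the lowest spatial frequencies are measured (compressive
    acquisition), and the image is recovered by an inverse FFT.
    """
    n = scene.shape[0]
    coeffs = np.zeros((n, n), dtype=complex)
    y, x = np.mgrid[0:n, 0:n]
    # order frequencies low-to-high: natural scenes concentrate energy there
    freqs = sorted(
        ((u, v) for u in range(n) for v in range(n)),
        key=lambda uv: min(uv[0], n - uv[0]) ** 2 + min(uv[1], n - uv[1]) ** 2,
    )
    for u, v in freqs[: int(keep_frac * n * n)]:
        pattern = np.exp(-2j * np.pi * (u * y + v * x) / n)
        coeffs[u, v] = np.sum(scene * pattern)  # single-pixel detector output
    return np.real(np.fft.ifft2(coeffs))
```

With `keep_frac=1.0` the measurements are exactly the scene's 2-D DFT coefficients, so the reconstruction is exact; smaller fractions trade resolution for fewer projected patterns, i.e. faster acquisition.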
2)
The study of the dynamic responses of plants to short-term environmental changes is becoming increasingly important in basic plant science, phenotyping, breeding, crop management, and modelling. These short-term variations are crucial in plant adaptation to new environments and, consequently, in plant fitness and productivity. Scalable, versatile, accurate, and low-cost data-logging solutions are necessary to advance these fields and complement existing sensing platforms such as high-throughput phenotyping. However, current data logging and sensing platforms do not meet the requirements to monitor these responses. Therefore, a new modular data logging platform was designed, named Gloxinia. Different sensor boards are interconnected depending on the needs, with the potential to scale to hundreds of sensors in a distributed sensor system. To demonstrate the architecture, two sensor boards were designed, one for single-ended measurements and one for lock-in amplified measurements, named Sylvatica and Planalta, respectively. To evaluate the performance of the system in small setups, a small-scale trial was conducted in a growth chamber. Expected plant dynamics were successfully captured, indicating proper operation of the system. Though a large-scale trial was not performed, we expect the system to scale very well to larger setups. Additionally, the platform is open-source, enabling other users to easily build upon our work and perform application-specific optimisations.
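The modular design described above, where interchangeable sensor boards plug into a common logging core, can be sketched as a simple registration pattern. The class and sensor names below are illustrative placeholders, not the Gloxinia firmware's actual API:

```python
from typing import Callable, Dict

class DataLogger:
    """Minimal sketch of a modular logging loop in the spirit of the
    Gloxinia design: each sensor board registers a read callback, and
    the logger polls every registered sensor per sampling cycle."""

    def __init__(self) -> None:
        self.sensors: Dict[str, Callable[[], float]] = {}

    def register(self, name: str, read: Callable[[], float]) -> None:
        # adding a board is just registering its read function
        self.sensors[name] = read

    def sample(self) -> Dict[str, float]:
        # one sampling cycle: poll every sensor and timestamp-free log
        return {name: read() for name, read in self.sensors.items()}

logger = DataLogger()
logger.register("leaf_thickness_um", lambda: 412.0)  # placeholder readings
logger.register("air_temp_C", lambda: 21.5)
print(logger.sample())
```

The point of the pattern is that scaling to hundreds of sensors only adds `register` calls; the sampling loop itself never changes.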
3)
Since most computer vision approaches are now driven by machine learning, the current bottleneck is the annotation of images. This time-consuming task is usually performed manually after the acquisition of images. In this talk, we assess the value of various egocentric vision approaches for performing joint acquisition and automatic image annotation rather than the conventional two-step process of acquisition followed by manual annotation. The approach is illustrated with apple detection in challenging field conditions. We demonstrate the possibility of high performance in automatic apple segmentation (Dice coefficient of 0.85), apple counting (an 88% probability of good detection and a 0.09 true-negative rate), and apple localization (a shift error of less than 3 pixels) with eye-tracking systems. This is obtained by simply applying the areas of interest captured by the egocentric devices to standard, unsupervised image segmentation. We especially stress the time savings of using such eye-tracking devices on head-mounted systems to jointly perform image acquisition and automatic annotation: a gain of more than 10-fold compared with classical image acquisition followed by manual annotation is demonstrated.
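The segmentation score quoted above is the Dice coefficient, a standard overlap measure between a predicted mask and a ground-truth mask. A minimal NumPy implementation, for readers unfamiliar with the metric:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks: 2*|A ∩ B| / (|A| + |B|).

    Ranges from 0 (no overlap) to 1 (identical masks); the abstract's
    0.85 indicates strong agreement between automatic and true masks.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

For example, a prediction covering two pixels that overlaps a one-pixel ground truth in one pixel scores 2·1 / (2+1) = 2/3.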