MMS (Mobile Mapping System): a system for building high-precision maps
HD Map is organized into five layers.
A base map at the level of today's internet/web maps.
We think of the basic road network data offered by web map services as the bottommost layer.
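The five-layer stack described in this document can be sketched as an ordered enumeration. This is a minimal illustration; the layer names are paraphrases of the headings below, not an actual API.

```python
from enum import IntEnum

class MapLayer(IntEnum):
    """Bottom-to-top ordering of the five HD-map layers (illustrative names)."""
    BASE_ROAD_NETWORK = 0   # web-map-level road graph
    GEOMETRIC = 1           # dense 3D point cloud and derived objects
    SEMANTIC = 2            # lanes, signs, traffic lights with metadata
    MAP_PRIORS = 3          # dynamic / behavioral priors
    REAL_TIME = 4           # read/write live traffic information

# Layers stack bottom-to-top; higher layers build on the ones below.
layers_bottom_to_top = sorted(MapLayer)
```

The ordering matters: each layer is aligned against, and derived from, the layers beneath it.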
Built from data collected by multiple sensors. The output is a dense 3D point cloud, which is post-processed to store the locations of derived objects.
The geometric map layer is composed of raw sensor data collected from lidar, various cameras, GPS, and IMUs. The output is a dense 3D point cloud, and this data is post-processed to produce derived map objects that are stored in the geometric map.
Holds 3D information about the world. SLAM is used to align the coordinate frames of the collected sensor data and to recover position; the pink line shows the trajectory.
The geometric map layer contains 3D information of the world. This information is organized in very high detail to support precise calculations. Raw sensor data from lidar, various cameras, GPS, and IMUs is processed using simultaneous localization and mapping (SLAM) algorithms to first build a 3D view of the region explored by the mapping data collect run. The outputs of the SLAM algorithm are an aligned dense 3D point cloud and a very precise trajectory that the mapping vehicle took. The vehicle trajectory is shown in pink. Each of the 3D points is colored using the colors observed for that 3D point in the corresponding camera images.
The point cloud is post-processed to extract derived objects and store their information. The key derived objects are the voxelized geometric map and the ground map.
The 3D point cloud is post-processed to produce derived map objects that are stored in the geometric map. Two important derived objects are the voxelized geometric map and a ground map.
The voxelized geometric map is produced by segmenting the point cloud into voxels that are as small as 5cm x 5cm x 5cm. During real-time operation, the geometric map is the most efficient way to access point cloud information. It offers a good trade-off between accuracy and speed.
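The voxelization step above can be sketched as snapping points to a 5 cm grid and keeping one representative point per occupied cell. This is a simplified illustration, not the production algorithm; the function name and centroid choice are assumptions.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.05) -> dict:
    """Group 3D points into voxel_size cubes (5 cm, as in the text).

    Returns a dict mapping integer voxel indices (ix, iy, iz) to the
    centroid of the points that fall inside that voxel.
    """
    indices = np.floor(points / voxel_size).astype(int)
    buckets: dict = {}
    for idx, pt in zip(map(tuple, indices), points):
        buckets.setdefault(idx, []).append(pt)
    return {idx: np.mean(pts, axis=0) for idx, pts in buckets.items()}

# Four nearby points collapse into two occupied 5 cm voxels:
cloud = np.array([[0.01, 0.01, 0.01],
                  [0.02, 0.03, 0.01],
                  [0.26, 0.26, 0.26],
                  [0.27, 0.28, 0.26]])
grid = voxelize(cloud)
```

Looking up a voxel index is O(1), which is why a voxelized map is a fast real-time substitute for querying the raw point cloud.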
Segmentation algorithms identify 3D points in the point cloud for building a model of the ground, defined as the driveable surface part of the map. These ground points are used to build a parametric model of the ground in small sections. The ground map is key for aligning the subsequent layers of the map, such as the semantic map.
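One common way to build a "parametric model of the ground in small sections", as described above, is a per-tile least-squares plane fit over the segmented ground points. This is a sketch under that assumption; the exact model used is not stated in the text.

```python
import numpy as np

def fit_ground_plane(ground_points: np.ndarray) -> np.ndarray:
    """Fit z = a*x + b*y + c to one small tile of ground points
    by least squares. Returns the coefficients (a, b, c)."""
    A = np.c_[ground_points[:, 0], ground_points[:, 1],
              np.ones(len(ground_points))]
    coeffs, *_ = np.linalg.lstsq(A, ground_points[:, 2], rcond=None)
    return coeffs

def ground_height(coeffs: np.ndarray, x: float, y: float) -> float:
    """Evaluate the fitted ground model at (x, y)."""
    a, b, c = coeffs
    return a * x + b * y + c

# Synthetic points sampled from a gently sloped surface z = 0.02*x + 0.1:
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(100, 2))
pts = np.c_[xy, 0.02 * xy[:, 0] + 0.1]
coeffs = fit_ground_plane(pts)
```

A per-tile model like this gives later layers (e.g. semantic lane geometry) a local height reference to align against.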
Semantic objects are annotated on top of the geometric map. A semantic object can be 2D (lane boundaries, intersections, parking spots) or 3D (stop signs, traffic lights, etc.), and these are used mainly for driving safely. Semantic objects carry additional information such as speed limits and lane-change rules.
The semantic map layer is built upon the geometric map layer by adding semantic objects. Semantic objects can be either 2D or 3D, such as lane boundaries, intersections, parking spots, stop signs, traffic lights, etc., that are used for driving safely. These objects contain rich information such as traffic speeds, lane change restrictions, etc.
The semantic map layer builds on the geometric map layer by adding semantic objects. Semantic objects include various 2D and 3D traffic objects such as lane boundaries, intersections, crosswalks, parking spots, stop signs, traffic lights, etc. that are used for driving safely. These objects carry rich metadata such as speed limits and turn restrictions for lanes. While the 3D point cloud might contain all of the pixels and voxels that represent a traffic light, it is in the semantic map layer that a clean 3D object identifying the 3D location and bounding box of the traffic light and its various components is stored. We use a combination of heuristics, computer vision, and point classification algorithms to generate hypotheses for these semantic objects and their metadata. The output of these algorithms isn't accurate enough on its own to produce a high-fidelity map, so human operators post-process the hypotheses via rich visualization and annotation tools to both validate the quality and fix any misses. For example, to identify traffic lights, we first run a traffic light detector on the camera images. Visual SLAM is used to process multiple camera images to get a coarse location of the traffic light in 3D. Lidar points in the local neighborhood of this location are matched and processed to produce the bounding box and orientation of the traffic light and its sub-components. We also employ heuristics for solving simpler problems. One area where we've found heuristics to be useful is in the generation of lane hypotheses, yield relationships, and connectivity graphs at intersections. There is a lot of structure in how these are set up for roads, especially since local laws ensure consistency. Feedback from the human curation and quality assurance steps is used to keep these up to date.
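The semantic objects described above pair a geometric footprint with rich metadata. A minimal sketch of such a record, with illustrative (not actual) field names:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    """Illustrative semantic-map record: a typed object with a 3D
    location, a bounding box (for 3D objects), and free-form metadata."""
    kind: str            # e.g. "traffic_light", "lane_boundary", "stop_sign"
    center: tuple        # (x, y, z) location in the map frame
    bbox: tuple          # (length, width, height); zeros for 2D objects
    metadata: dict = field(default_factory=dict)

# A traffic light localized from camera + lidar processing:
light = SemanticObject(
    kind="traffic_light",
    center=(120.4, 88.2, 5.1),
    bbox=(0.4, 0.3, 1.1),
    metadata={"controls_lanes": ["lane_17", "lane_18"]},
)

# A 2D lane boundary with the kind of metadata the text mentions:
lane = SemanticObject(
    kind="lane_boundary",
    center=(118.0, 90.0, 0.0),
    bbox=(0.0, 0.0, 0.0),
    metadata={"speed_limit_mps": 13.4, "lane_change": "prohibited"},
)
```

The key design point is that the geometry (where the light is) and the semantics (which lanes it controls, speed limits) live together in one queryable object rather than in the raw point cloud.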
The geometric and semantic map layers provide information about the static and physical parts of the world that are important to the self-driving vehicle. They are built at a very high fidelity and there is very little ambiguity about what the ground truth is. At Level 5, we view the map as a component that not only captures our understanding of the physical and static parts of the world, but also dynamic and behavioral aspects of the environment. The map priors layer and real-time knowledge layer represent this information. Information in these layers is computed not only from logs from the AV fleet, but also from the Lyft ridesharing network comprising millions of Lyft drivers. This scale is necessary to achieve high coverage of the map priors and ensure freshness of the real-time information.
Contains dynamic information and priors over human behavior (e.g., the order in which a traffic light cycles through its colors, the average time spent in each color, the average speed of vehicles around parking areas). The autonomous vehicle combines these priors with real-time inputs.
Contains dynamic information and human behavior data, for example the order in which traffic lights change, the average wait times at the lights during a typical day, the probability of a vehicle occupying a parking spot, the average speeds of vehicles near parking spots, etc. Autonomy algorithms commonly consume these priors as model inputs or features, combined with other real-time information.
The map priors layer contains derived information about dynamic elements and about human driving behavior. Information here can pertain to both the semantic and geometric parts of the map. For example, the order in which the traffic lights at an intersection cycle through their states, e.g. (red, protected-left, green, yellow, red) or (red, green, protected-left, yellow, red), and the amount of time spent in each state are encoded in the map priors layer. Time-of-day and day-of-week dimensions are used as keys to support multiple settings. These priors are approximate and serve as hints to the onboard autonomy systems. Another example is parking priors. Parking priors are represented as polygonal regions on the lanes, with metadata capturing the probability of encountering a parked vehicle at that location in the lane; the prediction and planning systems use them to interpret object velocities and make appropriate decisions. When the AV encounters a stationary vehicle in a map region with a high parking prior, it will more aggressively explore plans that route the AV around the vehicle and demote plans that queue the AV up behind it. Similarly, knowing where people normally park allows the perception systems to be more cautious about car doors opening and about pedestrians who might be getting in and out of cars. Unlike the information in the geometric and semantic layers, information in the map priors layer is designed to be approximate and act as hints. Autonomy algorithms commonly consume these priors as model inputs or features, combined with other real-time information.
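The parking-prior lookup described above can be sketched as a table keyed by region and (day, hour), feeding a simple planning hint. All names, values, and the 0.5 threshold here are illustrative assumptions, not the real system's parameters.

```python
# region_id -> {(day_of_week, hour): probability of a parked vehicle}
parking_priors = {
    "lane_17_curb": {("sat", 10): 0.85, ("mon", 3): 0.10},
}

def should_plan_around(region_id: str, day: str, hour: int,
                       threshold: float = 0.5) -> bool:
    """Hint for the planner: a stationary vehicle in a region with a
    high parking prior is probably parked, so prefer plans that route
    around it instead of queueing behind it."""
    prob = parking_priors.get(region_id, {}).get((day, hour), 0.0)
    return prob >= threshold
```

Because the prior is just a hint, a lookup miss falls back to 0.0 and the planner keeps its default (more conservative) behavior.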
The topmost layer; it represents dynamic, real-time traffic information. This information can be shared between vehicles.
The topmost layer in the map; it is dynamically updated and contains real-time traffic information. This data can also be shared in real time between the fleet of autonomous vehicles.
The real-time layer is the topmost layer in the map and is designed to be read/write capable. This is the only layer designed to be updated while the map is in use by an AV serving a ride. It contains real-time traffic information such as observed speeds, congestion, newly discovered construction zones, etc. The real-time layer is designed to support gathering and sharing of real-time global information across a whole fleet of AVs.
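A read/write layer like the one above can be sketched as a keyed store where any AV reports observations and others read them back, ignoring stale entries. This is a minimal local sketch; the real system would replicate these entries over the network, and all names and the 300 s freshness window are assumptions.

```python
import time

class RealTimeLayer:
    """Minimal read/write store for fleet-shared live observations."""

    def __init__(self):
        self._entries = {}  # region_id -> (payload, unix timestamp)

    def report(self, region_id: str, payload: dict) -> None:
        """Called by any AV that observes something new in a region."""
        self._entries[region_id] = (payload, time.time())

    def lookup(self, region_id: str, max_age_s: float = 300.0):
        """Read the latest observation for a region; stale or missing
        entries return None so the AV falls back to static map layers."""
        entry = self._entries.get(region_id)
        if entry is None:
            return None
        payload, ts = entry
        return payload if time.time() - ts <= max_age_s else None

layer = RealTimeLayer()
layer.report("elm_st_block_3", {"construction_zone": True, "speed_mps": 2.0})
```

Expiring stale entries matters because, unlike the lower layers, this layer's value comes entirely from freshness.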
[정밀 맵과 자율주행 기술 (Precision Maps and Autonomous Driving Technology)](http://www.krnet.or.kr/board/data/dprogram/2278/H2-3_�ΰ���.pdf): slides, 2018, ETRI