FOSS4G-Asia 2023 Seoul

Kyoungsook Kim


Sessions

11-28
14:00
240min
OGC API – Moving Features, an introduction with MF-API Server based on pygeoapi and MobilityDB
Wijae Cho, Taehoon Kim, Hirofumi Hayashi, Tsubasa Shimizu, Tran Thuan Bang, Kyoungsook Kim

Moving feature data can represent various phenomena, including vehicles, people, animals, weather patterns, and more. Conceptually, Moving Features are dynamic geospatial data that change location and possibly other characteristics over time.

OGC API – Moving Features (OGC API – MF) provides a standard and interoperable way to manage such data, with valuable applications in transportation management, disaster response, environmental monitoring, and beyond. OGC API – MF also includes operations for filtering, sorting, and aggregating moving feature data based on location, time, and other properties.
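As a sketch of how such filtering might look in practice, the snippet below builds a request against a hypothetical OGC API – MF collection, constraining results by bounding box and time window (the endpoint URL and collection name are illustrative, not from the workshop materials):

```python
from urllib.parse import urlencode

# Hypothetical OGC API - MF items endpoint (illustrative only)
base_url = "https://example.org/ogcapi/collections/vehicles/items"

# Spatial and temporal filters: a bounding box over Seoul and a time window
params = {
    "bbox": "126.8,37.4,127.2,37.7",                          # minLon,minLat,maxLon,maxLat
    "datetime": "2023-11-28T14:00:00Z/2023-11-28T18:00:00Z",  # ISO 8601 interval
    "limit": 100,
}

query_url = f"{base_url}?{urlencode(params)}"
print(query_url)
```

The same bbox/datetime query pattern is shared with OGC API – Features, which OGC API – MF builds on.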

This workshop will get you started with OGC API – MF and its implementation in the MF-API Server, which is based on pygeoapi and MobilityDB, covering the following questions:
- What is the core concept of OGC API – MF (and OGC MF-JSON format)?
- How to implement OGC API – MF with pygeoapi and MobilityDB?
- How can we visualize its results with STINUUM (with CesiumJS)?
- How can we implement a new feature that hasn't been implemented yet?
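For orientation on the first question, here is a minimal sketch of an MF-JSON MovingFeature with a MovingPoint temporal geometry, written as a Python dictionary (the coordinates and timestamps are made up for illustration; see the OGC MF-JSON specification for the full schema):

```python
import json

# A minimal MF-JSON-style MovingFeature. Positions and timestamps are
# index-aligned: one coordinate per datetime, with linear interpolation
# between consecutive instants.
moving_feature = {
    "type": "MovingFeature",
    "temporalGeometry": {
        "type": "MovingPoint",
        "datetimes": [
            "2023-11-28T14:00:00Z",
            "2023-11-28T14:10:00Z",
        ],
        "coordinates": [
            [126.97, 37.55],   # lon, lat at the first datetime
            [126.99, 37.57],   # lon, lat at the second datetime
        ],
        "interpolation": "Linear",
    },
}

encoded = json.dumps(moving_feature)
```

This is the shape of payload the workshop's MF-API Server exercises will register and query.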

The following open-source projects will be used in this workshop:
- MF-API Server based on pygeoapi: https://github.com/aistairc/mf-api
- OGC API – Moving Features official GitHub repository: https://github.com/opengeospatial/ogcapi-movingfeatures
- MobilityDB (and its Python driver, PyMEOS, and MEOS): https://github.com/MobilityDB
- STINUUM: https://github.com/aistairc/mf-cesium
Each program will be installed using a Dockerfile.

Lastly, you can find more helpful information about OGC API – MF here: https://ogcapi.ogc.org/movingfeatures/

General Track (Talks, Online Talks, Lightning Talks, Workshops)
Taepyeong Hall
11-29
14:10
20min
API Implementation of the OGC API Moving Feature
Taehoon Kim, Hirofumi Hayashi, Tsubasa Shimizu, Tran Thuan Bang, Kyoungsook Kim

We created a REST API to register, search, and delete spatio-temporal data based on OGC API – Features, an international standard of the Open Geospatial Consortium (OGC), the international standardization organization for geospatial information, and the Moving Features Encoding Extension – JSON (MF-JSON) specification, an OGC standard developed mainly by AIST.
To create the API, we built an OGC API server using pygeoapi and PyMEOS. PostgreSQL was used as the database, with MobilityDB as an extension for storing spatio-temporal data. Spatio-temporal data were stored using MobilityDB-specific temporal types (TBool, TText, TInt, and TFloat). When converting MF-JSON spatio-temporal data using PyMEOS functions, some tag names were not supported for conversion, so we implemented an additional step that converts each tag name to one that can be registered successfully before executing the PyMEOS function.
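As an illustration of the MobilityDB temporal types mentioned above, the helper below formats timestamped samples into a tfloat sequence literal of the kind MobilityDB accepts, following its value@timestamp convention (the helper function itself is a hypothetical sketch, not part of the MF-API Server code):

```python
# Hypothetical helper: build a MobilityDB tfloat sequence literal
# from (value, timestamp) samples, e.g. for use in an INSERT statement.
def tfloat_seq_literal(samples):
    """samples: list of (float, str) pairs -> '[v1@t1, v2@t2, ...]'"""
    instants = ", ".join(f"{value}@{ts}" for value, ts in samples)
    return f"[{instants}]"

speed = tfloat_seq_literal([
    (12.5, "2023-11-28 14:00:00+00"),
    (18.0, "2023-11-28 14:05:00+00"),
])
print(speed)  # -> [12.5@2023-11-28 14:00:00+00, 18.0@2023-11-28 14:05:00+00]

# Such a literal could then be cast in SQL, e.g.:
# INSERT INTO trips (speed) VALUES (tfloat '[...]');
```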
Also, at the time of this implementation, there were no source code modification guidelines for pygeoapi, so the API was implemented by directly modifying the source code in the lib directory that handles API request processing.

General Track (Talks, Online Talks, Lightning Talks, Workshops)
Taepyeong Hall
11-30
11:20
20min
Accelerating the 3D Point Clouds Annotation Task with Deep Learning and Collaboration
Wijae Cho, Taehoon Kim, Kyoungsook Kim

With recent LiDAR developments, the point cloud is becoming a valuable resource for building Digital Twins, the virtual representations of real-world physical features. The semantic information of each point (such as walls, floors, doors, and windows) is essential for handling and analyzing point clouds. However, the raw point cloud captured from a LiDAR sensor has no semantics, and the annotation task for adding semantics is time-consuming and requires expert experience with commercial software. Therefore, we set two goals for accelerating the 3D point cloud annotation task:
- Machine-based assistants, such as AI techniques, and
- Task-based collaboration.

Recently, deep learning has been used to derive semantic classes through automated classification and segmentation, so it can address the manual bottleneck in the traditional annotation process. The data format for deployment is also essential for collaboration: it should be easy to understand and efficient to share.

PCAS (Point Cloud Annotation System) is a 3D point cloud annotation system that enhances the quality and speed of annotation tasks with deep learning techniques. The system can be summarized in four key points:
- This system is mainly developed on top of Potree[1], an open-source library for point cloud visualization, and also utilizes other open-source projects, such as Open3D[2] and torch-points3d[3], for handling point clouds with deep learning modules.
- This system supports three tools to accelerate annotation work: semi-automatic labeling with a deep learning module, 3D shape object-based labeling, and normal vector-based cluster labeling.
- The annotation task can be easily shared and tracked, similar to GitHub.
- The annotation results are stored as an HDF5 file based on a predefined profile[4] for labeled point clouds, which makes semantic information easy to share and manage.
This talk will introduce the four main points above, including the structure of PCAS and its open-source components, its design purpose, and problems and solutions encountered during development.
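As a rough sketch of the normal vector-based cluster labeling idea mentioned above (the greedy grouping rule and the angle threshold are illustrative assumptions, not PCAS internals), points can be grouped by comparing the angles between their unit normals:

```python
import math

def cluster_by_normal(normals, angle_deg=10.0):
    """Greedily assign each unit normal to the first cluster whose
    representative normal lies within angle_deg degrees; otherwise
    start a new cluster. Returns one cluster id per point."""
    cos_threshold = math.cos(math.radians(angle_deg))
    reps, labels = [], []
    for n in normals:
        for cid, rep in enumerate(reps):
            dot = sum(a * b for a, b in zip(n, rep))
            if dot >= cos_threshold:   # nearly parallel normals
                labels.append(cid)
                break
        else:
            reps.append(n)             # new cluster representative
            labels.append(len(reps) - 1)
    return labels

# Two wall points (x-facing normals) and one floor point (z-facing normal)
normals = [(1, 0, 0), (1, 0, 0), (0, 0, 1)]
print(cluster_by_normal(normals))  # -> [0, 0, 1]
```

A production tool would estimate normals from point neighborhoods first (e.g. via PCA on k-nearest neighbors); this sketch only shows the grouping step that lets a user label a whole planar cluster at once.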

References:
[1] Potree, WebGL point cloud viewer for large datasets, https://github.com/potree/potree
[2] Open3D, A Modern Library for 3D Data Processing, https://github.com/isl-org/Open3D
[3] torch-points3d, Pytorch framework for doing deep learning on point clouds, https://github.com/torch-points3d/torch-points3d
[4] The HDF5 profile for labeled point cloud data, https://docs.ogc.org/dp/21-077.html

General Track (Talks, Online Talks, Lightning Talks, Workshops)
Workshop Room