CGW 2023: COMPUTER GRAPHICS WORKSHOP 2023
PROGRAM FOR TUESDAY, JULY 11TH


09:00-09:30 Session 1: Check-in and Registration

Register at the check-in counter and pick up your conference packet.

09:30-09:40 Session 2: Opening Ceremony

Opening ceremony:

  • Remarks by the Honorary Conference Chair
  • Remarks by the Conference Chair
Chair:
Wen-Chieh Lin 林文杰 (National Yang Ming Chiao Tung University, Taiwan)
09:40-10:40 Session 3: Keynote (1)
Chair:
Yu-Shuen Wang 王昱舜 (National Yang Ming Chiao Tung University, Taiwan)
09:40
Sabarish Babu (Clemson University, United States)
Lessons Learned in Near-Field Interactions with Virtual Humans, Affordance and Perception-Action, and 3D Interaction for Training and Education in MR

ABSTRACT. In this keynote, I will first discuss lessons learned from a body of research on emotion contagion and visual attention in human-virtual human interaction. In a virtual human simulation designed to educate nurses in recognizing the signs and symptoms of rapid deterioration, we investigated the effects of animation, appearance, and interaction fidelity on the emotional reactions and visual attention behaviors of trainees in simulated dyadic and crowd scenarios. Our results and lessons learned have implications for the design of virtual humans in interpersonal simulations for personal space education. Next, I will discuss a body of work investigating the dimensional symmetry and interaction fidelity continuum in near-field fine motor skills training in VR for technical skills education in the aviation and automotive curriculum. We designed and evaluated the effects of interaction, dimensional, and system fidelity in near-field virtual reality simulations for motor skills training and education, in domains such as precision metrology and mechanical skills acquisition. Finally, I will discuss some of our key findings on static and dynamic affordances in VR and MR, as well as perception-action coordination research with implications for near- and medium-field psychomotor skills training and education. I will end the talk by summarizing our contributions and highlighting the key takeaways and recommendations for the design of virtual human and mixed reality simulations for near-field fine motor skills training and interpersonal skills education.

10:40-11:00 Coffee Break

Break time. Refreshments are available in the front hall and can be enjoyed in the rear lounge.

11:00-12:00 Session 4: Keynote (2)
Chair:
Chun-Fa Chang 張鈞法 (National Taiwan Normal University, Taiwan)
11:00
Tzu-Mao Li 李子懋 (University of California, San Diego, United States)
Differentiable Visual Computing

ABSTRACT. While neural networks have become powerful tools for processing visual data, their generality raises several challenges. Firstly, most modern architectures work in 2D, and it is difficult to embed 3D knowledge. Secondly, neural networks are by design over-parametrized and have millions or billions of parameters, making it challenging to run them fast on high-resolution images and videos on mobile devices. Finally, neural networks are difficult to debug and control, as their behaviors are mostly governed by their parameters and the training data. On the other hand, classical visual computing algorithms that explicitly model the computation are less affected by these issues, but they often do not apply as broadly as modern data-driven methods. A major focus of our research is to connect classical graphics algorithms with modern data-driven methods by making graphics algorithms differentiable, enabling optimization and inference. Making graphics algorithms differentiable leads to new challenges: How do we derive the correct derivatives in the first place, when discontinuities and boundary conditions can be involved? How do we compute the derivatives efficiently? How do we build systems that make the derivation and implementation of differentiation easier? I will talk about our recent efforts to address these challenges, which include contributions in the fields of forward and inverse rendering, image processing, physical simulation, and programming languages and systems.

12:00-13:30 Lunch

Lunch break. Faculty and Graphics Society members, please dine in the paperless meeting room of the University History Hall on the fifth floor; students, please dine in the rear lounge area.

12:00-13:30 Session 5: General Assembly

1. Annual report of the Computer Graphics Society.

2. How to support women's participation in graphics-related research.

Chairs:
I-Chen Lin 林奕成 (National Yang Ming Chiao Tung University, Taiwan)
Wen-Chieh Lin 林文杰 (National Yang Ming Chiao Tung University, Taiwan)
13:30-14:10 Session 6: Technical Forum
  • Talk: NVIDIA GeForce RTX Laptops: Leading the STEM Learning Revolution; Realizing Digital Twins with Omniverse and AI
  • Discussion topic: how to help women enter graphics-related careers
Chair:
Hung-Kuang Chen 陳宏光 (National Chin-Yi University of Technology, Taiwan)
13:30
Alan Rau 饒紘宇 (NVIDIA, Taiwan)
NVIDIA GeForce RTX Laptops: Leading the STEM Learning Revolution; Realizing Digital Twins with Omniverse and AI

ABSTRACT. TBA

14:10-14:30 Afternoon Tea Break

Break time. Refreshments are available in the front hall and can be enjoyed in the rear lounge.

14:30-16:10 Session 7A: Paper Presentations (1)

Slide upload link: 7A slide upload area; paper sharing link: 7A paper sharing area

Chair:
Chuan-Kai Yang 楊傳凱 (National Taiwan University of Science and Technology, Taiwan)
14:30
林妙 (National Taiwan University, Taiwan)
沈奕超 (The University of Tokyo, Japan)
秦孝媛 (National Taiwan University, Taiwan)
陳若曦 (National Taiwan University, Taiwan)
陳炳宇 (National Taiwan University, Taiwan)
Vector Icon Colorization Guided by Color Palettes and Color Harmony
PRESENTER: 林妙

ABSTRACT. We propose a method for colorizing vector icons without rasterization. Given the outlines of a vector icon and a palette of five colors, our method generates multiple colorization results for the icon based on the palette. Because an icon is an abstract representation composed of various geometric elements, we consider not only the local features of each curve but also the connections between curves as a global feature. In addition, we incorporate two controls that adjust the diversity and harmony of the colors, so that the icon's color combinations better match human preference and perception. Unlike previous work, our method operates entirely in the vector domain, without any conversion between raster and vector images. We compare our approach with previous pixel-based methods for colorizing raster icons to demonstrate the effectiveness of our colorization tool and its results.

14:50
彭心睿 (National Taiwan Ocean University, Taiwan)
何宗家 (National Taiwan Ocean University, Taiwan)
楊冠文 (National Taiwan Ocean University, Taiwan)
游承儒 (National Taiwan Ocean University, Taiwan)
葉之霆 (National Taiwan Ocean University, Taiwan)
林士勛 (National Taiwan Ocean University, Taiwan)
Mesh Smoothing and Simplified Reconstruction of 3D Models
PRESENTER: 彭心睿

ABSTRACT. When rendering a model, the complexity of its structure affects the performance cost of the display device, and rendering multiple models raises this cost further. Reducing the complexity of models that are far from the viewer while keeping only their characteristic appearance makes little visual difference to the observer, yet effectively lowers the performance cost and keeps the application running smoothly. This paper therefore proposes a framework that reprocesses a model's surface mesh: it replaces the original surface mesh with uniform triangles, filters out redundant faces with a new clustering scheme to construct a similar model, and finally applies an optimization procedure to obtain a simplified model that preserves the features of the original.

15:10
許世楨 (National Taipei University of Technology, Taiwan)
林立森 (National Taipei University of Technology, Taiwan)
謝東儒 (National Taipei University of Technology, Taiwan)
UNetCNX: A U-Net with a ConvNeXt Encoder for 3D Cardiac Muscle Images
PRESENTER: 林立森

ABSTRACT. Patients with hypertrophic cardiomyopathy require surgery to remove excess myocardium in order to relieve their symptoms. The surgeon must consider the thickness of the patient's myocardium to determine how much to remove; removing too much myocardium causes the operation to fail. Building a 3D-printed heart model from the patient's computed tomography (CT) scan lets surgeons practice before the operation and reduces this risk. Constructing such a 3D heart model requires segmenting the myocardium in every CT slice and then generating the model with the marching cubes technique. Because manual myocardium segmentation is time-consuming and labor-intensive, this study aims to automate it with supervised deep learning. Our training dataset contains 15 CT scans with ground-truth myocardium labels. We propose a new deep learning architecture, UNetCNX, for CT myocardium segmentation; it takes CT images as input and outputs a 3D segmentation of the myocardium. The model follows the U-Net architecture but replaces the encoder with 3D ConvNeXt blocks and combines features of different scales with the decoder through skip connections. It achieves good results on myocardium segmentation, with a Dice score of 0.881 and an HD95 of 11.000, outperforming other architectures such as 3D U-Net, Attention U-Net, CoTr, UNETR, and Swin UNETR.
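The abstract reports segmentation quality with the Dice score. As a point of reference only (a generic sketch of the standard metric, not the authors' code), the Dice coefficient between a predicted and a ground-truth binary mask can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1 = myocardium voxel)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: define Dice as 1.0
        return 1.0
    return 2.0 * intersection / total

# Toy 3D volumes: two overlapping 2x2x2 blocks inside a 4x4x4 grid.
pred = np.zeros((4, 4, 4), dtype=np.uint8)
truth = np.zeros((4, 4, 4), dtype=np.uint8)
pred[0:2, 0:2, 0:2] = 1
truth[1:3, 0:2, 0:2] = 1
print(round(dice_score(pred, truth), 3))  # overlap 4, sizes 8+8 → 0.5
```

The same formula applies slice-wise in 2D or, as here, over whole 3D volumes.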

15:30
林姿廷 (National Taiwan University of Science and Technology, Taiwan)
楊傳凱 (National Taiwan University of Science and Technology, Taiwan)
Smart Space Planning with Interior Design Guidelines
PRESENTER: 林姿廷

ABSTRACT. When a new piece of furniture is to be placed in an existing environment, users often place it arbitrarily without any guidance. However, many furniture items need open space around them to be usable, so arbitrary placement easily leads to insufficient clearance. To address this, this paper incorporates interior design guidelines to suggest placements for new furniture items.

To this end, we propose constraints based on human ergonomics: we search common furniture groups (such as tables and chairs) for the most frequent placement positions, compute a placement cost for each candidate position with a cost function, sort the candidates by cost, and recommend the lowest-cost placement to the user.
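The placement search described above reduces to scoring candidate positions with a cost function and selecting the minimum. The following sketch illustrates only that selection step; the cost terms, weights, and the `Candidate` fields are illustrative assumptions, not the paper's actual function:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    x: float          # candidate position (metres)
    y: float
    clearance: float  # free space around the item (metres)
    wall_dist: float  # distance to the nearest wall (metres)

def placement_cost(c: Candidate, min_clearance: float = 0.75) -> float:
    """Illustrative cost: penalize insufficient clearance, prefer wall-adjacent spots."""
    clearance_penalty = max(0.0, min_clearance - c.clearance) * 10.0
    wall_penalty = c.wall_dist
    return clearance_penalty + wall_penalty

def best_placement(candidates):
    """Rank candidates by cost and return the cheapest one."""
    return min(candidates, key=placement_cost)

candidates = [
    Candidate(x=1.0, y=2.0, clearance=0.9, wall_dist=0.3),
    Candidate(x=3.0, y=1.0, clearance=0.5, wall_dist=0.1),  # too cramped
    Candidate(x=2.0, y=3.0, clearance=0.8, wall_dist=1.5),  # far from walls
]
best = best_placement(candidates)
print((best.x, best.y))  # → (1.0, 2.0)
```

In the paper's system, the candidate list would come from the searched furniture-group positions and the cost terms from the ergonomic constraints.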

15:50
楊雯筑 (National Taiwan University of Science and Technology, Taiwan)
楊傳凱 (National Taiwan University of Science and Technology, Taiwan)
Social Media Video Crawling and Video De-identification
PRESENTER: 楊雯筑

ABSTRACT. With today's pervasive internet and rapid spread of information, sharing knowledge, skills, and daily life on social media is easy. Among the many media for sharing, video has become the choice of many people, yet creators who enjoy this convenience may also face information security problems: videos can be illegally downloaded, edited, and re-shared; the likenesses of public figures can be turned into pornographic videos, severely harming their reputation, privacy, and well-being; and fraud and misinformation abound. Therefore, in addition to collecting video-related information from social media, this paper takes a privacy-protection perspective and de-identifies the faces in videos, so that uploaders can produce and share their videos without fear of having their privacy and related rights violated. To this end, we crawl public video information from Facebook and TikTok, preprocessing the video URLs before crawling to improve crawling efficiency and to completely avoid fetching the same URL twice. While collecting data, we also process the videos stored in the video database: faces are extracted and aligned to obtain facial features (e.g., face coordinates, gender, age), which are stored in a face database. For a video to be de-identified, we obtain facial features via object detection and face extraction, match them against the existing face database to select the most suitable source face, and then process the video with feature fusion. Experiments show that the faces produced by our system are not recognized as the same person and, compared with de-identification performed without prior video matching, also have a markedly lower probability of being judged as fake.
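The crawler's guarantee of never fetching the same URL twice rests on preprocessing each URL into a canonical form and checking it against a seen-set. A minimal sketch of that idea (the normalization rules here are illustrative assumptions, not the paper's exact preprocessing) could be:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    """Canonicalize a video URL so trivially different forms compare equal."""
    parts = urlsplit(url.strip())
    scheme = parts.scheme.lower() or "https"
    netloc = parts.netloc.lower()
    path = parts.path.rstrip("/") or "/"
    # Drop query strings and fragments (tracking parameters, player state, etc.).
    return urlunsplit((scheme, netloc, path, "", ""))

class Crawler:
    def __init__(self):
        self.seen: set[str] = set()

    def should_fetch(self, url: str) -> bool:
        """Return True exactly once per canonical URL."""
        key = normalize_url(url)
        if key in self.seen:
            return False
        self.seen.add(key)
        return True

crawler = Crawler()
print(crawler.should_fetch("https://www.tiktok.com/@user/video/123"))       # True
print(crawler.should_fetch("https://WWW.TIKTOK.COM/@user/video/123/?x=1"))  # False
```

Checking the seen-set before issuing any request is what keeps the dedup guarantee independent of network timing.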

14:30-16:10 Session 7B: Results Sharing (1)

Slide upload link: 7B slide upload area; paper sharing link: 7B paper sharing area

Chair:
Shih-Ching Yeh 葉士青 (National Central University, Taiwan)
14:30
Thi-Ngoc-Hanh Le (National Cheng Kung University, Taiwan)
Ya-Hsuan Chen (National Cheng Kung University, Taiwan)
Tong-Yee Lee (National Cheng Kung University, Taiwan)
Structure-Aware Video Style Transfer with Map Art
PRESENTER: Thi-Ngoc-Hanh Le

ABSTRACT. Changing the style of an image or video while preserving its content is a crucial criterion for assessing a new neural style transfer algorithm. However, it is very challenging to transfer a new map art style to a video whose “content” comprises a map background and animated objects. In this article, we present a novel comprehensive system that solves the problems of transferring map art style to such videos. Our system takes as input an arbitrary video, a map image, and an off-the-shelf map art image. It then generates an artistic video without damaging the functionality of the map or the consistency of its details. To meet this challenge, we propose a novel network, Map Art Video Network (MAViNet), tailored objective functions, and a training set with rich animation content and diverse map structures. We have evaluated our method on various challenging cases and in many comparisons with related work. Our method substantially outperforms state-of-the-art methods in terms of visual quality and meets the criteria mentioned in this research domain.

14:55
Dong-Yi Wu (National Cheng Kung University, Taiwan)
Thi-Ngoc-Hanh Le (National Cheng Kung University, Taiwan)
Sheng-Yi Yao (National Cheng Kung University, Taiwan)
Yun-Chen Lin (National Cheng Kung University, Taiwan)
Tong-Yee Lee (National Cheng Kung University, Taiwan)
Image Collage on Arbitrary Shape via Shape-Aware Slicing and Optimization
PRESENTER: Dong-Yi Wu

ABSTRACT. Image collage is a very useful tool for visualizing an image collection. Most existing methods and commercial applications for generating image collages are designed for simple shapes, such as rectangular and circular layouts, which greatly limits the use of image collages in artistic and creative settings. Although some methods can generate irregularly shaped image collages, they often suffer from severe image overlapping and excessive blank space, preventing them from being effective information communication tools. In this paper, we present a shape slicing algorithm and an optimization scheme that, given an input shape and an image collection, create image collages of arbitrary shapes in an informative and visually pleasing manner. To overcome the challenge of irregular shapes, we propose a novel algorithm, called Shape-Aware Slicing, which partitions the input shape into cells based on the medial axis and a binary slicing tree. Shape-Aware Slicing, designed specifically for irregular shapes, takes human perception and shape structure into account to generate visually pleasing partitions. The layout is then optimized by analyzing the input images with the goal of maximizing the total salient regions of the images. To evaluate our method, we conduct extensive experiments and compare our results against previous work. The evaluations show that our algorithm can efficiently arrange image collections on irregular shapes and create results that are visually superior to prior work and existing commercial tools.

15:20
Chih-Hsuan Chen (National Taitung University, Taiwan)
Chia-Ru Chung (National Central University, Taiwan)
Hsuan-Yu Yang (National Central University, Taiwan)
Shih-Ching Yeh (National Central University, Taiwan)
Eric Hsiao-Kuang Wu (National Central University, Taiwan)
Hsin-Jung Ting (National Taitung University, Taiwan)
Virtual Reality-Based Supermarket for Intellectual Disability Classification, Diagnostics and Assessment
PRESENTER: Shih-Ching Yeh

ABSTRACT. Possible symptoms of intellectual disability (ID) include delayed physical development that becomes more pronounced as the disability progresses, delayed development of gross and fine motor skills, sensory perception problems, and difficulty grasping the integrity of objects. Although there is no cure or reversal, research has shown that extensive training and learning can lead to easier social integration; however, the human demands of diagnosis and the cost of training often result in overburdened families, an unmanageable workload for teachers, and high social costs. It is therefore important to conduct efficient, effective, and economical assessments in a safe and reproducible training environment. Currently, the assessment of intellectual disability relies on intelligence tests such as the Wechsler Intelligence Scale (WIS) and the Vineland Adaptive Behavior Scale (VABS). Drawing on the rapid development of virtual reality (VR) and machine learning (ML), we created a virtual supermarket and collected data in three areas: eye movements, brain waves, and behaviors. We also propose an intelligent executive-function evaluation that uses ML to build a more objective and automatic evaluation model from the physiological data obtained from users' responses. Statistical analysis showed that some metrics derived from the behavioral data differed significantly between ID patients and healthy participants. This suggests that classification through neural networks is feasible, even at multiple levels, which may prove effective for vocational training through VR.

15:45
黃柏叡 (National Yang Ming Chiao Tung University, Taiwan)
Chien-Chou Wong (National Yang Ming Chiao Tung University, Taiwan)
Cheng-En Cai (National Yang Ming Chiao Tung University, Taiwan)
Hao-Ming Tsai (National Yang Ming Chiao Tung University, Taiwan)
Guan-Ting Liu (National Yang Ming Chiao Tung University, Taiwan)
Sai-Keung Wong (National Yang Ming Chiao Tung University, Taiwan)
Generation of cart-pulling animation in a multiagent environment using deep learning
PRESENTER: 黃柏叡

ABSTRACT. In this article, we propose a framework that uses deep learning to generate cart-pulling animation in a multiagent environment. In the scenario, two workers pull a cart with ropes and interact with crowd agents that exhibit following, wandering, and evasion behaviors. The main idea is to train a policy to learn the individual behavior of the workers and the crowd agents. A key challenge is that the ropes are flexible, so the rewards are designed carefully to make the workers pull the ropes in a collaborative and consistent manner. The workers can then pull the cart while avoiding collisions with the crowd agents and surrounding static objects. In the animation generation stage, we deliberately assign the policies to the workers and the crowd agents so that they interact with each other naturally. We conducted experiments with animal characters, and the system produced animations of characters with diverse behaviors.

18:00-20:00 Banquet

Due to limited funding, this year's banquet does not include students. Attendance is limited to faculty, members, sponsors, and other holders of a banquet invitation.