Zhigang Deng, Ph.D.

Moores Professor of Computer Science
Director of Graduate Studies,
Director of Computer Graphics & Interactive Media Lab,
Department of Computer Science,
University of Houston, Houston, TX

Office:
Phillip G. Hoffman Hall (PGH), Rm 228
Houston, TX 77204-3010

Tel: +1 713 743 1018
Fax: +1 713 743 3335

Email (preferred way to contact me):
zdeng4 (at) central.uh.edu
OR zhigang.deng (at) gmail.com

Mailing Address:
Phillip G. Hoffman Hall, Department of Computer Science,
3551 Cullen Blvd, Room 501,
Houston, TX, 77204-3010



Featured Projects

Butterfly Modeling and Simulation

An efficient and practical model to simulate butterfly flight, including wing-abdomen interaction, in various conditions and environments, based on parametric maneuvering functions and force-based models

Real-time Video-based Facial Performance Capture and Manipulation

Automatic algorithms to capture high-quality 3D facial mesh sequences in real time, as well as video-based facial expression manipulation and face swapping

Analysis and Modeling Human Behaviors in Multiparty Conversations

Quantifying, understanding, and modeling human behaviors and gestures when people are engaged in multiparty conversations

Skeletal Rigging from Mesh/Point Sequences

Given an animated mesh sequence as input, this approach automatically extracts its optimal skeleton structure, joint positions, linear blend skinning weights, and the corresponding bone transformations.

Visual Traffic Simulation, Modeling, and Evaluation

Design of novel and efficient algorithms and systems to simulate large-scale realistic traffic in various road and urban settings, including mixed traffic, lane-changing, simulated traffic evaluation, and intersectional traffic simulation.

3D Face Modeling from In-the-wild Images or Video

Automatic methods to reconstruct 3D face models with fine-scale details from a single in-the-wild image or video, including automatically augmenting coarse-scale 3D faces with synthesized fine-scale geometric wrinkles.

Interactive Complex Object Modeling from Multi-view Images

Development of a novel interactive modeling approach based on multi-view images, including interactive 3D modeling and stochastic motion parameter estimation.

Hexahedral Mesh Generation, Optimization, and Evaluation

The first complete pipeline to optimize the global structure of a hex-mesh by identifying and removing valid, removable base-complex sheets that contain misaligned singularities, together with a volumetric partitioning strategy based on a generalized sweeping framework that seamlessly partitions the volume of an input triangle mesh into a collection of deformed cuboids.

Crowd Simulation and Evacuation

New algorithms and systems to simulate crowd or group behaviors in unexpected hazard situations, including a computational model of crowd emotion evolution and crowd simulation that integrates physical strength consumption

Mesh Animation Compression

Efficient algorithms to compress mesh animation sequences through adaptive spatio-temporal segmentations in order to support progressive transmission over networks

Two-layer Sparse Compression of Dense-Weight Blend Skinning

An efficient two-layer sparse compression technique that substantially reduces the computational cost of a dense-weight skinning model, with negligible loss of visual quality.

Smooth Skinning Decomposition with Rigid Bones

The SSDR model can effectively approximate the skin deformation of both nearly articulated models and highly deformable models with a small number of rigid bones and a sparse, convex bone-vertex weight map.
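
For context, the sketch below is a minimal Python/NumPy illustration (with assumed array shapes, not the SSDR solver itself) of how the quantities SSDR estimates, namely rigid per-bone transforms and a sparse, convex bone-vertex weight map, reproduce a deformed pose via linear blend skinning.

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_rots, bone_trans, weights):
    """Reconstruct one deformed pose from rigid bone transforms and a
    sparse, convex bone-vertex weight map (illustrative sketch only).

    rest_verts: (V, 3) rest-pose vertex positions
    bone_rots:  (B, 3, 3) per-bone rotation matrices (rigid bones)
    bone_trans: (B, 3) per-bone translations
    weights:    (V, B) non-negative weights; each row sums to 1 and is sparse
    """
    # Apply every bone's rigid transform to every rest vertex: (B, V, 3)
    per_bone = np.einsum('bij,vj->bvi', bone_rots, rest_verts) + bone_trans[:, None, :]
    # Blend the per-bone results with the convex weights: (V, 3)
    return np.einsum('vb,bvi->vi', weights, per_bone)
```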

Eye and Head Animations

Data-driven synthesis algorithms for realistic and coordinated eye and head movements based on (live or pre-recorded) speech input

Visual Speech Animation Synthesis

Automated algorithms and systems to generate expressive visual speech animations based on novel typed or spoken input

Live Speech Driven Facial Animation

Fully automated algorithms to generate realistic lip-sync and head-and-eye animations from live speech input alone, running in real time on an off-the-shelf computer.

Human Motion Compression and Retrieval

Effective algorithms for compressing and retrieving character motions by hierarchically extracting meaningful motion patterns from a dataset, or by performing sparse dictionary learning on the dataset.
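
As a rough illustration of the sparse dictionary learning route (a hedged sketch using scikit-learn rather than this project's own algorithm; the function names and parameters are illustrative):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def compress_motion(frames, n_atoms=32, sparsity=4):
    """Encode a motion clip with a learned dictionary and sparse codes.

    frames: (F, D) array, one flattened pose (e.g., joint angles) per frame.
    Storing the small dictionary plus a few nonzero coefficients per frame
    is far more compact than the raw frames.
    """
    model = DictionaryLearning(n_components=n_atoms,
                               transform_algorithm='omp',
                               transform_n_nonzero_coefs=sparsity)
    codes = model.fit_transform(frames)      # (F, n_atoms), mostly zeros
    return model.components_, codes          # dictionary: (n_atoms, D)

def decompress_motion(dictionary, codes):
    """Approximately reconstruct the original frames."""
    return codes @ dictionary
```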

Insect Swarm Simulation

A highly efficient, field-based, GPU-friendly approach to realistically simulate the behavior patterns of a variety of insect swarms (including flying insects, ants, etc.) under various scenarios.

Creative Virtual Tree Modeling and Morphing

A new method to efficiently generate a set of morphologically diverse and inspiring virtual trees through hierarchical topology-preserving blending, aiming to support designers' creative work, as well as a new way to morph between two distinct trees.

Feature-preserving Mesh Denoising

A novel scheme for robust, feature-preserving denoising of 3D mesh models that removes noise while maximally preserving geometric features

Sketch-based 3D Modeling

Data-driven 3D modeling algorithms and systems to rapidly create 3D objects or realistic, textured 3D faces from freeform 2D occluding contours or freeform 2D sketch input

Perceptually Guided Facial Animation

A perceptually guided computational framework to generate expressive facial animation and to quantitatively and automatically evaluate visual speech animation.

Visual-Haptic Interfacing for Robot-Assisted Minimally Invasive Cardiac Surgery

A novel human-in-the-loop, visual haptic interfacing system for planning and performing real-time MRI guided, robot-assisted, minimally invasive surgical interventions on the beating heart, with particular focus on aortic valve implantations.

Tele-mentoring for Minimally Invasive Surgeries

Design and development of a framework to facilitate tele-mentoring between an operating surgeon and a remote surgeon for minimally invasive surgeries.

Other Projects

Data-driven Guzheng Playing Animation

A fully automatic, deep learning based framework to synthesize realistic upper body animations from novel Guzheng music input, using a generative adversarial network based approach to capture the temporal relationship between Guzheng music and human motion.

Marker Optimization for Facial Motion Acquisition and Deformation

An approach to compute optimized marker layouts for facial motion acquisition and deformation by optimizing characteristic control points from a set of high-resolution, ground-truth facial mesh sequences

Blendshape Facial Animation

Algorithms for automating or reducing the time-consuming tuning efforts of the blendshape animation approach, including reducing target interference, cross-model mapping, and model construction.

Interactive Cage Generation for Deformation

An efficient and complete pipeline to generate high-quality cages for 3D models with arbitrary topological complexity, including high-genus or otherwise complex models.

Indoor Layout Modeling and Generation

Automated methods to synthesize plausible indoor scene layouts, including motion planning for convertible indoor scene layout design and layout programming via virtual navigation detectors

Total Human Modeling based on Single RGB-D Shot

An efficient algorithm and pipeline to fully recover the 3D human body pose and shape including head and hand positions from a single RGB-D shot, based on machine learning algorithms.

Facial Animation Editing and Transferring

Efficient and automated algorithms and systems for editing and transferring facial motions across different models or modalities, with intuitive and minimal user interaction.

Freestyle Group Formation Generation for Agent-based Crowd Simulation

An intuitive and efficient approach to generate arbitrary and precise group formations by sketching 2D formation boundaries in a freeform manner.

Context-Aware Motion Diversification and Perception for Crowds

A novel scheme for dynamically controlling motion styles of agents to increase the motion variety of a crowd.

Online Mocap Marker Labeling for Interacting Targets

An online, automated mocap marker labeling approach for multiple interacting articulated targets by fitting rigid bodies and exploiting trained structure and motion models

3D Geometric Model Watermarking

A novel, robust, and high-capacity watermarking method for 3D meshes with arbitrary connectivity in the spatial domain, based on affine invariants: the watermark is embedded as affine-invariant length ratios of one diagonal segment to its residing diagonal, which is intersected by the other diagonal in a coplanar convex quadrilateral.
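
To make the geometric quantity concrete, the sketch below (Python/NumPy, illustrative only; the embedding and extraction machinery is not shown) computes the affine-invariant ratio in which one diagonal of a coplanar convex quadrilateral is split by the other; slightly perturbing such ratios is what carries the watermark bits.

```python
import numpy as np

def diagonal_split_ratio(a, b, c, d):
    """For a coplanar convex quadrilateral a-b-c-d (3D points), return the
    parameter t in [0, 1] at which diagonal b-d intersects diagonal a-c,
    i.e. the length of segment a-P over the length of a-c.
    This ratio is preserved by affine transformations, which is why it can
    carry watermark bits robustly (illustrative sketch only).
    """
    # Solve a + t*(c - a) = b + s*(d - b); the points are coplanar, so the
    # 3x2 least-squares system has an exact solution.
    m = np.stack([c - a, b - d], axis=1)   # columns: (c - a) and -(d - b)
    t, s = np.linalg.lstsq(m, b - a, rcond=None)[0]
    return t
```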

Energy-efficient GPU and Mobile Graphics Computing

Design of novel algorithms to statistically model and predict the power consumption of GPU-based computing, and in-depth quantitative analysis of the power consumption and runtime performance of graphics computing and applications on mainstream smartphone devices.

Image-based Face Illumination Transfer

An image-based technique that transfers illumination from a source face image to a target face image, without any prior information regarding the lighting conditions or the 3D geometries of the underlying faces.

Virtual Garment Simulation and Fitting

Novel algorithms and systems to simulate virtual garments and garment fitting for virtual humans


User-centric Interfaces for Mobile Computing

Design of various user-centric interfaces for mobile computing, including an automated avatar interface driven by the text messages exchanged between chatting participants, as well as a single-handed mobile keyboard design.

Improving Gaming Experience by Incorporating Physiological Feedback and Gamers' Profiles

A novel methodology to improve users' gaming experience by monitoring their physiological signals in real time or using their individualized gamer profiles, and automatically adjusting the difficulty level of computer games accordingly.

Perceptual Understanding of Synthetic Facial Expressions, Animations, and Head Motion

Investigate the region-significance and culture-dependence of facial expressions and movements, in terms of subject identification, emotion perception of 3D facial expression, and visual perception of avatar head movements.

Analysis of Multimodal Emotion Recognition

Comprehensive analysis and understanding of the strengths, limitations, and combinational aspects of multimodal emotion recognition algorithms and systems, based on multiple human channels, including the visual facial expression channel and the acoustic speech channel.

Spine Tracking and Segmentation in Neuron Images

Novel automated algorithms to accurately track, detect, and segment a large number of dendritic spines from time-lapse confocal microscopy neuron images, at both the 2D image level and the reconstructed 3D geometric level.

Self-represented Avatars for Teen Physical Activities

Study of the feasibility and effectiveness of using self-represented avatars to increase teens' physical activity.

Visualization and Planning of Neurosurgical Interventions

A highly efficient, GPU-accelerated visualization and planning technique to allow interactive, quantitative or semi-quantitative comparison among various points of entrance for neurosurgical procedures that require straight access.

Interactive Teeth Segmentation

An interactive geometric algorithm to accurately segment the upper and lower teeth from 3D dental models directly reconstructed from 2D CT dental images, with intuitive and minimal user intervention.

Intermittency and Kinematics of Arm Movements

Investigation of two potential sources of movement intermittency, and quantitative understanding of the kinematics of mirror and parallel robot-assisted movements and rehabilitation.


Our Generous Sponsors