Publications

Scene understanding

To understand what is happening in the world, it helps to first understand how the world is physically organized. Where are the walls and floor? How is the furniture arranged? Where are the people? And how do all of these aspects relate to each other?

Layout Estimation of Highly Cluttered Indoor Scenes using Geometric and Semantic Cues
Y.-W. Chao, W. Choi, C. Pantofaru, and S. Savarese
In Proc. of the International Conference on Image Analysis and Processing (ICIAP), 2013.

Understanding Indoor Scenes using 3D Geometric Phrases
W. Choi, Y.-W. Chao, C. Pantofaru, and S. Savarese
In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
Oral presentation – 3.2% acceptance rate.
Project website

A Discriminative Model for Learning Semantic and Geometric Interactions in Indoor Scenes
W. Choi, Y.-W. Chao, C. Pantofaru, and S. Savarese
In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Scene Understanding Workshop (SUNw), 2013.

Detecting and tracking people

People are often the most important elements of visual scenes. Detecting and tracking people enables applications ranging from photo editing systems that know to prioritize people, to robotic systems that can interact with people efficiently and effectively. This work touches on several challenges: effectively combining multiple detectors and data modalities to detect people more robustly in changing environments, collecting data of people cohabiting with robotic systems, and labeling large datasets for training vision algorithms.

Discovering Groups of People in Images
W. Choi, Y.-W. Chao, C. Pantofaru, and S. Savarese
In Proc. of the European Conference on Computer Vision (ECCV), 2014.

An adaptable system for RGB-D based human body detection and pose estimation
K. Buys, C. Cagniart, A. Baksheev, T. De Laet, J. De Schutter, C. Pantofaru
In Journal of Visual Communication and Image Representation, 2013.

A General Framework for Tracking Multiple People from a Moving Camera
W. Choi, C. Pantofaru, and S. Savarese
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2012.
Project website

Detecting and Tracking People using an RGB-D Camera via Multiple Detector Fusion
W. Choi, C. Pantofaru, and S. Savarese
In Workshop on Challenges and Opportunities in Robot Perception, at the International Conference on Computer Vision (ICCV), 2011.

A Side of Data with My Robot: Three Datasets for Mobile Manipulation in Human Environments
M. Ciocarlie, C. Pantofaru, K. Hsiao, G. Bradski, P. Brook, and E. Dreyfuss
IEEE Robotics & Automation Magazine, Special Issue: Towards a WWW for Robots, Volume 18, Issue 2, June 2011.

User Observation & Dataset Collection for Robot Training
C. Pantofaru
In Proc. of Human-Robot Interaction (HRI), 2011.

Using Depth Information to Improve Face Detection
W. Burgin, C. Pantofaru, and W.D. Smart
In Proc. of Human-Robot Interaction (HRI), 2011.

Fast Hand Gesture Recognition for Real-Time Teleconferencing Applications
J.W. MacLean, R. Herpers, C. Pantofaru, L. Wood, K. Derpanis, D. Topalovic, and J.K. Tsotsos
In Proc. of the workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems (RATFG-RTS) in conjunction with the IEEE International Conference on Computer Vision (ICCV), 2001.

Active Visual Control by Stereo Active Vision Interface (SAVI)
R. Herpers, K. Derpanis, C. Pantofaru, D. Topalovic, J.W. MacLean, G. Verghese, A. Jepson, and J.K. Tsotsos
In Proceedings in Artificial Intelligence, GI Workshop on Dynamische Perzeption, 2000.


Human-robot interaction

Do we need robots in our lives? What for? How will we behave around those robots and how should we design them to be efficient and effective when they interact with us? By studying how people complete everyday tasks or interact with technology, we can motivate future algorithm and system design. By testing new prototypes with users early and often, we can focus our research in productive directions.

Programming Robots at the Museum
C. Pantofaru, A. Hendrix, A. Paepcke, D. Thomas, S. Marzouk, and S. Elliott
In Proc. of the International Conference on Interaction Design and Children, 2013.

Robots for Humanity: Using Assistive Robotics to Empower People with Disabilities
T. Chen, M. Ciocarlie, S. Cousins, P. Grice, K. Hawkins, K. Hsiao, C. Kemp, C.-H. King, D. Lazewatsky, A. Leeper, H. Nguyen, A. Paepcke, C. Pantofaru, W. Smart, and L. Takayama
IEEE Robotics & Automation Magazine, Special issue on Assistive Robotics, 2013.

Robots for Humanity: User-Centered Design for Assistive Mobile Manipulation
T. Chen, M. Ciocarlie, S. Cousins, P. Grice, K. Hawkins, K. Hsiao, C. Kemp, C.-H. King, D. Lazewatsky, A. Leeper, H. Nguyen, A. Paepcke, C. Pantofaru, W. Smart, and L. Takayama
In Video Proc. of Intelligent Robots and Systems (IROS), 2012.

Making technology homey: Finding sources of satisfaction and meaning in home automation
L. Takayama, C. Pantofaru, D. Robson, B. Soto, and M. Barry
In Proc. of Ubiquitous Computing (UbiComp), 2012.

Exploring the Role of Robots in Home Organization
C. Pantofaru, L. Takayama, T. Foote, and B. Soto
In Proc. of Human-Robot Interaction (HRI), 2012.

Need Finding: A Tool for Directing Robotics Research and Development
C. Pantofaru and L. Takayama
In The Workshop on Human-Robot Interaction, at the Robotics: Science and Systems (RSS) Conference, 2011.

Help Me Help You: Interfaces for Personal Robots
I. Goodfellow, N. Koenig, M. Muja, C. Pantofaru, A. Sorokin, and L. Takayama
In Proc. of Human-Robot Interaction (HRI), 2010.

Influences on Proxemic Behaviors in Human-Robot Interaction
L. Takayama and C. Pantofaru
In Proc. of Intelligent Robots and Systems (IROS), 2009.


Object detection and segmentation

Recognizing object classes and localizing object instances with pixel-accuracy maps are difficult problems in computer vision. Since classes of deformable objects can take a wide variety of shapes in any given image, it is impossible to search over all spatial supports – a method for reducing the number of pixel sets to be examined is required. One method for proposing accurate spatial support for objects and features is data-driven pixel grouping via unsupervised image segmentation. The goals of this work are to define and address the issues associated with incorporating image segmentation into an object recognition framework.
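The idea of data-driven pixel grouping can be illustrated with a toy sketch (not code from any of the papers above): clustering pixels by color with k-means yields an unsupervised segmentation whose regions can serve as candidate spatial supports for recognition, rather than searching over all possible pixel sets. The deterministic initialization below is an assumption chosen for reproducibility.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=10):
    """Toy unsupervised segmentation: cluster pixels by color with k-means.

    image: (H, W, 3) float array; returns an (H, W) integer label map,
    where each label identifies one candidate region (spatial support).
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c)
    # Deterministic init: k centers evenly spaced between the extreme colors.
    centers = np.linspace(pixels.min(axis=0), pixels.max(axis=0), k)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest center (Euclidean in color space).
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean color of its assigned pixels.
        for j in range(k):
            members = pixels[labels == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return labels.reshape(h, w)

# A synthetic image: left half dark, right half bright.
img = np.zeros((4, 8, 3))
img[:, 4:] = 1.0
segmentation = kmeans_segment(img, k=2)
```

Real systems (including the work listed here) use far richer segmentation algorithms, and often combine multiple segmentations to hedge against any single one being wrong, but the principle is the same: regions proposed bottom-up drastically reduce the number of pixel sets a recognizer must examine.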

Studies in Using Image Segmentation to Improve Object Recognition
C. Pantofaru
Ph.D. Thesis, The Robotics Institute, Carnegie Mellon University, 2008.

Object Recognition by Integrating Multiple Image Segmentations
C. Pantofaru, C. Schmid, and M. Hebert
In Proc. European Conference on Computer Vision (ECCV), 2008.

Toward Objective Evaluation of Image Segmentation Algorithms
R. Unnikrishnan, C. Pantofaru, and M. Hebert
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Volume 29, Number 6, p.929-944, June 2007.

Discriminative Cluster Refinement: Improving Object Category Recognition Given Limited Training Data
L. Yang, R. Jin, C. Pantofaru, and R. Sukthankar
In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.

A framework for learning to recognize and segment object classes using weakly supervised training data
C. Pantofaru and M. Hebert
In Proc. of the British Machine Vision Conference (BMVC), 2007.

Combining Regions and Patches for Object Class Localization
C. Pantofaru, Gy. Dorko, C. Schmid, and M. Hebert
In Proc. of the Beyond Patches workshop (BP) in conjunction with the IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2006.

A Comparison of Image Segmentation Algorithms
C. Pantofaru and M. Hebert
Technical Report CMU-RI-TR-05-40, The Robotics Institute, Carnegie Mellon University, 2005.

A Measure for Objective Evaluation of Image Segmentation Algorithms
R. Unnikrishnan, C. Pantofaru, and M. Hebert
In Proc. of the Workshop on Empirical Evaluation Methods in Computer Vision in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005.


Other projects

Temporal Synchronization of Multiple Audio Signals
J. Kammerl, N. Birkbeck, S. Inguva, D. Kelly, A. J. Crawford, H. Denman, A. Kokaram, and C. Pantofaru
In Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.

Towards Autonomous Robotic Butlers: Lessons Learned with the PR2
J. Bohren, R.B. Rusu, E.G. Jones, E. Marder-Eppstein, C. Pantofaru, M. Wise, L. Mosenlechner, W. Meeussen, and S. Holzer
In Proc. of the International Conference on Robotics and Automation (ICRA), 2011.

Toward Generating Labeled Maps from Color and Range Data for Robot Navigation
C. Pantofaru, R. Unnikrishnan, and M. Hebert
In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2003.


Programmable logic devices

General algorithms for placing components and routing connections on programmable logic devices work quite well, but they are not perfect. Sometimes automated algorithms fail and human intervention is required. Other times there are external constraints on the layout, such as the required speed of a given connection. The work below presents methods for obtaining user constraints on system layout and integrating them into automated layout algorithms.
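The difference between hard and soft layout constraints can be sketched with a toy example (a hypothetical scorer, not the patented algorithms): a hard timing constraint would reject any route whose delay exceeds the limit, while a soft constraint adds a penalty to the route's cost, letting the router trade delay against wirelength.

```python
def route_cost(wirelength, delay, max_delay, penalty_weight=10.0):
    """Cost of a candidate route under a soft timing constraint.

    The cost is the wirelength plus a penalty proportional to how far
    the route's delay exceeds the user-specified limit. A route that
    meets the limit pays no penalty at all.
    """
    violation = max(0.0, delay - max_delay)
    return wirelength + penalty_weight * violation

def pick_route(candidates, max_delay):
    """Pick the (wirelength, delay) candidate with the lowest total cost."""
    return min(candidates, key=lambda c: route_cost(c[0], c[1], max_delay))

# Two candidate routes: short but slow, vs. longer but fast.
routes = [(5.0, 12.0), (9.0, 8.0)]
best = pick_route(routes, max_delay=10.0)  # the slow route pays a penalty
```

With a hard constraint the first route would simply be infeasible; with the soft formulation it remains a candidate but loses to the faster route once the penalty is applied, which is the behavior a user typically wants when a constraint expresses a preference rather than a requirement.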

Method and apparatus for utilizing constraints for the routing of a design on a programmable logic device
V. Betz, C. Pantofaru, and J. Swartz
US Patent 7757197, issued to Altera Corporation, filed May 29, 2003, issued July 13, 2010.

Method and apparatus for implementing soft constraints in tools used for designing systems on programmable logic devices
T.P. Borer, G. Quan, S.D. Brown, D.P. Singh, C. Sanford, V. Betz, C. Pantofaru and J. Swartz
US Patent 7194720, issued to Altera Corporation, filed July 11, 2003, issued March 20, 2007.