Invited Speakers

Jonathan Kelly

University of Toronto

Institute for Aerospace Studies

4925 Dufferin Street

Toronto, Ontario, Canada

Telephone: +1 (416) 667-7708

Web: Jonathan Kelly

E-mail: jkelly@utias.utoronto.ca

Presentation "Cheap or Robust? SLAM, DATMO, and the Development of Practical Self-Driving Wheelchair Technology"

To date, experimental self-driving wheelchair technologies have been either inexpensive or robust, but not both. To be practical and to achieve real-world acceptance, however, both qualities are essential. There are currently over 5 million power wheelchairs in use in Canada and the United States, and an estimated 20 million such chairs deployed in G20 countries; a cost-effective autonomous wheelchair navigation system would dramatically enhance mobility for many users and, in turn, positively impact their quality of life.

In this talk, I will describe our recent efforts to develop a fully autonomous navigation system for fielded power wheelchairs, motivated by the desire to provide improved mobility to a vulnerable population segment: individuals with severe upper-body motor disabilities, caused by MS, ALS, Parkinson's disease, or quadriplegia, for example. These users are unable to operate the standard wheelchair joystick interface, and so instead must rely on other types of assistive control devices, such as sip-and-puff switches, which are typically extremely difficult and tedious to use. Our system, which can be retrofitted to existing power wheelchairs, is capable of autonomous and semi-autonomous mapping and navigation, and relies on low-cost RGB-D sensors and wheel odometry only. I will discuss the market constraints that have driven our design (in cooperation with our industrial partner); although sensing and computing hardware costs have decreased dramatically, making commercial production feasible, the system must still operate under very strict resource constraints.

In order to enable robust navigation at an affordable price, we have built a software suite (based on ROS) that includes a state-of-the-art SLAM package incorporating moving obstacle detection and tracking capabilities. I will discuss the overall system architecture and the operation of the SLAM and DATMO components, as well as our bespoke software modules for reliable doorway traversal, hazard detection, and path planning in tight spaces. Our maps include geometric, photometric, and obstacle information; in the talk, I will briefly review the framework and data structures used to ensure that we remain within the resource budget.
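
As a purely hypothetical illustration of such resource-conscious map data structures (the field names and widths below are my assumptions, not the actual system's layout), geometric, photometric, and obstacle information can be packed into a few bytes per grid cell:

    import numpy as np

    # Hypothetical per-cell record; names and field widths are illustrative only.
    CELL = np.dtype([
        ("height_mm",   np.int16),   # geometric: surface height in millimetres
        ("rgb565",      np.uint16),  # photometric: colour quantized to 16 bits
        ("occ_logodds", np.int8),    # obstacle: log-odds occupancy in one byte
    ])

    # 100 m x 100 m at 5 cm resolution: 2000 x 2000 cells x 5 bytes = 20 MB.
    grid = np.zeros((2000, 2000), dtype=CELL)
    grid["occ_logodds"][40, 25] += 8   # e.g., integrate a positive obstacle hit

At 5 bytes per cell, even a map covering a large care facility fits comfortably in the memory of a low-cost embedded computer.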

Finally, I will outline our roadmap for future platform extensions that introduce new capabilities, such as semantic navigation using voice commands. In the spirit of the workshop, we believe there are tremendous opportunities to exploit information sharing between platforms (e.g., a fleet of wheelchairs at a large care facility simultaneously updating a shared map). Cloud technologies also have the potential to further drive down costs, through data sharing, data reuse, and cloud-based processing, while allowing far more capable assistive platforms to be deployed.

Short biography

Dr. Kelly is an Assistant Professor at the University of Toronto Institute for Aerospace Studies, where he directs the Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory (http://starslab.ca/). Prior to joining the University of Toronto, he was a postdoctoral researcher in the Robust Robotics Group at the Massachusetts Institute of Technology, working with Prof. Nick Roy. Dr. Kelly received his PhD degree in 2011 from the University of Southern California, under the supervision of Prof. Gaurav Sukhatme. At USC, he was a member of the first class of Annenberg Fellows; the Annenberg award supports prospective leaders in the fields of engineering and media sciences. Prior to graduate school, he was a software engineer at the Canadian Space Agency in Montreal, Canada. His research interests lie primarily in the areas of sensor fusion, estimation, and machine learning for navigation and mapping, applied to both robots and human-centred assistive technologies.

 

Davide Scaramuzza

University of Zurich

Robotics and Perception Group

Andreasstrasse 15, Office 2.10

8050 Zurich, Switzerland

Telephone: +41 44 635 24 09

Web: Davide Scaramuzza

E-mail: sdavide@ifi.uzh.ch

Presentation "Towards Agile Flight of Vision-controlled Micro Flying Robots: from Frame-based to Event-based Vision"

Autonomous quadrotors will soon play a major role in search-and-rescue and remote-inspection missions, where a fast response is crucial. Quadrotors have the potential to navigate quickly through unstructured environments, enter and exit buildings through narrow gaps, and fly through collapsed buildings. However, their speed and maneuverability are still far from those of birds. Indeed, agile navigation through unknown, indoor environments poses a number of challenges for robotics research in terms of perception, state estimation, planning, and control. In this talk, I will give an overview of my research activities on visual navigation of quadrotors, from slow navigation (using standard frame-based cameras) to agile flight (using event-based cameras). Topics covered will include visual-inertial state estimation, absolute scale determination, monocular dense reconstruction, active vision and control, and event-based vision.
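
To make the frame-based versus event-based distinction concrete (an illustrative toy, not the speaker's software): an event camera outputs an asynchronous stream of per-pixel brightness changes rather than full images, and these events can be re-rendered into frame-like images when a frame-based algorithm needs them:

    import numpy as np

    # Toy event stream: (x, y, timestamp in seconds, polarity +/-1).
    events = [(12, 40, 0.0031, +1), (13, 40, 0.0032, -1), (12, 41, 0.0040, +1)]

    def events_to_frame(events, shape, t0, t1):
        """Accumulate events falling in [t0, t1) into a signed image, a common
        first step when bridging event-based and frame-based processing."""
        img = np.zeros(shape, dtype=np.int32)
        for x, y, t, p in events:
            if t0 <= t < t1:
                img[y, x] += p
        return img

    frame = events_to_frame(events, shape=(64, 64), t0=0.0, t1=0.005)

Because events arrive with microsecond latency, such sensors remain informative during fast motion that would blur a conventional frame.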

Short biography

Davide Scaramuzza (born in 1980, Italian) is Assistant Professor of Robotics at the University of Zurich, where he does research at the intersection of robotics and computer vision. He did his PhD in robotics and computer vision at ETH Zurich (with Roland Siegwart) and a postdoc at the University of Pennsylvania (with Vijay Kumar and Kostas Daniilidis). From 2009 to 2012, he led the European project "sFly", which introduced the world's first autonomous navigation of micro drones using visual-inertial sensors and onboard computing. For his research contributions, he was awarded the IEEE Robotics and Automation Society Early Career Award, the SNSF-ERC Starting Grant, and a Google Research Award. He coauthored the book "Introduction to Autonomous Mobile Robots" (published by MIT Press) and more than 100 papers on robotics and perception. In 2015, he co-founded Zurich-Eye, a venture dedicated to the commercialization of visual-inertial navigation solutions for mobile robots; in September 2016, it became Facebook-Oculus VR Switzerland. He was also the strategic advisor of Dacuda's 3D division, dedicated to inside-out VR solutions, which was acquired by Magic Leap in 2017.

 

Juan Andrade Cetto

Institut de Robòtica i Informàtica Industrial

Mobile Robotics and Intelligent Systems 

Parc Tecnològic de Barcelona. C/ Llorens i Artigas 4-6, 

08028, Barcelona, Spain 

Telephone: +34 93 401 7336

Web: Juan Andrade Cetto

E-mail: cetto@iri.upc.edu

Presentation "Perception and control for aerial manipulators"

The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection, or map generation tasks. Yet only in recent years has research in aerial robotics matured enough to allow active interaction with the environment. The robots responsible for these interactions, called aerial manipulators, usually combine a multirotor platform with one or more robotic arms.

A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), limiting its applicability. Furthermore, localization methods using on-board sensors, imported from other robotics fields such as SLAM, often require large computational units and become a handicap on vehicles where size, payload, and power consumption are tightly constrained. In this talk we will present an alternative method for estimating the state of the vehicle (i.e., position, orientation, velocity, and acceleration) by means of on-board, low-cost, lightweight, high-rate sensors.
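
The general pattern behind such estimators can be illustrated with a deliberately simplified one-dimensional filter (a sketch only; the method presented in the talk is more sophisticated and estimates full 3-D pose): a high-rate inertial measurement drives the prediction step, while slower on-board measurements correct the accumulated drift:

    import numpy as np

    # State: [position, velocity]; a high-rate accelerometer drives predict(),
    # and a slower position-like measurement drives correct().
    def predict(x, P, accel, dt, q=0.1):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        x = F @ x + np.array([0.5 * dt**2, dt]) * accel
        P = F @ P @ F.T + q * np.eye(2)
        return x, P

    def correct(x, P, pos_meas, r=0.05):
        H = np.array([[1.0, 0.0]])
        S = H @ P @ H.T + r                  # innovation covariance
        K = (P @ H.T) / S                    # Kalman gain
        x = x + (K * (pos_meas - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.zeros(2), np.eye(2)
    for _ in range(100):                     # IMU running at 100x the correction rate
        x, P = predict(x, P, accel=0.02, dt=0.01)
    x, P = correct(x, P, pos_meas=0.0011)

Running the prediction at sensor rate keeps the estimate available at high frequency, while the cheap correction step bounds its drift.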

The physical complexity of these robots demands advanced control techniques during navigation. Thanks to their redundant degrees of freedom, they can fulfill not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this talk I will also present such a control law, with one task that drives the vehicle using visual information and others that guarantee robot integrity during flight, improve platform stability, or increase arm operability.
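
For readers unfamiliar with task-priority control, the classical null-space projection law conveys the idea (a standard textbook formulation, not necessarily the exact law presented in the talk):

    \dot{\mathbf{q}} = \mathbf{J}_1^{+}\,\dot{\mathbf{x}}_1 + \mathbf{N}_1\,\mathbf{J}_2^{+}\,\dot{\mathbf{x}}_2,
    \qquad
    \mathbf{N}_1 = \mathbf{I} - \mathbf{J}_1^{+}\,\mathbf{J}_1,

where \dot{\mathbf{q}} stacks the platform and arm velocities, \mathbf{J}_i and \dot{\mathbf{x}}_i are the Jacobian and desired velocity of task i, and \mathbf{N}_1 projects the secondary task (e.g., flight stability or arm operability) into the null space of the primary task (e.g., vision-based driving), so lower-priority objectives can never disturb higher-priority ones.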

Short biography

Juan Andrade Cetto is a Research Associate (Científico Titular) of the Spanish National Research Council and Director of the Institut de Robòtica i Informàtica Industrial, CSIC-UPC. He develops computational models to solve various computer vision and robotics problems. A Fulbright Scholar at Purdue University (MSEE), he received his PhD from UPC in 2003 and was awarded the EURON Georges Giralt Best European PhD Award in 2004 for his contributions to the observability of the SLAM problem.

He has directed 7 PhD theses on diverse topics related to mobile robotics and computer vision, and together with his students he has developed several 2D and 3D solutions to the SLAM problem, object recognition methods, and perception systems for aerial manipulation. He is the author of numerous publications in top-ranked robotics and computer vision journals and conferences.

He has directed the contribution of CSIC to the EU projects FP7 ARCAS and H2020 AEROARMS on the topic of aerial manipulation, and to the EU projects FP7 CargoANTs and H2020 LOGIMATIC on the topic of autonomous navigation and control of large vehicles in cargo container ports and terminals. Under his leadership, IRI received in 2017 the María de Maeztu Seal of Excellence, a 2M€ distinction from the Spanish State Research Agency for its research program on human-centred robotics.

 

Denis Garagić

Chief Scientist

BAE Systems

600 District Avenue, Burlington

Massachusetts, USA

Telephone: +44 1252 373232

Web: Denis Garagić

E-mail: denis.garagic@baesystems.com

Presentation "CoOperative Navigation for GPS-Denied Environments via Simultaneous Online Localization Exchange"

Teams of coordinating autonomous low-cost air platforms have the potential to fill critical gaps in national defense capabilities in heavily contested environments, day or night, under GPS-compromised conditions (e.g., sporadically available GPS). Accurate relative localization, i.e., estimating the position and orientation (pose) of each platform in the team, is a key prerequisite for successfully accomplishing these tasks. One approach to localizing a platform in GPS-denied environments is to use onboard sensors that measure the vehicle's motion with respect to the surrounding environment to track its pose. Since each platform typically carries its own sensors, one potential strategy is to have each team member localize independently. However, if the platforms cooperate, they will be more effective in accomplishing their tasks because of their improved collective localization accuracy. In heterogeneous teams this is particularly effective, since the vehicle with the least accurate sensors can reach a localization accuracy comparable to that of the vehicle with the highest-quality sensors.

Our CoOperative Navigation via Simultaneous Online Localization Exchange (CONSOLE) approach is one method for realizing this strategy. CONSOLE is a new adaptive, all-source, distributed cooperative navigation approach that exploits information received by each platform, equipped with a low-cost strapdown IMU, a GPS unit, and an inter-agent ranging sensor, to resolve each platform's relative position and orientation and thereby enhance GPS-free navigation. It estimates the 3-D pose (position and orientation) of multiple cooperating platforms by leveraging noisy, intermittently available inter-agent measurements (e.g., noisy measurements of the relative pose between pairs of platforms) during prolonged, or even complete, loss of GPS, and without other georeferenced measurements (e.g., without relying on maps or on the ability to recognize landmarks in the environment). CONSOLE is platform agnostic and is based on nonlinear pose-graph optimization, solved as an iterative nonlinear optimization on an underlying Riemannian manifold without requiring a suitable linear parameterization of the manifold.
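
As background, the textbook pose-graph objective (not necessarily CONSOLE's exact formulation) estimates all platform poses \mathbf{X}_i \in SE(3) by minimizing measurement residuals directly on the manifold:

    \hat{\mathbf{X}} = \operatorname*{arg\,min}_{\mathbf{X}_1,\dots,\mathbf{X}_n \in SE(3)}
    \sum_{(i,j)} \left\| \log\!\left( \mathbf{Z}_{ij}^{-1}\, \mathbf{X}_i^{-1}\, \mathbf{X}_j \right)^{\vee} \right\|_{\boldsymbol{\Sigma}_{ij}}^{2},

where \mathbf{Z}_{ij} is a relative measurement between platforms (or between successive poses of a single platform) with covariance \boldsymbol{\Sigma}_{ij}. Each iteration applies a local update \mathbf{X}_i \leftarrow \mathbf{X}_i \exp(\boldsymbol{\xi}_i^{\wedge}) with \boldsymbol{\xi}_i \in \mathbb{R}^6, which is precisely iterative optimization on the underlying manifold without a global linear parameterization.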

Short biography

Dr. Denis Garagić is a Chief Scientist in the Sensor Processing and Exploitation (SPX) group at BAE Systems Technology Solutions. He is a key innovator, guiding SPX's creation of cognitive computing solutions that provide machine intelligence and anticipatory intelligence to solve challenges across many domains for multiple United States Department of Defense customers, including DARPA, the services, and the intelligence community. Denis has 20 years of experience in autonomous cooperative control for unmanned vehicles; game theory for distributed and hierarchical multilevel decision-making; agent-based modeling and simulation; and artificial intelligence and machine learning for multi-sensor data fusion, complex scene understanding, motion activity pattern learning and prediction, learning communications signal behavior, speech recognition, and automated text generation. Denis has been a Technical Review Authority, Principal Investigator, or Research Lead on numerous Department of Defense programs. He has written a number of journal articles and conference papers and holds several patents in applied controls for autonomous robotics and generic machine-learning inference with applications to streaming data processing. He received his B.S. and M.S. degrees (Mechanical Engineering & Technical Cybernetics; Applied Mathematics) from the Czech Technical University, Prague, Czech Republic, and his Ph.D. in Mechanical Engineering (System Dynamics & Controls) from The Ohio State University. Denis also received an Executive Certificate in Strategy & Innovation from the MIT Sloan School of Management.

 

Tomas Krajnik

Czech Technical University in Prague

Faculty of Electrical Engineering,

Department of Computer Science

Karlovo nam. 13

12135 Prague, Czechia

Telephone: +420 224 357 337

Web: Tomas Krajnik

E-mail: tomas.krajnik@fel.cvut.cz

Presentation "FreMEn: Frequency Map Enhancement for Long-Term Autonomy of Mobile Robots"

While robotic mapping of static environments has been widely studied, life-long mapping in non-stationary environments is still an open problem. We present an approach for the long-term representation of natural environments, where many of the observed changes are caused by pseudo-periodic factors, such as seasonal variations or humans performing their daily chores. Rather than using a fixed probability value, our method models the uncertainty of the elementary environment states by their persistence and frequency spectra. This makes it possible to integrate sparse and irregular observations, obtained during long-term deployments of mobile robots, into memory-efficient models that reflect the recurring patterns of activity in the environment. The frequency-enhanced spatio-temporal models support prediction of future environment states, which improves the efficiency of mobile robot operation in changing environments. In a series of experiments performed over periods of weeks to years, we demonstrate that the proposed approach improves mobile robot localization, path and task planning, and activity recognition, and allows for life-long spatio-temporal exploration.
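
As a rough sketch of the idea (based on the published FreMEn concept; the function names, candidate periods, and simplifications below are mine), the probability of a binary environment state can be modeled as its long-term mean plus its most prominent periodic components:

    import numpy as np

    def fremen_fit(times, states, periods, order=2):
        """Fit mean occupancy plus the `order` most prominent periodic
        components, chosen from the candidate `periods` (in seconds)."""
        t = np.asarray(times, float)
        s = np.asarray(states, float)            # binary observations, 0 or 1
        mean = s.mean()
        omegas = 2 * np.pi / np.asarray(periods, float)
        gammas = np.array([np.mean((s - mean) * np.exp(-1j * w * t))
                           for w in omegas])     # spectral coefficients
        top = np.argsort(-np.abs(gammas))[:order]
        return mean, omegas[top], gammas[top]

    def fremen_predict(model, t):
        """Predicted probability that the state equals 1 at (future) time t."""
        mean, omegas, gammas = model
        p = mean + sum(2 * abs(g) * np.cos(w * t + np.angle(g))
                       for w, g in zip(omegas, gammas))
        return float(np.clip(p, 0.0, 1.0))

    t = np.arange(0, 14 * 86400, 600.0)                    # two weeks of data
    s = (np.sin(2 * np.pi * t / 86400) > 0).astype(float)  # daily pattern
    model = fremen_fit(t, s, periods=[43200.0, 86400.0, 604800.0])
    print(fremen_predict(model, t=15 * 86400 + 6 * 3600))  # forecast ahead

The memory cost is a handful of numbers per environment state, which is what makes such models suitable for deployments lasting months or years.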

Short biography

Tomas Krajnik is an assistant professor at the Czech Technical University in Prague, where he is currently establishing a research group focused on problems related to the long-term autonomy of mobile robots in naturally changing environments. During his earlier work, he developed a robust visual navigation algorithm that allows autonomous operation of aerial and ground robots in outdoor environments that exhibit seasonal appearance variations. Later, he proposed modeling the uncertainty of environment states by their frequency spectra, which improves long-term mobile robot autonomy in changing environments. As part of his work, he also implemented software libraries for fast visual tracking and UAV control, which were used by roboticists at NASA, EPFL, KIT, AIT, and elsewhere, and contributed to the success of the joint CZ-USA-UK team in the Mohamed Bin Zayed International Robotics Challenge. He has cooperated with several research institutes across the globe and has been invited to present his work at world-leading laboratories, including CSAIL@MIT and GRASP@UPENN. Tomas Krajnik participated in a number of EU and national projects and, before returning to Czechia, was part of the Lincoln Centre for Autonomous Systems Research, led by Prof. Tom Duckett.

 

Robert Cupec

University of Osijek

Faculty of Electrical Engineering, Computer Science and Information Technology 

Robotics and 3D Vision Group

Kneza Trpimira 2B,

31000 Osijek, Croatia

Telephone: +385 31 224 740

Web: Robert Cupec

E-mail: robert.cupec@etfos.hr

Presentation "Indoor Place Recognition Using RGB-D Camera Based on Planar Surfaces and Line Segments"

Place recognition is one of the fundamental problems in mobile robotics. Efficient and reliable place recognition solutions covering a wide range of environment types would open the door to numerous applications of intelligent mobile machines in industry, traffic, public services, households, etc. This presentation considers indoor place recognition based on matching planar surfaces and straight edges extracted from depth images obtained by an RGB-D camera. The presented approach matches sets of surface and line features extracted from depth images to features of the same type contained in a previously created environment model. The environment model consists of a set of local models representing particular locations in the modeled environment, each comprising planar surface segments and line segments that represent the edges of objects in the environment. A computationally efficient pose hypothesis generation method ranks the features according to their potential contribution to the pose information, thereby reducing the time needed to obtain an accurate pose estimate. Furthermore, a robust probabilistic method for selecting the best pose hypothesis is proposed, which allows matching of partially overlapping point clouds with gross outliers. The proposed approach is experimentally tested on a benchmark dataset containing depth images acquired in an indoor environment with changes in lighting conditions and the presence of moving objects. Furthermore, the applicability of color/texture descriptors in a global localization system based on planar surface segments will be discussed. A systematic study of the use of color/texture information in the discussed place recognition approach has shown that this information can prune potential surface pairs in the initial feature correspondence phase, considerably decreasing the number of generated hypotheses, and that including color/texture information in the hypothesis evaluation phase makes the correct hypotheses significantly more pronounced.
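
A toy version of the robust hypothesis-selection step may help fix the idea (illustrative only; the features, the probabilistic score, and the thresholds of the actual approach differ):

    import numpy as np

    def hypothesis_score(R, t, query_pts, model_pts, inlier_dist=0.05):
        """Count model points brought within inlier_dist of the query cloud by
        candidate pose (R, t); gross outliers simply find no near neighbour
        and contribute nothing to the score."""
        transformed = model_pts @ R.T + t
        d = np.linalg.norm(transformed[:, None, :] - query_pts[None, :, :], axis=2)
        return int((d.min(axis=1) < inlier_dist).sum())

    def select_best(hypotheses, query_pts, model_pts):
        """Return the candidate (R, t) pair with the highest robust score."""
        return max(hypotheses, key=lambda h: hypothesis_score(h[0], h[1],
                                                              query_pts, model_pts))

    pts = np.random.rand(200, 3)
    best = select_best([(np.eye(3), np.zeros(3))], pts, pts)

Ranking feature pairs before hypothesis generation, as described above, matters because each scoring pass over the point clouds is comparatively expensive.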

Short biography

Robert Cupec graduated from the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia in 1995, where he received the M.Sc. degree in 1999. He received the Dr.-Ing. degree from the Institute of Automatic Control Engineering, Technische Universität München, Munich, Germany, in 2005. He is currently a full professor at the Faculty of Electrical Engineering, Josip Juraj Strossmayer University of Osijek, Croatia. His research interests include robotics and robot vision. He is the principal investigator of the project Advanced 3D Perception for Mobile Robot Manipulators, supported by the Croatian Science Foundation.

 

Dražen Brščić

University of Rijeka

Faculty of Engineering

Vukovarska 58

51000 Rijeka, Croatia

Telephone: +385 51 505719

Web: Dražen Brščić

E-mail: dbrscic@riteh.hr

Presentation "People tracking for enabling human-robot interaction in large public spaces"

If we want social robots to get out of the lab and interact successfully with humans in populated public spaces, one requirement is reliable detection and tracking of people. However, this should not be done just in the immediate surroundings of the robot, but in a much larger space. One reason is that people expect the robot to notice them and react while they are still far away from it. Another motivation is that collecting tracking data in a large public area allows the robot to model and understand the typical behaviour of people in that space, which is essential for obtaining smooth and natural interactions. Our approach was to fix 3D range sensors in the environment and develop a system that combines multiple sensors for continuous tracking of all people inside the covered area. This system was installed in a 900 m² area of a shopping and business centre in Osaka, Japan. We used it to collect a year of pedestrian data and model the movement and behaviour of people. Moreover, the tracking was used to implement some novel types of human-robot interactions. The talk will present the obtained results and discuss directions for further development.
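
One sub-problem in such a system can be sketched in a few lines: merging overlapping detections of the same person coming from different fixed sensors (a minimal illustration; the threshold and the greedy clustering are assumptions, not the deployed system):

    import numpy as np

    def merge_detections(detections, merge_dist=0.4):
        """Greedily cluster (x, y) floor-plane detections closer than
        merge_dist (metres) and return one averaged position per person."""
        clusters = []
        for d in detections:
            for c in clusters:
                if np.linalg.norm(np.mean(c, axis=0) - d) < merge_dist:
                    c.append(d)
                    break
            else:
                clusters.append([d])
        return [np.mean(c, axis=0) for c in clusters]

    # Two sensors see the same person near (1, 2); a third person stands apart.
    dets = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([5.0, 0.5])]
    print(merge_detections(dets))   # -> two merged positions, not three

The merged positions can then feed standard per-person trackers, whose long-term trajectories enable the behaviour modeling described above.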

Short biography

Dražen Brščić received his Dr.Eng. degree in Electrical Engineering from The University of Tokyo in 2008. From 2008 to 2010 he worked at the Technical University of Munich, and from 2011 he was a researcher at ATR in Japan. Last year he joined the Faculty of Engineering at the University of Rijeka in Croatia as an Assistant Professor. He has received several awards for his work, including two Best Paper Awards at the Human-Robot Interaction (HRI) conference, in 2013 and 2015. His research interests include mobile robotics, people tracking, and human-robot interaction.