The development of mobile service robots can be divided into two distinct eras. First, the “Don’t Go There” era (1962–2010), when service robots were confined to cages and any form of human-robot interaction was strictly prohibited. Second, the “Get Out Of The Way” era (2011–2021), when robots were freed from their cages, yet humans were still advised to avoid any direct interaction with them. Today, we are entering a third era (2022–present), in which advances in engineering and artificial intelligence (AI) have made it possible for robots to co-exist with humans. From on-road traffic to pedestrian sidewalks, from hospitals and airports to our homes, from warehouses to university campuses, these service robots have the potential to improve people’s everyday lives.
Our longstanding goal is to unlock the full potential of mobile service robots in the real world so that they can perform tasks as efficiently as we can. We envision a new paradigm, which we call Human-Like Mobility, that enables robots to navigate complex human environments not just safely, but also confidently, gracefully, and with agility.
Humans move safely, yet at the same time confidently, with grace and agility, in order to be efficient and comfortable. For example, a more impatient individual will overtake a slow-moving group in front and squeeze through narrow gaps, even if it means light contact with other people or objects. Robots today sacrifice efficiency for safety, and so cannot be fully deployed in complex human environments.
We develop multi-robot algorithms and systems that simultaneously provide liveness, agility, and safety guarantees in fully decentralized settings and unstructured environments. We routinely draw on optimal control, machine learning, and reinforcement learning, and we test our ideas both in simulation and on real physical robots.
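As a minimal illustration of one standard building block in this space, the sketch below shows a control-barrier-function-style safety filter that minimally perturbs a robot's desired velocity so it keeps a safe distance from a neighbor. The function names, parameters, and example values are illustrative placeholders, not code from our systems.

```python
import numpy as np

def safety_filter(p_i, p_j, v_des, d_safe=0.5, alpha=1.0):
    """Minimally adjust a desired velocity so robot i keeps at least
    d_safe distance from neighbor j (single-integrator dynamics p_i' = u,
    treating the neighbor as static for simplicity).

    Enforces the barrier condition  dh/dt >= -alpha * h  with
    h = ||p_i - p_j||^2 - d_safe^2, which reduces to the half-space
    constraint  2 (p_i - p_j) . u >= -alpha * h.
    """
    diff = p_i - p_j
    h = diff @ diff - d_safe ** 2
    a = 2.0 * diff                 # constraint normal: a . u >= b
    b = -alpha * h
    if a @ v_des >= b:             # desired velocity already satisfies safety
        return v_des
    # Closed-form projection of v_des onto the half-space {u : a . u >= b}
    return v_des + (b - a @ v_des) / (a @ a) * a

# Example: robot commanded straight at a neighbor; the filter slows it down.
p_i, p_j = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(safety_filter(p_i, p_j, v_des=np.array([1.0, 0.0])))  # -> [0.375 0.]
```

Because the adjustment is the smallest one satisfying the constraint, each robot can run such a filter on its own local observations, which is what makes this family of methods attractive in decentralized settings.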
Some active projects are:
Recent advances in robot learning, large vision-language models, and transformer-based architectures have made it possible for robots to interact with humans via complex natural language instructions. We are leveraging these advances to build a next-generation paradigm for human-robot interaction, in which humans can interact with robots seamlessly and perform complex tasks via mind and body gestures.
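As one hedged sketch of a building block here, the snippet below uses an off-the-shelf vision-language model (CLIP, via the Hugging Face transformers library) to ground a natural-language instruction against candidate camera crops. The instruction and file names are placeholders for illustration, not part of our pipeline.

```python
# Sketch: scoring candidate camera crops against a language instruction
# with an off-the-shelf vision-language model (CLIP).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

instruction = "bring me the red mug on the table"   # placeholder instruction
crops = [Image.open(p) for p in ("crop_0.png", "crop_1.png", "crop_2.png")]

inputs = processor(text=[instruction], images=crops,
                   return_tensors="pt", padding=True)
# logits_per_image holds one image-text similarity score per crop.
scores = model(**inputs).logits_per_image.squeeze(-1)
best = int(scores.argmax())                         # crop the robot acts on
print(f"best match: crop_{best}, scores = {scores.tolist()}")
```

A real system would still need to ground the selected crop back to a 3D target and hand it to a planner, but the pattern of scoring perception against language is the same.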
At the same time, we are designing algorithms for robots to understand human intent. As humans, we routinely infer other people’s intent without their ever stating it verbally. For instance, we can tell a person’s state of mind simply from their style of walking, their gestures, or their facial expressions. Can we leverage these non-verbal cues for social robot navigation?
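As a toy illustration of intent inference from non-verbal cues, the sketch below fits a classifier over hand-crafted gait features. The features, labels, and data are synthetic placeholders, not our actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-pedestrian features: [walking speed (m/s), stride variability,
# head yaw toward the robot (rad)]. Labels: 0 = likely to yield,
# 1 = likely to keep walking. All values are synthetic placeholders.
X = np.array([
    [0.8, 0.05, 0.6],   # slow, looking at the robot
    [1.6, 0.02, 0.0],   # fast, looking straight ahead
    [0.9, 0.08, 0.5],
    [1.5, 0.03, 0.1],
])
y = np.array([0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)

# At run time, a perception stack would supply these features per pedestrian.
pedestrian = np.array([[1.4, 0.03, 0.05]])
p_keep = clf.predict_proba(pedestrian)[0, 1]
print(f"P(pedestrian keeps walking) = {p_keep:.2f}")
```

The predicted probability can then feed into a navigation planner, for instance as a soft constraint on which gaps the robot attempts to pass through.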
Some active projects are:
[Figure: baseline traffic model vs. our traffic model]
Over the past decade, we have made immense progress in autonomous driving in developed regions such as the U.S., the U.K., and Europe. But we are still far from achieving the same success in developing nations like India, where traffic is far denser, far more heterogeneous, and far more chaotic. Our research focuses on developing autonomous driving and advanced driver-assistance systems (ADAS) for traffic in these regions.
Some active projects are: